UBC Theses and Dissertations

In which the fixation probability of a superstar is determined and a contradiction in the literature is addressed. Jamieson-Lane, Alastair David (2014)

In Which The Fixation Probability Of A Superstar Is Determined And A Contradiction In The Literature Is Addressed

by

Alastair David Jamieson-Lane
B.Sc., The University of Canterbury, 2012

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF SCIENCE in The Faculty of Graduate and Postdoctoral Studies (Mathematics)

THE UNIVERSITY OF BRITISH COLUMBIA (Vancouver)
October 2014

© Alastair David Jamieson-Lane 2014

Abstract

Population structures can be crucial determinants of evolutionary processes. For the spatial Moran process certain structures suppress selective pressure, while others amplify it (Lieberman et al. 2005 Nature 433 312-316). Evolutionary amplifiers suppress random drift and enhance selection. Recently, some results for the most powerful known evolutionary amplifier, the superstar, have been invalidated by a counter example (Díaz et al. 2013 Proc. R. Soc. A 469 20130193). Here we correct the original proof and derive improved upper and lower bounds, which indicate that the fixation probability remains close to 1 − 1/(r^4 H) for population size N → ∞ and structural parameter H > 1. This correction resolves the differences between the two aforementioned papers. We also confirm that in the limit N, H → ∞ superstars remain capable of providing arbitrarily strong selective advantages to any beneficial mutation, eliminating random drift. In addition, we investigate the robustness of amplification in superstars, and find that it appears to be a fragile phenomenon with respect to changes in the selection or mutation processes.

Preface

This project was initiated by Professor Christoph Hauert in response to the preprint of Díaz et al. [2013]. All mathematical investigation and proofs contained were constructed by Alastair Jamieson-Lane, with guidance provided by C. Hauert.

The paper upon which this thesis was based was originally drafted by Alastair J.L. Feedback was provided by C. Hauert.

Table of contents

Abstract
Preface
Table of contents
List of tables
List of figures
Acknowledgements
Dedication
1 In which concepts are introduced
  1.1 Context
  1.2 Introduction to the Moran process on graphs
    1.2.1 Superstars
2 In which prior work is discussed
  2.1 Original proof: Lieberman et al (2005)
  2.2 Counter example: Díaz et al (2013)
  2.3 Initial comparison
3 In which a proof is described
  3.1 Timescales
  3.2 Top of stem
  3.3 Middle of stem
  3.4 Bottom of stem
  3.5 Slow dynamics in reservoirs
  3.6 Upper bound on fixation probability
  3.7 Lower bound on fixation probability
  3.8 Narrow window
4 In which error bounds are found
  4.1 Initial conditions
  4.2 Details of expected train length T
    4.2.1 Bounding T
    4.2.2 Train collisions
  4.3 Interaction of trains with root node
    4.3.1 An upper bound on train success probability
    4.3.2 A lower bound on train success probability
  4.4 Bounds on fixation probabilities
    4.4.1 Upper bound
    4.4.2 Lower bound
  4.5 Bringing it all together
5 In which some loose ends are tied up and others are left open
  5.1 Selection & sequence
  5.2 Mutations
  5.3 Deleterious mutations, r < 1
6 In which concluding remarks occur
Bibliography
Appendices
A Programs
  A.1 superstarSolve
  A.2 SimpleChainTest

List of tables

2.1 Predictions based on Lieberman et al's density argument, compared with the recorded density

List of figures

1.1 Spatial Moran process
1.2 The superstar
2.1 An illustration of Díaz et al's Markov chain argument
2.2 Comparing simulation to density prediction
3.1 Two possible histories of a train
3.2 Grid showing collection of possible train states
4.1 Possible states of the end of the stem
Acknowledgements

I would like to thank my supervisor, Professor Christoph Hauert, for providing me with puzzles to be solved, and for guidance, both in mathematics and in the more extended maze of academia.

I would also like to thank Dr. David Brydges for instruction in probability, and for providing a sounding board for my thoughts when I felt stuck.

Thanks are also due to two anonymous reviewers who caught a flaw in the original proof, giving me the chance to correct it before publication.

In terms of funding, this work was supported by the Natural Sciences and Engineering Research Council of Canada (NSERC) and the Foundational Questions in Evolutionary Biology Fund (FQEB), grant RFP-12-10.

Dedication

To the students of Mrs Lynn's class, grade three, division twelve, Lord Strathcona Elementary school, 2013-2014.
Good luck, have fun, don't grow up too quickly.
And never stop learning.

Chapter 1

In which concepts are introduced

1.1 Context

Populations evolve according to the principles of natural selection and random drift. The balance between the two competing processes is determined by numerous factors, including both population size and structure [Antal et al., 2006, Bürger and Lande, 1994, Nowak and May, 1992, Fu and Nowak, 2013]. The most malignant tumour is unlikely to cause harm if it arises in the outermost layer of skin and is easily brushed aside, and the most imperative model for climate change has limited influence until it has worked its way from a researcher's desk, through the literature, into policy making and public awareness. Position matters.

One of the simplest and most influential models of stochastic evolutionary processes in finite populations is the Moran process [Moran, 1962, Nowak, 2006]. It is based on an unstructured (or well-mixed) population of size N, where each individual is classified either as a resident (wild type) or mutant. Each type is assigned a constant fitness, which determines its propensity to reproduce.
The fitness of wild types is normalized to 1 and mutants have fitness r. An advantageous mutant has r > 1, a disadvantageous mutant has r < 1 and a neutral mutant is indistinguishable in terms of fitness, r = 1. In every time step, an individual is randomly selected for reproduction, with each individual's probability proportional to their fitness. The selected individual then produces a clonal offspring that replaces an individual, selected uniformly at random, in the population. This process is repeated until eventually the population has reached one of the homogeneous states of all residents, if the mutant went extinct, or all mutants, if the mutant successfully took over the entire population [Moran, 1962, Nowak et al., 2004, Lieberman et al., 2005]. In both cases, the population has reached fixation, although for the sake of this document, when talking of fixation we will be referring to fixation of mutants, unless stated otherwise. In the absence of mutation, the two homogeneous states are absorbing.

The Moran process models evolutionary dynamics based on selection and random drift in finite populations: an advantageous mutant has a higher probability, but no guarantee, of reaching fixation and, similarly, an inferior mutant is more likely to be eliminated, but not with certainty. Because of the simple (or rather, non-existent) topology of the original Moran process, all possible states of the system can be described by simply stating the current number of mutants. At any given time step, this number can either increase or decrease by one, with known probabilities. Any time step where the actual make-up of the population changes must be associated with some interaction between a resident and a mutant – one replacing the other.
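The update rule just described is easy to prototype. Below is a minimal, illustrative Python sketch (Python is used here only for brevity; the thesis's own programs, reproduced in appendix A, are written in MatLab and Java, and the function name here is our own invention):

```python
import random

def moran_until_fixation(N, r, mutants=1, rng=random):
    """Run the well-mixed Moran process until one type fixates.

    Residents have fitness 1, mutants fitness r.  Returns True if the
    mutants take over the population, False if they go extinct."""
    m = mutants  # current number of mutants
    while 0 < m < N:
        total = m * r + (N - m)  # total fitness of the population
        # select a reproducing individual proportional to fitness
        reproducer_is_mutant = rng.random() < m * r / total
        # the clonal offspring replaces a uniformly random individual
        dies_is_mutant = rng.random() < m / N
        if reproducer_is_mutant and not dies_is_mutant:
            m += 1
        elif dies_is_mutant and not reproducer_is_mutant:
            m -= 1
    return m == N
```

Note that the number of mutants only changes when a mutant replaces a resident or vice versa, exactly as in the random-walk description of the process.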
Because mutants are always r times more likely to replace resident individuals than the converse, it can be shown that the chance of an increase in the number of mutant individuals is always r times greater than the chance of a decrease, leading to a "forward bias" of r regardless of the make-up of the population. These simple dynamics allow the system to be described as a random walk, where X_t denotes the current number of mutants, and the forward bias is always r. Simple martingale arguments can then be used to determine the probabilities of hitting either end of this random walk, from any starting point, and thus the fixation probabilities of either residents or mutants can be found for any given initial configuration. Of particular interest is the fixation probability, ρ_M, of a single mutant that arises in an otherwise homogeneous population of resident individuals:

    ρ_M = (1 − 1/r) / (1 − 1/r^N).    (1.1)

In the neutral limit, r → 1, all individuals in the population are equally likely to end up as the single common ancestor, leading to a fixation probability of 1/N.

The original Moran process ignores population structures – but this is easily addressed by arranging the individuals of a population on a graph, such that each node refers to one individual and the links to other nodes define its neighbourhood. Maruyama [1970] and Slatkin [1981] conjectured that the fixation probability of a mutant in this Moran process on graphs remains unaffected by population structures. Lieberman et al. [2005] proved that this is indeed true for a broad class of structures and, in particular, holds for simple structures such as lattices or regular networks. At the same time, this classification indicated that fixation probabilities, ρ, may differ for some structures by tilting the balance between selection and random drift.
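The gambler's-ruin calculation behind Eq. (1.1) can be checked numerically: with constant forward bias r, the increments of the hitting probability form a geometric sequence, so the fixation probability of a single mutant is the reciprocal of a geometric sum. A short, illustrative sketch (function names are ours):

```python
def rho_moran(r, N):
    """Fixation probability of a single mutant, Eq. (1.1)."""
    return 1.0 / N if r == 1 else (1 - 1 / r) / (1 - 1 / r**N)

def rho_from_walk(r, N):
    """Recover rho directly from the biased random walk.

    With constant forward bias r, the increments d_m = rho_{m+1} - rho_m
    satisfy d_m = d_0 / r^m.  Summing over m = 0, ..., N-1 and using
    rho_0 = 0, rho_N = 1 gives rho_1 = 1 / sum_k r^{-k}."""
    return 1.0 / sum(r**-k for k in range(N))

assert abs(rho_moran(1.5, 100) - rho_from_walk(1.5, 100)) < 1e-12
assert rho_from_walk(1.0, 50) == 1.0 / 50  # neutral limit: 1/N
```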
Evolutionary suppressors enhance random drift and suppress selection (ρ < ρ_M for r > 1 and ρ > ρ_M for r < 1), whereas evolutionary amplifiers enhance selection and suppress random drift (ρ > ρ_M for r > 1 and ρ < ρ_M for r < 1).

In recent years, various aspects of the Moran process on graphs have been explored, including effects of population structures on fixation probabilities [Antal et al., 2006, Broom and Rychtář, 2008, Voorhees, 2013], or fixation times [Payne, 2009, Frean et al., 2013], as well as computational techniques [Shakarian and Roos, 2011, Fu et al., 2009]. However, the most surprising result remains that even perfect evolutionary amplification appears to be possible: "The superstar... [has] the amazing property that for large [population sizes] N, the fixation probability of an advantageous mutant converges to one, while the fixation probability of a disadvantageous mutant converges to zero." [Lieberman et al., 2005].

A more recent paper, Díaz et al. [2013], provided a counter example that contradicted the fixation probabilities reported in Lieberman et al. [2005]. In this thesis, I give a brief overview of the arguments in both papers, compare their predictions to some numerical results, and go on to identify the problem in the original proof and correct it. This yields new upper and lower bounds on the fixation probability for superstars. It is found that for any r > 1, a graph can be constructed such that ρ is arbitrarily close to 1, thus confirming the existence of perfect evolutionary amplification.

1.2 Introduction to the Moran process on graphs

Population structure can be represented by assigning individuals to nodes on a graph, with links representing each individual's neighbourhood¹.
The Moran process on graphs follows the same procedure as the original Moran process except for the crucial difference that the offspring does not replace a random member of the entire population but rather replaces a neighbour of the reproducing individual, selected uniformly at random (Fig. 1.1). On directed graphs, the offspring replaces a downstream neighbour by selecting one outgoing link uniformly at random. As before, the population has reached fixation once either one of the absorbing homogeneous states is reached. For any number of mutants, m (0 < m < N), the fixation probabilities of residents and mutants are both non-zero on strongly connected graphs, i.e. graphs where every node can be reached from any other node through a series of moves between nodes that are connected by links (for directed graphs, only moves in the direction of the link are permitted). If a graph is not strongly connected, then the structure may prevent the spreading or elimination of a mutant type regardless of its fitness, and hence the fixation probability for either or both types may be zero.

¹ Models also exist where nodes represent "islands" or some other unit of geographically isolated subpopulation. While interesting and certainly biologically relevant, this was the assumption of neither of the articles that this work is based on, and thus is not what we will be assuming here.

Figure 1.1: Spatial Moran process. (A) graph structure and initial distribution of residents (dark blue) and mutants (pale yellow). (B) selection: an individual (dashed outline) is selected to reproduce with a probability proportional to its fitness. (C) replacement: a downstream neighbour (dashed arrow) is randomly selected for replacement. (D) reproduction: the neighbour is replaced by the clonal offspring of the upstream reproducing individual.

For the Moran process on graphs, the fixation probabilities are the same as in unstructured populations, c.f. Eq.
(1.1), provided that the graph is a circulation [Lieberman et al., 2005]. A circulation is defined as a graph such that the sum of weights of all outgoing links is equal to the sum of weights of all incoming links for each node. This means that every node has the same impact on the environment as the environment has on the node.

A graph is an evolutionary suppressor if the fixation probability of an advantageous mutant is less than for the original Moran process, ρ < ρ_M. The simplest example is a linear chain: a graph with a single source node, which connects to one end of a (directed) chain of nodes [Nowak et al., 2003]. Any mutation that does not occur at the top of the chain has no chance of reaching fixation. However, if the mutation occurs at the top node it will eventually reach fixation with certainty. Assuming that mutations arise spontaneously and are equally likely in any location, the resulting fixation probability is simply 1/N, regardless of the mutant's fitness r. The linear chain is an example of a graph that is not strongly connected, because the source node cannot be reached from any node in the chain. Evolutionary suppressors are sometimes found when high fidelity copying is of paramount importance, such as in slowing down the somatic evolution of cancer [Nowak et al., 2003, Michor et al., 2004].

In contrast, an evolutionary amplifier is a graph which increases the fixation probability of advantageous mutants as compared to the original Moran process, ρ > ρ_M. The simplest evolutionary amplifier is the star graph: a single root node is connected to a reservoir of peripheral leaf nodes through bi-directional links. The fixation probability of a single mutant for N ≫ 1 is [Lieberman et al., 2005, Broom and Rychtář, 2008]

    ρ_0 ≈ (1 − 1/r^2) / (1 − 1/r^(2N)).    (1.2)

On the star, a mutant with fitness r has roughly the same fixation probability as a mutant with fitness r^2 would in an unstructured population.
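The amplification implied by Eq. (1.2) can be checked directly against Eq. (1.1); the following illustrative snippet compares the two closed forms (function names are ours):

```python
def rho_wellmixed(r, N):
    """Eq. (1.1): single-mutant fixation probability, unstructured population."""
    return (1 - 1 / r) / (1 - 1 / r**N)

def rho_star(r, N):
    """Eq. (1.2): single-mutant fixation probability on a star, valid for N >> 1."""
    return (1 - 1 / r**2) / (1 - 1 / r**(2 * N))

# amplification: advantageous mutants do better on the star...
assert rho_star(1.1, 1000) > rho_wellmixed(1.1, 1000)
# ...while deleterious mutants do worse
assert rho_star(0.9, 1000) < rho_wellmixed(0.9, 1000)
```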
Thus, the fixation probability of beneficial mutations (r > 1) is enhanced, but for deleterious mutants (r < 1) it is reduced. Note that the fixation probability depends on where the single mutant arises. If the mutant is located in the root node then, for N ≫ 1, it is almost certainly replaced in the next time step because one of the N − 1 reservoir nodes is selected for reproduction. However, if mutants arise at random, then for N ≫ 1 they almost surely arise in the reservoir and the fixation probability is as specified in Eq. (1.2). Evolutionary amplifiers would seem to provide promising structures for tasks where strong selection is advantageous, such as in the adaptive immune system or in collaboration networks.

Figure 1.2: The superstar consists of three distinct types of nodes: the root node (pale blue), the reservoir nodes (green) and the stem nodes (dark red). The reservoir nodes connect to the start of the stem, the end of the stem connects to the root node and the root node connects to all reservoir nodes in each branch. The depicted superstar has B = 5 branches each with L = 5 reservoir nodes and a stem of length H = 4, which yields a total population size of N = B(L + H) + 1 = 46.

1.2.1 Superstars

The two most prominent features of the star graph are the large reservoir, where changes occur on a slow time scale, and the bottleneck caused by the hub, where changes occur quickly. In particular, the bottleneck introduces a second level for selection to act upon. Lieberman et al. [2005] claim that this basic insight can be exploited to increase evolutionary amplification by elongating the bottleneck and providing further levels where selection can act. Superstars act as a more extreme version of the basic star, and are proposed as a way to increase evolutionary amplification further [Lieberman et al., 2005]. The superstar consists of a single root node surrounded by B branches (Fig. 1.2).
Each branch consists of a large reservoir of L nodes feeding into one end of a directed chain of length H, the stem. The last stem node in each branch feeds into the root node, which then connects to all reservoir nodes in every branch. The population size is thus given by N = B(L + H) + 1. Nodes are classified based on their locations on the graph. This classification is designed to simplify discussion but does not affect the rate of reproduction of the individual occupying the node. Lieberman et al. [2005] report the fixation probability for superstars with L, B ≫ H as

    ρ_H ≈ (1 − 1/r^K) / (1 − 1/r^(KN)),    (1.3)

where K = H + 2 is a structural parameter and indicates the number of moves needed to reach any reservoir node from any other reservoir node. This is the number of levels selection can act upon. Consequently it is argued that a single mutant with fitness r that arises in the reservoir of a superstar has approximately the same fixation probability as a mutant with fitness r^K in an unstructured population. This result would then imply that, by increasing the length of the stem, the fixation probability, ρ_H, of any advantageous mutant, r > 1, could be brought arbitrarily close to 1, indicating arbitrarily strong amplification or perfect selection.

Recently Díaz et al. [2013] provided a counter-example demonstrating that the fixation probability in Eq. (1.3) is too optimistic in the particular case of H = 3, and thus invalidated the proof in Lieberman et al. [2005]. In addition, they provided substantial simulation-based evidence indicating that Eq. (1.3) also fails for higher values of H. For the counter-example they show that in the limit N → ∞:

    ρ_3 < 1 − (1 + r)/(2r^5 + r + 1).    (1.4)

This upper bound reflects the probability that a mutant in a reservoir creates a second mutant in any reservoir before getting replaced by resident offspring. The fixation probability according to Eq. (1.3) grows faster with increasing r than Eq.
(1.4), and for r > 1.42 this results in a contradiction.

Upon arriving at the University of British Columbia I was asked to examine these conflicting results. In particular, I was tasked with:

• reading both proofs in full detail;
• determining whether the two results did in fact contradict each other;
• in the case of conflict, determining precisely where the conflict occurred;
• in the case of conflict, determining which of the two was correct;
• having determined which was correct, finding the flaw in the incorrect proof.

This initial work is covered in chapter 2. Having identified the critical flaw in the original proof [Lieberman et al., 2005], we then go on to consider a corrected proof, more accurate than that of Lieberman et al. [2005], and more general than that of Díaz et al. [2013]. A general sketch of this proof is provided in chapter 3, with error terms and full rigor provided in chapter 4. Further explorations and other loose ends are explored in chapter 5.

Chapter 2

In which prior work is discussed

The work in this thesis is concerned mainly with the work of two previous papers, namely Lieberman et al. [2005] and Díaz et al. [2013]. This chapter describes the basic arguments made in each paper, then goes on to discuss the significant conflicts between the two arguments, as well as some simulation results used to guide research near the start of the investigation. Where conflicts between our notation and that of the original authors arise, we use our own notation, so as to maintain the internal consistency of this document.

2.1 Original proof: Lieberman et al (2005)

In this section, I describe the original argument, sketched in the supplementary material of Lieberman et al. [2005]. For the sake of completeness, I follow the proof as presented in Lieberman [2010, p.
47-54], which goes substantially beyond the original sketch.

The argument begins by first recognizing that, with overwhelming likelihood, the initial mutant (placed uniformly at random) will occur in the reservoir. It is then argued that, if we have several populations arranged in series, each downstream from the last, then, given one population with fixed mutant density d, the density of mutants in the following population will be dr/(1 + d(r − 1)). This can be viewed as the weighted fitness of mutants in the upstream population, dr, relative to the total fitness of the upstream population, 1 + d(r − 1). Density is not described precisely in the proof, but might be thought of as "the likelihood that a randomly chosen node, observed at a random time in the distant future, will contain a mutant". From this, induction is used to argue that, for a population ν steps below our initial population, the density of mutants must be

    d(ν) = d r^ν / (1 + d(r^ν − 1)).    (2.1)

This argument is extended to include the root node, thus leading to the root node having a mutant density of

    d(K − 1) = d r^(K−1) / (1 + d(r^(K−1) − 1)),    (2.2)

where d = X_t/BL (X_t is the current number of reservoir mutants) and K is the length of the shortest cycle on the superstar (equivalent to H + 2)².

Lieberman et al. [2005] then give a timing argument to argue that the density of mutants in the root has time to update such that it accurately reflects the density of mutants in the reservoirs (before any change occurs in the reservoirs). Unfortunately the timing argument as presented is flawed. The argument repeatedly uses the probability that a mutant will eventually pass a particular node in the stem before being replaced. In actual fact, the quantity needed is the per-time-step probability that a particular node is replaced by any upstream individual (since all upstream individuals can be considered as accurate measures of the upstream mutant density).
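Eq. (2.1) is straightforward to iterate numerically. For instance, the prediction column of Table 2.1 appears to correspond to taking the density of a single reservoir mutant to be d = 1/L with L = 20000 and r = 1.5; this choice of d is an assumption inferred from the parameter values quoted with the table, and the sketch below is illustrative only:

```python
def density(d, r, nu):
    """Predicted mutant density nu levels downstream, Eq. (2.1)."""
    return d * r**nu / (1 + d * (r**nu - 1))

d0, r = 1 / 20000, 1.5  # assumed parameters matching Table 2.1
predictions = [density(d0, r, nu) for nu in range(1, 14)]
# first stem node: ~7.499e-5, matching the table's prediction column
```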
As it happens, the corrected timing argument ends up showing that the time until update is faster than that predicted by Lieberman [2010], and thus this particular flaw in the original proof can be easily rectified without making any alterations to the surrounding arguments.

Next in the proof, Eq. (2.2) is used to conclude that, as long as the number of reservoir mutants is not too high, the probability that the number of reservoir mutants increases is very close to

    [r/(N + X_t(r − 1))] · [d r^(K−1) / (1 + d(r^(K−1) − 1))] · (1 − d),    (2.3)

while the probability of the number decreasing is close to

    [1/(N + X_t(r − 1))] · [(1 − d) / (1 + d(r^(K−1) − 1))] · d.    (2.4)

The proof then considers a slower time scale, such that we only track time steps where the number of reservoir mutants changes (effectively conditioning on the occurrence of one of the above events). This allows them to divide out all common terms in the above, normalising such that the probabilities sum to 1. The probability of reservoir mutants increasing rather than decreasing is found to be

    r^K / (1 + r^K).    (2.5)

The total number of reservoir mutants is then treated as a random walk, with the above bias when X_t is small, and no bias otherwise. We omit the remainder of the proof, as it diverges substantially from anything we shall need to cover in this thesis.

² For the benefit of any reader comparing this thesis to the original proof, we are forced to note that the original proof incorrectly refers to the root as node K. Because the reservoir is treated as level 0, it is believed that the root should be K − 1. Later in the proof, formulas are given treating the root as K − 1, leading to a slight inconsistency with the previous working, but correcting for the previous mistake. Here we treat the root as node K − 1 throughout, in an attempt to follow the presumed intent of Lieberman et al. [2005] rather than the exact wording.

2.2 Counter example: Díaz et al (2013)

Rather than attempting to form general results about all superstars, Díaz et al.
[2013] instead focus on the particular case K = 5 (equivalently H = 3). They consider the behavior of the first mutant in the first branch, before any other reservoir mutants have been created. They show that at this early stage, the system has only 2^5 = 32 possible states, one for each combination of the states of the initial mutant and its four downstream nodes (the last being the root). By the time any other node on the superstar is a mutant, we are no longer in the early stages of mutant propagation. Díaz et al. [2013] add one additional state to the above list, representing mutant propagation to another reservoir node. They then explicitly calculate the transition probabilities between all 33 states, resulting in a Markov chain with two absorbing states (extinction and propagation).

All transition probabilities are calculated symbolically in Díaz et al. [2013]. By excluding the absorbing states, the problem is reduced to a system of 31 equations with 31 variables, where each variable represents the probability of propagation occurring before extinction, given a particular starting state. This can be written in matrix notation as

    Mρ = ρ + s,

where M is our transition matrix, ρ is a vector containing the eventual mutant propagation probabilities of each state, and s is a vector containing the probability of our mutant strain propagating to a new branch this time step. This variable is referred to as "FS" in Díaz et al. [2013].

Mathematica is used to solve this system symbolically, resulting in an explicit expression for the probability that the mutant propagates to at least one other branch before extinction occurs. By taking the limit of large B

Figure 2.1: An illustration of Díaz et al. [2013]'s Markov chain argument. Possible states of the initial branch are listed, and all transition probabilities are calculated.
Please note: many possible states and transitions are not represented, as doing so for all possible states would be impractical.

and L (M and L in Díaz et al. [2013]), all "dominated terms" are removed, resulting in

    ρ_3 ≤ 1 − (1 + r)/(1 + r + 2r^5),    (2.6)

an upper bound in direct conflict with the result stated in Lieberman et al. [2005] for sufficiently large r.

The paper then goes on to present results from a significant number of simulations, for a variety of r and K values. Generically, the results of the simulations disagree with the result stated in Lieberman et al. [2005], with a few minor exceptions for r ≈ 1 and small K.

2.3 Initial comparison

Examining the proofs, it can be seen that the two proofs fundamentally disagree on the probability that a single mutant will give rise to a second mutant. Any further arguments about fixation probability are made on the basis of this initial "mutant propagation probability", and thus any attempt to resolve or explain this contradiction must begin with an explanation for their differing propagation probabilities.

My first step (after reading both proofs) was to implement the techniques of Díaz et al. [2013] numerically using MatLab. The code used can be found at the end of this thesis in appendix A.1. This was done primarily as a means of validating their program. If similar results were found using an independently written set of code, in a different programming language, using numerical rather than symbolic logic, this would strongly suggest that the Mathematica results were correct, and that any possibility for error would thus be limited to the conceptual part of the proof.

Running the code for K = 5, B = 5000, L = 5000 and r = 1.5, we find the probability of early extinction is 0.1415. We compare this to Eq. (2.6)'s prediction of 0.1413 and Eq. (2.5)'s prediction of 0.1164.

It was observed that for any r value tested, the propagation probability found by the code converged to Díaz et al.'s prediction for large L, B.
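The two early-stage predictions just quoted are one-liners to reproduce. The sketch below computes both extinction probabilities at r = 1.5, K = 5; as in the comparison above, the complement of the forward bias in Eq. (2.5) is used as its corresponding early-extinction estimate:

```python
r, K = 1.5, 5
# Diaz et al.'s early-extinction probability, the complement of Eq. (2.6)
diaz_extinction = (1 + r) / (1 + r + 2 * r**K)
# complement of the forward bias in Eq. (2.5)
lieberman_extinction = 1 / (1 + r**K)
print(round(diaz_extinction, 4), round(lieberman_extinction, 4))
# prints: 0.1413 0.1164
```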
This was considered strong evidence for the correctness of Díaz et al. [2013]'s code.

Next, I wished to test the validity of Lieberman et al. [2005]'s density argument. This I did by writing a simple java program to simulate the behavior of a single branch of my superstar. The small size of this subsystem allowed me to avoid many of the computational limitations encountered by simulators who attempted to analyze the entire system from t = 1 all the way to fixation. By only dealing with a single branch, the data structures required were also significantly simplified. This code can be found in appendix A.2.

Running this system for several days, for a variety of parameter values, I was able to extract the mutant density for all nodes in the stem, as well as the correlation between the states of neighbouring nodes. It was soon observed that the results of the simulation did not agree with the predictions made in Lieberman et al. [2005]. Table 2.1 contains some sample results.

    Node   Prediction ×10^-5   Simulation ×10^-5   Correlation
    1      7.499               2.538               0.600316261771
    2      11.24               3.045               0.719631574002
    3      16.87               3.452               0.775873379784
    4      25.30               3.802               0.807957544715
    5      37.95               4.098               0.829692206727
    6      56.92               4.342               0.844752636008
    7      85.35               4.536               0.854818725014
    8      127.9               4.683               0.862140794289
    9      191.8               4.793               0.867951883031
    10     287.4               4.879               0.871855826581
    11     430.6               4.944               0.874703301922
    12     644.5               4.997               0.877706319947
    13     963.7               5.046               0.879744964568

Table 2.1: Predicted mutant density based on Lieberman et al.'s density argument, compared with the recorded density from the ChainTest program in A.2. Parameter values L = 20000, r = 1.5, K = 15. The results are recorded at time t = 67108864000 in the special and condensed time frame of the program. Predictions are calculated using Eq. (2.1). When we refer to correlation here we mean "the probability that node i+1 contains a mutant, given that i contains a mutant".

It turns out that Eq.
(2.2) is calculated under the implicit assumption that the states of neighbouring nodes are independent of one another. It implies that whenever mutants are rare the probability of a mutant reproducing and replacing another mutant must also be low, even though the states of neighbouring nodes are in fact highly correlated.

We might hope that taking neighbour–neighbour correlations into account will give us a more accurate induction formula. If we let M_i denote the event that node i contains a mutant at a given time step, and R_i the event that node i contains a resident-type individual at a given time step, we can see that, trivially,

\[
P(M_{i+1}) = P(M_{i+1}\mid M_i)P(M_i) + P(M_{i+1}\mid R_i)P(R_i)
\]

because R and M are the only possible states. We might also remember that, in any given position, for every resident that gets replaced by a mutant, one mutant must be replaced by a resident (a lightbulb must be turned off before it can be turned back on again). Because these two events must alternate with one another, and can get at most one step out of sync, we can predict that in the long term the probabilities of these two events must be equal.

Figure 2.2: Here we compare the density recorded during our simulation (solid orange line) with density predictions made either accounting for (yellow, fine dash) or not accounting for (blue, long dash) the effects of neighbour–neighbour correlation. When accounting for correlation we use Eq. (2.7), and when ignoring it we use Eq. (2.1). It is observed that accounting for correlation in this most simple manner significantly improves the resulting density prediction, but still results in error. (Axes: mutant density against the number of steps below the reservoir.)
To calculate these probabilities we multiply the probability that our graph is in a suitable state (mutant following resident, or resident following mutant) by the per-time-step probability that the upstream member of the pair reproduces. This gives

\[
\frac{r}{N} P(R_{i+1}\mid M_i)P(M_i) \approx \frac{1}{N} P(M_{i+1}\mid R_i)P(R_i),
\]

where we use 1/N to approximate the fitness normalisation factor. Taken together these give

\[
P(M_{i+1}) \approx P(M_i)\left[P(M_{i+1}\mid M_i) + r\bigl(1 - P(M_{i+1}\mid M_i)\bigr)\right]. \tag{2.7}
\]

This we can use to inductively predict the mutant density for nodes far down the chain (see Fig. 2.3), thus illustrating the importance of this correlation.

The "corrected" density formula identified gives a better prediction, but still differs significantly from our experimental observations – possibly due to a failure to take long-distance correlation into account. Worse still, it cannot be computed without using correlation data, which in the above example was collected from the simulation itself. Thus, its predictive power is largely post hoc. It serves as an illustration of the importance of correlation, but is of little use in predicting long-term behavior. In order to truly predict the behavior of mutants in the stem, and hence calculate the overall fixation probability of the superstar, we will need far more sophisticated arguments.

Chapter 3

In which a proof is described

Having determined that neighbour–neighbour correlation along the stem is an important feature of the system, it is clear this must be explicitly modeled if we wish to find the true fixation probability. Despite its rigor, the structure of Díaz et al. [2013] does not readily lend itself to extension to arbitrary H (extension to any particular H is straightforward, but computationally expensive as H becomes large). Lieberman et al. [2005], while flawed in its details, is well structured, and was used as a starting point in what follows.
Many of Lieberman et al.'s steps are echoed, although the results and argument for each step often differ.

In this chapter we lay out a proof that in the asymptotic case the true fixation probability satisfies

\[
1 - \frac{1}{r^4 (H-1)(1-1/r)^2} \le 1 - \frac{1}{r^4 T} \le \rho_H \le 1 - \frac{1}{1 + r^4 T} \le 1 - \frac{1}{1 + r^4 H}, \tag{3.1}
\]

where T is the "train length", a value to be defined in section 3.3. In order to match previous work as closely as possible, it is assumed that there are many branches, B, and large reservoirs, L. Additionally, mutations are assumed to be beneficial, r > 1. As has been argued previously, if mutations arise spontaneously and with equal probability at any node, then the initial mutant almost certainly arises in a reservoir node, because reservoir nodes vastly outnumber nodes of all other types. As in both Díaz et al. [2013] and Lieberman et al. [2005], we study the dynamics of a single branch in detail, and use this to determine the much slower dynamics of changes in the reservoirs.

On occasion, it will prove useful to refer to the total fitness of all individuals in our superstar at a given time; we label this F_t, with N < F_t < rN. All instances of F_t cancel throughout the proof and hence the exact value is not tracked.

This chapter sketches out the main thrust of the proof, following (roughly) the argument as it was created. For simplicity in sketching it is assumed that B = L ≈ √N. In the following chapter such simplifying assumptions are dropped, and due attention is given to error bounds, taking limits simultaneously, and other omitted details.

3.1 Timescales

Different nodes get updated at different rates. More precisely, any given node of interest is updated if one of its upstream neighbours reproduces and the node of interest is chosen for replacement. Thus, individuals in nodes with high (weighted) in-degrees tend to be short lived, while individuals in nodes with low (weighted) in-degrees tend to be long lived.
Here we follow the convention that each node has a total weighted out-degree of 1, and that weight is evenly distributed among outgoing links (each link has weight 1/k, where k is the number of outgoing links).

Assuming 1 ≈ r ≪ N, every node is selected for reproduction with probability ≈ 1/N. The root node has an in-degree of B and all its upstream neighbours have out-degrees of 1, hence it updates with a probability close to B/N ≈ 1/√N. Similarly, reservoir nodes are replaced with probability of approximately 1/N², the first stem node with probability on the order of 1/√N, and all other stem nodes with probability of approximately 1/N. For N ≫ 1, this results in three different timescales: the slowest for reservoir nodes, which get replaced, on average, only once in N² time steps; an intermediate timescale for the stem nodes (with the exception of the first node of each stem), which get replaced once in N time steps; and a fast timescale for the root node as well as the first stem node in each branch, which update once in √N time steps.

For N ≫ 1, it is possible to separate the three timescales and analyze the dynamics of the different types of nodes individually. More specifically, this allows us to focus on the intermediate timescale associated with the dynamics in the stem, while treating the state of the slowly updating reservoir nodes as constant. Because the fast updating nodes are unlikely to reproduce twice without first being replaced many times themselves, we can treat them as random variables, with some probability of being a mutant on any given time step.

In the following, we derive the evolutionary dynamics for the top, middle and bottom of the stem in a single branch. The results determine the slow dynamics of reservoir nodes and describe the early stages of the invasion process, when mutants are rare among the reservoir nodes. This allows us to derive upper and lower bounds on the fixation probabilities.
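These scalings are simple enough to check mechanically. The following sketch (Python; the value of N is illustrative, and B = L = √N is the simplifying assumption of this chapter) tabulates the approximate per-time-step replacement probability of each node type and the corresponding expected lifetimes:

```python
import math

def replacement_probs(N):
    """Approximate per-time-step replacement probabilities for each node
    type of a superstar with B = L = sqrt(N), assuming every individual
    reproduces with probability ~1/N and splits its unit out-degree
    evenly among its outgoing links."""
    B = L = math.isqrt(N)
    return {
        "reservoir":  (1.0 / N) * (1.0 / (B * L)),  # root reproduces, picks this node
        "stem_body":  1.0 / N,                      # single upstream stem node
        "first_stem": L / N,                        # L upstream reservoir nodes
        "root":       B / N,                        # B upstream stem ends, weight 1 each
    }

N = 10 ** 6
probs = replacement_probs(N)
# Expected lifetimes (in time steps) recover the three timescales:
# reservoir ~ N^2, stem body ~ N, first stem node and root ~ sqrt(N).
lifetimes = {node: 1.0 / p for node, p in probs.items()}
```

With N = 10⁶ the three lifetimes come out at roughly 10¹², 10⁶ and 10³ time steps, matching the slow, intermediate and fast timescales described above.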
3.2 Top of stem

The first node of the stem gets replaced on the fast timescale, which allows us to treat its state as a random variable. Out of all L upstream neighbours in the reservoir of the corresponding branch, only one is a mutant. Hence, at any given time step, the top node is occupied by a mutant with probability close to r/L. This mutant reproduces with a probability r/F_t and hence the probability that a mutant is placed in the second stem node is approximately r²/(F_t L) in each time step (for error terms, see 4.1). For reasons which will soon become apparent, this event is referred to as "train creation".

3.3 Middle of stem

We refer to the collection of nodes that update on the medium timescale (that is, all stem nodes except the first) as the stem body. Simulations nicely illustrate the dynamics along the body of the stem: clusters of mutants begin at the top of the stem, then grow and move along the stem. In the following, we refer to these clusters as trains. A train moves forward and increases in length whenever the front mutant reproduces, which happens at a rate r/F_t. A train shrinks whenever a resident reproduces and replaces the back end of the train, which occurs at a rate 1/F_t; see Fig. 3.1. Thus, as the train moves along the stem, the train length for beneficial mutants increases, on average.

Note that for small superstars with a single node in the stem body, which corresponds to H = 2 (or K = 4), the two stem nodes can be treated as effectively uncorrelated, due to their differing timescales. No train argument is needed. However, for H > 2 this assumption breaks down and results in an overestimation of the fixation probabilities. This explains why the predictions of Lieberman et al.
[2005] hold in all cases they simulated, but break down for larger H.

In order to link the stem dynamics to the slow timescale of reservoir nodes, we need to know the expected train length, T, when the train first reaches the root end of the stem.

At any time, t, the state of a train can be described by two integers: A_t and Z_t. Here A_t refers to the position of the mutant at the front of the train, and Z_t refers to the position of the resident directly behind the train. The current length of the train is thus given by A_t − Z_t. Because in most time steps no change occurs in this particular stem, we consider the process in a condensed time frame which only accounts for events that change the state of the train.

Figure 3.1: Two possible histories of a train of mutants (white) proceeding along a stem filled with residents (black). a We begin with two mutants (t0). The top node is quickly replaced by a resident (on the fast timescale) (t1). Some time later the remaining mutant reproduces (t2), and then the new top node reproduces again (t3). Finally we lose a single mutant from the back of our train (t4). This general growing pattern applies whenever r > 1. b We begin with two mutants (t0), and immediately lose the back mutant of our train (t1). The front of the train reproduces, creating a second mutant (t2), but both fall prey to bad fortune (or low fitness) and are removed (t3, t4). This behaviour is likely when r < 1, but even for beneficial mutations many trains do not reach the end of the stem.

On this condensed timescale A_t increases with probability r/(1 + r) while Z_t increases with probability 1/(1 + r). Thus, for beneficial mutants the length tends to increase over time. If at any time Z_t ≥ A_t the train has vanished and the stem is cleared of mutants.
In this case, we say that the train, which "arrives" at the end of the stem, has length zero.

In order to determine the expected train length, T, we consider the above process on a grid, where the horizontal axis represents the position of the front, A_t, and the vertical axis the back of the train, Z_t. Each point on the grid below the diagonal, A_t = Z_t, represents a possible configuration of a train in the stem; see Fig. 3.2. All other points represent invalid configurations, which we refer to as ghost states.

Figure 3.2: Grid showing the collection of possible train states: permitted states (black outline), ghost states (grey outline) and extinction states (black fill), as well as a number of possible paths from our initial state (black outline, green fill) to a sample end state (black outline, orange fill). Depicted are a permitted path (continuous), an invalid path (leading to extinction, long dash) and the associated "ghost path" from the reflection of our initial state to the sample end state (fine dash).

For each train, the initial configuration is (A_0, Z_0) = (2, 1); that is, the second stem node is a mutant, while the first stem node is a resident. Note that this assumes that the top of the stem is replaced by a resident on the fast timescale, which leads to a slight underestimate.

Each train produces a path on the grid that originates in (A_0, Z_0) and ends when the train reaches its destination, A_τ = H. Here τ refers to the train's time of arrival. If at any point in time A_t ≤ Z_t then this represents an invalid path, because the train has gone extinct part way down the stem. The expected train length, T, is the weighted average over all paths, with invalid paths being considered as having length zero. The number of valid paths to any given end state (H, Z_τ) can be calculated using the reflection principle
[Koralov and Sinai, 2007, p. 88], which describes a one-to-one correspondence between invalid paths and "ghost paths" starting from (Z_0, A_0). The trajectory of a ghost path is the reflection of the corresponding invalid path along the diagonal A_t = Z_t, up to the point where the invalid path touches A_t = Z_t for the first time. From then on the ghost path and the remainder of the invalid path coincide; see Fig. 3.2. In order to calculate the expected train length, T, we consider all possible paths ending in A_τ = H and subtract the number of invalid paths, to obtain the number of valid paths leading to a given end state. The number of ghost paths corresponds to all paths starting from (Z_0, A_0), the reflection of (A_0, Z_0), hence the name of the method. Having counted the number of valid paths reaching a particular end state, we then weight each path by its probability, multiply each end state by the corresponding train length, and sum over all paths to find

\[
T = (1-\alpha)^{H-2} \sum_{z=1}^{H-1} (H-z)\,\alpha^{z-1} \left[ \binom{H+z-4}{z-1} - \binom{H+z-4}{z-2} \right] \tag{3.2}
\]

with α = 1/(1 + r). We assume beneficial mutations, r > 1, such that 0 < α < 1/2. All paths require H − 2 steps that increase A_t from the starting point at 2 to the end point at H; each of these steps occurs with probability 1 − α. The combinatorial sum then accounts for every possible series of increments that Z_t might undergo along all valid paths. In particular, the index variable z indicates the final position of the tail of the train and hence H − z specifies the train length. The tail starts at 1 and, for any valid path, reaches at most H − 1. Because we are interested in the length of the train at the moment of arrival, the final step must be an increment of A_t; this acts to limit the number of possible paths, lowering our binomials significantly.
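Eq. (3.2) is straightforward to evaluate numerically. The sketch below (Python; parameter values illustrative) computes T directly from the sum, using the convention that a binomial coefficient with a negative lower index is zero:

```python
from math import comb

def expected_train_length(H, r):
    """Evaluate Eq. (3.2): the expected train length on arrival, for a
    stem of length H >= 3 and mutant fitness r > 1, with
    alpha = 1/(1+r).  Binomials with a negative lower index are
    treated as zero."""
    a = 1.0 / (1.0 + r)
    total = 0.0
    for z in range(1, H):  # z = final position of the train's tail
        paths = comb(H + z - 4, z - 1)
        if z >= 2:
            paths -= comb(H + z - 4, z - 2)
        total += (H - z) * a ** (z - 1) * paths
    return (1 - a) ** (H - 2) * total
```

For H = 3 this returns 2r/(r + 1), matching the value quoted in section 3.6, and for larger H the values remain within the bounds (H − 1)(1 − 1/r)² ≤ T ≤ H established below.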
It follows from Eq. (3.2) that z = H − 1 admits zero valid paths – a reassuring result, as we know that no train could possibly have length one at the moment of its arrival. Note that we have used the convention that \(\binom{n}{k} = 0\) for k < 0, which applies only if the tail remains at Z_t = 1 and admits only a single valid path.

For H ≥ 2 and r > 1, simple bounds for T exist (see section 4.2.1):

\[
(H-1)\left(1-\frac{1}{r}\right)^2 \le T \le H. \tag{3.3}
\]

This indicates that for r > 1 the expected train length T grows approximately linearly with increasing stem length H.

3.4 Bottom of stem

Whenever a train reaches the root end of the stem, its mutants compete with the resident nodes from the other branches to occupy the root node. Since the root node is updated on the fast timescale, we can again treat its state as a random variable. Thus, once a train has reached the root end of the stem, the root node is a mutant with probability close to r/B. As long as the train sits at the root end of the stem, the probability in any given time step that the root node is a mutant and reproduces is close to r²/(F_t B). However, the train is simultaneously being eroded from behind, with train mutants being replaced with probability 1/F_t. Thus, the train remains at the root end for T F_t time steps, on average. Put together, this means that any given train succeeds in producing a second mutant in any reservoir with a probability close to r²T/B (for detailed error bounds see 4.3).

3.5 Slow dynamics in reservoirs

At any given time step, the probability of losing the initial mutant in the reservoir is close to 1/(F_t B L). In order to calculate the probability of creating a new reservoir mutant, we note that for each new reservoir mutant a successful train must have been launched; hence it suffices to find the per-time-step probability of launching a successful train.
This can be easily calculated by multiplying the per-time-step probability of launching a train (r²/(F_t L), see section 3.2) by the probability that a given train is successful (r²T/B, see section 3.4), yielding approximately r⁴T/(F_t B L). Thus, the probability to eventually go from one to two mutants in the reservoirs, as opposed to losing the initial mutant, is close to

\[
\frac{r^4 T}{1 + r^4 T}. \tag{3.4}
\]

Since T can be made arbitrarily large (by increasing the stem length H, see Eq. (3.3)), the transition from one to two mutants becomes almost certain and, similarly, the probability of losing the initial mutant becomes vanishingly small.

3.6 Upper bound on fixation probability

To find an upper bound on the fixation probability, ρ_H, we note that before our mutant can reach fixation, the superstar must first transition from a state with one mutant in a reservoir to a state with two mutants in the reservoirs (this is the basis of Díaz et al. [2013]). Thus, an upper bound on this transition probability serves as an upper bound on the mutant fixation probability for the system. Moreover, the upper bound can be made independent of T by assuming that all trains have the maximum possible train length. Thus, in the limit of large B and L we find

\[
\rho_H \le 1 - \frac{1}{1 + r^4 T} \le 1 - \frac{1}{1 + r^4 H}. \tag{3.5}
\]

For any given H and r we can find T explicitly using Eq. (3.2). In particular, we note that for H = 3 we find T = 2r/(r + 1), thus recovering the upper bound identified in Díaz et al. [2013].

3.7 Lower bound on fixation probability

We find a lower bound on the fixation probability using the same approach as Lieberman [2010, p. 47-54], although the results we reach are very different. We approximate the dynamics of the system with a random walk, and then calculate the fixation probability of this random walk. This random walk has a forward bias given by Eq. (3.4) as long as mutants are rare, and we assume no forward bias otherwise.
Because the forward bias persists even for larger numbers of mutants (though there is no simple way to quantify it there), this yields a lower bound on the fixation probability, ρ_H.

For any finite number of steps, a sufficiently strong initial bias would suffice to ensure that the random walk eventually reaches fixation with high probability. However, the limit N → ∞ also requires an arbitrarily large number of forward steps. In order to resolve the interplay between these two limiting behaviours, we set up a martingale and apply the optional stopping theorem [Klenke, 2006, p. 210] (see 4.4.2 for details). In the limit of large B and L we find:

\[
\rho_H \ge \hat\rho_H = 1 - \frac{1}{r^4 T} \ge 1 - \frac{1}{r^4 (H-1)(1-1/r)^2}. \tag{3.6}
\]

Once again we note that for any given H and r we can find T explicitly, via Eq. (3.2).

3.8 Narrow window

Taken together, Eq. (3.6) and Eq. (3.5) give us narrow bounds on the possible values of the fixation probability. For B = L we find, in the limit B → ∞:

\[
1 - \frac{1}{r^4 (H-1)(1-r^{-1})^2} \le 1 - \frac{1}{r^4 T} \le \rho_H \le 1 - \frac{1}{1+r^4 T} \le 1 - \frac{1}{1+r^4 H}. \tag{3.7}
\]

Chapter 4

In which error bounds are found

Despite any intuitive appeal the previous chapter may have, and to some extent because of it, it is necessary to keep a most careful eye on any and all error terms that may arise. Previous dealings with superstars have demonstrated how delicate the balance is between the various limits we are dealing with. Our own investigations were occasionally led astray by the most innocuous of assumptions. Thus it is that we proceed with caution.

In this chapter, more formal arguments are given to back up the intuitive notions of the previous chapter. We find exact error bounds, ε_i, on the numerous types of error that finite L and B introduce, and show that all such errors tend to zero.
At the end of the chapter, all such error terms are collated to create the finite-case upper and lower bounds for the fixation probability on superstars.

4.1 Initial conditions

If mutations arise spontaneously and with equal probability in any node, then the initial mutant arises in a reservoir node with probability BL/(BL + 1 + HB). This probability can be made arbitrarily close to one for suitably large L. The mutant arises in a stem or root node with probability

\[
\epsilon_0 = \frac{1 + HB}{BL + 1 + HB}. \tag{4.1}
\]

Thus, the final fixation probability, ρ_H (see Eq. (6.1)), will need to include a 1 − ε_0 factor to account for this possibility. Because ε_0 is only used to derive the lower bound on ρ_H, assuming extinction of all mutants not arising in a reservoir node preserves the inequalities. By doing this we are effectively ignoring the small possibility that an invading mutant placed in a stem node or the root node could still reach fixation.

The first node of the stem gets replaced on the fast timescale, which allows us to treat its state as a random variable. However, at early stages of invasion, only one of the L upstream neighbours is a mutant. Hence, at any given time step, the top node is occupied by a mutant with probability r/(L − 1 + r) = (1 − ε_1) r/L, where

\[
\epsilon_1 = \frac{r-1}{L+r-1}. \tag{4.2}
\]

This mutant reproduces with a probability r/F_t and hence the probability that a mutant is placed in the second stem node is r²/(F_t(L + r − 1)) in each time step. However, we need to account for the possibility that the initial mutant in the reservoir is replaced before the first node in that stem. On a given time step, the chance that the reservoir mutant is replaced by a resident is less than 1/(F_t B L). Conversely, the probability of the first node in the chain being replaced exceeds L/F_t. Thus the chance that the initial mutant is replaced before the first node in the chain is

\[
\epsilon_2 < \frac{1}{1 + BL^2}. \tag{4.3}
\]

For our proof we assume that the first node in any chain can be treated as a random variable.
The above error terms ε_1 and ε_2 account for the slight simplifications we make in doing so.

4.2 Details of expected train length T

In section 3.3, we showed that the expected length of a train in a stem is

\[
T = (1-\alpha)^{H-2} \sum_{z=1}^{H-1} (H-z)\,\alpha^{z-1} \left[ \binom{H+z-4}{z-1} - \binom{H+z-4}{z-2} \right]. \tag{4.4}
\]

In this section, we identify upper and lower bounds on this expectation value, and deal with another source of minor error.

4.2.1 Bounding T

Finding an upper bound on T is very straightforward: because of the structure of the graph, H > T.

To find a useful lower bound we are forced to resort to algebraic manipulation. Various binomial coefficient identities are used throughout. It is assumed that r > 1 and hence 0 < α < 1/2.

\[
T = (1-\alpha)^{H-2} \sum_{z=1}^{H-1} (H-z)\alpha^{z-1}\left[\binom{H+z-4}{z-1} - \binom{H+z-4}{z-2}\right]
\]

using Pascal's rule

\[
= (1-\alpha)^{H-2} \sum_{z=1}^{H-1} (H-z)\alpha^{z-1}\left[\binom{H+z-3}{z-1} - 2\binom{H+z-4}{z-2}\right]
\]

splitting the sum

\[
= (1-\alpha)^{H-2} \sum_{z=1}^{H-1} (H-z)\alpha^{z-1}\binom{H+z-3}{z-1} - 2\alpha(1-\alpha)^{H-2} \sum_{z=1}^{H-1} (H-z)\alpha^{z-2}\binom{H+z-4}{z-2}
\]

changing the second summation to obtain a lower bound

\[
\ge (1-\alpha)^{H-2} \sum_{z=1}^{H-1} (H-z)\alpha^{z-1}\binom{H+z-3}{z-1} - 2\alpha(1-\alpha)^{H-2} \sum_{z=2}^{H} (H+1-z)\alpha^{z-2}\binom{H+z-4}{z-2}
\]

merging the sums and relabeling indices

\[
= (1-2\alpha)(1-\alpha)^{H-2} \sum_{z=0}^{H-2} (H-1-z)\,\alpha^{z}\binom{H+z-2}{z}
\]

expanding the factor

\[
= (1-2\alpha)(1-\alpha)^{H-2} \sum_{z=0}^{H-2} \bigl(2(H-1) - (H-1) - z\bigr)\alpha^{z}\binom{H+z-2}{z}
\]

using the combinatorial identity \((n+k)\binom{n+k-1}{k} = (n+k)\binom{n+k-1}{n-1} = n\binom{n+k}{n} = n\binom{n+k}{k}\)

\[
= (H-1)(1-2\alpha)(1-\alpha)^{H-2} \sum_{z=0}^{H-2} \alpha^{z}\left[2\binom{H+z-2}{z} - \binom{H+z-1}{z}\right]
\]

extending the sum to ∞ can only decrease the lower bound, because \(2\binom{n+k}{k} - \binom{n+k+1}{k} = \binom{n+k}{k} - \binom{n+k}{k-1} \le 0\) for k > n, and (1 − 2α) > 0

\[
\ge (H-1)(1-2\alpha)(1-\alpha)^{H-2} \sum_{z=0}^{\infty} \alpha^{z}\left[2\binom{H+z-2}{z} - \binom{H+z-1}{z}\right]
\]

using \((1-\alpha)^{-n-1} = \sum_{k=0}^{\infty} \alpha^k \binom{n+k}{k}\) since |α| < 1

\[
= (H-1)(1-2\alpha)(1-\alpha)^{H-2}\left[2(1-\alpha)^{1-H} - (1-\alpha)^{-H}\right] = (H-1)\left(\frac{1-2\alpha}{1-\alpha}\right)^2.
\]

And so we have

\[
T \ge (H-1)\left(1 - \frac{1}{r}\right)^2.
\]
(4.5)

Hence, for r > 1, the expected train length, T, can be made arbitrarily long by choosing a suitably long stem, H.

4.2.2 Train collisions

The above derivation of the expected train length neglects the possibility that two trains may collide and merge, which introduces a source of error. Any train collision effectively reduces the train length of the first train, because it is eroded from behind at a greater rate than expected. Therefore, collisions decrease the expected train length T – despite the fact that merging trains may lead to longer overall lengths – hence Eq. (3.2) overestimates the expected train length. A lower bound for T is obtained by assuming that the second train completely eradicates the first train. In order to find this lower bound we need to determine the probability that a train collision occurs. An upper bound on this collision probability is given by the probability that a second train is generated while another train is still occupying the stem, and can be formulated in terms of a negative binomial distribution, where the generation of a new train counts as a "success" while a decrease in length of the existing train in the stem counts as a "failure". If the rear of the train has incremented H times before a new train is generated, then the trains cannot even co-habit the stem, let alone collide.

In each time step a new train is generated with probability r²/(F_t(L + r − 1)), whereas the probability that the existing train length decreases, i.e. the resident directly behind the train reproduces, is at least 1/F_t (exactly 1/F_t along the stem, but greater for the first stem node). After H failure events we know that the stem must be cleared and contain only residents. Therefore, train collisions occur at most with the probability that a new train is generated prior to H failure events:

\[
P(\text{no 2nd train}) \ge \left(1 - \frac{r^2}{L + r - 1 + r^2}\right)^H > 1 - \frac{H r^2}{L + r - 1 + r^2}. \tag{4.6}
\]

The inequality in Eq.
(4.6) results from expanding and truncating the power term. This yields an alternating sum and, for sufficiently large L, the absolute value of the terms is strictly decreasing. The chance that a second train is launched while another one still occupies the stem is at most

\[
\epsilon_3 = \frac{H r^2}{L + r - 1 + r^2} \tag{4.7}
\]

and becomes small for L ≫ Hr². Thus, the true expected train length lies between T(1 − ε_3) and T.

4.3 Interaction of trains with root node

Here we calculate upper and lower bounds on the probability that a train of mutants, which arrives at the base of the stem with an initial length l, succeeds in taking over the root node and placing a new mutant in one of the reservoirs (T = E(l)). We adapt the technique used in Díaz et al. [2013] and consider a finite-state Markov process with two absorbing states: either a new mutant is placed in one of the reservoirs, or the mutant train has disappeared. All other states correspond to a current train length and state of the root node. This is illustrated in Fig. 4.1.

For each state, the probability of eventually succeeding is p_{i↕}, where i ≤ l indicates the current train length and the arrow indicates whether the root node is occupied by a mutant, ↑, or a resident, ↓. Clearly p_{0↓} = 0, because the train has disappeared, the root is a resident and hence an absorbing state has been reached. Similarly, p_{0↑} = r/(B + r) denotes the probability that the mutant in the root node reproduces before being replaced by the offspring of residents in any of the B branches. By examining all possible transitions we obtain:

\[
(B + r)\, p_{i\uparrow} = (B - 1)\, p_{i\downarrow} + r + p_{i-1\uparrow} \tag{4.8}
\]
\[
(r + 1)\, p_{i\downarrow} = r\, p_{i\uparrow} + p_{i-1\downarrow} \tag{4.9}
\]

If the train is i mutants long and the root is a mutant, the system can transition into the "successful" absorbing state with relative weight r, can lose the root mutant with relative weight B − 1 (because there are B − 1 other branches), or the train may erode with relative weight 1, leaving the root node unchanged.
If the root node is a resident, then the only possible actions are for the train to replace the root node, or for the train to be eroded from behind, with relative weights r and 1 respectively.

Figure 4.1: Possible states of the end of the stem, along with the transitions between them. Here mutants are yellow, residents blue. States further to the left, with longer mutant trains, have been omitted. We note the similarity to the argument of Díaz et al. [2013], but emphasize the simplicity gained by considering only the base of the stem, rather than the entire branch. This system contains far fewer states and far fewer transitions, making it more amenable to analytic reasoning.

Finally, the coefficients on the left-hand side of the equations normalise over all possible courses of action. Written as a matrix equation this gives:

\[
\begin{bmatrix} B+r & 1-B \\ -r & r+1 \end{bmatrix} \begin{bmatrix} p_{i\uparrow} \\ p_{i\downarrow} \end{bmatrix} = \begin{bmatrix} r \\ 0 \end{bmatrix} + \begin{bmatrix} p_{i-1\uparrow} \\ p_{i-1\downarrow} \end{bmatrix}
\]

which yields

\[
\begin{bmatrix} p_{i\uparrow} \\ p_{i\downarrow} \end{bmatrix} = \frac{1}{B + 2r + r^2} \begin{bmatrix} r+1 & B-1 \\ r & r+B \end{bmatrix} \left( \begin{bmatrix} r \\ 0 \end{bmatrix} + \begin{bmatrix} p_{i-1\uparrow} \\ p_{i-1\downarrow} \end{bmatrix} \right). \tag{4.10}
\]

We now calculate both upper and lower bounds on the expected success probability, E(p_{l↓}), when our train first arrives at the root.

4.3.1 An upper bound on train success probability

By neglecting terms from the denominator of our fraction, and increasing several of our matrix entries, we find an upper bound on the R.H.S. of Eq. (4.10). This works because p_{i↑}, p_{i↓} ≥ 0, and thus we are making the R.H.S. strictly more positive. It also significantly simplifies our equation:

\[
\begin{bmatrix} p_{i\uparrow} \\ p_{i\downarrow} \end{bmatrix} < \frac{1}{B + 2r + 1} \begin{bmatrix} r+1 & B+r \\ r+1 & r+B \end{bmatrix} \left( \begin{bmatrix} r \\ 0 \end{bmatrix} + \begin{bmatrix} p_{i-1\uparrow} \\ p_{i-1\downarrow} \end{bmatrix} \right)
\]

where the inequality applies element-wise. Substituting p_{0↑} = r/(B + r), p_{0↓} = 0 into the above gives

\[
p_{1\updownarrow} < \frac{r^2 + r}{B + 2r + 1}\left(1 + \frac{1}{B+r}\right).
\]

Using the fact that the upper bounds for p_{i↑} and p_{i↓} are equal (because both rows of the matrix are identical) gives:

\[
p_{i\updownarrow} < \frac{r^2+r}{B+2r+1} + \frac{(r+1)+(B+r)}{B+2r+1}\, p_{i-1\updownarrow} = \frac{r^2+r}{B+2r+1} + p_{i-1\updownarrow}.
\]

By induction we find

\[
p_{i\updownarrow} < \frac{r^2+r}{B+2r+1}\left(i + \frac{1}{B+r}\right).
\]
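As a numerical sanity check (not part of the proof), the exact recursion Eq. (4.10) can be iterated directly from p_{0↑} = r/(B + r), p_{0↓} = 0; for illustrative B and r the resulting p_{i↓} stay below the induction bound just derived and remain close to i r²/B:

```python
def success_probs(B, r, imax):
    """Iterate the exact recursion Eq. (4.10), returning the lists of
    probabilities p_up[i], p_down[i] that a train of length i at the
    base of the stem eventually places a mutant in a reservoir (with
    the root currently a mutant / resident respectively)."""
    det = B + 2 * r + r ** 2
    p_up, p_down = r / (B + r), 0.0  # p_{0 up}, p_{0 down}
    ups, downs = [p_up], [p_down]
    for _ in range(imax):
        u, d = r + p_up, p_down      # source term plus previous state
        p_up = ((r + 1) * u + (B - 1) * d) / det
        p_down = (r * u + (r + B) * d) / det
        ups.append(p_up)
        downs.append(p_down)
    return ups, downs

ups, downs = success_probs(1000, 1.5, 10)  # B and r illustrative
```

For B ≫ H the computed p_{i↓} track i r²/B to within a few percent, consistent with the error terms ε_{4+} and ε_{4−} derived in this section.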
The induction result above yields an upper bound for p_{i↑}, which we then use in calculating a tighter bound for p_{i↓}. From Eq. (4.8) and Eq. (4.9) we can derive:

\[
p_{i\uparrow} < \frac{B\, p_{i\downarrow} + r + p_{i-1\uparrow}}{B}
\]
\[
(r+1)\, p_{i\downarrow} < \frac{r}{B}\left(B\, p_{i\downarrow} + r + p_{i-1\uparrow}\right) + p_{i-1\downarrow}
\]
\[
p_{i\downarrow} < \frac{r}{B}\left(r + p_{i-1\uparrow}\right) + p_{i-1\downarrow}
\]
\[
\Rightarrow\quad p_{i\downarrow} < \sum_{n=1}^{i} \frac{r}{B}\left(r + p_{n-1\uparrow}\right) \le \frac{i r^2}{B}\left(1 + \frac{r+1}{(B+2r+1)(B+r)} + \frac{H-1}{2}\cdot\frac{r+1}{B+2r+1}\right)
\]

And thus we have p_{i↓} < i r²(1 + ε_{4+})/B, where

\[
\epsilon_{4+} = \frac{1+r}{B+2r+1}\left(\frac{1}{B+r} + \frac{H-1}{2}\right) \tag{4.11}
\]

is our error term. This error term can be made arbitrarily small for sufficiently large B.

4.3.2 A lower bound on train success probability

For the lower bound, we must deal with the possibility that a small number δ of branches contain mutants. Because the train collision argument (section 4.2.2) depends on at most a single mutant existing in each branch, we wish to consider only reproductive events which place new mutants into branches that currently have no mutants, neglecting the rest (this causes no issue when calculating a lower bound). Further, trains from other mutants may compete for control of the root node; this must be accounted for. Although generically trains do not encounter one another, this lower bound is calculated as if all other mutant-occupied branches have mutants at the base of their stems at all times. This arrangement, while unrealistic, describes the situation which minimizes the success probability of a given train, and is thus useful for finding lower bounds. The following equations are written under the assumption of this "worst case scenario" (worst from the perspective of the train we are focusing on).

\[
\begin{bmatrix} B + r + (\delta-1)(r-1) & 1 - B - (\delta-1)(r-1) \\ -r & r+1 \end{bmatrix} \begin{bmatrix} p_{i\uparrow} \\ p_{i\downarrow} \end{bmatrix} = \begin{bmatrix} r\frac{B-\delta}{B} \\ 0 \end{bmatrix} + \begin{bmatrix} p_{i-1\uparrow} \\ p_{i-1\downarrow} \end{bmatrix}
\]

\[
\begin{bmatrix} p_{i\uparrow} \\ p_{i\downarrow} \end{bmatrix} = \frac{1}{B + \delta(r-1) + 1 + r + r^2} \begin{bmatrix} r+1 & B + \delta(r-1) - r \\ r & 1 + B + \delta(r-1) \end{bmatrix} \left( \begin{bmatrix} r\frac{B-\delta}{B} \\ 0 \end{bmatrix} + \begin{bmatrix} p_{i-1\uparrow} \\ p_{i-1\downarrow} \end{bmatrix} \right)
\]

By ignoring the positive effects of p_{i−1↑} on p_{i↓} we form the inequality

\[
p_{i\downarrow} > \frac{1}{B + \delta(r-1) + 1 + r + r^2}\left(r^2\frac{B-\delta}{B} + \bigl(1 + B + \delta(r-1)\bigr)\, p_{i-1\downarrow}\right)
\]

for i ≥ 1.
By induction this leads to

\[
p_{i\downarrow} > \frac{B-\delta}{B}\,\frac{r^2}{B + \delta(r-1) + 1 + r + r^2} \sum_{n=0}^{i-1} \left(\frac{B + 1 + \delta(r-1)}{B + \delta(r-1) + r^2 + r + 1}\right)^n = \frac{B-\delta}{B}\,\frac{r^2}{B + \delta(r-1) + 1 + r + r^2}\cdot\frac{1 - \left(\frac{B+1+\delta(r-1)}{B+\delta(r-1)+r^2+r+1}\right)^i}{1 - \frac{B+1+\delta(r-1)}{B+\delta(r-1)+r^2+r+1}}
\]

By rewriting \(\frac{B+1+\delta(r-1)}{B+\delta(r-1)+r^2+r+1}\) as \(1 - \frac{r^2+r}{B+\delta(r-1)+r^2+r+1}\), and then simplifying the denominator, we find

\[
p_{i\downarrow} > \frac{B-\delta}{B}\,\frac{r^2}{r + r^2}\left(1 - \left(1 - \frac{r^2+r}{B+\delta(r-1)+r^2+r+1}\right)^i\right)
\]

Using the binomial theorem to expand the inner bracket, we find an alternating series. As long as i(r² + r)/(B + δ(r − 1) + r² + r + 1) < 1, the series expansion of the inner bracket will give an alternating sequence with monotone decreasing absolute terms. Truncating the series after three terms will preserve the inequality, because the sum of the first three terms is greater than any subsequent partial sum. This leads to:

\[
p_{i\downarrow} > \frac{B-\delta}{B}\,\frac{i r^2}{B+\delta(r-1)+r^2+r+1}\left(1 - \frac{i-1}{2}\cdot\frac{r^2+r}{B+\delta(r-1)+r^2+r+1}\right).
\]

Remembering that 1/(1 + x) ≥ 1 − x whenever x > −1, we find:

\[
p_{i\downarrow} > \frac{i r^2}{B}\left(1 - \frac{\delta}{B} - \frac{\delta(r-1)+r^2+r+1}{B} - \frac{H-1}{2}\cdot\frac{r^2+r}{B} + O(B^{-2})\right)
\]

We are free to drop the very small positive terms at the end, and find the result p_{i↓} > (1 − ε_{4−}) i r²/B, where

\[
\epsilon_{4-} = \frac{\delta r + r^2 + r + 1}{B} + \frac{(H-1)(r^2+r)}{2B} \tag{4.12}
\]

can be made small whenever δ, H ≪ B. Thus E(p_{l↓}) > T r² B⁻¹(1 − ε_{4−}). Armed with the constraints

\[
(1 - \epsilon_{4-})\, i r^2/B < p_{i\downarrow} < (1 + \epsilon_{4+})\, i r^2/B, \tag{4.13}
\]

we note that E(p_{l↓}) ≈ E(l) r²/B = T r²/B.

4.4 Bounds on fixation probabilities

4.4.1 Upper bound

The probability of transitioning from 1 to 2 mutants in the reservoirs serves as an upper bound on the mutant fixation probability. We therefore make several optimistic (from the mutant's point of view) assumptions:

1. The original mutant appears in a reservoir node (ignoring ε_0, see Eq. (4.1)).
2. Simplified train launching probability (ignoring ε_1, see Eq. (4.2)).
3. No detrimental effects based on our initial conditions (ignoring ε_2, see Eq. (4.3)).
4. No train collisions (ignoring ε_3, see Eq. (4.7)).
5.
We use the upper bound for the probability that a train succeeds in producing another reservoir mutant (T r^2 (1+\epsilon_{4+})/B, see Eq. (4.11)).

A single mutant in any reservoir produces a new train with a probability of at most r^2/(F_t L) per time step. Subsequently, each train succeeds in placing another mutant in any reservoir with a probability of at most (1+\epsilon_{4+}) T r^2 / B. At the same time, the central root node has a probability of at least \frac{B-1}{B+r-1} \cdot \frac{1}{F_t B L} to remove the mutant node from the reservoir. Thus, the chance of the mutant producing a successful train before being erased by the root node is at most:

\rho_H \le \rho_{H+} = \frac{T r^4 (1+\epsilon_{4+})}{T r^4 (1+\epsilon_{4+}) + \frac{B-1}{B+r-1}} \approx 1 - \frac{1}{T r^4 + 1}   (4.14)

The approximation in Eq. (4.14) becomes exact in the limit of large B. Note that because T < H we can replace T by H in Eq. (4.14) to obtain a more generous upper bound.

4.4.2 Lower bound

On the slow timescale the dynamics can be approximated by a random walk X_t on the number of mutants in the reservoir. When reservoir mutants are rare, our forward bias is as calculated previously. Unfortunately, because the analytic arguments in the previous sections are based on the assumption of having only a few reservoir mutants, further arguments must be made to determine a lower bound on the forward bias when reservoir mutants are common.

All changes made in the reservoir can ultimately be treated as the descendants of some reservoir occupant replacing the occupant of some other reservoir node. The chance of any particular reservoir mutant replacing any particular reservoir resident is always higher than the converse: even in the worst case scenario, where a mutant is competing with L-1 other mutants in its branch and a reservoir resident suffers from no such competition (thus eradicating any advantage the mutant might attain along the stem), the mutant will still have an advantage of at least r when competing for and propagating from the central root node.
This would indicate that, conditioning on a particular pair of nodes replacing one another, the bias must be at least r^2. Because all pairwise interactions between reservoir nodes are biased in the mutants' favour, the random walk itself must be at least somewhat biased in the mutants' favour for all values of X_t. A random walk on the integers from 0 to BL+1 with forward bias γ for X_t < δ ≪ B and no bias for X_t ≥ δ will thus significantly underestimate the fixation probability of the true process. We bound our walk at BL+1 rather than BL because we require not only that all reservoir nodes are mutants, but also that this propagates down to the root and all stem nodes. By overshooting our intended target we demand that the system remain in the state BL long enough to propagate forward – and hence long enough that all stems will have been replaced by mutant reservoir nodes multiple times (as changes to the reservoir nodes are on a slower timescale). Please note that the actual superstar never enters a state with BL+1 reservoir mutants; this is merely a useful tool for dealing with the random walk. The fixation probability of the random walk described thus acts as a lower bound on the fixation probability of the true process. In the following, we assume that H and r are fixed and that B, L ≫ H.

In order to determine the fixation probability of the random walk X_t, we construct a martingale, Q(X_t). A martingale is a stochastic process such that the expected value in the next time step is equal to the current value:

E(Q(X_{t+1}) | X_1, X_2, ..., X_t) = Q(X_t).   (4.15)

Because our system is Markov, it is sufficient to condition only on X_t. For Q(X_t) to be a martingale, we thus require:

Q(k) = \frac{\gamma}{1+\gamma} Q(k+1) + \frac{1}{1+\gamma} Q(k-1)   for 0 < k < δ  (forward bias)
Q(k) = \frac{1}{2} Q(k+1) + \frac{1}{2} Q(k-1)   for δ ≤ k < BL+1  (no bias).   (4.16)

These constraints admit the solution Q(k) = γ^{-k} for k < δ, and Q(k) = Ak + D for k ≥ δ. For Q(k) to satisfy the martingale conditions as needed, we demand δ ∈ ℕ.
The constants A, D are determined by connecting the solutions for the two regions. In particular,

γ^{-δ} = Q(δ) = Aδ + D

must hold such that Q(δ) is well defined, and

2(Aδ + D) = γ^{-δ+1} + Aδ + A + D

to satisfy the martingale property at δ. Thus,

A = γ^{-δ}(1 - γ)
D = γ^{-δ}(1 - δ(1 - γ))

which yields

Q(0) = 1
Q(1) = γ^{-1}
Q(BL+1) = γ^{-δ} + γ^{-δ}(1 - γ)(BL + 1 - δ).

Let τ be the first time X_t reaches one end of the random walk. Because Q(k) is bounded for all relevant values of k, we are able to invoke the optional stopping theorem [Klenke, 2006, p. 210]. The O.S.T. then implies that

Q(1) = Q(X_0) = E(Q(X_τ)) = Q(0) P(0) + Q(BL+1) P(BL),

where P(0) and P(BL) represent the probabilities of reaching either end of our random walk. Using P(0) = 1 - P(BL) we find

P(BL) = \frac{Q(1) - Q(0)}{Q(BL+1) - Q(0)} = \frac{1 - γ^{-1}}{1 - γ^{-δ} - γ^{-δ}(1-γ)(BL+1-δ)}.   (4.17)

In order to keep error terms small, we must select δ such that γ^δ ≫ BL and δ ≪ B, L. As long as B and L are large and sufficiently similar, this is possible provided that γ > 1 (to be shown). Thus, for any choice of H, and for any r > 1, we can select B and L such that

P(BL) = \frac{Q(1) - Q(0)}{Q(BL+1) - Q(0)} = \frac{1 - γ^{-1}}{1 + \epsilon_5}   (4.18)

with

\epsilon_5 = γ^{-δ}\left((γ-1)(BL+1-δ) - 1\right) ≪ 1.

In order to bound \epsilon_5, a lower bound on γ is required. This is found by taking the lower bound on the production rate of successful trains, (1-\epsilon_1)(1-\epsilon_3)(1-\epsilon_{4-}) T r^4 / (B L F_t), and comparing to our upper bound on the removal probability for reservoir mutants, 1/(B L F_t). This represents the eventual forward bias once the first node in the stem has been replaced at least once. To account for the possibility of mutant loss before the first stem node is replaced we must consider \epsilon_2, which acts as an additive penalty (because it only applies once per reservoir mutant). This yields

γ ≥ r^4 T (1-\epsilon_1)(1-\epsilon_3)(1-\epsilon_{4-}) - \epsilon_2.   (4.19)

In the limit of large B and L all error terms tend to zero. Thus, to show γ > 1, it is sufficient to show that r^4 T > 1.
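The martingale construction and the resulting hitting probability can be cross-checked numerically. The Python sketch below uses illustrative values γ = 2, δ = 5 and BL = 40 (assumptions for demonstration only); it verifies that the piecewise Q satisfies the martingale conditions of Eq. (4.16) at every interior state, and that the closed form of Eq. (4.17) agrees with a first-principles computation of the hitting probability via the standard product formula for birth-death chains.

```python
# Illustrative parameters only: gamma = forward bias, delta = bias cutoff,
# M = BL + 1 = upper absorbing state of the random walk.
gamma, delta, M = 2.0, 5, 41

A = gamma ** (-delta) * (1 - gamma)
D = gamma ** (-delta) * (1 - delta * (1 - gamma))

def Q(k):
    # gamma^{-k} below delta, glued to the linear tail A*k + D above it.
    return gamma ** (-k) if k <= delta else A * k + D

# The two pieces agree at the junction, and Q is a martingale everywhere:
# forward bias gamma below delta, a fair step from delta upward.
assert abs(gamma ** (-delta) - (A * delta + D)) < 1e-15
for k in range(1, M):
    if k < delta:
        step = (gamma * Q(k + 1) + Q(k - 1)) / (1 + gamma)
    else:
        step = 0.5 * (Q(k + 1) + Q(k - 1))
    assert abs(step - Q(k)) < 1e-12

# Probability of hitting M before 0, starting from 1, from first principles:
# sum of products of (down-prob / up-prob) over the interior states.
s, prod = 1.0, 1.0
for i in range(1, M):
    prod *= 1 / gamma if i < delta else 1.0
    s += prod
p_hit = 1 / s

# ... which agrees with the closed form of Eq. (4.17).
closed = (1 - 1 / gamma) / (1 - gamma ** (-delta)
                            - gamma ** (-delta) * (1 - gamma) * (M - delta))
assert abs(p_hit - closed) < 1e-12
```

The same agreement holds for any γ > 1 and integer 0 < δ < M, which is exactly the content of the optional stopping argument above.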
Recalling that A_t - Z_t represents the length of a train at time t (see 4.2), and noting that it is a submartingale (the expected future value is greater than the current value) whenever r > 1, we can easily show that T = E(A_τ - Z_τ) ≥ A_0 - Z_0 = 1, and thus, in the limit, γ ≥ r^4 > 1. Thus \epsilon_5 can be made arbitrarily small.

Substituting Eq. (4.19), our lower bound on γ, into Eq. (4.18) yields a lower bound on P(BL), which in turn provides a lower bound on the fixation probability.

In cases where generic bounds not dependent on T are desired, the lower bound (H-1)(1-1/r)^2 can be substituted into Eq. (4.19) in T's place, resulting in a looser bound on fixation probability.

4.5 Bringing it all together

Collating Eq. (4.18) and Eq. (4.14), we find:

\frac{1-\epsilon_0}{1+\epsilon_5}\left(1 - \frac{1}{r^4 T (1-\epsilon_1)(1-\epsilon_3)(1-\epsilon_{4-}) - \epsilon_2}\right) \le \rho_H \le 1 - \frac{\frac{B-1}{B+r-1}}{T r^4 (1+\epsilon_{4+}) + \frac{B-1}{B+r-1}}   (4.20)

where

T = (1-α)^{H-2} \sum_{z=1}^{H-1} (H-z) α^{z-1} \left[\binom{H-4+z}{z-1} - \binom{H-4+z}{z-2}\right], with (H-1)(1-r^{-1})^2 ≤ T ≤ H — length of train in stem, Eq. (3.2), section 4.2.1;

\epsilon_0 = \frac{1+HB}{BL+1+HB} — chance that initial mutant is not in reservoir, Eq. (4.1);

\epsilon_1 = \frac{r-1}{L+r-1} — simplification of train launch probability, Eq. (4.2);

\epsilon_2 = \frac{1}{1+BL^2} — mutant removal before top of stem updates, Eq. (4.3);

\epsilon_3 = \frac{H r^2}{L+r-1+r^2} — upper bound on train collisions, Eq. (4.7);

\epsilon_{4-} = \frac{δr + r^2 + r + 1}{B} + \frac{(H-1)(r^2+r)}{2B} — train success, lower bound, Eq. (4.12);

\epsilon_{4+} = \frac{r+1}{B+2r+1}\left(\frac{1}{B+r} + \frac{H-1}{2}\right) — train success, upper bound, Eq. (4.11);

\epsilon_5 = γ^{-δ}\left[(γ-1)(BL+1-δ) - 1\right] — martingale error term, Eq. (4.18).

All error terms can be simultaneously made small when B, L ≫ H, δ and (r^4 T)^δ ≫ BL. To find looser upper and lower bounds independent of T we can substitute our upper and lower bounds for T (respectively) into Eq. (4.20).

In the particular limit B → ∞, with √B - 1 < δ ≤ √B and L = B, all error terms disappear.
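In this limit the bounds collapse to 1 - 1/(r^4 T) ≤ ρ_H ≤ 1 - 1/(1 + r^4 T). As a quick numerical illustration (not part of the proof), the Python sketch below evaluates T from Eq. (3.2) — mirroring the makeTrainLength function in Appendix A.1, with α = 1/(1+r) — and then the limiting bounds for the example values r = 2, H = 50.

```python
from math import comb

def train_length(H, r):
    # Expected train length T, Eq. (3.2); mirrors makeTrainLength in
    # Appendix A.1 (there k = H + 2, with a special case at H = 2).
    if H == 2:
        return 1.0
    alpha = 1 / (1 + r)
    total = 0.0
    for z in range(1, H):
        term = comb(H - 4 + z, z - 1) - (comb(H - 4 + z, z - 2) if z >= 2 else 0)
        total += (H - z) * alpha ** (z - 1) * term
    return (r / (1 + r)) ** (H - 2) * total

r = 2.0
# Closed-form special cases quoted in chapter 6: T(2) = 1, T(3) = 2r/(1+r).
assert train_length(2, r) == 1.0
assert abs(train_length(3, r) - 2 * r / (1 + r)) < 1e-12

# Limiting bounds for r = 2, H = 50; T evaluates to roughly 13.25 here.
T = train_length(50, r)
assert (50 - 1) * (1 - 1 / r) ** 2 <= T <= 50   # generic bounds on T
lower = 1 - 1 / (r ** 4 * T)
upper = 1 - 1 / (1 + r ** 4 * T)
```

Both bounds land near 0.9953 for these values, in line with the concrete example discussed next.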
Other L, B relations are possible – for example L = B^β will also work if β > 0, although L = βB is liable to cause problems for our \epsilon_5 bound, as it is not immediately clear that γ^{-δ}βB must tend to zero in the limit.

Even though fixation probabilities can be made arbitrarily close to 1 on large superstars with sufficiently large H, the fixation probability remains bounded away from 1 for any finite graph. As a concrete example, consider r = 2 and H = 50, which leads to T ≈ 13.25 and 0.995283 ≤ ρ_{50} ≤ 0.995306 in the appropriate limits of large B, L. In comparison, a sizable superstar with B = L = 5000 (N ≈ 2.5·10^5) yields 0.985323 ≤ ρ_{50}^N ≤ 0.995375, which includes all error terms. The fixation probability for a similarly sized isothermal graph is just short of 0.5. It is interesting to note that in this case, the greatest source of error (in the lower bound) is the possibility that the original mutant will arise in either the stem or root, leading to fixation with low probability.

Chapter 5

In which some loose ends are tied up and others are left open

It turns out that superstars act as evolutionary amplifiers only under very specific conditions. Here we discuss the most important requirements.

5.1 Selection & sequence

The original Moran process is formulated as a fecundity-based birth-death process; that is, fitness affects the rate of birth (reproduction) whereas death (replacement) occurs uniformly at random. Alternatively, fitness could just as well affect survival, such that birth events occur uniformly at random but death events occur with probability inversely proportional to fitness. Similarly, the sequence of events could be reversed such that first an individual dies and then the remaining individuals compete to repopulate the vacant site.
This yields a total of four distinct scenarios: Bd, bD, dB and Db, where capital letters refer to the fitness-dependent selection step. The original Moran process corresponds to Bd and the fixation probability is given in Eq. (1.1). In unstructured populations the four dynamical scenarios result in only marginal differences in fixation probabilities. However, they can have crucial effects on the evolutionary outcome in structured populations [Ohtsuki et al., 2006, Ohtsuki and Nowak, 2006, Zukewich et al., 2013]. Frean and Baxter [2008] examine all four cases for both complete graphs and star graphs, showing that stars act as evolutionary suppressors in both the dB and Db cases, and are significantly less effective in the bD case compared to the original Bd case. Similar results apply to superstars:

bD updates: For the birth-death process with selection on survival, mutants only gain any advantage when the root node reproduces. Whenever any other node reproduces, there is only a single downstream node, and thus no opportunity for competition, rendering any fitness advantage irrelevant. This lack of advantage in the stem leads to an expected train length of 1, regardless of stem length or the mutant's fitness. The chance of launching a successful train in a given time step is 1/(NLB) and the chance of replacing the original mutant is 1/(NBLr - N(r-1)). This results in a bias of approximately \frac{r}{1+r} for the initial mutant to eventually create a second reservoir mutant – the same bias as for the original Moran process. Thus, we might expect fixation probabilities similar to the original Moran process on BL nodes, and certainly nowhere close to the amplification observed for Bd.

Db updates: For the death-birth process with selection on survival, the prospects of mutants drop even further. The probability to successfully place even a single offspring in the top of the stem is only r/(L + r).
As the train propagates along the stem it tends to grow, because the mutant at the back of the train is less likely to die than the resident in front of it, leading to the same train dynamics observed for the Bd process. Note that for death-birth processes the top of the stem no longer changes on the fast timescale and hence trains start at the top instead of the second node. Upon reaching the end of the stem the train competes with the other branches for control over the root node and succeeds with probability near rT/B (over the lifetime of the train). Once a mutant occupies the root, it is predestined to have many offspring – in each time step a reservoir node dies with high probability and gets replaced by an offspring of the mutant in the root node, whereas the probability is low that the root node is replaced. More specifically, we expect rN/(1 + r) reservoir nodes to become mutants before the root node is replaced. At that point it is reasonable to assume that mutants reach fixation with high probability. We conjecture that the probability of mutant fixation on the superstar is close to the probability of a mutant eventually being placed in the root node. Thus, we expect a fixation probability close to r^2 T/(BL). This result is significantly less than the almost certain fixation found in superstars for the Bd process, and is in fact smaller than even the fixation probability of the original Moran process (c.f. Eq. (1.1)), suggesting that for Db dynamics superstars act as strong evolutionary suppressors. The result does, however, match well with the 1/N scaling found for fixation on stars [Frean and Baxter, 2008].

dB updates: The final case is the death-birth process with selection on reproduction. Once again the probability of placing a mutant offspring in the stem before losing the reservoir mutant is near r/L, but now without further benefits along the stem. Consequently, trains that do reach the root still have an expected length of 1.
Thus, each train has a probability of roughly r/B of claiming the root node, which then produces N/2 mutants in the reservoirs, on average – enough to suggest fixation with high probability, but less than for Db. Thus, we conjecture fixation probabilities near r^2/N – the worst outcome of the four scenarios. Once again, we note the significant penalty as compared to the original Moran process, as well as the similarities to the 1/N scaling for stars [Frean and Baxter, 2008].

5.2 Mutations

Even though we did not explicitly model the process of mutation, we implicitly assumed that mutations are rare and arise spontaneously in any node selected uniformly at random. For the superstar this means that most mutations arise in a reservoir node – simply because the overwhelming majority of nodes are reservoir nodes.

An alternative and equally natural assumption is that mutations arise during reproduction events. Such a change does not affect the fixation probabilities in the original Moran process. However, in highly heterogeneous population structures crucial differences in the fixation probabilities can arise because mutants preferentially arise in certain locations [Maciejewski et al., 2014]. For superstars, when using the Bd or bD update rules, mutants most likely arise at the top of a stem. This is an unfortunate position because the mutant is likely to be replaced before reproducing even once – extinction is almost certain.
In contrast, for Db and dB, mutants again most likely arise among the reservoir nodes – but for those updates superstars do not act as evolutionary amplifiers.

Even though the dynamical properties of superstars are intriguing, the list of caveats demonstrates that the evolutionary amplification is highly sensitive to the details of the model – maybe this is the reason that superstar-like structures have not been reported in nature.

5.3 Deleterious mutations, r < 1

As a final remark, we note that all discussion so far is based on the assumption of a beneficial mutation, r > 1. In addition to promoting beneficial mutations, an evolutionary amplifier must also suppress the fixation of deleterious mutations, r < 1. Here we argue that a deleterious mutant indeed disappears almost surely on sufficiently large superstars.

Consider a single mutant with fitness 1, in a population of residents with fitness 1/r. Note that we can rescale "fitness" without changing the dynamics of the system, since fitness is used only as a weighting factor and never in an absolute sense; only relative fitness matters. We next observe that all calculations performed previously with respect to a rare mutant with a fitness advantage would now apply to a resident, if it were to become rare – that is, if residents were rare we would expect trains of residents to propagate down the stem, incrementing with probability 1/(r+1) > 1/2 and shrinking with probability α = r/(1+r) < 1/2, leading to T > (H-1)(1-r)^2. The same martingale argument that we previously used to find a lower bound on mutant fixation probability can now be used to form a lower bound on resident fixation probability. This time we use X_t to track the number of residents in our reservoir nodes. Thus X_0 = BL - 1. X_t can be treated as a random walk that has forward bias γ ≈ r^{-4} T when residents are rare, and no bias otherwise.
This leads to a formula for the fixation probability of residents:

1 - ρ_H ≥ P(BL) = \frac{Q(0) - Q(BL-1)}{Q(0) - Q(BL)} = \frac{1 - γ^{-δ} - γ^{-δ}(1-γ)(BL-1-δ)}{1 - γ^{-δ} - γ^{-δ}(1-γ)(BL-δ)} = 1 - \frac{γ^{-δ}(γ-1)}{1 - γ^{-δ} - γ^{-δ}(1-γ)(BL-δ)}

ρ_H ≤ \frac{γ^{-δ}(γ-1)}{1 - γ^{-δ} - γ^{-δ}(1-γ)(BL-δ)}

ρ_H ≤ γ^{1-δ}

In the above we require γ to be the bias in favour of the resident. In order to calculate it we need to find the expected train lengths of our resident trains. Structurally the train equation is the same as previously, but with r replaced by 1/r (the fitness of our resident strain). Because 1/r is now greater than one, all train length arguments that previously depended on r > 1 for beneficial mutations can now be applied to the resident when 1/r > 1. Thus, we can show that the expected length of resident trains is large. Because γ ≈ r^{-4} T, large T leads directly to a large γ (in the resident's favour), and γ^{-δ} BL ≪ 1. Our martingale will significantly underestimate the fixation probability for our resident; thus we expect the above bound to overestimate the probability of mutant fixation. In order to calculate exact bounds on our new γ we must apply error terms similar to \epsilon_1, \epsilon_2, \epsilon_3, \epsilon_{4-}, giving us lower bounds on the effectiveness of "resident trains". No error term equivalent to \epsilon_0 will be required, as the possibility of our initial mutant being placed not in a reservoir node can only reduce our mutant fixation probability. Thus, it can be seen that the fixation probability of a deleterious mutant tends to zero, and in particular that ρ_H ≤ γ^{1-δ}. In the limit as B and L are taken to infinity (in an appropriate manner) we find ρ_H ≤ (r^{-4} T)^{1-δ}.

Chapter 6

In which concluding remarks occur

In this thesis I have explained and explored two mutually contradictory proofs. In doing so, I went beyond the counter-example provided by Díaz et al. [2013], and managed to identify the errors in Lieberman et al.
[2005] that led the argument astray, thus plugging a distinctive gap in the literature, and providing an explanation for the contradiction between the two proofs, rather than merely a proof that one of them was wrong. Further than that, I was able to determine true limits on the fixation probability of superstars.

In completing this proof I have drawn upon a wide variety of mathematical and computational techniques, ranging from basic simulations, induction, and a wide range of algebraic identities, to more conceptual tools such as martingales and reflection principles.

While this paper may have answered at least one question regarding superstars – namely the identification of the true fixation probability – many questions still remain. While chapter 5 gives some suggestions regarding how to proceed in finding the fixation probability under a variety of different regimes, it by no means provides solid proof. Lieberman et al. [2005] state that the fixation probabilities for both funnels and metafunnels are also governed by Eq. (1.3), but given the difficulties regarding superstars, it would seem prudent for such claims to be investigated further. One might also question the interaction between the superstar structures and fixation time – in pushing the fixation probability towards certainty, do we also push the fixation time to infinity, placing our so-called certainty out of reach?

Superstars represent the most prominent representatives of evolutionary amplifiers – structures that are capable of increasing selection and suppressing random drift. For r > 1 and in the limit of large N we have derived upper and lower bounds for the fixation probability, ρ_H:

1 - \frac{1}{r^4 T} ≤ ρ_H ≤ 1 - \frac{1}{1 + r^4 T},   (6.1)

where (H-1)(1-1/r)^2 ≤ T ≤ H.

The upper bound for ρ_H in Eq. (6.1) results in a contradiction with the originally reported fixation probability, Eq. (1.3), for sufficiently large r.
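These claims can be checked directly in a few lines of Python. The identities below follow from substituting T = 1 (the H = 2 case) and T = 2r/(1+r) (the H = 3 case) into the bias T r^4/(1 + T r^4); the comparison value r^k/(1+r^k) with k = H + 2 is Lieberman et al.'s predicted bias as coded in Appendix A.1.

```python
# Substituting the expected train lengths into the bias T r^4 / (1 + T r^4):
# H = 2 recovers r^4/(1+r^4); H = 3 recovers Diaz et al.'s 2r^5/(1+r+2r^5),
# which Lieberman et al.'s r^5/(1+r^5) strictly exceeds for every r > 1.
for r in (1.1, 1.5, 2.0, 5.0):
    # H = 2: T = 1.
    assert abs(1 * r**4 / (1 + 1 * r**4) - r**4 / (1 + r**4)) < 1e-12

    # H = 3: T = 2r/(1+r).
    T3 = 2 * r / (1 + r)
    bias_H3 = T3 * r**4 / (1 + T3 * r**4)
    assert abs(bias_H3 - 2 * r**5 / (1 + r + 2 * r**5)) < 1e-12

    # The originally reported bias (k = H + 2 = 5) is strictly larger,
    # exhibiting the contradiction.
    assert r**5 / (1 + r**5) > bias_H3
```

The final inequality reduces algebraically to 1 + r > 2, so the contradiction at H = 3 in fact appears for every r > 1, not only for large r.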
For the specific case of H = 3 the discrepancy was pointed out in Díaz et al. [2013]. At the same time, the lower bound for ρ_H confirms that superstars are indeed capable of providing an arbitrarily strong evolutionary advantage to any beneficial mutation, as suggested in Lieberman et al. [2005]. Using symmetry arguments, it also follows that for r < 1 the fixation probability can be made arbitrarily small (see 5.3).

In the case H = 2 (or k = 4 in Lieberman et al. [2005]) we obtain an expected train length of T = 1 and recover the original bias, \frac{r^4}{1+r^4}. Discrepancies arise only for H ≥ 3 (or k ≥ 5), but those cases were not included in the simulations in Lieberman et al. [2005]. For H = 3, we obtain T = 2r/(1+r), which results in a bias of \frac{2r^5}{1+r+2r^5} and recovers the upper bound reported by Díaz et al. [2013], but our derivation offers a more illuminating explanation. The agreement extends to higher values of H when extending the technique in Díaz et al. [2013] numerically (using the program available in A.1).

An appropriately skeptical reader might ask why the theory presented here should be trusted over those previously presented in the literature – after all, both claim to offer rigorous proof. First, we note the agreement between predictions made here and both Lieberman et al. [2005] and Díaz et al. [2013] for the appropriate values of H. Second, we identify correlations between neighbouring stem nodes as the cause for the discrepancies between the two previous papers. Finally, we invite readers to scrutinize the proof offered here most thoroughly, to convince themselves of its rigor. Superstars have already presented unexpected subtleties, and as always, we need caution and vigilance to discern between scientific selection and random drift.

Bibliography

T. Antal, S. Redner, and V. Sood. Evolutionary dynamics on degree-heterogeneous graphs. Physical Review Letters, 96(18):188104, 2006.

M. Broom and J. Rychtář.
An analysis of the fixation probability of a mutant on special classes of non-directed graphs. Proceedings of the Royal Society A, 464:2609–2627, 2008. doi: 10.1098/rspa.2008.0058.

R. Bürger and R. Lande. On the distribution of the mean and variance of a quantitative trait under mutation-selection-drift balance. Genetics, 138, 1994.

Burton Voorhees and Alex Murray. Fixation probabilities for simple digraphs. Proc. R. Soc. A, 469, 2013.

Josep Díaz, Leslie Ann Goldberg, George B. Mertzios, David Richerby, Maria Serna, and Paul G. Spirakis. On the fixation probability of superstars. Proc. R. Soc. A, 469, 2013.

M. Frean, P. Rainey, and A. Traulsen. The effect of population structure on the rate of evolution. Proceedings of the Royal Society B, 280(1762):20130211, 2013. doi: 10.1098/rspb.2013.0211.

M. R. Frean and G. J. Baxter. Death-birth ordering and suppression of fitness in networks. Working paper: http://homepages.mcs.vuw.ac.nz/~marcus/manuscripts/FreanBaxterJTB.pdf, October 2008.

F. Fu, M. A. Nowak, and C. Hauert. Evolutionary dynamics on graphs: Efficient method for weak selection. Physical Review E, 79:046707, 2009.

Feng Fu and Martin A. Nowak. Global migration can lead to stronger spatial selection than local migration. Journal of Statistical Physics, 151:637–653, 2013. doi: 10.1007/s10955-012-0631-6.

Joshua L. Payne and Margaret J. Eppstein. Evolutionary dynamics on scale-free interaction networks. IEEE Transactions on Evolutionary Computation, 2009.

Achim Klenke. Probability Theory: A Comprehensive Course. Springer, 2006.

Leonid B. Koralov and Yakov G. Sinai. Theory of Probability and Random Processes. Springer, 2nd edition, 2007.

E. Lieberman. Evolution and the emergence of structure. Harvard University Press, 2010.

E. Lieberman, C. Hauert, and M. A. Nowak. Evolutionary dynamics on graphs. Nature, 433:312–316, 2005. ISSN 1476-4687.

W. Maciejewski, F. Fu, and C. Hauert. Evolutionary game dynamics in populations with heterogeneous structures.
PLoS Computational Biology, 10(4):e1003567, 2014.

T. Maruyama. Effective number of alleles in a subdivided population. Theoretical Population Biology, 1:273–306, 1970.

F. Michor, Y. Iwasa, and M. A. Nowak. Dynamics of cancer progression. Nature Reviews Cancer, 4:197–205, 2004.

P. A. P. Moran. The Statistical Processes of Evolutionary Theory. Clarendon Press, Oxford, 1962.

M. A. Nowak. Evolutionary Dynamics. Harvard University Press, Cambridge MA, 2006.

M. A. Nowak and R. M. May. Evolutionary games and spatial chaos. Nature, 359:826–829, 1992.

M. A. Nowak, F. Michor, and Y. Iwasa. The linear process of somatic evolution. Proceedings of the National Academy of Sciences USA, 100:14966–14969, 2003.

M. A. Nowak, A. Sasaki, C. Taylor, and D. Fudenberg. Emergence of cooperation and evolutionary stability in finite populations. Nature, 428:646–650, 2004.

H. Ohtsuki and M. A. Nowak. The replicator equation on graphs. Journal of Theoretical Biology, 243:86–97, 2006.

H. Ohtsuki, C. Hauert, E. Lieberman, and M. A. Nowak. A simple rule for the evolution of cooperation on graphs and social networks. Nature, 441:502–505, 2006.

Paulo Shakarian and Patrick Roos. Fast and deterministic computation of fixation probability in evolutionary graphs. In T. Znati, Z. Bar-Joseph, and L. Rothrock, editors, Computational Intelligence and Bioinformatics / Modelling, Identification, and Simulation, Calgary AB, 2011. ACTA Press. URL http://shakarian.net/papers/SHAKRIAN-ROOS-fixation-prob-alg-CIB11.pdf.

M. Slatkin. Fixation probabilities and fixation times in a subdivided population. Evolution, 35:477–488, 1981.

Joshua Zukewich, Venu Kurella, Michael Doebeli, and Christoph Hauert. Consolidating birth-death and death-birth processes in structured populations.
PLoS One, 8(1):e54639, 2013.

Appendix A

Programs

A.1 superstarSolve

superstarSolve.m is a piece of MATLAB code I used to test various predictions for the probability of early extinction against the numerical results found using Markov chains on the possible states of a branch. The code requires a particular k, r, L and B, and hence cannot be used in a general sense; here k = H + 2. This is a polished version of the code, with slight changes in output format and variable names compared to the original, but it is effectively identical.

This program can be found online at:
https://github.com/alastair-JL/Academic/tree/master/Superstar-Project-2013-2014

function [x,b,A] = superstarSolve(k,r,L,B)
A=zeros(2^k,2^k);
b=zeros(2^k,1);
q=zeros(k,1);
Row=1;
A(1,1)=1;
while (Row<2^k)
    q(1)=q(1)+1;
    Row=Row+1;
    jjj=1;
    while (jjj<k)
        if (q(jjj)>1.2)
            q(jjj)=0;
            q(jjj+1)=1+q(jjj+1);
        end
        jjj=jjj+1;
    end
    %%have now calibrated the q to the correct label.
    if (q(k)>0)
        b(Row)=-r;
        if(q(k-1)==1)
            A(Row,Row-2^(k-1))=L-1;
        end
        if(q(k-1)==0)
            A(Row,Row-2^(k-1))=L;
        end
    end
    if (q(k)==0 && q(1)==1)
        A(Row,Row-1)=1/(L*B);
    end % Possible actions of the center,
        % either ending the test, or erasing X
    jjj=1;
    while (jjj<k-1)
        jjj=jjj+1;
        if (q(jjj)==1 && q(jjj+1)==0)
            A(Row,Row+2^(jjj))=r;
        end
        if (q(jjj)==0 && q(jjj+1)==1 && jjj+1~=k )
            A(Row,Row-2^(jjj))=1;
        end
    end %%mutation passing down chain
    %%special behavior of first node.
    if (q(1)==1 && q(2)==0)
        A(Row,Row+2)=r;
    end
    if (q(1)==1 && q(2)==1)
        A(Row,Row-2)=B-1;
    end
    if (q(1)==0 && q(2)==1)
        A(Row,Row-2)=B;
    end
    jjj=0;
    %%sum up rows to get diagonals...
    sum=0;
    while (jjj<2^k)
        jjj=jjj+1;
        sum= sum + A(Row,jjj);
    end
    sum=sum- b(Row);
    A(Row,Row)= - sum;
end
x=A\b;
PropagationProbability= x(2)
LiebermanPrediction= r^k/(1+r^k)
LieberRatio= (1-PropagationProbability)/(1-LiebermanPrediction)
if(k==5)
    DiazPrediction= 2*r^5/(1+r+2*r^5)
    DiazRatio= (1-PropagationProbability)/(1-DiazPrediction)
end
if(k>=4 && k<=10)
    %%Note, Train length makes no predictions for small k, and quickly
    %%breaks the machine for large k.
    T= makeTrainLength(k,r);
    AJLpredict= r^4*T/(r^4*T+1)
    AJLRatio= (1-PropagationProbability)/(1-AJLpredict)
end
end

%%A few comments to those who may use this program in the future.
%% I used the ratio of the FAILURE PROBABILITIES compared
%% to the actual result for the failure probabilities as my
%% test. This is because failure probability is close to zero,
%% and thus its ratio is more sensitive than dividing a
%% bunch of things near one.
%%It is worth noting that for k=5, Diaz's predictions get more and more
%%accurate for larger values of L (leaf number) and B (branch number).
%%Lieberman's predictions plateau - reaching a maximal accuracy for
%% any given k,r combination.

function [T] = makeTrainLength(k,r)
if (k==4)
    T=1;
else
    H=k-2;
    alpha= 1/(1+r);
    T=(H-1)*alpha^0*nchoosek(H-4+1,1-1);
    z=2;
    while(z<H)
        T=T+(H-z)*alpha^(z-1)*(nchoosek(H-4+z,z-1)-nchoosek(H-4+z,z-2) );
        z=z+1;
    end
    T=(r/(1+r))^(H-2)*T;
end
end

A.2 SimpleChainTest

SimpleChainTest is a Java program that simulates a single chain for a large amount of time, and was used for the simulations in this paper. This version is an extension of the version used for my initial investigation, and collects significantly more data than the original. The underlying simulation structure remains unchanged.

This program can be found online at:
https://github.com/alastair-JL/Academic/tree/master/Superstar-Project-2013-2014

package simplestartest;

import java.util.Random;

/**
 * @author Babblefish
 */
public class SimpleChainTest {

    /**
     * @param args the command line arguments
     */
    public static void main(String[] args) {
        int k=15;
        boolean weirdRecord=true;
        boolean changeMade=false;
        int numsuccess=0;
        double Mcount[]= new double[k];
        int MM[]= new int[k-1];
        int TrainArrive[][]= new int[k][k]; // first index refers to location, second to train length.
        int TrainTime[][]= new int[k][k]; // first index refers to location, second to train length.
        int MostRecentTrainLength[]=new int[k]; // index refers to location
        long WW[]= new long[k-1];
        int numTrainsLeave=0;
        long numsample=0;
        int printseperator=4;
        double TotalTotalWeight=0;
        String trainPrint;
        for (int jij=0; jij<k-1; jij++){
            Mcount[jij]=0;
            MM[jij]=0;
            WW[jij]=0;
        }
        Mcount[k-1]=0;
        for (int iii=0; iii<20; ++iii){
            boolean chain[]= new boolean[k];
            double r=1.5;
            Random picker= new Random();
            chain[0]=true;
            int M=20000;
            double totalweight= chain.length;
            double pick=0;
            boolean success=false;
            boolean extinct=false;
            while (numsample< Long.MAX_VALUE-5){
                changeMade=false;
                chain[0]= (picker.nextDouble()*(M+r-1)<r);
                totalweight=0;
                for (boolean b: chain){
                    totalweight+= (b?r:1);
                }
                TotalTotalWeight+=totalweight;
                pick= picker.nextDouble()*totalweight;
                int select=0;
                double pickcopy = pick;
                while (pickcopy>0 && select<k){
                    pickcopy-= (chain[select]?r:1);
                    select++;
                }
                select--;
                if (select==k-1){
                    changeMade=true;
                    // if final rung... do nothing!
                }else{
                    if (chain[select] && !chain[select+1]){ // new train arrives
                        int trainLength=0;
                        while((trainLength<select) && (chain[select-trainLength])){
                            trainLength++;
                        }
                        TrainArrive[select+1][trainLength]++;
                        MostRecentTrainLength[select+1]=trainLength;
                    }
                    if (chain[select+1]!=chain[select]){
                        changeMade=true;
                    }
                    chain[select+1]=chain[select];
                    if (chain[select] && select==0){
                        numTrainsLeave++;
                    }
                }
                for (int ppp=0; ppp<k; ppp++){ // loop to add time to correct train length.
                    if (chain[ppp]){
                        TrainTime[ppp][MostRecentTrainLength[ppp]]++;
                    }
                }
                if (changeMade || !weirdRecord){
                    for (int jij=0; jij<k; ++jij){
                        Mcount[jij]+= (chain[jij]?1:0);
                        if (jij<k-1 && chain[jij] && chain[jij+1] ){
                            MM[jij]++;
                        }
                        if (jij<k-1 && !chain[jij] && !chain[jij+1] ){
                            WW[jij]++;
                        }
                    }
                }
                /*
                // NOTE: Uncomment this section to view trains.
                // Also, it is worth reducing L to something small
                // before doing so.
                trainPrint="";
                for (int iij=0; iij<k-1; iij++){
                    trainPrint=trainPrint+ (chain[iij]?"0":"-");
                }
                System.out.println(trainPrint);
                */
                numsample++;
                if (numsample/1000==printseperator){
                    // NOTE: this is where the output goes.
                    // Output is separated further and further apart as time
                    // goes on. This part of the function is frequently
                    // rewritten, depending on what output is needed.
                    printseperator*=2;
                    // System.out.println("Train density"+ (float)(numTrainsLeave)/numsample
                    //     +" weight average"+TotalTotalWeight/numsample);
                    double OldPredict=0;
                    double cunningPredict=Mcount[0]/numsample;
                    System.out.println("t"+ numsample);
                    for (int iij=1; iij<k-1; iij++){
                        //System.out.print("\n Height" + iij+" ");
                        OldPredict=(Math.pow(r,iij)/M)/((Math.pow(r,iij)/M)+1-1/M);
                        System.out.println(""+iij+ " & " + OldPredict+ " & "
                            + Mcount[iij]/numsample+" & "+ (MM[iij]/Mcount[iij])+" \\\\" );
                        /*
                        // When uncommented, this section prints information
                        // about trains. This was used to test the predictions
                        // made in the paper against actual simulated results.
                        for (int ddd=1; ddd<k-1; ddd++){
                            if (TrainArrive[iij][ddd]>0){
                                System.out.print("length" + ddd+" occurance"
                                    +(double)(TrainArrive[iij][ddd])/numTrainsLeave
                                    + "Exptime"+(double)TrainTime[iij][ddd]/(TrainArrive[iij][ddd]) );
                            }
                        }
                        */
                    }
                    for (int iij=1; iij<k-1; iij++){
                        OldPredict=(Math.pow(r,iij)/M)/((Math.pow(r,iij)/M)+1-1/M);
                        cunningPredict=cunningPredict*(MM[iij]/Mcount[iij])
                            + r*cunningPredict*(1-MM[iij]/Mcount[iij]);
                        System.out.println(""+iij+ ";" + OldPredict+
                            ";"+Mcount[iij]/numsample+";"+ (MM[iij]/Mcount[iij])+";"+cunningPredict);
                    }
                }
            }
        }
    }
}
