Essays on game theory and stochastic evolution

by

Alexander Patrick McAvoy

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF

Doctor of Philosophy

in

THE FACULTY OF GRADUATE AND POSTDOCTORAL STUDIES (Mathematics)

The University of British Columbia (Vancouver)

July 2016

© Alexander Patrick McAvoy, 2016

Abstract

Evolutionary game theory is a popular framework for modeling the evolution of populations via natural selection. The fitness of a genetic or cultural trait often depends on the composition of the population as a whole and cannot be determined by looking at just the individual ("player") possessing the trait. This frequency-dependent fitness is quite naturally modeled using game theory, since a player's trait can be encoded by a strategy and their fitness can be computed using the payoffs from a sequence of interactions with other players. However, there is often a distinct trade-off between the biological relevance of a game and the ease with which one can analyze an evolutionary process defined by a game. The goal of this thesis is to broaden the scope of some evolutionary games by removing restrictive assumptions in several cases. Specifically, we consider multiplayer games; asymmetric games; games with a continuous range of strategies (rather than just finitely many); and alternating games. Moreover, we study the symmetries of an evolutionary process and how they are influenced by the environment and individual-level interactions. Finally, we present a mathematical framework that encompasses many of the standard stochastic evolutionary processes and provides a setting in which to study further extensions of stochastic models based on natural selection.

Preface

The majority of the research in this thesis was published jointly with Christoph Hauert. (I am the first author of all work appearing here.)
Since these topics were assembled into publishable units as they were completed, this manuscript contains a number of separate research papers as its chapters.

Chapter 2 is published as:
McAvoy, A. and Hauert, C. "Structure coefficients and strategy selection in multiplayer games," Journal of Mathematical Biology 72 (1): 203–238, 2016.

Chapter 3 is published as:
McAvoy, A. and Hauert, C. "Asymmetric evolutionary games," PLoS Computational Biology 11 (8): e1004349, 2015.

Chapter 4 is published as:
McAvoy, A. and Hauert, C. "Structural symmetry in evolutionary games," Journal of the Royal Society Interface 12 (111): 20150420, 2015.

Chapter 5 is available as a preprint at:
McAvoy, A. "Stochastic selection processes," arXiv preprint arXiv:1511.05390, 2015.

Chapter 6 is published as:
McAvoy, A. and Hauert, C. "Autocratic strategies for iterated games with arbitrary action spaces," Proceedings of the National Academy of Sciences 113 (13): 3573–3578, 2016.

Chapter 7 is available as a preprint at:
McAvoy, A. and Hauert, C. "Autocratic strategies for alternating games," arXiv preprint arXiv:1602.02792, 2016.

Table of Contents

Abstract
Preface
Table of Contents
List of Tables
List of Figures
Glossary
Acknowledgments

1 Introduction and thesis structure
1.1 Background
1.2 Organization

2 Structure coefficients and strategy selection in multiplayer games
2.1 Introduction
2.2 Reducible games
2.2.1 Well-mixed populations
2.2.2 Structured populations
2.3 Selection conditions
2.3.1 Symmetric games
2.3.2 Asymmetric games
2.4 Discussion
2.5 Methods: reducibility in well-mixed populations
2.6 Methods: selection conditions
2.6.1 Asymmetric games
2.6.2 Symmetric games
2.7 Methods: explicit calculations

3 Asymmetric evolutionary games
3.1 Introduction
3.2 Results
3.2.1 Ecological asymmetry
3.2.2 Genotypic asymmetry
3.3 Discussion
3.4 Methods
3.4.1 Notation and general remarks
3.4.2 Death-birth updating
3.4.3 Birth-death updating
3.4.4 Imitation updating
3.4.5 Pairwise comparison updating
3.4.6 Computer simulations

4 Structural symmetry in evolutionary games
4.1 Introduction
4.2 Markov chains and evolutionary equivalence
4.2.1 General Markov chains
4.2.2 Markov chains defined by evolutionary games
4.3 Evolutionary games on graphs
4.3.1 The Moran process
4.3.2 Symmetric games
4.3.3 Asymmetric games
4.4 Discussion
4.5 Methods: fixation and absorption
4.5.1 Fixation probabilities
4.5.2 Absorption times
4.6 Methods: symmetry and evolutionary equivalence
4.6.1 Symmetries of graphs
4.6.2 Symmetries of evolutionary processes
4.7 Methods: explicit calculations
4.7.1 The Moran process
4.7.2 Frequency-dependent games

5 Stochastic selection processes
5.1 Introduction
5.2 Stochastic games
5.2.1 Evolutionary processes as stochastic games
5.3 Stochastic selection processes with fixed population size
5.3.1 Population states
5.3.2 Aggregate payoff functions
5.3.3 Update rules
5.4 Stochastic selection processes with variable population size
5.5 Applications
5.5.1 Evolutionary games
5.5.2 Symmetries of evolutionary processes
5.6 Discussion

6 Autocratic strategies for repeated games with simultaneous moves
6.1 Introduction
6.2 Autocratic strategies
6.3 Continuous Donation Game
6.3.1 Two-point autocratic strategies
6.3.2 Deterministic autocratic strategies
6.4 Discussion
6.5 Methods: iterated games with two players and measurable action spaces
6.6 Methods: detailed proofs of the main results
6.6.1 Two-point autocratic strategies
6.7 Methods: examples
6.7.1 Games with finitely many actions and no discounting
6.7.2 Continuous Donation Game

7 Autocratic strategies for repeated games with alternating moves
7.1 Introduction
7.2 Results
7.2.1 Strictly-alternating games
7.2.2 Randomly-alternating games
7.3 Example: classical Donation Game
7.4 Example: continuous Donation Game
7.4.1 Extortionate, generous, and equalizer strategies
7.4.2 Strictly-alternating moves; player X moves first
7.4.3 Strictly-alternating moves; player Y moves first
7.4.4 Randomly-alternating moves
7.5 Discussion
7.6 Methods: strictly-alternating games
7.6.1 X moves first
7.6.2 Y moves first
7.7 Methods: randomly-alternating games
7.7.1 Two-point autocratic strategies
7.7.2 Deterministic autocratic strategies
7.8 Methods: continuous Donation Game
7.8.1 Strictly-alternating moves
7.8.2 Randomly-alternating moves
8 Summary and conclusions

Bibliography

List of Tables

4.1 Absorption times of the 12 initial configurations of a single mutant in a wild-type population for the Moran process on the Frucht graph
4.2 Fixation probabilities and absorption times of the 12 initial configurations of a single cooperator among defectors for the death-birth process on the Frucht graph
4.3 Fixation probabilities and absorption times of the 12 initial configurations of a single cooperator among defectors for the death-birth process on the Tietze graph
4.4 Structure coefficients for the death-birth process on a vertex-transitive (but not symmetric) graph

List of Figures

2.1 Payoffs on a heterogeneous network
2.2 Reducibility of the linear public goods game
2.3 Hölder public goods game
2.4 Payoffs on a square lattice
2.5 Number of distinct structure coefficients vs. number of players
3.1 Change in cooperator frequency for a Donation Game and a Snowdrift Game
3.2 Change in cooperator frequency for a spatially non-additive Snowdrift Game
3.3 Change in cooperator frequency for a Snowdrift Game with stronger selection
4.1 Three levels of symmetry for connected graphs
4.2 Frucht graph
4.3 Fixation probability and absorption time versus initial vertex of mutant for the Moran process on the Frucht graph
4.4 Fixation probability and absorption time versus initial vertex of mutant (cooperator) for a death-birth process on the Frucht graph
4.5 Well-mixed population of size three
4.6 Tietze graph
4.7 Vertex-transitive graphs with six vertices
4.8 Fixation probability and absorption time versus initial vertex of mutant (cooperator) for a death-birth process on the Tietze graph
6.1 Reactive, memory-one strategies for a game with continuous action spaces
6.2 Extortion in the continuous Donation Game without discounting
6.3 Region of feasible payoffs for the continuous Donation Game
6.4 Two-point extortionate, generous, and equalizer strategies for the continuous Donation Game with simultaneous moves
6.5 Deterministic extortionate, generous, and equalizer strategies for the continuous Donation Game with simultaneous moves
7.1 Three types of interactions in the alternating Donation Game: (A) strictly-alternating game in which player X moves first; (B) strictly-alternating game in which player Y moves first; and (C) randomly-alternating game in which, in each round, player X moves with probability ω_X and player Y with probability 1 − ω_X. For each type of alternating game, a player moves either C or D (cooperate or defect) in each round and both players receive a payoff from this move. Unlike in strictly-alternating games, (A) and (B), a player might move several times in a row in a randomly-alternating game, (C).
7.2 Three examples of memory-one strategies for player X in a strictly-alternating game whose action spaces are S_X = S_Y = [0, K] for some K > 0. (A) depicts a reactive stochastic strategy in which solely Y's last move is used to determine the probability distribution with which X chooses her next action. The mean of this distribution is an increasing function of y, which means that X is more likely to invest more (play closer to K) as y increases. (B) illustrates a reactive two-point strategy, i.e. a strategy that plays only two actions, 0 (defect) or K (fully cooperate). Player Y's last move is used to determine the probability with which X plays K in the next round; if X does not use K, then she uses 0. As Y's last action, y, increases, X is more likely to reciprocate and use K in response. (C) shows a strategy that gives X's next move deterministically as a function of both of the players' last moves. Unlike in (A) and (B), X's next move is uniquely determined by her own last move, x, and the last move of her opponent, y. If Y used y = 0 in the previous round, then X responds by playing 0 as well. X's subsequent action is then an increasing function of y whose rate of change is largest when X's last move, x, is smallest.
In particular, if Y used y > 0 in the previous round, then X's next action is a decreasing function of her last move, x. Therefore, in (C), X exploits players who are unconditional cooperators.
7.3 Feasible payoff pairs, (π_Y, π_X), when X uses a two-point strategy (hatched) and when X uses the entire action space (light blue). The benefit function is b(s) = 5(1 − e^(−2s)), the cost function is c(s) = 2s, and the action spaces are S_X = S_Y = [0, 2] (see Killingback and Doebeli, 2002). The probability that player X moves in any given round is (A) ω_X = 1/3, (B) ω_X = 1/2, and (C) ω_X = 2/3. In each figure, the payoffs for mutual defection (0) are indicated by a red point and for mutual full-cooperation (K = 2) by a green point. The blue point marks the payoffs when X defects and Y fully cooperates, and the magenta point vice versa. From (A) and (C), we see that if ω_X ≠ 1/2, then the payoffs for the alternating game are typically asymmetric even though the one-shot game is symmetric.
7.4 Two-point extortionate, generous, and equalizer strategies for the randomly-alternating, continuous Donation Game. In each panel, the red (resp. green) point indicates the payoffs for mutual defection (resp. full cooperation). The blue point gives the payoffs when X defects and Y fully cooperates in every round, and the magenta point vice versa. In the top row, both players move with equal probability in a given round (ω_X = 1/2), whereas in the bottom row player X moves twice as often as player Y (ω_X = 2/3). The extortionate strategies in (A) and (D) enforce π_X = χπ_Y, while the generous strategies in (B) and (E) enforce π_X − κ_X^KK = χ(π_Y − κ_Y^KK) with χ = 2 (black) and χ = 3 (blue). The equalizer strategies in (C) and (F) enforce π_Y = γ with γ = κ_Y^00 = 0 (black) and γ = κ_Y^KK (blue).
The simulation data in each panel show the average payoffs, (π_Y, π_X), for player X's two-point strategy against 1000 random memory-one strategies for player Y. The benefit function is b(s) = 5(1 − e^(−2s)) and the cost function is c(s) = 2s for action spaces S_X = S_Y = [0, 2].
7.5 Deterministic extortionate, generous, and equalizer strategies for the randomly-alternating, continuous Donation Game. In each round, both players have the same probability of moving (ω_X = 1/2). In (A), extortionate strategies enforce π_X = χπ_Y and in (B), generous strategies enforce π_X − κ_X^KK = χ(π_Y − κ_Y^KK) with χ = 2 (black) and χ = 3 (blue). In (C), equalizer strategies enforce π_Y = γ with γ = κ_Y^00 = 0 (black) and γ = κ_Y^KK (blue). Since deterministic strategies utilize a larger portion of the action space than two-point strategies, the players can attain a broader range of payoff pairs, (π_Y, π_X) (cf. Fig. 7.4). The simulation data in each panel show the average payoffs, (π_Y, π_X), for X's deterministic strategy against 1000 randomly-chosen, memory-one strategies of the opponent. The benefit function is b(s) = 5(1 − e^(−2s)) and the cost function is c(s) = 2s for s ∈ [0, 2].
7.6 Feasible payoff regions for three values of λ in the strictly-alternating, continuous Donation Game when player X moves first (top) and player Y moves first (bottom). The shaded region represents the feasible payoffs when X plays a two-point strategy (only 0 and K). As the discounting factor, λ, gets smaller (i.e. discounting stronger), the first move has a more pronounced effect on the expected payoffs.

Glossary

S: action/strategy space available to a player
π: generic symbol for a player's payoff
u_i: payoff function for player i
payoff matrix: matrix whose entry (r, s) is the payoff for strategy r against s
matrix game: two-player game whose payoffs are described by a payoff matrix
symmetric game: game in which payoffs depend on strategies only (not locations or otherwise)
asymmetric game: game that is not symmetric
a_rs^ij: payoff to player i using strategy r against player j using strategy s in an asymmetric matrix game
multiplayer game: game whose payoffs depend on the strategies of more than two players
f_i: fitness function for player i
β: selection strength (always non-negative)
weak selection: scenario in which the selection strength, β, is close to 0
N: size of the population
state: specification of the strategies of all players in the population
fixation: characterized by reaching an absorbing (monomorphic) state
ρ_s,i: probability of reaching state i when starting in state s
t_s: average number of steps to absorption when starting in state s
connected graph: graph in which any two vertices are connected by a sequence of edges
regular graph: graph in which each vertex has the same number of neighbors
degree (denoted k) of a regular graph: number of neighbors of each vertex in a regular graph
graph automorphism: map from a graph to itself that preserves edges
vertex-transitive graph: graph in which any two vertices are related by an automorphism
p_r: frequency of strategy r
q_r|s: conditional frequency of r as a neighbor of s
ε: strategy mutation probability (between 0 and 1)
rare mutation: scenario in which the mutation rate, ε, is close to 0
µ: stationary distribution for a Markov chain
average abundance: long-run average frequency of a strategy, i.e. frequency in the stationary distribution
σ: structure coefficient
λ: temporal discounting factor (between 0 and 1)
weak discounting: scenario in which λ is close to 1
one-shot game: game in which one or both players choose an action and receive payoffs
repeated game: sequence of one-shot games played over time
simultaneous-move game: repeated game in which both players move at the same time in each round
strictly-alternating game: repeated game in which moves alternate strictly
randomly-alternating game: repeated game in which moves alternate stochastically
ω_X: probability that player X moves in a randomly-alternating game
memory-one strategy: strategy for a repeated game that uses the outcome of just the previous round
S_X (resp. S_Y): action space of player X (resp. Y) when the two spaces do not coincide
σ_X^0: player X's initial (mixed) action in a repeated game
σ_X[x, y]: player X's (mixed) action after observing X = x and Y = y
π_X (resp. π_Y): average payoff of X (resp. Y) in a repeated game
autocratic strategy: strategy that unilaterally enforces απ_X + βπ_Y + γ = 0 for some α, β, γ
κ: baseline payoff for an autocratic strategy
χ: extortion factor (slope)

Detailed descriptions of selected terms

fixation probability
The fixation probability of a focal strategy is the probability that, given some initial starting configuration of strategies, the focal strategy ends up taking over the population via a sequence of evolutionary updates (births and deaths or imitations, for example). This quantity defines a standard metric for the success of a mutant with respect to selection since one can ask the following simple question: when does selection increase the probability that a mutant fixates in the population? In other words, when do selective differences between traits increase the fixation probability of a mutant trait? We examine the notion of fixation probability in detail in Chapter 4.

average abundance
The average abundance of a strategy refers to its long-run frequency as the process unfolds. In each stage of an evolutionary process, one can calculate the frequency of a given strategy. If the grand average of these frequencies is then taken over a sufficiently long time frame, one obtains the average abundance of that strategy. The average abundance of a strategy serves as an alternative way to quantify the evolutionary success of a trait, particularly in cases in which there are no absorbing states due to the existence of mutation rates (in which case "fixation probability" is undefined). Average abundance is considered carefully in Chapters 2 and 4.

weak selection
Weak selection describes a scenario in which the selection strength, β, is close to 0 but still positive.
Since organisms often have many traits, any specific trait under examination might contribute only a small amount to overall fitness. The parameter β allows one to quantify just how small a trait's effect on fitness is. Typically, β is assumed to be "sufficiently" small, which, in effect, means that a differentiable function of β (such as fixation probability) can be approximated by a first-order Taylor expansion about β = 0. Weak selection is a common assumption throughout this thesis, particularly in Chapters 2, 3, and 4.

rare mutation
Rare mutation refers to a scenario in which the probability of a strategy mutation, ε, is close to 0 but still positive. Similar to weak selection, it is typically assumed that ε is sufficiently small, which results in a population spending most of its time in an absorbing (monomorphic) state. However, occasionally a mutation occurs, and when such a mutant arises the process will return to an absorbing state prior to the appearance of another mutant. Rare mutation is convenient mathematically and often biologically reasonable.

structure coefficient
A structure coefficient, σ, is a parameter appearing in a condition for (weak) selection to favor a particular strategy. When selection is weak, the condition for a strategy to be favored by selection can be written in the form of a linear combination of entries of the payoff matrix, where the coefficients in this linear combination are themselves independent of the payoff matrix. The term "structure coefficients" comes from the dependence of these coefficients on the structure of the evolutionary process (including population structure and mutation rates). These coefficients are discussed in detail in Chapters 2 and 4.

memory-one strategy
A memory-one strategy for a repeated game takes into account the outcome of the previous round in order to devise a (mixed) action for the present round.
Such a strategy for player X is denoted σ_X[x, y], where x and y are the moves played by X and Y, respectively, in the previous round. For fixed x and y, σ_X[x, y] is a probability distribution over X's actions in the present round (i.e. a mixed action). If X moves in the first round, then there is no history on which to condition his or her action, and in this case the initial mixed action for player X is denoted σ_X^0. Therefore, a complete memory-one strategy is a pair, (σ_X^0, σ_X[x, y]). The notion of a memory-one strategy is treated in detail in Chapters 6 and 7.

autocratic strategy
An autocratic strategy for player X enforces απ_X + βπ_Y + γ = 0 for any strategy of player Y, where π_X and π_Y are the expected payoffs to players X and Y, respectively. A common linear relationship of this form is π_X − κ = χ(π_Y − κ), where κ is known as the "baseline payoff" and χ is known as the "extortion factor." In words, the baseline payoff is the average payoff received by an autocratic strategy against itself. Autocratic strategies for repeated games are treated in Chapters 6 and 7.

Acknowledgments

I read in a recent Nature editorial¹ that "the average number of people who read a PhD thesis all the way through is 1.6. And that includes the author." While some might dispute this statistic, I find it to be completely reasonable and understandable – at least in the applied sciences in 2016. I, myself, was encouraged to publish my projects as I completed them. As a result, putting this thesis together was less of a nightmare of eleventh-hour writing and more of an exercise in navigating bureaucracy, although in the end I was left just as dazed.
Regardless, even though there is only an unfortunate 3/5 of a person out there who might actually read this document, and despite the fact that those who have helped me over the past several years already know who they are and are aware of my appreciation for them, I am including my thanks here.

The majority of the work in this thesis came out of close collaborations with my advisor, Christoph Hauert. I am grateful to Christoph for sharing my research interests, for encouraging me to pursue technical and theoretical aspects of game theory, and for teaching me how to write a proper scientific paper. I still haven't come close to mastering the art of writing a clear and compelling paper, but Christoph's advice has helped me significantly in my transition from pure mathematics to the applied sciences. Beyond research, I would like to heartily thank Christoph and his family for their hospitality during my three years at UBC.

In no particular order, I would also like to thank Daniel Coombs and Michael Doebeli for being interested in my research and serving on my thesis committee; Joshua Plotkin for carefully reading and evaluating my thesis as the external examiner; Rachel Kuske and Michael Peters for serving as university examiners and for providing helpful comments during my defens(c)e that improved the final version of this thesis; members of the Doebeli-Hauert lab group for comments on earlier versions of the projects that make up this thesis; Christian Hilbe, Martin Nowak, and Arne Traulsen for a great deal of helpful feedback; the Max Planck Institute for Evolutionary Biology (Plön) and the Program for Evolutionary Dynamics at Harvard University for their hospitality while parts of this thesis were written; and many anonymous referees who helped to substantially improve these chapters.

¹ This editorial is entitled "The past, present and future of the PhD thesis" (Nature, vol. 535 no. 7610, 2016).
Financial support from a Four Year Fellowship at UBC and also from the Natural Sciences and Engineering Research Council of Canada (through Christoph's grant) permitted me to travel and talk to many other researchers working in game theory, and I am grateful for the opportunities these grants provided. On a final professional note: although the material in this thesis bears little resemblance to the work I did before starting my Ph.D., I would also like to thank my earlier advisors, Siman Wong and Max Lieblich, for their advice and for teaching me how to learn and do mathematics. Their perspectives had a profound impact on my approach to mathematics and research in general.

On a personal note, I would like to thank my parents, Carla and John, and my brother, Jimmy, for being interested in my work, even though I typically did a terrible job of explaining it. Their support, even from far away, made being in graduate school much more manageable and enjoyable. I thank Fiona's family for their encouragement and for always being welcoming in Seattle. I'd also like to thank my friends back east for providing a reprieve from the (sometimes) predictable and superficial nature of late-twentysomething interactions on the west coast (or maybe just everywhere). I thank Farhan for being a good dude, particularly during some irritating times in Seattle, and for having a sixth sense about where to find good Italian food. Finally, and most importantly, I thank Fiona for being a fun and spontaneous person, for making sure I eat well, and for reminding me that her salary will be greater than mine as long as I continue on this career path.

I guess I should also add here my appreciation for something less tangible. Applied mathematics provides a welcome escape from the traditional environment of working in an office or lab, and I took full advantage of the opportunity to have a "rotating" office.
These chapters were produced in the company of strangers at many cafés throughout Vancouver, Seattle, and several cities in Massachusetts and Europe. An essay by Ariel Rubinstein entitled "The University of Cafés" states the joys of working in coffee shops much more eloquently than I could. Apart from the pleasant nature of having a new desk in an unfamiliar environment each day, this arrangement allowed me to be more productive than I would have been in an office, and I am grateful that it afforded me the opportunity to finish this thesis a bit earlier than scheduled.

Chapter 1
Introduction and thesis structure

1.1 Background

Darwin's theory of evolution through natural selection is by now a foundational component of biology. In his landmark treatise, Darwin (1859) postulated that the great diversity of life observed in nature can be attributed to evolution. Selection pressures, which act on individual variation within a population, result in heterogeneous survival rates and contribute to phenotypic changes seen over the course of generations. While Darwin's numerous examples provide evidence to support his theory, it is difficult to observe evolution directly due to the timescale on which it occurs. However, the principles on which Darwin's theory rests can be translated into artificial, inherently non-biological contexts and observed directly via mathematical models. Although mathematical models strip away many of the biological details of a system, such distilled models can provide surprisingly relevant insights into the mechanisms that drive natural phenomena.

Simple mathematical models have proven to be extremely useful in the study of evolution. By assigning numerical fitness values to traits within a population, one can quantify the degree to which individuals differ from a reproductive standpoint.
Over time, through a sequence of births and deaths, the population evolves, and the frequencies of these traits may be viewed as functions of their respective reproductive fitness. Through such a lens one can observe in real time what Spencer (1866) termed "survival of the fittest."

One of the most well-known models of stochastic evolution is the Moran process (Moran, 1958). This process involves a finite population of fixed size and two types of players – a wild type and a mutant type. The wild and mutant types differ only in their reproductive fitness, and a population of these players is repeatedly updated via a balanced sequence of births and deaths until only one type remains. A typical question one can then ask is: what is the probability that a single mutant takes over a wild-type population? This question may be used to quantify the evolutionary success of the mutant type, with a larger fixation probability corresponding to a higher mutant fitness relative to the wild type. However, it's important to keep in mind the assumptions underlying the Moran model. These assumptions include (i) the population has fixed size; (ii) a player's fitness depends on only its type (wild or mutant); and (iii) the population is spatially homogeneous, meaning (roughly) that the players are not distinguished by their locations.

While the Moran process is a cornerstone of mathematical evolution, it cannot model the evolution of traits whose fitness depends on the composition of the population. This shortcoming is due to the assumption that the fitness of the mutant type is an inherent property of the individual and does not depend on the other players in the population. But what if a mutant trait is fitter when rare than when frequent? Similarly, how might one model traits that perform well past a threshold but are outcompeted by the wild type at low frequencies?
A trait of this form is cooperation, which, depending on the setting, might involve paying a cost to provide another individual with a benefit. Since the benefit received by a cooperator depends on how many other cooperators are present in the population, cooperation is a frequency-dependent trait.

Likely the most well-studied model of cooperation is the Prisoner's Dilemma. The Prisoner's Dilemma is a two-player game (one opponent versus another) that consists of two strategies, cooperation (C) and defection (D). An intuitive description of the Prisoner's Dilemma, and indeed the source of its name, is as follows: two criminal accomplices are arrested for a crime and placed into separate interrogation rooms. Each of these "players" has two options: (i) confess to the authorities (defect against the co-conspirator) and receive a reduced sentence or (ii) remain silent (cooperate with the co-conspirator). If both players confess, then they each get two years in prison. If both players stay silent, they each serve a sentence of just one year. If one player confesses and the other is silent, then the player who confesses is set free and the co-conspirator is given three years in prison. The dilemma of this interaction is that the optimum for both players together is to cooperate with one another, yet there is always a temptation to defect (irrespective of the opponent).

The Prisoner's Dilemma just introduced can be described by a payoff matrix,

         C       D
    C  R, R    S, T
    D  T, S    P, P        (1.1)

where R = −1, S = −3, T = 0, and P = −2. Note that these payoffs are negative because a prison sentence harms a player and thus reduces his or her "payoff." Thus, if the row player (left side) plays D and the column player (top side) plays C, then the row player gets T and the column player gets S. Since payoffs depend on only the players' actions, this matrix can be replaced by the simpler, more common payoff matrix

         C   D
    C    R   S
    D    T   P        (1.2)

where the payoff to the column player is omitted.
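With the specific sentences above, the dilemma structure can be checked directly before passing to the generic payoff ranking (an illustrative snippet; the variable bindings are ours):

```python
# Prisoner's Dilemma payoffs from the interrogation story (matrices 1.1/1.2):
# R = mutual cooperation, S = sucker's payoff, T = temptation, P = mutual defection.
R, S, T, P = -1, -3, 0, -2

# Defection strictly dominates cooperation: whatever the opponent does,
# switching from C to D improves a player's payoff...
assert T > R  # against a cooperator, defecting (T) beats cooperating (R)
assert P > S  # against a defector, defecting (P) beats cooperating (S)

# ...yet mutual cooperation is better than mutual defection, hence the dilemma.
assert R > P
```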
Instead of using specific values of R, S, T, and P, the nature of this social dilemma can be captured by the generic ranking T > R > P > S of payoffs.

A common realization of a Prisoner's Dilemma is the donation game, which is defined by the matrix

           C       D
    C   b − c    −c
    D     b       0        (1.3)

Informally, c is the cost a cooperator pays in order to provide the opponent with a benefit, b. This game captures the qualitative features of the Prisoner's Dilemma provided b > c > 0 but is easier to work with analytically. One reason it is a convenient model for Prisoner's Dilemma interactions is that it has the "equal gains from switching" property, which means that a player's change in payoff from swapping strategies is independent of the opponent's strategy. This property makes the donation game relatively straightforward to work with in an evolutionary setting, and indeed the donation game is one of our main topics.

Another important social dilemma is the Snowdrift Game, which is similar to the Prisoner's Dilemma but has one crucial difference. Whereas a Prisoner's Dilemma is characterized by the ranking T > R > P > S in Eq. (1.2), the Snowdrift Game is characterized by T > R > S > P. The intuition behind this ranking is as follows: Imagine two people are driving their cars in opposite directions and come across a snowdrift in the road. The benefit from being able to pass is b > 0, and the cost of a single person clearing the entire snowdrift is c, where 0 < c < b. If both players work to clear the snowdrift, then they each receive the benefit of being able to pass but they share the cost. If one person shovels and the other does nothing, then they both receive b but only the one who did the work pays the cost. If neither shovels, then both players pay no cost and receive no benefit. Therefore, in terms of a payoff matrix, this game is defined by

         C   D               C           D
    C    R   S    →    C  b − c/2     b − c
    D    T   P         D     b           0
        (1.4)

In other words, this social dilemma is similar to the Prisoner's Dilemma except that it is still individually rational to cooperate even when the opponent defects because b − c > 0 (i.e. S > P).

Game theory turns out to be a tremendously useful tool for understanding the evolution of frequency-dependent traits in biology. Games allow traits to be encoded as strategies and reproductive fitness to be derived from the payoffs that result from individual encounters. The Prisoner's Dilemma, for instance, can be used to model social grooming in primates, blood donation in vampire bats, siderophore production by bacterial parasites, trade agreements between humans (or countries), alarm calls in monkeys, and many other widely-observed traits. For such a trait, one might then ask the following question: in a resident population (of defectors, say), what is the probability that a single mutant (cooperator) takes over the entire population through a sequence of births and deaths? In other words, what is the probability that the population reaches a state of all cooperators via an evolutionary updating process? This fixation probability of cooperators is a central metric for quantifying evolutionary success in finite populations and is one of our main topics here.

There is a clear tradeoff between how easy a model is to treat analytically and how realistic it is. In order to make evolutionary models analytically tractable, it is often assumed that the game is symmetric, payoffs come from pairwise interactions, population structures are sufficiently symmetric, mutation rates are uniform, and every player has finitely many (often two) strategies. Some of these assumptions are strong (symmetric payoffs) while others are not always that restrictive. For example, a common assumption is that selection is weak, which means that the payoff from a trait contributes only a small portion to reproductive fitness.
This assumption typically makes it much easier to work with a model analytically, but it is also reasonable from a biological viewpoint since organisms can possess many traits and any given trait likely does not contribute much to fitness. While even strong assumptions can sometimes preserve the qualitative nature of the system under study (such as conflicts of interest in social dilemmas), they need not accurately reflect what is observed in general biological populations. The goal of this thesis is to study the implications of these common (and often overly restrictive) assumptions and to relax them in several natural ways.

1.2 Organization

Since the chapters (or "essays") presented here each focus on different (although closely related) topics, we include with each one a self-contained introduction and discussion. The organization is as follows:

Chapter 2 studies multiplayer evolutionary games in structured populations. Here, we examine the complexity of multiplayer games on graphs and show how simple, well-known selection conditions become complicated as the number of players involved in each interaction grows. As an intermediate step, we define a notion of reducibility for games that captures what it means for a multiplayer interaction to be broken down into several interactions with fewer participants. We show that, even for games that are reducible in well-mixed populations (like the linear public goods game), population structure can inhibit the ability to break a game down into interactions with fewer players. Therefore, for determining the success of a particular strategy, multiplayer games are typically much more difficult to analyze than two-player games.

The focus of Chapter 3 is on two forms of asymmetry in evolutionary matrix games. Typically, it is assumed that two players are distinguished by only their strategies, and our goal in this chapter is to relax this assumption.
Specifically, we study matrix games in which the players' payoffs depend on (i) their locations (ecological asymmetry) and (ii) the players themselves (genotypic asymmetry). For games with ecological asymmetry, the payoff to a player at location i against a player at location j depends on both i and j (in addition to their strategies). For games with genotypic asymmetry, this payoff depends on the strategies of the players as well as their background traits (e.g. "strong" or "weak"). On large, regular networks, we show that each of these types of asymmetry can be resolved by a symmetric game if the intensity of selection is weak. Thus, under these conditions, one can study the dynamics using the theory of symmetric games.

Chapter 4 considers a notion of structural symmetry for evolutionary games. Specifically, we examine how population structure, mutation rates, and payoffs affect symmetries of evolutionary processes. In a homogeneous process, any two rare-mutant states are mathematically equivalent. We show that non-transitive population structure, asymmetric mutation rates, and asymmetric payoffs can all result in distinctions between two rare-mutant states. In particular, even symmetric games on regular graphs need not be homogeneous, which we demonstrate using the 12-vertex Frucht graph. Furthermore, we show that in populations of any finite size, it is (in general) necessary for the population structure to be arc-transitive in order for asymmetric games to be resolved by symmetric games in the limit of weak selection. This result extends the main observation of Chapter 3 to populations of any finite size (as opposed to just large populations).

In Chapter 5, we present a formal mathematical framework for stochastic selection processes. The main components of a stochastic selection process are a population state space, an aggregate payoff function, and an update rule.
A population state space contains information about the population such as its spatial structure, the mutation rates of different locations, and background phenotypes. An aggregate payoff function takes this information and combines it with the strategies of the players to assign a payoff (fitness) to each player. The update rule then uses these payoffs (fitness values) to update the players' strategies and the state of the population. This framework, which includes populations of varying size, helps to demonstrate the distinctions between classical and evolutionary game theory. Moreover, we use this framework to argue that the state space of an evolutionary process is naturally a quotient space, which is constructed to take advantage of symmetries and eliminate the dependence on how the players are labeled within the population.

Chapter 6 focuses on repeated games and represents a slightly different topic from the earlier chapters. Since here we deal with repeated interactions between the same players and not with evolving populations, it belongs under the umbrella of classical game theory. Specifically, we study when a player can unilaterally enforce linear payoff relationships in repeated interactions and extend a surprising result of Press and Dyson (2012) for the iterated Prisoner's Dilemma. We show that these autocratic strategies exist for discounted games with measurable action spaces, which allows one to consider autocratic strategies in games with a continuous range of actions, for instance. Using the continuous Donation Game as an example, we show that there exist simple two-point strategies that allow a player to enforce linear payoff relationships while playing only two actions throughout the game, despite the fact that the opponent might have a continuous range of actions available to use. However, passing from the classical Prisoner's Dilemma to the continuous Donation Game results in a loss of several Nash equilibria among autocratic strategies.
Thus, autocratic strategies in the continuous version of this social dilemma represent a departure from those of the two-action setting.

In Chapter 7, we show that, furthermore, autocratic strategies exist in asynchronous games with alternating moves. Based on the results in Chapter 6 for games with simultaneous moves, it is perhaps not overly surprising that these strategies exist for strictly-alternating games since the order in which the players move is known at the outset of the game. What is more surprising is the fact that autocratic strategies exist for randomly-alternating games for which, in each round, player X moves with probability ω_X and her opponent moves with probability 1 − ω_X. We use this result to show that in dominance hierarchies, subordinate players often have more autocratic strategies available to them than do their dominant opponents. Therefore, autocratic strategies can be useful in exerting one-sided control over asymmetric interactions.

Chapter 2
Structure coefficients and strategy selection in multiplayer games

Evolutionary processes based on two-player games such as the Prisoner's Dilemma or Snowdrift Game are abundant in evolutionary game theory. These processes, including those based on games with more than two strategies, have been studied extensively under the assumption that selection is weak. However, games involving more than two players have not received the same level of attention. To address this issue, and to relate two-player games to multiplayer games, we introduce a notion of reducibility for multiplayer games that captures what it means to break down a multiplayer game into a sequence of interactions with fewer players. We discuss the role of reducibility in structured populations, and we give examples of games that are irreducible in any population structure.
Since the known conditions for strategy selection, otherwise known as σ-rules, have been established only for two-player games with multiple strategies and for multiplayer games with two strategies, we extend these rules to multiplayer games with many strategies to account for irreducible games that cannot be reduced to those simpler types of games. In particular, we show that the number of structure coefficients required for a symmetric game with d-player interactions and n strategies grows in d like d^(n−1). Our results also cover a type of ecologically asymmetric game based on payoff values that are derived not only from the strategies of the players, but also from their spatial positions within the population.

2.1 Introduction

Over the past several years, population structure has become an integral part of the foundations of evolutionary game theory (Nowak et al., 2009). Among the popular settings for evolutionary processes in finite populations are networks (Lieberman et al., 2005; Ohtsuki et al., 2006; Szabó and Fáth, 2007; Taylor et al., 2007; Lehmann et al., 2007), sets (Tarnita et al., 2009a), and demes (Taylor et al., 2001; Wakeley and Takahashi, 2004; Rousset, 2004; Ohtsuki, 2010a; Hauert and Imhof, 2012). A common way in which to study such a process is to use it to define an ergodic Markov chain and then examine the equilibrium distribution of this chain. One could take a birth-death or imitation process on a network, for example, and incorporate a small strategy mutation rate, ε, that eliminates the monomorphic absorbing states (Fudenberg and Imhof, 2006). This Markov chain will have a unique stationary distribution, µ, which is a probability distribution over the set of all strategy configurations on the network.
This set of all possible configurations – the state space of the Markov chain – is quite large and difficult to treat directly; one seeks a way in which to use this distribution to determine which strategies are more successful than others.

A prototypical process in evolutionary game theory is the (frequency-dependent) Moran process. Classically, this process takes place in a well-mixed population and proceeds as follows: At each discrete time step, a player is selected for reproduction with probability proportional to fitness. A member of the population is then chosen for death uniformly at random and is replaced by the offspring of the individual chosen for reproduction (Moran, 1958). The fitness of a player is calculated from a combination of (i) the outcome of a sequence of two-player games and (ii) the intensity of selection (Nowak et al., 2004; Taylor et al., 2004). For example, in the donation game, each player in the population is either a cooperator (strategy C) or a defector (strategy D). A cooperator provides a benefit b to the opponent at a cost of c, whereas a defector provides no benefit and incurs no cost (Ohtsuki et al., 2006; Sigmund, 2010). The payoff matrix for this game is

           C       D
    C   b − c    −c
    D     b       0        (2.1)

A player adds the payoffs from the two-player interactions with his or her neighbors to arrive at a total payoff value, π. If β ≥ 0 represents the intensity of selection, then the total payoff value is converted to fitness via f := exp{βπ} (Traulsen et al., 2008; Maciejewski et al., 2014). For our purposes, we will assume that β ≪ 1, i.e. that selection is weak. This assumption is necessary here for technical reasons, but it turns out to be quite sensible for many applications of game theory to biology since most organisms possess multiple traits and no single trait (strategy) is expected to have a particularly strong influence on fitness (Tarnita et al., 2011; Wu et al., 2013a). This frequency-dependent Moran process has two absorbing states: all C and all D.
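A single update step of the frequency-dependent Moran process just described, with the exponential payoff-to-fitness map, can be sketched as follows (a minimal well-mixed simulation; all function and variable names are ours):

```python
import math
import random

def moran_step(pop, b, c, beta):
    """One birth-death update of the frequency-dependent Moran process
    in a well-mixed population. pop is a list of 'C'/'D' strategies."""
    N = len(pop)
    i = pop.count('C')
    # Donation-game payoffs (matrix 2.1) averaged over the N - 1 opponents.
    payoff = {
        'C': ((i - 1) * (b - c) + (N - i) * (-c)) / (N - 1),
        'D': (i * b) / (N - 1),
    }
    # Exponential payoff-to-fitness map f = exp(beta * payoff).
    fitness = [math.exp(beta * payoff[s]) for s in pop]
    parent = random.choices(range(N), weights=fitness)[0]  # birth ~ fitness
    dead = random.randrange(N)                             # death uniform
    pop[dead] = pop[parent]          # offspring inherits parent's strategy
    return pop

random.seed(1)
pop = ['C'] * 5 + ['D'] * 5
for _ in range(1000):
    pop = moran_step(pop, b=2.0, c=1.0, beta=0.01)
# With no mutation, the chain eventually absorbs at all-C or all-D.
```

Adding the mutation rate ε discussed next amounts to replacing the inheritance line with a coin flip that, with probability ε, assigns the offspring the other strategy.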
If ε > 0 is a small mutation rate, one may define a modified process by insisting that the offspring inherit the strategy of the parent with probability 1 − ε and take on a novel strategy with probability ε. The small mutation rate eliminates the monomorphic absorbing states of the process, resulting in a Markov chain with a unique stationary distribution. This setup is readily extended to structured populations, and is precisely the type of process we wish to study here.

Rather than looking at the long-run stationary distribution of the Markov chain itself, one may instead consider just the proportion of each strategy in this equilibrium. This approach ignores particular strategy configurations and suggests a natural metric for strategy success: when does selection increase the abundance, on average, of a particular strategy? Such conditions are determined by a multitude of factors: game type, payoff values, update rule, selection strength, mutation rates, population structure, etc. In the limit of weak selection, these conditions are the so-called "σ-rules" or selection conditions, which are given in terms of linear combinations of the payoff values of the game, with coefficients independent of these values (Tarnita et al., 2009b, 2011; Van Cleve and Lehmann, 2013). The coefficients appearing in these linear combinations are often referred to as "structure coefficients" to indicate their dependence on the structure of the evolutionary process. What is remarkable about these σ-rules is that they can be stated with very few assumptions on the underlying evolutionary process and do not change when the payoff values are varied.

Tarnita et al. (2011) state a selection condition for games with n strategies and payoff values that are determined by an n × n matrix. This rule is extremely simple and is stated as a sum of pairwise payoff comparisons, weighted by three structure coefficients.
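For reference, in the special case of two strategies the σ-rule takes the single-coefficient form σa + b > c + σd for a payoff matrix with entries a, b, c, d (Tarnita et al., 2009b). A minimal check (the choice σ = 1, which holds for the Moran process in a large well-mixed population, is used purely for illustration):

```python
def sigma_rule(a, b, c, d, sigma):
    """Two-strategy sigma-rule: weak selection favors the first strategy
    over the second iff sigma*a + b > c + sigma*d (Tarnita et al., 2009b)."""
    return sigma * a + b > c + sigma * d

# Donation game with benefit 2 and cost 1: payoff matrix ((1, -1), (2, 0)).
# With sigma = 1 the rule reduces to risk dominance (a + b > c + d), and
# cooperation is not favored in a large well-mixed population:
assert not sigma_rule(1, -1, 2, 0, sigma=1)
```

Population structure enters only through σ; the payoff entries themselves never affect the coefficient, which is the sense in which these rules separate "structure" from "game."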
Implicit in this setup is the assumption that the aggregate payoff to each player is determined by the individual payoffs from his or her pairwise interactions. In many cases, this assumption is reasonable: The Prisoner's Dilemma, Snowdrift Game, Hawk-Dove Game, etc. are defined using pairwise interactions, and a focal player's total payoff is simply defined to be the sum of the pairwise payoffs. This method of accounting for payoffs essentially defines a multiplayer game from smaller games, and this property of the multiplayer game produces a simple selection condition regardless of the number of strategies.

Multiplayer games that are not defined using pairwise interactions have also been studied in evolutionary dynamics (Broom et al., 1997; Broom, 2003; Kurokawa and Ihara, 2009; Ohtsuki, 2014; Peña et al., 2014). One of the most prominent multiplayer games arising in the study of social dilemmas is the public goods game. In the public goods game, each player chooses an amount to contribute to a common pool and incurs a cost for doing so; the pool ("public good") is then distributed equally among the players. In the linear public goods game, this common pool is enhanced by a factor r > 1 and distributed evenly among the players: if players 1, …, d contribute x_1, …, x_d ∈ [0,∞), respectively, then the payoff to player i is

    u_i(x_1, …, x_d) = r(x_1 + ⋯ + x_d)/d − x_i        (2.2)

where the first term is player i's share of the public good and the second is player i's contribution (Archetti and Scheuring, 2012). In well-mixed populations, the linear dependence of these distributions on the individual contributions allows one to break down the multiplayer payoff into a sum of payoffs from pairwise interactions, one for each opponent faced by a focal player (Hauert and Szabó, 2003). Thus, in this setting, the linear public goods game is equivalent to a two-player matrix game and the study of its dynamics does not require a theory of multiplayer games.
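This pairwise decomposition can be verified numerically. The sketch below (our own construction; the per-opponent split shown is one standard choice, not notation from this chapter) checks that the payoff of Eq. (2.2) equals a sum of two-player payoffs, one per opponent:

```python
def pgg_payoff(contribs, i, r):
    """Linear public goods payoff of Eq. (2.2)."""
    d = len(contribs)
    return r * sum(contribs) / d - contribs[i]

def pairwise_payoff(xi, xj, r, d):
    """Per-opponent payoff in one pairwise decomposition: a share of the
    opponent's contribution plus a share of one's own net contribution cost."""
    return (r / d) * xj + ((r / d - 1) / (d - 1)) * xi

contribs = [0.0, 1.0, 2.5, 4.0]   # arbitrary contributions, d = 4 players
r, d = 3.0, len(contribs)
for i, xi in enumerate(contribs):
    total = sum(pairwise_payoff(xi, xj, r, d)
                for j, xj in enumerate(contribs) if j != i)
    assert abs(total - pgg_payoff(contribs, i, r)) < 1e-12
```

Summing the per-opponent terms over the d − 1 opponents recovers r(Σ_j x_j)/d − x_i exactly, which is what makes the linear game reducible in a well-mixed group.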
We will see that this phenomenon is fortuitous and does not hold for general multiplayer games in structured populations. In particular, a theory of two-player matrix games suffices to describe only a subset of the possible evolutionary games.

Wu et al. (2013b) generalize the rule of Tarnita et al. (2011) and establish a σ-rule for multiplayer games with two strategies. The number of coefficients needed to define this σ-rule grows linearly with the number of players required for each interaction. The strategy space for the linear public goods game could be chosen to be {0, x} for some x > 0, indicating that each player has the choice to (i) contribute nothing or (ii) contribute a nonzero amount, x, to the public good. The rule of Wu et al. (2013b) applies to this situation, and the number of coefficients appearing in the selection condition is a linear function of the number of players in the interaction. On the other hand, knowing that the linear public goods game can be reduced to a sequence of pairwise games allows one to apply the rule of Tarnita et al. (2011), giving a selection condition with three structure coefficients regardless of the number of players required for each public goods interaction. Thus, knowing that a game can be reduced in this way can lead to a simpler selection condition.

Assuming that payoff values can be determined by a single payoff function, such as the function defined by a 2 × 2 payoff matrix, is somewhat restrictive. Typically, a game is played by a fixed number of players, say d, and there is a function associated to this game that sends each strategy profile of the group to a collection of payoff values, one for each player. For degree-homogeneous population structures, i.e.
structures in which every player has k neighbors for some k ≥ 1, one can define a d-player game for d ≤ k + 1 and insist that each player derives a total payoff value by participating in a collection of d-player interactions with neighbors. On the other hand, if degree-heterogeneous structures are considered instead, then some players may have many neighbors while others have few. Rather than fixing d first, one can often define a family of payoff functions, parametrized by d. For example, suppose that the number of players involved in an interaction varies, and that each player chooses a strategy from some finite subset S of [0,∞). For a group of size d and strategy profile (x_1, …, x_d) ∈ [0,∞)^d, define the payoff function for player i to be

    u_i^d(x_1, …, x_i, …, x_d) := r(x_1 ⋯ x_d)^(1/d) − x_i        (2.3)

where r > 1. This payoff function may be thought of as defining a nonlinear version of the public goods game, but for the moment the interpretation is not important. We will later see that these payoff functions cannot be reduced in the same way that those of the linear public goods game can, regardless of the population structure. This property suggests that each function in this family must be considered separately in the setting of evolutionary game theory. If player i has k_i neighbors, then each player may initiate a (k_i + 1)-player game with all of his or her neighbors and receive a payoff according to u_i^(k_i+1). Thus, a family of payoff functions like this one is relevant for studying evolutionary games in degree-heterogeneous structured populations since one function from this collection is needed for each distinct integer appearing as a degree in the population structure (see Figure 2.1).
It follows that a general evolutionary game could potentially involve many distinct payoff functions.

Even for games in which the payoff values for pairwise interactions are determined by a 2×2 matrix, the overall payoff values may be calculated in a nonlinear fashion due to synergy and discounting of accumulated benefits in group interactions (Hauert et al., 2006), and this nonlinearity can complicate conditions for one strategy to outperform another (Li et al., 2014). For example, if π_1, ..., π_{d−1} are the payoff values from d − 1 pairwise interactions between a focal player and his or her neighbors, then the total payoff to the focal player might be ∑_{k=1}^{d−1} δ^{k−1} π_k, where δ is some discounting factor satisfying 0 < δ < 1. It might also be the case that δ > 1, in which case δ is not a discounting factor but rather a synergistic enhancement of the contributions. In each of these situations, the overall payoff is not simply the sum of the payoffs from all pairwise interactions. This observation, combined with the example of the linear public goods game, raises an important question: when is a multiplayer game truly a multiplayer game, and not a game whose payoffs are derived from simpler interactions? To address this question, we introduce the notion of reducibility of payoff functions. Roughly speaking, a payoff function is reducible if it is the sum of payoff functions for games involving fewer players. Our focus is on games that are irreducible, i.e. games that cannot be broken down in a manner similar to that of the linear public goods game. We give several examples of irreducible games,

Figure 2.1: Each player initiates an interaction with all of his or her neighbors, and the players in this group receive a payoff according to the family of payoff functions {u^d}_{d≥2}, where u_i (x_1, ..., x_i, ..., x_d) = r (x_1 ··· x_d)^{1/d} − x_i. The total payoff value to a particular player is calculated as the sum of the payoffs from each interaction in which he or she is involved.
These total payoff values are indicated in the diagram. For this network, the functions u^2, u^3, and u^4 are needed to calculate the total payoffs.

including perturbations of the linear public goods game that are irreducible for any number of players. The existence of such perturbations illustrates that one can find irreducible games that are "close" to reducible games, and indeed one can define irreducible games from straightforward modifications of reducible games.

In the absence of reductions to smaller games, new σ-rules are needed in order to determine the success of chosen strategies. For games with many strategies and multiplayer interactions, these rules turn out to be quite complicated. We show how many structure coefficients appear in the selection condition for these games and give several examples of explicit σ-rules. In particular, for d-player interactions with n strategies, we show that the number of structure coefficients appearing in these rules grows like a polynomial of degree n − 1 in d. If d = 2 or n = 2, these rules recover the results of Tarnita et al. (2011) and Wu et al. (2013b), respectively. Although the selection conditions are concise for pairwise interactions with many strategies and for multiplayer interactions with two strategies, explicit calculations of structure coefficients can be difficult even for simple population structures (Gokhale and Traulsen, 2011; van Veelen and Nowak, 2012; Wu et al., 2013b; Du et al., 2014). Here we do not treat the issue of calculating structure coefficients (although we do some calculations for special cases), but instead focus on extending the σ-rules to account for more complicated games.
More complicated games generally require more structure coefficients, and we quantify precisely the nature of this relationship.

2.2 Reducible games

We begin by looking at multiplayer interactions in the simplest type of population:

2.2.1 Well-mixed populations

In a finite, well-mixed population, each player is a neighbor of every other player. In the language of evolutionary graph theory, the interaction matrix of a well-mixed population is simply a complete graph, and edges between nodes indicate who interacts with whom. Suppose that the population size is N = d, and let u_i : S^d → R be the payoff function for player i in a d-player game, i.e. reflecting interactions with all other members of the population. Although the population is well-mixed, it may still be the case that the payoff to player i depends on i. If this d-player game can be "reduced," it should ideally be composed of several games with fewer players, and it should be possible to derive the payoffs from interactions in smaller groups. For example, if payoff values from pairwise interactions are determined by the matrix

         A   B
    A  ( a   b )
    B  ( c   d ),    (2.4)

then the payoff to a focal player using A against j opponents using A (and d − 1 − j opponents using B) is

    a_j = j a + (d − 1 − j) b.    (2.5)

In this setting, the payoff to a focal individual depends on only (i) his or her strategy and (ii) the number of opponents playing each strategy, so the function u defines a d-player game. By construction, this d-player payoff function was formed by adding together d − 1 payoff values from 2×2 games, so it can be "reduced" into a sequence of pairwise interactions. Of course, this type of reducibility is stronger than one can hope for in general since it requires all interactions in the smaller games to involve only two players. Starting with a d-player payoff function, a natural question to consider is whether or not the d-player interaction can be reduced into a sequence of smaller (but not necessarily two-player) games.
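As a quick sanity check of Eqs. 2.4–2.5, the following sketch accumulates the pairwise payoffs of a focal A-player and confirms that the total equals j·a + (d − 1 − j)·b for every j. The matrix entries and group size are made up for illustration.

```python
# Verify that the d-player payoff built from a 2x2 matrix reduces to Eq. 2.5.
a, b = 3.0, 1.0   # illustrative payoffs to an A-player against A and against B
d = 5             # illustrative group size

def pairwise_payoff(opponent_plays_A):
    """Payoff to a focal A-player from one pairwise interaction."""
    return a if opponent_plays_A else b

for j in range(d):  # j opponents play A, the other d-1-j play B
    opponents = [True] * j + [False] * (d - 1 - j)
    total = sum(pairwise_payoff(o) for o in opponents)  # accumulated payoffs
    assert total == j * a + (d - 1 - j) * b             # Eq. 2.5
print("Eq. 2.5 reproduced for j = 0, ...,", d - 1)
```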
The following definition of reducibility in well-mixed populations captures this idea:

Definition 1 (k-reducibility). u is k-reducible if (i) there exists a collection of payoff functions for groups of size m ≤ k,

    { { v^{{i_1, ..., i_m}} : S^m → R^m }_{{i_1, ..., i_m} ⊆ {1, ..., d}} }_{m=2}^{k},    (2.6)

such that for each i = 1, ..., d and (s_1, ..., s_d) ∈ S^d,

    u_i (s_1, ..., s_d) = ∑_{m=2}^{k} ∑_{{i_1, ..., i_{m−1}} ⊆ {1, ..., d} − {i}} v_i^{{i, i_1, ..., i_{m−1}}} (s_i, s_{i_1}, ..., s_{i_{m−1}}),    (2.7)

and (ii) k is the smallest positive integer for which (i) holds.

The reducibility condition of Definition 1 says that the d-player game may be broken down into a collection of subgames, each with at most k players, but that it cannot be broken down any further. The sum over {i_1, ..., i_{m−1}} ⊆ {1, ..., d} − {i} appearing in this definition simply means that we sample all subsets of size m − 1 from the neighbors of the focal player. The order in which these players are sampled is irrelevant; we care only about each subset as a whole. The quantity v_i^{{i, i_1, ..., i_{m−1}}} (s_i, s_{i_1}, ..., s_{i_{m−1}}) is the payoff to player i in the m-player subgame when he or she plays s_i and player i_j plays s_{i_j}, where i_j is the jth interaction partner of player i. These smaller games are well-defined since the population is well-mixed: each player can participate in a multiplayer interaction with any subset of the population. We say that a game is simply reducible if it is k-reducible for some k < d, and irreducible otherwise.

Although Definition 1 is stated for arbitrary payoff functions, we will mainly work with symmetric payoff functions:

Definition 2 (symmetric payoff function). A payoff function u for a d-player game is symmetric if, for each i ∈ {1, ..., d},

    u_i (s_1, ..., s_d) = u_{π^{−1}(i)} (s_{π(1)}, ..., s_{π(d)})    (2.8)

whenever (s_1, ..., s_d) ∈ S^d and π ∈ S_d, where S_d is the group of permutations on d letters.
π^{−1} is just the inverse permutation of π, and π^{−1}(i) is the player using strategy s_i after rearrangement according to π. In other words, a game is symmetric if the payoffs depend on only the strategies being played and not on identities, types, or locations of the players. Intuitively, if a symmetric game is reducible into a sequence of smaller games, then these smaller games should also be symmetric. Indeed, if u is symmetric and reducible, then the functions v^{{i, i_1, ..., i_{m−1}}} need not themselves be symmetric, but they can be replaced by symmetric functions:

Proposition 1. If u is reducible and symmetric, then the functions v^{{i_1, ..., i_{m−1}}} may be chosen so that they are symmetric and depend only on m (as opposed to the particular choice of opponents). That is, the overall d-player game may be broken down into a sequence of smaller symmetric games, one for each number of opponents.

A proof of Proposition 1 may be found in §2.5. The basic idea behind the proof is that one may exploit the symmetry of u to average over the asymmetric functions v^{{i_1, ..., i_{m−1}}} and thus obtain "symmetrized" versions of these functions.

Remark 1. Proposition 1 simplifies the process of showing that a game is reducible. It is often easier to first write down a reduction via smaller asymmetric games rather than directly establishing a reduction via symmetric games. One may simply establish the existence of the asymmetric subgames of Definition 1, and then the reducibility to symmetric subgames follows from the proposition. In the proof of Proposition 1, we give explicit symmetrizations of the payoff functions v_i^{{i, i_1, ..., i_{m−1}}}, which can be quite complicated in general.

Ohtsuki (2014) defines the notion of degree for symmetric multiplayer games with two strategies: Suppose that S = {a, b}, and let a_j (resp. b_j) denote the payoff to an a-player (resp. b-player) against j opponents using a and d − 1 − j opponents using b.
There are unique polynomials p(j) and q(j) in j of degree at most d − 1 such that a_j = p(j) and b_j = q(j), and the degree of the game is defined to be max{deg p, deg q}. This concept of degree is closely related to our notion of reducibility:

Proposition 2. If n = 2, then a game is k-reducible if and only if its degree is k − 1.

The proof of Proposition 2 may be found in §2.5.

What is particularly noteworthy about this equivalence is that while degree is defined only for symmetric games with two strategies, k-reducibility is defined for multiplayer – even asymmetric – games with any number of strategies. Therefore, we can use Proposition 2 to extend the notion of degree to much more general games:

Definition 3 (degree of game). The degree of a game is the value k for which the game is (k + 1)-reducible.

One could easily generalize the definition of reducibility to allow for the aggregate payoff values to be nonlinear functions of the constituent payoffs, but this generalization is somewhat unnatural in evolutionary game theory. Typically, the total payoff to a player who is involved in multiple interactions is calculated by either accumulating or averaging the payoffs from individual interactions. On spatially-homogeneous structures, the evolutionary dynamics of these two methods are the same; the difference amounts to a scaling of the intensity of selection (Maciejewski et al., 2014). Moreover, Maciejewski et al. (2014) show that the method of averaging on a heterogeneous network results in an asymmetry in the games being played: players at locations with different numbers of neighbors are effectively playing different games. This asymmetry is not present if payoff values are accumulated, suggesting that for general population structures the latter method is more natural. Using the method of accumulated payoffs, the total payoff to a player is calculated as the sum of all of the payoffs from the multiplayer games in which he or she is a participant.
If these multiplayer games can be reduced, then the total payoff should retain this property of being the sum of the payoffs from smaller games, giving the (linear) notion of reducibility.

In the limit of weak selection, it can also be argued that more general notions of reducibility are equivalent to our definition of reducibility: To simplify notation, suppose that some payoff value Π may be written as a function, f, of a collection of payoff values, π_1, ..., π_m, from smaller games. In evolutionary dynamics, it is not unreasonable to assume that attenuating (or enhancing) the effect of Π by a factor of β ≥ 0 is the same as multiplying each of π_1, ..., π_m by β. That is, βΠ = βf(π_1, ..., π_m) = f(βπ_1, ..., βπ_m) for each β ≥ 0. If f is also assumed to be continuously differentiable at 0, then f must necessarily be a linear function of π_1, ..., π_m. Thus, if β is interpreted as the intensity of selection, then, in the limit of weak selection, any such function f must be linear, which implies that more general notions of reducibility involving such an f are captured by Definition 1. We will see in the next section that selection conditions require similar assumptions that make these two requirements on f necessary in order to derive a σ-rule for the reduced game.

With a definition of reducibility in place, we now focus on specific examples of multiplayer games. Among the simplest multiplayer games is the public goods game. In a public goods game, each player chooses an investment level from the interval [0, ∞) and contributes this amount to a common pool. (In general, the strategy space for each player in the public goods game is S = [0, K], where K is a maximum investment level.) In the linear public goods game, the total contribution to this pool is enhanced by a factor of r > 1 and distributed equally among the players. Thus, if x_j denotes player j's contribution to the public good, then the payoff function for player i is given by Eq. 2.2.
The linearity of this payoff function allows the game to be broken down into a sequence of pairwise interactions:

Figure 2.2: The reducibility of the linear public goods game (for the central player) is illustrated here with four players. The central player (red) invests x, while the players at the periphery (blue) invest y, z, and w, respectively. The payoff to the central player for this interaction is a sum of pairwise payoffs, one for each neighbor. Each of these two-player games is a public goods game with multiplicative factor r/2, and in each of the smaller games the central player contributes one third of his or her total contribution, x.

Example 1 (Linear public goods). The function v : S^2 → R^2 defined by

    v_i (x_i, x_j) := (2r/d) [ ( (1/(d−1)) x_i + x_j ) / 2 ] − (1/(d−1)) x_i    (2.9)

satisfies u_i^1 (x_1, ..., x_d) = ∑_{j≠i} v_i (x_i, x_j) for each i = 1, ..., d. Therefore, this (symmetric) linear public goods game is reducible to a sequence of pairwise (symmetric) games (Hauert and Szabó, 2003).

As the next example shows, the introduction of nonlinearity into the payoff functions does not guarantee that the resulting game is irreducible:

Example 2 (Nonlinear public goods). A natural way to introduce nonlinearity into the linear public goods game is to raise the average contribution, (x_1 + ··· + x_d)/d, to some power, say k. However, if k is an integer and 2 ≤ k ≤ d − 2, then the payoff function

    u_i (x_1, ..., x_d) := r ( (x_1 + ··· + x_d)/d )^k − x_i    (2.10)

is reducible (see §2.5 for details).

Another convenient way in which to introduce nonlinearity into the payoff functions of the public goods game is to replace the average, (x_1 + ··· + x_d)/d, by the Hölder (or "generalized") average

    ( (x_1^p + ··· + x_d^p)/d )^{1/p}    (2.11)

for some p ∈ [−∞, +∞]. In the limiting cases,

    lim_{p→−∞} ( (x_1^p + ··· + x_d^p)/d )^{1/p} = min{x_1, ..., x_d};    (2.12a)
    lim_{p→0}  ( (x_1^p + ··· + x_d^p)/d )^{1/p} = (x_1 ··· x_d)^{1/d};    (2.12b)
    lim_{p→+∞} ( (x_1^p + ··· + x_d^p)/d )^{1/p} = max{x_1, ..., x_d}    (2.12c)

(see Bullen, 2003).

Remark 2.
Several special cases of the Hölder public goods game have been previously considered in the literature. For p = 1, this game is simply the linear public goods game (see Eq. 2.2), which has an extensive history in the economics literature. Hirshleifer (1983) refers to the cases p = −∞ and p = +∞ as the "weakest-link" and "best-shot" public goods games, respectively. The weakest-link game can be used to describe collaborative dike building, in which the collective benefit is determined by the player who contributes the least to the public good (see Hirshleifer, 1983, p. 371). The search for a cure to a disease could be modeled as a "best-shot" game since many institutions may invest in finding this cure, but the public good (the cure) is provided by the first institution to succeed. The Hölder public goods game provides a continuous interpolation between these two extreme scenarios.

The Hölder average gives an alternative way to take the mean of players' contributions and, for almost every p, leads to a public goods game that is irreducible for any number of players:

Example 3 (Nonlinear public goods). For p ∈ [−∞, +∞], consider the payoff functions

    u_i^p (x_1, ..., x_d) := r ( (x_1^p + ··· + x_d^p)/d )^{1/p} − x_i,    (2.13)

where the expression multiplied by r is the Hölder average of x_1, ..., x_d and r is some positive constant. We refer to the game defined by these payoff functions as the Hölder public goods game. The payoff functions of Example 1 are simply u_i^1. If p, q ∈ [−∞, +∞] and q < p, then, by Hölder's inequality,

    u_i^q (x_1, ..., x_d) ≤ u_i^p (x_1, ..., x_d)    (2.14)

with equality if and only if x_1 = x_2 = ··· = x_d. It is shown in §2.5 that these public goods games are irreducible if and only if 1/p ∉ {1, 2, ..., d − 2}. In particular, if 1/p ∉ {1, 2, ...}, then u_i^p is irreducible for each d.

Figure 2.3: u^p (x, 1, 1, 1) versus x for r = 1.5 and seven values of p (p = 0.1, 0.4, 0.7, 1.0, 1.3, 1.6, 1.9).
In each of these four-player public goods games, the investment levels of the opponents are 1 unit. The payoff functions for the nonlinear public goods games (p ≠ 1) are slight perturbations of the payoff function for the linear public goods game (p = 1) as long as p is close to 1.

This example exhibits the (reducible) linear public goods game as a limit of irreducible public goods games in the sense that lim_{p→1} u_i^p = u_i^1 (see Figure 2.3).

Example 4 (Nonlinear public goods). Rather than modifying the average contribution, one could also add a term to the payoff from the linear public goods game to obtain an irreducible game. For example, suppose that in addition to the payoff received from the linear public goods game, each player receives an added bonus if and only if every player contributes a nonzero amount to the public good. If ε > 0, this bonus term could be defined as ε ∏_{j=1}^{d} x_j so that the payoff function for the game is

    u_i (x_1, ..., x_d) := u_i^1 (x_1, ..., x_d) + ε ∏_{j=1}^{d} x_j.    (2.15)

In fact, this payoff function is irreducible for all ε ∈ R − {0}. The details may be found in §2.5, but the intuition is simple: a function that requires simultaneous information about each player (the cross-term x_1 ··· x_d) should not be able to be broken down into a collection of functions that account for only subsets of the population. Letting ε → 0, we view the linear public goods game once again as a limit of irreducible games.

2.2.2 Structured populations

A fundamental difficulty in extending Definition 1 to structured populations arises from the fact that it may not even be possible to define an m-player interaction from a d-player interaction if m < d. The square lattice is a simple example of a structured population that is not well-mixed. The lattice is homogeneous in the sense that it is vertex-transitive, meaning roughly that the network looks the same from every node (Taylor et al., 2007).
The square lattice has infinitely many nodes, but this property will not be important for our discussion of reducibility. Suppose that each player in the population chooses an investment level from [0, K], where K > 0 is the maximum amount that can be invested by a single player in the public good. With this investment level as a strategy, every player in the population initiates a public goods game with each of his or her four neighbors. For each game, the five players involved receive a payoff, and a focal player's total payoff is defined to be the sum of the payoffs from each of the five games in which he or she is a participant: one initiated by the focal player, and four initiated by the neighbors of this player.

The total payoff to a fixed focal player depends not only on that focal player's neighbors, but also on the neighbors of the focal player's neighbors. In other words, the payoff to the focal player is determined by those players who are within two links of the focal player on the lattice. However, for each of the five public goods interactions that contribute to the overall payoff value, the only strategies that matter are those of five players: a central player, who initiates the interaction, and the four neighbors of this central player. These interactions should be examined separately to determine the reducibility of the game; intuitively, this public goods game is "reducible" if each player initiating an interaction can instead initiate a sequence of smaller games that collectively preserve the payoffs to each player involved. For each of these interactions, the interacting group appears to be arranged on a star network, i.e. a network with a central node and four leaves connected to the central node, and with no links between the leaves (see Figure 2.4).
Therefore, although the square lattice is homogeneous, an analysis of the star network – a highly heterogeneous structure – is required in order to consider reducibility in this type of population.

Example 5 (Linear public goods game on a star network). A star network consists of a central node, 0, connected by an undirected link to ℓ leaf nodes 1, ..., ℓ (Lieberman et al., 2005; Hadjichrysanthou et al., 2011). There are no other links in the network. Suppose that player i in this network uses strategy x_i ∈ S ⊆ [0, ∞). Each player initiates a linear public goods game that is played by the initiator and all of his or her neighbors.

Figure 2.4: The total payoff to the focal player (purple) depends on his or her immediate neighbors (blue) and the players who are two links away (turquoise). Each of the four-leaf star networks indicates an interaction initiated by the central player. The total payoff to player 0 is then calculated as the sum of the payoffs from these five interactions.

Thus, the player at the central node is involved in d = ℓ + 1 interactions, while each player at a leaf node is involved in 2 interactions. A focal player's payoff is calculated by adding up the payoffs from each interaction in which he or she is involved. From the perspective of the player at the central node, the interaction he or she initiates is reducible to a sequence of pairwise interactions by Example 1.
However, on a star network no two leaf nodes share a link, and thus players at these nodes cannot interact directly with one another; any interaction between two players at leaf nodes must be mediated by the player at the central node.

A natural question one can ask in this context is the following: can the d-player game initiated by the central player be broken down into a sequence of symmetric m-player interactions (with m < d), all involving the central player, in such a way that if player i adds all of the payoffs from the interactions in which he or she is involved, the result will be u_i (x_0, x_1, ..., x_ℓ)? That is, can the central player initiate a sequence of strictly smaller interactions from which, collectively, each player receives the payoff from the d-player linear public goods game? The answer, as it turns out, is that this d-player game can be broken down into a combination of a two-player game and a three-player game, i.e. it is 3-reducible: For i ∈ {0, 1, ..., ℓ}, let

    α_i (x_i, x_j) := −(r/d)(d − 3) [ (1/(d−1)) x_i + x_j ] + ((d−3)/(d−1)) x_i;    (2.16a)
    β_i (x_i, x_j, x_k) := (r/d) [ (2/(d−1)) x_i + x_j + x_k ] − (2/(d−1)) x_i.    (2.16b)

α_i is the payoff function for a two-player game, β_i is the payoff function for a three-player game, and both of these games are symmetric. If the player at the central node (using strategy x_0) initiates a two-player game with every neighbor according to the function α and a three-player game with every pair of neighbors according to the function β, then the total payoff to the central player for these interactions is

    ∑_{i=1}^{ℓ} α_0 (x_0, x_i) + ∑_{1≤i<j≤ℓ} β_0 (x_0, x_i, x_j) = u_0^1 (x_0, x_1, ..., x_ℓ).    (2.17)

Each leaf player is involved in one of the two-player interactions and d − 2 of the three-player interactions initiated by the player at the central node. The payoff to leaf player i ∈ {1, ..., ℓ} for playing strategy x_i in these interactions is

    α_i (x_i, x_0) + ∑_{1≤j≤ℓ, j≠i} β_i (x_i, x_0, x_j) = u_i^1 (x_0, x_1, ..., x_ℓ).
(2.18)

In fact, α_i and β_i are the unique symmetric two- and three-player payoff functions satisfying equations (2.17) and (2.18). Thus, the d-player linear public goods interaction initiated by the player at the central node can be reduced to a sequence of symmetric two- and three-player games that preserves the payoffs for each player in the population.

It follows from Example 5 that the reducibility of a game is sensitive to population structure. The star network is a very simple example, but it already illustrates that the way in which a game is reduced can be complicated by removing opportunities for players to interact in smaller groups. The linear public goods game can be reduced to a two-player game in well-mixed populations (it is 2-reducible), but the same reduction does not hold on a star network. The best that can be done for this game on the star is a reduction to a combination of a two-player game and a three-player game. From Figure 2.4 and our previous discussion of the lattice, we see that this reduction is also the best that can be done on the square lattice. In particular, it is not possible to reduce the linear public goods game to a two-player game on the square lattice.

What is perhaps more useful in the present context is the fact that a game that is irreducible in a well-mixed population remains irreducible in structured populations. Indeed, as we observed, any player initiating an interaction can choose to interact with any subset of his opponents, so from the perspective of this focal player the population might as well be well-mixed. With Examples 3 and 4 in mind, we now turn our attention to the dynamics of multiplayer games in structured populations.

2.3 Selection conditions

Let S = {A_1, ..., A_n} be the finite strategy set available to each player.
Consider an evolutionary process that updates the strategies played by the population based on the total payoff each player receives for playing his or her current strategy, and let β ≥ 0 denote the intensity of selection. Assuming that all payoff values may be calculated using an n×n payoff matrix (a_{ij}), there is the following selection condition:

Theorem 1 (Tarnita et al., 2011). Consider a population structure and an update rule such that (i) the transition probabilities are infinitely differentiable at β = 0 and (ii) the update rule is symmetric for the n strategies. Let

    a_{**} = (1/n) ∑_{s=1}^{n} a_{ss};  a_{r*} = (1/n) ∑_{s=1}^{n} a_{rs};  a_{*r} = (1/n) ∑_{s=1}^{n} a_{sr};  a = (1/n²) ∑_{s,t=1}^{n} a_{st}.    (2.19)

(a_{**} is the expected payoff to a strategic type against an opponent using the same strategy, a_{r*} is the expected payoff to strategic type r when paired with a random opponent, a_{*r} is the expected payoff to a random opponent facing strategic type r, and a is the expected payoff to a random player facing a random opponent.) In the limit of weak selection, the condition that strategy r is selected for is

    σ_1 (a_{rr} − a_{**}) + σ_2 (a_{r*} − a_{*r}) + σ_3 (a_{r*} − a) > 0,    (2.20)

where σ_1, σ_2, and σ_3 depend on the model and the dynamics, but not on the entries of the payoff matrix, (a_{ij}). Moreover, the parameters σ_1, σ_2, and σ_3 do not depend on the number of strategies as long as n ≥ 3.

The statement of this theorem is slightly different from the version appearing in (Tarnita et al., 2011), due to the fact that one further simplification may be made by eliminating a nonzero structure coefficient. We do not treat this simplification here since (i) the resulting condition depends on which coefficient is nonzero and (ii) only a single coefficient may be eliminated in this way, which will not be a significant simplification when we consider more general games.
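To make the ingredients of Theorem 1 concrete, the following sketch evaluates Eqs. 2.19–2.20 for an arbitrary 3×3 payoff matrix. The matrix entries and the values of σ_1, σ_2, σ_3 are placeholders, since the true structure coefficients depend on the population structure and update rule:

```python
# Evaluate the averages in Eq. 2.19 and the selection condition in Eq. 2.20.
A = [[2.0, 0.0, 1.0],   # illustrative payoff matrix (a_ij)
     [3.0, 1.0, 0.0],
     [0.0, 2.0, 1.0]]
n = len(A)
r = 0  # index of the focal strategy

a_ss = sum(A[s][s] for s in range(n)) / n                         # a_**
a_rs = sum(A[r][s] for s in range(n)) / n                         # a_r*
a_sr = sum(A[s][r] for s in range(n)) / n                         # a_*r
a_mean = sum(A[s][t] for s in range(n) for t in range(n)) / n**2  # a

sigma1 = sigma2 = sigma3 = 1.0  # placeholder structure coefficients
lhs = sigma1 * (A[r][r] - a_ss) + sigma2 * (a_rs - a_sr) + sigma3 * (a_rs - a_mean)
print("selected for" if lhs > 0 else "not selected for")
```

For this particular matrix and these placeholder coefficients, the left-hand side is slightly negative, so strategy r would not be favored; with different structure coefficients the verdict could change, which is precisely why the σ's matter.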
This theorem generalizes the main result of Tarnita et al. (2009b) for n = 2 strategies, which is a two-parameter condition (or one-parameter if one coefficient is assumed to be nonzero).

For d-player games with n = 2 strategies, we also have:

Theorem 2 (Wu et al., 2013b). Consider an evolutionary process with the same assumptions on the population structure and update rule as in Theorem 1. For a game with two strategies, A and B, let a_j and b_j be the payoff values to a focal player using A (resp. B) against j players using A. In the limit of weak selection, the condition that strategy A is selected for is

    ∑_{j=0}^{d−1} σ_j (a_j − b_{d−1−j}) > 0    (2.21)

for some structure coefficients σ_0, σ_1, ..., σ_{d−1}.

It is clear from these results that for n = 2, the extension of the selection conditions from two-player games to multiplayer games comes at the cost of additional structure coefficients. Our goal is to extend the selection conditions to multiplayer games with n ≥ 3 strategies and quantify the cost of doing so (in terms of the number of structure coefficients required by the σ-rules).

2.3.1 Symmetric games

Examples 3 and 4 show that a single game in a structured population can require multiple payoff functions, even if the game is symmetric: In a degree-heterogeneous structure, there may be some players with two neighbors, some with three neighbors, etc. If each player is required to play an irreducible Hölder public goods game with all of his or her neighbors, then we need a d-player payoff function u^p for each d appearing as the number of neighbors of some player in the population. With this example in mind, suppose that

    ⋃_{j∈J} { u^j : S^{d(j)} → R }    (2.22)

is the collection of all distinct, symmetric payoff functions needed to determine the payoff values to the players, where J is some finite indexing set and d(j) is the number of participants in the interaction defined by u^j.
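The collection in Eq. 2.22 contains one payoff function per distinct interaction size in the population. As a small illustration, on a hypothetical four-node graph (the adjacency data below is an assumption), and using the convention of Eq. 2.3 that a player with k neighbors initiates a (k + 1)-player game, the required group sizes d(j) can be read directly off the node degrees:

```python
# For a degree-heterogeneous graph, list the distinct group sizes d(j) = degree + 1
# for which a payoff function u^j is needed, as in Eq. 2.22. The graph is made up.
neighbors = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1], 3: [1]}
group_sizes = sorted({len(nbrs) + 1 for nbrs in neighbors.values()})
print(group_sizes)  # [2, 3, 4] -> three distinct payoff functions are needed
```

This mirrors the network of Figure 2.1, where the functions u^2, u^3, and u^4 suffice to compute all total payoffs.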
The main result of this section requires the transition probabilities of the process to be smooth at β = 0 when viewed as functions of the selection intensity, β. This smoothness requirement is explained in §2.6. We also assume that the update rule is symmetric with respect to the strategies. These requirements are the same as those of Tarnita et al. (2011), and they are satisfied by the most salient evolutionary processes (birth-death, death-birth, imitation, pairwise comparison, Wright-Fisher, etc.).

Theorem 3. In the limit of weak selection, the σ-rule for a chosen strategy involves

    ∑_{j∈J} ( d(j) + ∑_{m=1}^{d(j)−1} (d(j) − m) q(m, n−2) )    (2.23)

structure coefficients, where q(m, k) denotes the number of partitions of m with at most k parts.

The proof of Theorem 3 is based on a condition that guarantees the average abundance of the chosen strategy increases with β as long as β is small. Several symmetries common to evolutionary processes and symmetric games are then used to simplify this condition. The details may be found in §2.6.

As a special case of this general setup, suppose the population is finite, structured, and that each player initiates an interaction with every (d−1)-player subset of his or her neighbors. That is, there is only one payoff function and every interaction requires d players. (Implicitly, it is assumed that the population is structured in such a way that each player has at least d − 1 neighbors.) A player may be involved in more games than he or she initiates; for example, a focal player's neighbor may initiate a d-player interaction, and the focal player will receive a payoff from this interaction despite the possibility of not being neighbors with each participant. The payoffs from each of these games are added together to form this player's total payoff. In this case, we have the following result:

Corollary 1.
If only a single payoff function is required, u : S^d → R, then the selection condition involves

    d + ∑_{m=1}^{d−1} (d − m) q(m, n−2)    (2.24)

structure coefficients.

Since the notation is somewhat cumbersome, we relegate the explicit description of the rule of Theorem 3 to §2.6 (Eq. 2.96). Here, we give examples illustrating some special cases of this rule:

Example 6. For a symmetric game with pairwise interactions and two strategies,

    2 + ∑_{m=1}^{1} (2 − m) q(m, 2−2) = 2 + 0 = 2    (2.25)

structure coefficients are needed, recovering the main result of Tarnita et al. (2009b).

Example 7. For a symmetric game with pairwise interactions and n ≥ 3 strategies,

    2 + ∑_{m=1}^{1} (2 − m) q(m, n−2) = 2 + ∑_{m=1}^{1} (2 − m) q(m, 1) = 2 + 1 = 3    (2.26)

structure coefficients are needed, which is the result of Tarnita et al. (2011).

Example 8. For a symmetric game with d-player interactions and two strategies,

    d + ∑_{m=1}^{d−1} (d − m) q(m, 2−2) = d + 0 = d    (2.27)

structure coefficients are needed, which gives Theorem 2 of Wu et al. (2013b).

Strictly speaking, the number of structure coefficients we obtain in each of these specializations is one greater than the known result. This discrepancy is due only to an assumption that one of the coefficients is nonzero, which allows a coefficient to be eliminated by division (Tarnita et al., 2009b; Wu et al., 2013b).

Example 9. Suppose that each player is involved in a series of two-player interactions with payoff values (a_{ij}) and a series of three-player interactions with payoff values (b_{ijk}). Let a_{**}, a_{r*}, a_{*r}, and a be as they were in the statement of Theorem 1. Similarly, we let

    b_{***} := (1/n) ∑_{s=1}^{n} b_{sss};  b_{r*•} := (1/n²) ∑_{s,t=1}^{n} b_{rst};  b := (1/n³) ∑_{s,t,u=1}^{n} b_{stu},    (2.28)

and so on. (b_{***} is the expected payoff to a strategic type against opponents of the same strategic type, b_{r*•} is the expected payoff to strategic type r when paired with a random pair of opponents, b is the expected payoff to a random player facing a random pair of opponents, etc.)
The selection condition for strategy $r$ is then
$$\begin{aligned}
&\sigma_1 (a_{rr} - a_{**}) + \sigma_2 (a_{r*} - a_{*r}) + \sigma_3 (a_{r*} - a) \\
&\quad + \sigma_4 (b_{rrr} - b_{***}) + \sigma_5 (b_{rr*} - b_{**r}) + \sigma_6 (b_{r**} - b_{*rr}) \\
&\quad + \sigma_7 (b_{rr*} - b_{**\bullet}) + \sigma_8 (b_{r*\bullet} - b_{*r\bullet}) + \sigma_9 (b_{r**} - b_{*\bullet\bullet}) + \sigma_{10} (b_{r*\bullet} - b) > 0,
\end{aligned} \qquad (2.29)$$
where $\{\sigma_i\}_{i=1}^{10}$ is the set of structure coefficients for the process.

We may also extend the result of Wu et al. (2013b) to games with more than two strategies:

Example 10. For a symmetric game with $d$-player interactions and three strategies,
$$d + \sum_{m=1}^{d-1} (d-m)\, q(m, 3-2) = \frac{d(d+1)}{2} \qquad (2.30)$$
structure coefficients appear in the selection condition. In fact, we can write down this condition explicitly without much work: Suppose that $S = \{a, b, c\}$ and let $a_{i,j}$, $b_{i,j}$, and $c_{i,j}$ be the payoff to a focal player playing $a$, $b$, and $c$, respectively, when $i$ opponents are playing $a$, $j$ opponents are playing $b$, and $d-1-i-j$ opponents are playing $c$. Strategy $a$ is favored in the limit of weak selection if and only if
$$\sum_{0 \le i+j \le d-1} \sigma(i,j) \left( \begin{aligned}
&(a_{d-1-i,\,i} - b_{i,\,d-1-i}) + (a_{j,\,d-1-j} - b_{d-1-j,\,j}) \\
&+ (a_{j,\,i} - b_{i,\,j}) + (a_{d-1-i,\,0} - c_{d-1-j,\,0}) \\
&+ (a_{j,\,d-1-i-j} - c_{i,\,d-1-i-j}) + (a_{j,\,0} - c_{i,\,0})
\end{aligned} \right) > 0 \qquad (2.31)$$
for some collection of $d(d+1)/2$ structure coefficients, $\{\sigma(i,j)\}_{0 \le i+j \le d-1}$.

Let $\varphi_n(d)$ be the number of structure coefficients needed for the condition in Corollary 1. The following result generalizes what was observed in Example 10 and by Wu et al. (2013b):

Proposition 3. For fixed $n \ge 2$, $\varphi_n(d)$ grows in $d$ like $d^{n-1}$. That is, there exist constants $c_1, c_2 > 0$ such that
$$c_1 \le \lim_{d \to \infty} \frac{\varphi_n(d)}{d^{n-1}} \le c_2. \qquad (2.32)$$

Proof. We establish the result by induction on $n$. We know that the result holds for $n = 2$ (Wu et al., 2013b). Suppose that $n \ge 3$ and that the result holds for $n-1$. As a result of the recursion $q(m,k) = q(m,k-1) + q(m-k,k)$ for $q$, we see that $\varphi_n(d) = \varphi_{n-1}(d) + \varphi_n(d - (n-2)) - d$. Thus,
$$\varphi_n(d) = \sum_{k=0}^{\lfloor d/(n-2) \rfloor - 1} \varphi_{n-1}\big( d - k(n-2) \big) + \varphi_n\!\left( d - \left\lfloor \frac{d}{n-2} \right\rfloor (n-2) \right) - \left\lfloor \frac{d}{n-2} \right\rfloor d, \qquad (2.33)$$
and it follows from the inductive hypothesis that the result also holds for $n$.

Wu et al.
(2013b) give examples of selection conditions illustrating the linear growth predicted for $n = 2$. For $n = 3$ and $n = 4$, we give in Figure 2.5 the number of distinct structure coefficients for an irreducible $d$-player game in a population of size $d$ for the pairwise comparison process (see Szabó and Tőke, 1998; Traulsen et al., 2007). Although the number of distinct coefficients is slightly less than the number predicted by Corollary 1 in these examples, their growth (in the number of players) coincides with Proposition 3. That is, growth in the number of structure coefficients is quadratic for three-strategy games and cubic for four-strategy games. Even for these small population sizes, one can already see that the selection conditions become quite complicated. The details of these calculations may be found in §2.7.

Figure 2.5: The number of distinct structure coefficients vs. the number of players, $d$, in an irreducible $d$-player interaction in a population of size $N = d$. By Proposition 3, the blue circles grow like $d^2$ and $d^3$ in (a) ($n = 3$) and (b) ($n = 4$), respectively. In both of these figures, the process under consideration is a pairwise comparison process. The actual results, whose calculations are described in §2.7, closely resemble the predicted results, suggesting that in general, one cannot expect fewer than $\approx d^{n-1}$ distinct structure coefficients in a $d$-player game with $n$ strategies.

2.3.2 Asymmetric games

As a final step in increasing the complexity of multiplayer interactions, we consider payoff functions that do not necessarily satisfy the symmetry condition of Definition 2.
One way to introduce such an asymmetry into evolutionary game theory is to insist that payoffs depend not only on the strategic types of the players, but also on the spatial locations of the participants in an interaction. For example, in a two-strategy game, the payoff matrix for a row player at location $i$ against a column player at location $j$ is
$$\begin{array}{c|cc}
 & A & B \\ \hline
A & a_{ij},\, a_{ji} & b_{ij},\, c_{ji} \\
B & c_{ij},\, b_{ji} & d_{ij},\, d_{ji}
\end{array} \qquad (2.34)$$
This payoff matrix defines two payoff functions: one for the player at location $i$, and one for the player at location $j$. More generally, for each fixed group of size $d$ involved in an interaction, there are $d$ different payoff functions required to describe the payoffs resulting from this $d$-player interaction. For example, if players at locations $i_1, \dots, i_d$ are involved in a $d$-player game, then it is not necessarily the case that $u_{i_j} = u_{i_k}$ if $j \ne k$; for general asymmetric games, each of the payoff functions $u_{i_1}, \dots, u_{i_d}$ is required. Suppose that $J$ is a finite set that indexes the distinct groups involved in interactions. For each $j \in J$, this interaction involves $d(j)$ players, which requires $d(j)$ distinct payoff functions. Thus, there is a collection of functions
$$\bigcup_{j \in J} \left\{ u^j_i : S^{d(j)} \to \mathbb{R} \right\}_{i=1}^{d(j)} \qquad (2.35)$$
that describes all possible payoff values in the population, where $u^j_i$ denotes the payoff function for the $i$th player of the $j$th group involved in an interaction.

Let $S(-,-)$ denote the Stirling number of the second kind, i.e.
$$S(m, k) = \begin{cases} \dfrac{1}{k!} \displaystyle\sum_{j=0}^{k} (-1)^{k-j} \binom{k}{j} j^m & 0 \le k \le m, \\ 0 & k > m. \end{cases} \qquad (2.36)$$
In words, $S(m,k)$ is the number of ways in which to partition a set of size $m$ into exactly $k$ parts. Therefore, the sum $\sum_{k=0}^{m} S(m,k)$ is the total number of partitions of a set of size $m$, which is denoted by $B_m$ and referred to as the $m$th Bell number (see Stanley, 2009).

Theorem 4.
Assuming the transition probabilities are smooth at $\beta = 0$ and that the update rule is symmetric with respect to the strategies, the number of structure coefficients in the selection condition for a chosen strategy is
$$\sum_{j \in J} d(j) \sum_{k=0}^{n} \Big( S\big(1 + d(j), k\big) - S\big(d(j), k\big) \Big). \qquad (2.37)$$

The proof of Theorem 4 may be found in §2.6, along with an explicit description of the condition (equation 2.91). Note that if the number of strategies, $n$, satisfies $n \ge 1 + \max_{j \in J} d(j)$, then
$$\sum_{k=0}^{n} S\big(1 + d(j), k\big) = B_{1+d(j)}; \qquad (2.38a)$$
$$\sum_{k=0}^{n} S\big(d(j), k\big) = B_{d(j)}. \qquad (2.38b)$$
From these equations, (2.37) reduces to
$$\sum_{j \in J} d(j) \Big( B_{1+d(j)} - B_{d(j)} \Big). \qquad (2.39)$$
Therefore, for a fixed set of interaction sizes $\{d(j)\}_{j \in J}$, the number of structure coefficients grows with the number of strategies, $n$, until $n = 1 + \max_{j \in J} d(j)$; after this point, the number of structure coefficients is independent of the number of strategies.

Interestingly, the selection condition for asymmetric games gives some insight into the nature of the structure coefficients for symmetric games:

Example 11. Suppose that the population structure is an undirected network without self-loops (Ohtsuki et al., 2006), and let $(w_{ij})$ be the adjacency matrix of this network. If the interactions are pairwise and the payoffs depend on the vertices occupied by the players, then for $n \ge 3$ strategies there are
$$B_{2+1} - B_{1+1} = 5 - 2 = 3 \qquad (2.40)$$
structure coefficients needed for each ordered pair of neighbors in the network. Suppose that
$$M_{ij} := \begin{array}{c|cccc}
 & A_1 & A_2 & \cdots & A_n \\ \hline
A_1 & a^{ij}_{11},\, a^{ji}_{11} & a^{ij}_{12},\, a^{ji}_{21} & \cdots & a^{ij}_{1n},\, a^{ji}_{n1} \\
A_2 & a^{ij}_{21},\, a^{ji}_{12} & a^{ij}_{22},\, a^{ji}_{22} & \cdots & a^{ij}_{2n},\, a^{ji}_{n2} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
A_n & a^{ij}_{n1},\, a^{ji}_{1n} & a^{ij}_{n2},\, a^{ji}_{2n} & \cdots & a^{ij}_{nn},\, a^{ji}_{nn}
\end{array} \qquad (2.41)$$
is the payoff matrix for a player at vertex $i$ against a player at vertex $j$.
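As a quick numerical sanity check on the counts above, the following sketch (added for illustration; the function names are ours) implements $S(m,k)$ via the alternating sum in (2.36) and verifies the per-pair count $B_3 - B_2 = 3$ from display (2.40):

```python
from math import comb, factorial

def stirling2(m, k):
    """Stirling number of the second kind via the alternating sum in
    Eq. 2.36: the number of partitions of an m-set into exactly k parts."""
    if k > m:
        return 0
    total = sum((-1) ** (k - j) * comb(k, j) * j ** m for j in range(k + 1))
    return total // factorial(k)

def bell(m):
    """The m-th Bell number: total number of partitions of an m-set."""
    return sum(stirling2(m, k) for k in range(m + 1))

# Pairwise asymmetric interactions (d = 2, n >= 3): each ordered pair of
# neighbors contributes B_3 - B_2 = 5 - 2 = 3 structure coefficients.
print(bell(3) - bell(2))  # 3
```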
If
$$a^{ij}_{**} = \frac{1}{n}\sum_{s=1}^{n} a^{ij}_{ss}; \qquad a^{ij}_{r*} = \frac{1}{n}\sum_{s=1}^{n} a^{ij}_{rs}; \qquad a^{ij}_{*r} = \frac{1}{n}\sum_{s=1}^{n} a^{ij}_{sr}; \qquad a^{ij} = \frac{1}{n^2}\sum_{s,t=1}^{n} a^{ij}_{st} \qquad (2.42)$$
are the "localized" versions of the strategy averages given in display (2.19), then strategy $r$ is favored in the limit of weak selection if and only if
$$\sum_{i=1}^{N} \sum_{\{j \,:\, w_{ij} = 1\}} \Big( \sigma^{ij}_1 \big( a^{ij}_{rr} - a^{ij}_{**} \big) + \sigma^{ij}_2 \big( a^{ij}_{r*} - a^{ij}_{*r} \big) + \sigma^{ij}_3 \big( a^{ij}_{r*} - a^{ij} \big) \Big) > 0. \qquad (2.43)$$
Whereas there are 3 structure coefficients in this model if there is no payoff-asymmetry, there are
$$3 \sum_{i,j=1}^{N} w_{ij} = 6 \times \big( \text{\# of links in the network} \big) \qquad (2.44)$$
structure coefficients when the payoff matrices depend on the locations of the players. Of course, if we remove the asymmetry from this result and take $M_{ij}$ to be independent of $i$ and $j$ (so that $a^{ij}_{st} = a_{st}$ for each $s, t, i, j$), then the selection condition (2.43) takes the form
$$\sigma_1 (a_{rr} - a_{**}) + \sigma_2 (a_{r*} - a_{*r}) + \sigma_3 (a_{r*} - a) > 0 \qquad (2.45)$$
where
$$\sigma_1 = \sum_{i=1}^{N} \sum_{\{j \,:\, w_{ij} = 1\}} \sigma^{ij}_1; \qquad (2.46a)$$
$$\sigma_2 = \sum_{i=1}^{N} \sum_{\{j \,:\, w_{ij} = 1\}} \sigma^{ij}_2; \qquad (2.46b)$$
$$\sigma_3 = \sum_{i=1}^{N} \sum_{\{j \,:\, w_{ij} = 1\}} \sigma^{ij}_3. \qquad (2.46c)$$
In this way, the structure coefficients of Tarnita et al. (2011) are a sum of "local" structure coefficients.

2.4 Discussion

We have introduced a notion of reducibility in well-mixed populations that captures the typical way in which multiplayer games defined using a $2 \times 2$ payoff matrix "break down" into a sequence of pairwise games. Based on the usual methods of calculating total payoff values (through accumulation or averaging), a game should be irreducible if it cannot be broken down linearly into a sequence of smaller games. An irreducible game in a well-mixed population will remain irreducible in a structured population because population structure effectively restricts possibilities for interactions among the players. Although reducible games in well-mixed populations need not remain reducible when played in structured populations, the existence of irreducible games shows that, in general, one need not assume that a game may be broken down into a sequence of simpler interactions, regardless of the population structure.
This observation is not unexpected, but many of the classical games studied in evolutionary game theory are of the reducible variety.

As we observed with the linear public goods game, there are reducible games that may be perturbed slightly into irreducible games. For example, the "Hölder public goods" games demonstrate that it is possible to obtain irreducible games quite readily from reducible games. However, one must use caution: in the Hölder public goods game, the irreducibility of $u^p$ depends on the number of players if $1/p \in \{1, 2, \dots\}$. For such a value of $p$, interactions with sufficiently many participants may be simplified if the population is well-mixed. Therefore, perturbing a linear public goods game to obtain a nonlinear public goods game does not necessarily guarantee that the result will be irreducible for every type of population. The deformations we introduced turn out to be irreducible for almost every $p$, however, so one need not look hard for multiplayer public goods games that cannot be broken down.

Ohtsuki (2014) defines the notion of degree for a multiplayer game with two strategies. For this type of game, reducibility and degree are closely related in the sense that a game is $k$-reducible if and only if its degree is $k-1$. Degree has been defined only for symmetric multiplayer games with two strategies. On the other hand, reducibility makes sense for any multiplayer game, even asymmetric games with many strategies. As a result, we have extended the concept of degree to much more general games: the degree of a game is defined to be the value $k$ for which the game is $(k+1)$-reducible. Thus, the $d$-player irreducible games are precisely those whose degree is $d-1$; those built from pairwise interactions only are the games of degree 1.

We derived selection conditions, also known as $\sigma$-rules, for multiplayer games with any number of strategies.
In the limit of weak selection, the coefficients of these conditions are independent of payoff values. However, they are sensitive to the types of interactions needed to determine these payoff values. By fixing a $d$-player game and insisting that each player plays this game with every $(d-1)$-player subset of his or her neighbors, we arrive at a straightforward generalization of the known selection conditions of Tarnita et al. (2011) and Wu et al. (2013b). Of particular significance is the fact that the number of structure coefficients in a game with $n$ strategies grows in $d$ like $d^{n-1}$. Implicit in this setup is the assumption that each player has at least $d-1$ neighbors, which may or may not be the case. To account for a more general case in which each player simply plays a game with all of his or her neighbors, it is useful to know that one may define games that are always irreducible, independent of the number of players. There are corresponding selection conditions in this setting formed as a sum of selection conditions, one for each distinct group size of interacting players in the population. For each of these cases, we give a formula for the number of structure coefficients required by the selection condition; this number grows quickly with the number of players required for each game.

The payoff functions of the game are not always independent of the population structure. If $1/p \notin \{1, 2, \dots\}$, then the public goods game with payoff functions $u^p_i$ is irreducible for any number of players. If the population structure is a degree-heterogeneous network, and if each player initiates this public goods game with all of his or her neighbors, then there is an irreducible $k$-player game for each $k$ appearing as a degree of a node in the network. We established that the number of $\sigma$-coefficients depends on the number of players in each game, so in this example the number of coefficients depends on the structure of the population as well.
This result contrasts with the known result for two-player games, in which the values of the structure coefficients may depend on the network but the number of structure coefficients is independent of the network structure.

Our focus has been on the rules for evolutionary success for a general game and not on the explicit calculation of the structure coefficients (although calculations for small populations are manageable; see §2.7). We extended these rules to account for more complicated types of interactions in evolutionary game theory. These selection conditions are determined by the signs of linear combinations of the payoff values, weighted by structure coefficients that are independent of these payoff values. Our general rules quantify precisely the price that is paid, in terms of additional structure coefficients, for relaxing the assumptions on the types of games played within a population. Based on the number of structure coefficients required for the selection conditions, Theorems 3 and 4 seem to be in contravention of the tenet that these rules should be simple. Indeed, the simplicity of the selection condition of Tarnita et al. (2011) appears to be something of an anomaly in evolutionary game theory given the vast expanse of evolutionary games one could consider. This observation was made by Wu et al. (2013b) using a special case of the setup considered here, but our results show that $\sigma$-rules can be even more complicated in general. The simplicity of the rule of Tarnita et al. (2011), however, is due to the number of structure coefficients required, not necessarily the structure coefficients themselves. In theory, these structure coefficients can be calculated by looking at any particular game. In practice, even for pairwise interactions, these parameters have proven difficult to calculate explicitly.
Due to the difficulties in determining structure coefficients, the fact that there are more of them for multiplayer games may not actually be much of a disadvantage. With an efficient method for calculating these values, the general $\sigma$-rules derived here allow one to explicitly relate strategy success to payoff values in the limit of weak selection for a wide variety of evolutionary games.

2.5 Methods: reducibility in well-mixed populations

Proof of Proposition 1. If $u$ is reducible, then we can find
$$\left\{ \left\{ v^{\{i_1, \dots, i_m\}} : S^m \to \mathbb{R}^m \right\}_{\{i_1, \dots, i_m\} \subseteq \{1, \dots, d\}} \right\}_{m=2}^{d-1} \qquad (2.47)$$
such that, for each $i = 1, \dots, d$ and $(s_1, \dots, s_d) \in S^d$,
$$u_i(s_1, \dots, s_d) = \sum_{m=2}^{d-1} \sum_{\{i_1, \dots, i_{m-1}\} \subseteq \{1, \dots, d\} - \{i\}} v^{\{i, i_1, \dots, i_{m-1}\}}_i \big( s_i, s_{i_1}, \dots, s_{i_{m-1}} \big). \qquad (2.48)$$
Suppose that $u$ is also symmetric. We will show that the right-hand side of Eq. 2.48 can be "symmetrized" in a way that preserves the left-hand side of the equation. If $S_d$ denotes the symmetric group on $d$ letters, then
$$\begin{aligned}
u_i(s_1, \dots, s_d) &= \frac{1}{(d-1)!} \sum_{\substack{\pi \in S_d \\ \pi(i) = i}} u_i\big( s_{\pi(1)}, \dots, s_{\pi(d)} \big) \\
&= \frac{1}{(d-1)!} \sum_{\substack{\pi \in S_d \\ \pi(i) = i}} \left( \sum_{m=2}^{d-1} \sum_{\{i_1, \dots, i_{m-1}\} \subseteq \{1, \dots, d\} - \{i\}} v^{\{i, i_1, \dots, i_{m-1}\}}_i \big( s_i, s_{\pi(i_1)}, \dots, s_{\pi(i_{m-1})} \big) \right) \\
&= \sum_{m=2}^{d-1} \sum_{\{i_1, \dots, i_{m-1}\} \subseteq \{1, \dots, d\} - \{i\}} \left( \frac{1}{(d-1)!} \sum_{\substack{\pi \in S_d \\ \pi(i) = i}} v^{\{i, i_1, \dots, i_{m-1}\}}_i \big( s_i, s_{\pi(i_1)}, \dots, s_{\pi(i_{m-1})} \big) \right).
\end{aligned} \qquad (2.49)$$
Since
$$\sum_{\substack{\pi \in S_d \\ \pi(i) = i}} v^{\{i, i_1, \dots, i_{m-1}\}}_i \big( s_i, s_{\pi(i_1)}, \dots, s_{\pi(i_{m-1})} \big) = (d-m)! \sum_{\{j_1, \dots, j_{m-1}\} \subseteq \{1, \dots, d\} - \{i\}} \sum_{\tau \in S_{m-1}} v^{\{i, i_1, \dots, i_{m-1}\}}_i \big( s_i, s_{j_{\tau(1)}}, \dots, s_{j_{\tau(m-1)}} \big), \qquad (2.50)$$
it follows that
$$u_i(s_1, \dots, s_d) = \sum_{m=2}^{d-1} \sum_{\{i_1, \dots, i_{m-1}\} \subseteq \{1, \dots, d\} - \{i\}} w^m_i \big( s_i; s_{i_1}, \dots, s_{i_{m-1}} \big), \qquad (2.51)$$
where
$$w^m_i \big( s_i; s_{i_1}, \dots, s_{i_{m-1}} \big) := \frac{(d-m)!}{(d-1)!} \sum_{\{j_1, \dots, j_{m-1}\} \subseteq \{1, \dots, d\} - \{i\}} \sum_{\tau \in S_{m-1}} v^{\{i, j_1, \dots, j_{m-1}\}}_i \big( s_i, s_{i_{\tau(1)}}, \dots, s_{i_{\tau(m-1)}} \big). \qquad (2.52)$$
(The semicolon in $w^m_i(s_i; s_{i_1}, \dots, s_{i_{m-1}})$ is used to distinguish the strategy of the focal player from the strategies of the opponents.) If $\pi$ is a transposition of $i$ and $j$, then the symmetry of $u$ implies that
$$u_i(s_1, \dots, s_i, \dots, s_j, \dots, s_d) = u_j(s_1, \dots, s_j, \dots, s_i, \dots, s_d).$$
(2.53)

Therefore, with $v^m := \frac{1}{d} \sum_{i=1}^{d} w^m_i$ for each $m$, we see that
$$u_i(s_1, \dots, s_d) = \sum_{m=2}^{d-1} \sum_{\{i_1, \dots, i_{m-1}\} \subseteq \{1, \dots, d\} - \{i\}} v^m \big( s_i; s_{i_1}, \dots, s_{i_{m-1}} \big). \qquad (2.54)$$
The payoff functions $v^m$ are clearly symmetric, so we have the desired result.

We now compare the notion of "degree" of a game (Ohtsuki, 2014) to reducibility. For $m \in \mathbb{R}$ and $k \in \mathbb{Z}_{\ge 0}$, consider the (generalized) binomial coefficient
$$\binom{m}{k} := \frac{m(m-1)\cdots(m-k+1)}{k!}. \qquad (2.55)$$
(We use this definition of the binomial coefficient so that we can make sense of $\binom{m}{k}$ if $m < k$.) A symmetric $d$-player game with two strategies is $k$-reducible if and only if there exist real numbers $\alpha^\ell_i$ and $\beta^\ell_i$ for $\ell = 1, \dots, k-1$ and $i = 0, \dots, \ell$ such that
$$a_j = \sum_{\ell=1}^{k-1} \sum_{i=0}^{\ell} \binom{j}{i} \binom{d-1-j}{\ell-i} \alpha^\ell_i; \qquad (2.56a)$$
$$b_j = \sum_{\ell=1}^{k-1} \sum_{i=0}^{\ell} \binom{j}{i} \binom{d-1-j}{\ell-i} \beta^\ell_i. \qquad (2.56b)$$
Here, $\ell$ denotes the number of opponents in a smaller game, and $\alpha^\ell_i$ (resp. $\beta^\ell_i$) is the payoff to an $a$-player (resp. $b$-player) against $i$ players using $a$ in an $(\ell+1)$-player game. Ohtsuki (2014) notices that both $a_j$ and $b_j$ can be written uniquely as polynomials in $j$ of degree at most $d-1$, and he defines the degree of the game to be the maximum of the degrees of these two polynomials. In order to establish a formal relationship between the reducibility of a game and its degree, we need the following lemma:

Lemma 1. If $q(j)$ is a polynomial in $j$ of degree at most $k \ge 1$, then there exist coefficients $\gamma^\ell_i$ with $\ell = 1, \dots, k$ and $i = 0, \dots, \ell$ such that for each $j = 0, \dots, d-1$,
$$q(j) = \sum_{\ell=1}^{k} \sum_{i=0}^{\ell} \binom{j}{i} \binom{d-1-j}{\ell-i} \gamma^\ell_i. \qquad (2.57)$$

Proof. If $k = 1$, then let $q(j) = c_0 + c_1 j$. For any collection $\gamma^\ell_i$, we have
$$\sum_{\ell=1}^{1} \sum_{i=0}^{\ell} \binom{j}{i} \binom{d-1-j}{\ell-i} \gamma^\ell_i = (d-1)\gamma^1_0 + \big( \gamma^1_1 - \gamma^1_0 \big) j, \qquad (2.58)$$
so we can set $\gamma^1_0 = c_0 / (d-1)$ and $\gamma^1_1 = c_0 / (d-1) + c_1$ to get the result. Suppose now that the lemma holds for polynomials of degree $k-1$ for some $k \ge 2$, and let
$$q(j) = c_0 + c_1 j + \cdots + c_k j^k \qquad (2.59)$$
with $c_k \ne 0$. If $\gamma^k_i = 0$ for $i = 0, \dots$
, $k-1$ and $\gamma^k_k = k!\, c_k$, then
$$q(j) - \sum_{i=0}^{k} \binom{j}{i} \binom{d-1-j}{k-i} \gamma^k_i \qquad (2.60)$$
is a polynomial in $j$ of degree at most $k-1$, and thus there exist coefficients $\gamma^\ell_i$ with
$$q(j) - \sum_{i=0}^{k} \binom{j}{i} \binom{d-1-j}{k-i} \gamma^k_i = \sum_{\ell=1}^{k-1} \sum_{i=0}^{\ell} \binom{j}{i} \binom{d-1-j}{\ell-i} \gamma^\ell_i \qquad (2.61)$$
by the inductive hypothesis. The lemma for $k$ follows, which completes the proof.

We have the following equivalence for two-strategy games:

Proposition 2. If $n = 2$, then a game is $k$-reducible if and only if its degree is $k-1$.

Proof. If the game is $k$-reducible, then at least one of the polynomials
$$a_j = \sum_{\ell=1}^{k-1} \sum_{i=0}^{\ell} \binom{j}{i} \binom{d-1-j}{\ell-i} \alpha^\ell_i; \qquad (2.62a)$$
$$b_j = \sum_{\ell=1}^{k-1} \sum_{i=0}^{\ell} \binom{j}{i} \binom{d-1-j}{\ell-i} \beta^\ell_i \qquad (2.62b)$$
must have degree $k-1$. Indeed, if they were both of degree at most $k-2$, then, by Lemma 1, one could find coefficients $\gamma^\ell_i$ and $\delta^\ell_i$ such that
$$a_j = \sum_{\ell=1}^{k-2} \sum_{i=0}^{\ell} \binom{j}{i} \binom{d-1-j}{\ell-i} \gamma^\ell_i; \qquad (2.63a)$$
$$b_j = \sum_{\ell=1}^{k-2} \sum_{i=0}^{\ell} \binom{j}{i} \binom{d-1-j}{\ell-i} \delta^\ell_i, \qquad (2.63b)$$
which would mean that the game is not $k$-reducible. Therefore, the degree of the game must be $k-1$. Conversely, if the degree of the game is $k-1$, then, again by Lemma 1, one can find coefficients $\alpha^\ell_i$ and $\beta^\ell_i$ satisfying (2.62a) and (2.62b). Since the polynomials in (2.63a) and (2.63b) are of degree at most $k-2$ in $j$, and at least one of $a_j$ and $b_j$ is of degree $k-1$, it follows that the game is not $k'$-reducible for any $k' < k$. In particular, the game is $k$-reducible, which completes the proof.

Note that we require $n = 2$ in Proposition 2 since "degree" is defined only for games with two strategies. (However, $k$-reducibility makes sense for any game.)

Proposition 4. If $p \in [-\infty, +\infty]$ and $S$ is a subset of $[0, \infty)$ that contains at least two elements, then the Hölder public goods game with payoff functions
$$u^p_i : S^d \to \mathbb{R} : (x_1, \dots, x_d) \mapsto r \left( \frac{x_1^p + \cdots + x_d^p}{d} \right)^{1/p} - x_i, \qquad (2.64)$$
is irreducible if and only if $1/p \notin \{1, 2, \dots, d-2\}$.

Proof. Without a loss of generality, we may assume that $S = \{a, b\}$ for some $a, b \in [0, \infty)$ with $b > a$. (If the game is irreducible when there are two strategies, then it is certainly irreducible when there are many strategies.)
Since
$$a_j = r \left( \frac{(j+1) a^p + (d-1-j) b^p}{d} \right)^{1/p} - a, \qquad (2.65)$$
we see that for $p \ne 0, \pm\infty$,
$$\frac{d^{d-1} a_j}{dj^{d-1}} = r \left( \frac{(j+1) a^p + (d-1-j) b^p}{d} \right)^{\frac{1}{p} - d + 1} \left( \frac{a^p - b^p}{d} \right)^{d-1} \prod_{i=0}^{d-2} \left( \frac{1}{p} - i \right), \qquad (2.66)$$
which vanishes if and only if $1/p \in \{1, 2, \dots, d-2\}$. Thus, for $p \ne 0, \pm\infty$, the degree of the Hölder public goods game is $d-1$ if and only if $1/p \notin \{1, 2, \dots, d-2\}$.

If $p = 0$ and $a \ne 0$, then
$$\frac{d^{d-1} b_j}{dj^{d-1}} = r b \left( \frac{a}{b} \right)^{j/d} \left( \ln \left( \frac{a}{b} \right)^{1/d} \right)^{d-1} \ne 0. \qquad (2.67)$$
If $p = 0$ and $a = 0$, then
$$b_j = r b \left( \frac{(1-j)(2-j)\cdots((d-1)-j)}{(d-1)!} \right) - b. \qquad (2.68)$$
Thus, for $p = 0$, the degree of the game is $d-1$. If $p = -\infty$, then
$$b_j = r(b-a) \left( \frac{(1-j)(2-j)\cdots((d-1)-j)}{(d-1)!} \right) + ra - b, \qquad (2.69)$$
and again the degree is $d-1$. Similarly, if $p = +\infty$, then
$$a_j = r(a-b) \left( \frac{j(j-1)\cdots(j-(d-2))}{(d-1)!} \right) + rb - a. \qquad (2.70)$$
Since we have shown that the degree of the Hölder public goods game is $d-1$ if and only if $1/p \notin \{1, 2, \dots, d-2\}$, the proof is complete by Proposition 2.

Remark 3. The irreducibility of the games in Examples 2 and 4 follows from the same type of argument used in the proof of Proposition 4.

2.6 Methods: selection conditions

We generalize the proof of Theorem 1 of Tarnita et al. (2011) to account for more complicated games:

2.6.1 Asymmetric games

Consider an update rule that is symmetric with respect to the strategies and has smooth transition probabilities at $\beta = 0$. Suppose that
$$\bigcup_{j \in J} \left\{ u^j_i : S^{d(j)} \to \mathbb{R} \right\}_{i=1}^{d(j)} \qquad (2.71)$$
is the collection of all distinct, irreducible payoff functions needed to determine payoff values to the players. The indexing set $J$ is finite if the population is finite, and for our purposes we do not need any other information about $J$. In a game with $n$ strategies, this collection of payoff functions is determined by an element $u \in \mathbb{R}^{\sum_{j \in J} \sum_{i=1}^{d(j)} n^{d(j)}}$. The assumptions on the process imply that the average abundance of strategy $r \in \{1, \dots, n\}$ may be written as a function
$$F_r : \mathbb{R}^{\sum_{j \in J} \sum_{i=1}^{d(j)} n^{d(j)}} \to \mathbb{R}. \qquad (2.72)$$
The coordinates of $\mathbb{R}^{\sum_{j \in J} \sum_{i=1}^{d(j)} n^{d(j)}}$ will be denoted by $a^j_i\big( s_i; s_{i_1}, \dots, s_{i_{d(j)-1}} \big)$, where $j \in J$,
$i \in \{1, \dots, d(j)\}$, and $s_i, s_{i_1}, \dots, s_{i_{d(j)-1}} \in \{1, \dots, n\}$. (The semicolon is used to separate the strategy of the focal player, $i$, from the strategies of the opponents.) By the chain rule, the selection condition for strategy $r$ has the form
$$0 < \frac{d}{d\beta} \bigg|_{\beta=0} F_r(\beta u) = \sum_{j \in J} \sum_{i=1}^{d(j)} \sum_{s_i; s_{i_1}, \dots, s_{i_{d(j)-1}} = 1}^{n} \frac{\partial F_r}{\partial a^j_i\big( s_i; s_{i_1}, \dots, s_{i_{d(j)-1}} \big)} \bigg|_{a=0} u^j_i\big( s_i; s_{i_1}, \dots, s_{i_{d(j)-1}} \big). \qquad (2.73)$$
Let $\alpha^{ij}_r\big( s_i; s_{i_1}, \dots, s_{i_{d(j)-1}} \big) := \dfrac{\partial F_r}{\partial a^j_i\big( s_i; s_{i_1}, \dots, s_{i_{d(j)-1}} \big)} \bigg|_{a=0}$. Since the update rule is symmetric with respect to the strategies, it follows that
$$\alpha^{ij}_r\big( s_i; s_{i_1}, \dots, s_{i_{d(j)-1}} \big) = \alpha^{ij}_{\pi(r)}\big( \pi(s_i); \pi(s_{i_1}), \dots, \pi(s_{i_{d(j)-1}}) \big) \qquad (2.74)$$
for each $\pi \in S_n$. We now need the following lemma:

Lemma 2. The group action of $S_n$ on $[n]^{\oplus m}$ defined by
$$\pi \cdot (i_1, \dots, i_m) = \big( \pi(i_1), \dots, \pi(i_m) \big) \qquad (2.75)$$
partitions $[n]^{\oplus m}$ into
$$\Big| [n]^{\oplus m} / S_n \Big| = \sum_{k=0}^{n} S(m, k) \qquad (2.76)$$
equivalence classes, where $S(-,-)$ is the Stirling number of the second kind.

Proof. Let $P\{1, \dots, m\}$ be the set of partitions of $\{1, \dots, m\}$ and consider the map
$$\Phi : [n]^{\oplus m} \to P\{1, \dots, m\} : (i_1, \dots, i_m) \mapsto \Phi(i_1, \dots, i_m), \qquad (2.77)$$
where, for $\Delta_j \in \Phi(i_1, \dots, i_m)$, we have $s, t \in \Delta_j$ if and only if $i_s = i_t$. This map satisfies
$$\Phi\big( \pi \cdot (i_1, \dots, i_m) \big) = \Phi(i_1, \dots, i_m) \qquad (2.78)$$
for any $\pi \in S_n$, and the induced map $\Phi' : [n]^{\oplus m} / S_n \to P\{1, \dots, m\}$ is injective. Thus,
$$\Big| [n]^{\oplus m} / S_n \Big| = \big| \operatorname{Im}(\Phi) \big| = \sum_{k=0}^{n} S(m, k), \qquad (2.79)$$
which completes the proof.

By the lemma, the number of equivalence classes of the relation induced by the group action
$$\varphi : S_n \times [n]^{1+d(j)} \to [n]^{1+d(j)} : \Big( \pi, \big( r, s_i, s_{i_1}, \dots, s_{i_{d(j)-1}} \big) \Big) \mapsto \big( \pi(r), \pi(s_i), \pi(s_{i_1}), \dots, \pi(s_{i_{d(j)-1}}) \big) \qquad (2.80)$$
is $\sum_{k=0}^{n} S\big( 1 + d(j), k \big)$.

Let $P_n\{-1, 0, 1, \dots, d(j)-1\}$ be the set of partitions of $\{-1, 0, 1, \dots, d(j)-1\}$ with at most $n$ parts. Here, $-1$ is the index of the strategy in question (in this case, $r$), $0$ is the index of the strategy of the focal player, and $1, \dots, d(j)-1$ are the indices of the strategies of the opponents. For
$\Delta \in P_n\{-1, 0, 1, \dots, d(j)-1\}$, we let $u^j_i(\Delta, r)$ denote the quantity obtained by averaging $u^j_i$ over the strategies, once for each equivalence class induced by $\Delta$ that does not contain $-1$. Stated in this way, the definition is perhaps difficult to digest, so we give a simple example to explain the notation: if $d(j)$ is even and
$$\Delta = \Big\{ \{-1, 0, 1\}, \{2, 3\}, \{4, 5\}, \dots, \{d(j)-2, d(j)-1\} \Big\}, \qquad (2.81)$$
then
$$u^j_i(\Delta, r) = n^{-d(j)/2 + 1} \sum_{s_1, \dots, s_{d(j)/2 - 1} = 1}^{n} u^j_i\big( r; r, s_1, s_1, s_2, s_2, \dots, s_{d(j)/2-1}, s_{d(j)/2-1} \big). \qquad (2.82)$$
Using this new notation, we can write
$$\frac{d}{d\beta} \bigg|_{\beta=0} F_r(\beta u) = \sum_{j \in J} \sum_{i=1}^{d(j)} \sum_{s_i; s_{i_1}, \dots, s_{i_{d(j)-1}} = 1}^{n} \alpha^{ij}_r\big( s_i; s_{i_1}, \dots, s_{i_{d(j)-1}} \big)\, u^j_i\big( s_i; s_{i_1}, \dots, s_{i_{d(j)-1}} \big) = \sum_{j \in J} \sum_{i=1}^{d(j)} \sum_{\Delta \in P_n\{-1, 0, 1, \dots, d(j)-1\}} \lambda^{ij}_r(\Delta)\, u^j_i(\Delta, r), \qquad (2.83)$$
where each $\lambda^{ij}_r(\Delta)$ is a linear function of the coefficients $\alpha^{ij}_r\big( s_i; s_{i_1}, \dots, s_{i_{d(j)-1}} \big)$ (the precise linear expression is unimportant). As a consequence of equation (2.74), we see that $\lambda^{ij}_r(\Delta) = \lambda^{ij}_{r'}(\Delta)$ for each $r, r' \in \{1, \dots, n\}$, so we may relabel these coefficients using the notation $\lambda^{ij}(\Delta)$. Since $\sum_{r=1}^{n} F_r(\beta u) = 1$, it follows that
$$0 = \sum_{r=1}^{n} \frac{d}{d\beta} \bigg|_{\beta=0} F_r(\beta u) = \sum_{j \in J} \sum_{i=1}^{d(j)} \sum_{\Delta \in P_n\{-1, 0, 1, \dots, d(j)-1\}} \lambda^{ij}(\Delta) \sum_{r=1}^{n} u^j_i(\Delta, r). \qquad (2.84)$$
Let $R_j := \big\{ \Delta \in P_n\{-1, 0, 1, \dots, d(j)-1\} : -1 \sim 0 \big\}$ and write
$$\sum_{\Delta \in P_n\{-1, 0, 1, \dots, d(j)-1\}} \lambda^{ij}(\Delta) \sum_{r=1}^{n} u^j_i(\Delta, r) = \sum_{\Delta \in R_j} \lambda^{ij}(\Delta) \sum_{r=1}^{n} u^j_i(\Delta, r) + \sum_{\Delta \in R^c_j} \lambda^{ij}(\Delta) \sum_{r=1}^{n} u^j_i(\Delta, r). \qquad (2.85)$$
For $\Delta \in R^c_j$, let $\Delta_{-1}$ and $\Delta_0$ be the sets containing $-1$ and $0$, respectively. Since $\Delta \in R^c_j$, we know that $\Delta_{-1} \cap \Delta_0 = \varnothing$. Now, let $\eta(\Delta)$ be the partition whose sets are equal to those in $\Delta$ with the exception of $\Delta_{-1}$ and $\Delta_0$, which are replaced by $\{-1\} \cup \Delta_0$ and $\Delta_{-1} - \{-1\}$, respectively. For example, if $d(j) = 5$ and
$$\Delta = \Big\{ \{-1, 2, 3\}, \{0, 1\}, \{4\} \Big\}, \qquad (2.86)$$
then
$$\eta(\Delta) = \Big\{ \{-1, 0, 1\}, \{2, 3\}, \{4\} \Big\}. \qquad (2.87)$$
This assignment defines a surjective map $\eta : R^c_j \to R_j$.

For fixed $j$ and $i$, consider the equivalence relation on $P_n\{-1, 0, 1, \dots, d(j)-1\}$ defined by
$$\Delta \sim \Delta' \iff \sum_{r=1}^{n} u^j_i(\Delta, r) = \sum_{r=1}^{n} u^j_i(\Delta', r). \qquad (2.88)$$
The map $\eta : R^c_j \to R_j$ satisfies $\Delta \sim \eta(\Delta)$ for $\Delta \in R^c_j$.
Therefore, it follows that
$$\frac{d}{d\beta} \bigg|_{\beta=0} F_r(\beta u) = \sum_{j \in J} \sum_{i=1}^{d(j)} \sum_{\Delta \in R^c_j} \lambda^{ij}(\Delta) \Big( u^j_i(\Delta, r) - u^j_i\big( \eta(\Delta), r \big) \Big), \qquad (2.89)$$
so the selection condition for strategy $r$ is
$$\sum_{j \in J} \sum_{i=1}^{d(j)} \sum_{\Delta \in R^c_j} \lambda^{ij}(\Delta) \Big( u^j_i(\Delta, r) - u^j_i\big( \eta(\Delta), r \big) \Big) > 0, \qquad (2.90)$$
which, for each $j$ and $i$, involves $\sum_{k=0}^{n} \big( S(1 + d(j), k) - S(d(j), k) \big)$ structure coefficients. To be consistent with the existing literature on selection conditions, we let $\sigma^{ij} := -\lambda^{ij}$ and write the selection condition as
$$\sum_{j \in J} \sum_{i=1}^{d(j)} \sum_{\Delta \in R^c_j} \sigma^{ij}(\Delta) \Big( u^j_i\big( \eta(\Delta), r \big) - u^j_i(\Delta, r) \Big) > 0. \qquad (2.91)$$
As long as $n \ge 1 + \max_{j \in J} d(j)$, the number of structure coefficients in selection condition (2.91) is independent of $n$. The same argument used in the proof of Theorem 1 of Tarnita et al. (2011) shows that each $\sigma^{ij}(\Delta)$ may be chosen to be independent of $n$ for all games with at least $1 + \max_{j \in J} d(j)$ strategies. One could calculate the structure coefficients for a game with exactly $1 + \max_{j \in J} d(j)$ strategies and still obtain the selection condition for games with fewer strategies in exactly the same way that Tarnita et al. (2011) deduce the result for $n = 2$ strategies from the result for $n \ge 3$ strategies.

2.6.2 Symmetric games

If we assume that the payoff functions of display (2.71) are all symmetric, then for each $j \in J$ the function $u^j_i$ is independent of $i$: the index $j$ is used to denote the particular group of players involved in the interaction (of size $d(j)$), and $u^j_i$ denotes the payoff function to the $i$th player in this group. If the game specified by $j$ is symmetric, then we need know only the payoff values to one of these players. Therefore, we assume in this setting that the collection of all payoff functions needed to determine payoff values is
$$\bigcup_{j \in J} \left\{ u^j : S^{d(j)} \to \mathbb{R} \right\}. \qquad (2.92)$$
Condition (2.91) then takes the form
$$\sum_{j \in J} \sum_{\Delta \in R^c_j} \sigma^j(\Delta) \Big( u^j\big( \eta(\Delta), r \big) - u^j(\Delta, r) \Big) > 0. \qquad (2.93)$$
For fixed $j$ and $\Delta \in R^c_j$, we again let $\Delta_{-1}$ and $\Delta_0$ be the sets in $\Delta$ that contain $-1$ and $0$, respectively.
The collection $\Delta - \{\Delta_{-1}, \Delta_0\}$ defines a partition of the number $1 + d(j) - |\Delta_{-1}| - |\Delta_0|$ whose parts are the sizes of the sets in the collection $\Delta - \{\Delta_{-1}, \Delta_0\}$. For example, if $d(j) = 12$ and
$$\Delta = \Big\{ \{-1, 2, 3\}, \{0, 1\}, \{4, 5\}, \{6, 7, 8\}, \{9, 10, 11\} \Big\}, \qquad (2.94)$$
then $\Delta$ defines the partition $2 + 3 + 3 = 8$ of the number $1 + 12 - 3 - 2 = 8$. An equivalence relation may then be defined on $R^c_j$ by letting $\Delta \sim \Delta'$ if and only if the following three conditions hold:

(a) $|\Delta_{-1}| = |\Delta'_{-1}|$;

(b) $|\Delta_0| = |\Delta'_0|$;

(c) the partitions of $1 + d(j) - |\Delta_{-1}| - |\Delta_0|$ defined by $\Delta$ and $\Delta'$ are the same.

The symmetry of $u^j$ implies that if $\Delta, \Delta' \in R^c_j$ and $\Delta \sim \Delta'$, then
$$u^j(\Delta, r) = u^j(\Delta', r); \qquad (2.95a)$$
$$u^j\big( \eta(\Delta), r \big) = u^j\big( \eta(\Delta'), r \big). \qquad (2.95b)$$
Therefore, condition (2.93) becomes
$$\sum_{j \in J} \sum_{\Delta \in R^c_j / \sim} \sigma^j(\Delta) \Big( u^j\big( \eta(\Delta), r \big) - u^j(\Delta, r) \Big) > 0. \qquad (2.96)$$
For each $j$, the number of structure coefficients contributed to this condition by payoff function $u^j$ is
$$\big| R^c_j / \sim \big| = \sum_{c + k = d(j) - 1} 1 + \sum_{0 \le c + k < d(j) - 1} q\big( d(j) - 1 - c - k,\; n - 2 \big) = d(j) + \sum_{m=1}^{d(j)-1} \big( d(j) - m \big)\, q(m, n-2), \qquad (2.97)$$
where $q(m, k)$ denotes the number of partitions of $m$ with at most $k$ parts.

2.7 Methods: explicit calculations

Let $M$ be the transition matrix for an evolutionary process with mutations. The existence of nontrivial strategy mutations ensures that this chain is irreducible, so there is a unique stationary distribution, $\mu$, by the Perron-Frobenius theorem. Let $M' := M - I$, and let $M'(i, \nu)$ be the matrix obtained by replacing the $i$th column of $M'$ by $\nu$. Press and Dyson (2012) show that this stationary distribution satisfies
$$\mu \cdot \nu = \frac{\det M'(i, \nu)}{\det M'(i, \mathbf{1})} \qquad (2.98)$$
for any vector, $\nu$. ($\mathbf{1}$ is the vector of ones.) Therefore, if $\psi_r$ is the vector indexed by $S$ with $\psi_r(s)$ being the density of strategy $r$ in state $s$, then the selection function (2.72) may be written
$$F_r = \mu \cdot \psi_r = \frac{\det M'(i, \psi_r)}{\det M'(i, \mathbf{1})}.$$
(2.99)

By the quotient rule and Jacobi's formula for the derivative of a determinant,
$$\begin{aligned}
\frac{dF_r}{d\beta} \bigg|_{\beta=0} &= \frac{\det M'(i, \psi_r)}{\det M'(i, \mathbf{1})} \bigg|_{\beta=0} \operatorname{tr}\!\left( M'(i, \psi_r)\big|_{\beta=0}^{-1}\, \frac{d}{d\beta} \Big|_{\beta=0} M'(i, \psi_r) - M'(i, \mathbf{1})\big|_{\beta=0}^{-1}\, \frac{d}{d\beta} \Big|_{\beta=0} M'(i, \mathbf{1}) \right) \\
&= \frac{1}{n} \operatorname{tr}\!\left( M'(i, \psi_r)\big|_{\beta=0}^{-1}\, \frac{d}{d\beta} \Big|_{\beta=0} M'(i, \psi_r) - M'(i, \mathbf{1})\big|_{\beta=0}^{-1}\, \frac{d}{d\beta} \Big|_{\beta=0} M'(i, \mathbf{1}) \right) \\
&= \frac{1}{n} \operatorname{tr}\!\left( \Big( M'(i, \psi_r)\big|_{\beta=0}^{-1} - M'(i, \mathbf{1})\big|_{\beta=0}^{-1} \Big) \frac{d}{d\beta} \Big|_{\beta=0} M'(i, \mathbf{0}) \right)
\end{aligned} \qquad (2.100)$$
since all strategies have equilibrium density $1/n$ when $\beta = 0$.

In general, the dimension of $M$ is quite large, so this method is not feasible for large structured populations. However, in well-mixed populations, one can greatly reduce the size of the state space of the Markov chain by keeping track of only the number of each strategy present in the population. If there are $n$ strategies and $N = d$ players, then a state of the population may effectively be described by an $n$-tuple $(k_1, \dots, k_n)$, where $k_r$ is the number of players using strategy $r$ for $r = 1, \dots, n$. Clearly $k_1 + \cdots + k_n = d$, so the total size of the state space is $\binom{d+n-1}{n-1}$.

Using (2.100), we may explicitly calculate the selection conditions for well-mixed populations as long as $d$ and $n$ are small. These selection conditions could be calculated directly from (2.99), but (2.100) is more efficient on computer algebra systems. Each data point in Figure 2.5 was generated using a $d$-player game in a population of size $N = d$, i.e. every player in the population participates in every interaction. The growth clearly supports the prediction of Proposition 3.

Chapter 3

Asymmetric evolutionary games

Evolutionary game theory is a powerful framework for studying evolution in populations of interacting individuals. A common assumption in evolutionary game theory is that interactions are symmetric, which means that the players are distinguished by only their strategies.
In nature, however, the microscopic interactions between players are nearly always asymmetric due to environmental effects, differing baseline characteristics, and other possible sources of heterogeneity. To model these phenomena, we introduce into evolutionary game theory two broad classes of asymmetric interactions: ecological and genotypic. Ecological asymmetry results from variation in the environments of the players, while genotypic asymmetry is a consequence of the players having differing baseline genotypes. We develop a theory of these forms of asymmetry for games in structured populations and use the classical social dilemmas, the Prisoner's Dilemma and the Snowdrift Game, for illustrations. Interestingly, asymmetric games reveal essential differences between models of genetic evolution based on reproduction and models of cultural evolution based on imitation that are not apparent in symmetric games.

3.1 Introduction

Evolutionary game theory has been used extensively to study the evolution of cooperation in social dilemmas (Ohtsuki et al., 2006; Nowak, 2006b; Taylor et al., 2007). A social dilemma is typically modeled as a game with two strategies, cooperate ($C$) and defect ($D$), whose payoffs for pairwise interactions are defined by a matrix of the form
$$\bordermatrix{ & C & D \cr C & R,\,R & S,\,T \cr D & T,\,S & P,\,P } \tag{3.1}$$
(Maynard Smith, 1982; Hofbauer and Sigmund, 1998). For a focal player using a strategy on the left-hand side of this matrix against an opponent using a strategy on the top of the matrix, the first (resp. second) coordinate of the corresponding entry of this matrix is the payoff to the focal player (resp. opponent). That is, a cooperator receives $R$ when facing another cooperator and $S$ when facing a defector; a defector receives $T$ when facing a cooperator and $P$ when facing another defector. Since the same argument applies to the opponent, the game defined by (3.1) is symmetric.
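Read concretely, the payoff convention of matrix (3.1) can be sketched in a few lines; the numerical values below are hypothetical, chosen only to satisfy the Prisoner's Dilemma ranking $T > R > P > S$:

```python
# Payoffs of the symmetric game (3.1): entry (focal, opponent) maps to the
# pair (payoff to focal, payoff to opponent). R, S, T, P are placeholders.
R, S, T, P = 3, 0, 5, 1  # hypothetical Prisoner's Dilemma ranking T > R > P > S

payoff = {
    ("C", "C"): (R, R),
    ("C", "D"): (S, T),
    ("D", "C"): (T, S),
    ("D", "D"): (P, P),
}

def payoffs(focal, opponent):
    """Return (focal payoff, opponent payoff) for one pairwise interaction."""
    return payoff[(focal, opponent)]

# Symmetry check: the opponent's payoff equals the focal payoff with roles swapped.
assert all(payoffs(x, y)[1] == payoffs(y, x)[0] for x in "CD" for y in "CD")
```

The final assertion is precisely the symmetry property described above: swapping the roles of the two players swaps the two coordinates of the payoff pair.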
If defection pays more than cooperation when the opponent is a cooperator ($T > R$), but the payoff for mutual cooperation is greater than the payoff for mutual defection ($R > P$), then a social dilemma (Dawes, 1980; Hauert et al., 2006) arises from this game due to the conflict of interest between the individual and the group (or pair). The nature of this social dilemma depends on the ordering of $R$, $S$, $T$, and $P$. Biologically, the most important rankings are given by the Prisoner's Dilemma ($T > R > P > S$) and the Snowdrift Game ($T > R > S > P$) (Maynard Smith, 1982; Hauert and Doebeli, 2004; Doebeli and Hauert, 2005; Hauert et al., 2006; Voelkl, 2010).

Since matrix (3.1) defines a symmetric game, any two players using the same strategy are indistinguishable for the purpose of calculating payoffs. In nature, however, asymmetry frequently arises in interspecies interactions such as parasitic or symbiotic relationships (Maynard Smith, 1982). Interactions between subpopulations, such as in Dawkins' Battle of the Sexes Game (Dawkins, 1976; Schuster and Sigmund, 1981; Maynard Smith and Hofbauer, 1987; Hofbauer, 1996), also give rise to asymmetry that cannot be modeled by the symmetric matrix (3.1). Even intraspecies interactions are essentially always asymmetric: (i) phenotypic variations such as size, strength, speed, wealth, or intellectual capabilities; (ii) differences in access to and availability of environmental resources; and (iii) each individual's history of past interactions all affect the interacting individuals differently and result in asymmetric payoffs. The winner-loser effect, for example, is a well-studied example of the effects of previous encounters on future interactions and has been reported across taxa (Dugatkin, 1997; Maynard Smith, 1982), including even mollusks (Wright and Shanks, 1993; Shanks, 2002).
Asymmetry may also result from the assignment of social roles (Selten, 1980; Hammerstein, 1981; Ohtsuki, 2010b), such as the roles of “parent” and “offspring” (Marshall, 2009): cooperation may be tied to individual energy or strength, for example, which is, in turn, determined by a player's role. In the realm of continuous strategies, adaptive dynamics has been used to study asymmetric competition, which applies to the resource consumption of plants, for instance (Weiner, 1990; Freckleton and Watkinson, 2001; Doebeli and Ispolatov, 2012). In social dilemmas containing many cooperators, accumulated benefits may be synergistically enhanced (or discounted) in a way that depends on who or where the players are (Hauert et al., 2006), thereby making larger group interactions asymmetric. To model such interactions using evolutionary game theory, the payoff matrix must reflect the asymmetry.

In the Donation Game, a cooperator pays a cost, $c$, to deliver a benefit, $b$, to the opponent, while a defector pays no cost and provides no benefit (Sigmund, 2010). In terms of matrix (3.1), this game satisfies $R = b - c$, $S = -c$, $T = b$, and $P = 0$. Provided $b$ and $c$ are positive, mutual defection is the only Nash equilibrium. If $b > c$, then this game defines a Prisoner's Dilemma. Perhaps the simplest way to modify this game to account for possible sources of asymmetry is to allow for each pair of players to have a distinct payoff matrix; that is, the payoff matrix for player $i$ against player $j$ in the Donation Game is
$$M_{ij} := \bordermatrix{ & C & D \cr C & b_j - c_i,\ b_i - c_j & -c_i,\ b_i \cr D & b_j,\ -c_j & 0,\ 0 } \tag{3.2}$$
for some $b_i$, $b_j$, $c_i$, and $c_j$. If player $i$ cooperates, then this player donates $b_i$ to his or her opponent and incurs a cost of $c_i$ for doing so. As before, defectors provide no benefit and pay no cost.
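The bookkeeping in the bimatrix (3.2) can be sketched programmatically; the benefit and cost values below are hypothetical inputs for illustration:

```python
def donation_bimatrix(b_i, c_i, b_j, c_j):
    """Payoff bimatrix (3.2) for player i (rows) against player j (columns).

    Each entry is the pair (payoff to i, payoff to j); row/column order is (C, D).
    A cooperator indexed by k donates b_k and pays c_k; defectors do neither.
    """
    return [
        [(b_j - c_i, b_i - c_j), (-c_i, b_i)],   # player i cooperates
        [(b_j, -c_j),            (0.0, 0.0)],    # player i defects
    ]

# Hypothetical check: when both players share the same benefit and cost,
# the bimatrix reduces to the symmetric Donation Game.
M = donation_bimatrix(b_i=3.0, c_i=1.0, b_j=3.0, c_j=1.0)
assert M[0][0] == (2.0, 2.0) and M[0][1] == (-1.0, 3.0)
```

The reduction in the final check is exactly the sense in which the symmetric game (3.1) with $R = b - c$, $S = -c$, $T = b$, $P = 0$ is a special case of (3.2).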
The index $i$ could refer to a baseline trait of the player, the player's location, his or her history of past interactions, motivation (Bergman et al., 2010), or any other non-strategy characteristic that distinguishes one player from another. Games based on matrices of the form (3.2), with payoffs for both players in each entry of the matrix, are sometimes called bimatrix games. Although bimatrix games have appeared in the context of evolutionary dynamics (Hofbauer, 1996; Hofbauer and Sigmund, 2003; Ohtsuki, 2010b), most of the focus on these games has been in the setting of classical game theory and economics (see Fudenberg and Tirole, 1991), where “matrix game” generally means “bimatrix game.” Bimatrix games may be used to model classical asymmetric interactions such as those arising from sexual asymmetry in the Battle of the Sexes Game (Magurran and Nowak, 1991). The asymmetric, four-strategy Hawk-Dove Game of Maynard Smith (1982), consisting of the strategies Hawk, Dove, Bourgeois, and anti-Bourgeois, may also be framed as a ($4 \times 4$) bimatrix game (see Mesterton-Gibbons, 1992). Symmetric matrix games, such as (3.1), are special cases of bimatrix games. We explore here the ways in which bimatrix games can be incorporated into evolutionary dynamics and used to model natural asymmetries in biological populations.

We treat two particular forms of asymmetry: ecological and genotypic. Ecological asymmetry is derived from the locations of the players, whereas genotypic asymmetry is based on the players themselves. With ecological asymmetry, $M_{ij}$ is the payoff matrix for a player at location $i$ against a player at location $j$. Since the payoffs depend on the locations of the players, this form of asymmetry requires a structured population. Ecological asymmetry is a natural consideration in evolutionary dynamics since it ties strategy success to the environment.
In the Donation Game, for instance, cooperators might be donating goods or services, but the costs and benefits may depend on the environmental conditions, i.e. the location of the donor.

On the other hand, players might instead differ in ability or strength, and “strong” cooperators might contribute greater benefits (or incur lower costs) than “weak” cooperators. This variation results in genotypic asymmetry, where each player has a baseline genotype (strength) and a strategy ($C$ or $D$). This form of asymmetry turns out to be subtler than it seems at first glance, however, since genotypes are generally represented by strategies in evolutionary game theory (Maynard Smith, 1982; Dugatkin, 2000). In particular, it might seem that the genotype and strategy of a player could be combined into a single composite strategy and that the symmetric game based on these composite strategies could replace the original asymmetric game. As it happens, whether genotypic asymmetry can be resolved by a symmetric game depends on the details of the evolutionary process.

Classically, evolutionary games were studied in infinite populations via replicator dynamics (Taylor and Jonker, 1978), and more recently these games have been considered in finite populations (Nowak et al., 2004; Taylor et al., 2004). Because every biological population is finite, we focus on finite populations (which, for technical reasons, we assume to be large). Since ecological asymmetry requires distinguishing different locations within the population, we assume that the population is structured and that a network defines the structure. Network-structured populations have received a considerable amount of attention in evolutionary game theory and provide a natural setting in which to study social dilemmas (Lieberman et al., 2005; Ohtsuki et al., 2006; Ohtsuki and Nowak, 2006; Taylor et al., 2007; Szabó and Fáth, 2007; Débarre et al., 2014).
Compared to well-mixed populations, in which each player interacts with every other player, networks can restrict the interactions that occur within the population by specifying which players are “neighbors,” i.e. share a link. We represent the links among the $N$ players in the population using an adjacency matrix, $(w_{ij})_{1 \le i,j \le N}$, which is defined by letting $w_{ij} = 1$ if there is a link from vertex $i$ to vertex $j$ and $0$ otherwise (and satisfies $w_{ij} = w_{ji}$ for each $i$ and $j$).

In an evolutionary game, the state of a population of players is defined by specifying the strategy of each player. Each player interacts with all of his or her neighbors. The total payoff to a player is multiplied by a selection intensity, $\beta \ge 0$, and then converted into fitness (see Methods). Once each player is assigned a fitness, an update rule is used to determine the state of the population at the next time step (Nowak, 2006a). For example, with a birth-death update rule, a player is chosen from the population for reproduction with probability proportional to relative fitness. A neighbor of the reproducing player is then randomly chosen for death, and the offspring, who inherits the strategy of the parent, fills the vacancy. This process is a modification of the Moran process (Moran, 1958), adapted to allow for (i) frequency-dependent fitnesses and (ii) population structures that are not necessarily well-mixed. The order of birth and death could also be reversed to get a death-birth update rule (Ohtsuki et al., 2006). In this rule, death occurs at random, and the neighbors of the deceased compete to reproduce in order to fill the vacancy. These two rules result in the update of a single strategy in each time step, but one could consider other rules, such as Wright-Fisher updating, in which all of the strategies are revised in each generation (Imhof and Nowak, 2006).
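As a rough illustration, one birth-death step can be sketched as follows. This is a simplified sketch, not the exact specification used here: the exponential payoff-to-fitness map and the toy cycle network are assumptions made for the example.

```python
import math
import random

def birth_death_step(strategies, neighbors, payoff_of, beta=0.01):
    """One birth-death update: pick a reproducer with probability proportional
    to fitness exp(beta * payoff); its offspring replaces a random neighbor."""
    n = len(strategies)
    fitness = [math.exp(beta * payoff_of(i, strategies, neighbors)) for i in range(n)]
    parent = random.choices(range(n), weights=fitness)[0]
    dead = random.choice(neighbors[parent])
    strategies[dead] = strategies[parent]  # offspring inherits the parent's strategy

def donation_payoff(i, strategies, neighbors, b=3.0, c=1.0):
    """Total Donation Game payoff to player i over all neighbor interactions."""
    total = 0.0
    for j in neighbors[i]:
        if strategies[j] == "C":
            total += b          # neighbor donates b to player i
        if strategies[i] == "C":
            total -= c          # player i pays c in each interaction
    return total

# Toy run on a cycle of 6 players (each vertex has exactly two neighbors).
random.seed(1)
N = 6
neighbors = [[(i - 1) % N, (i + 1) % N] for i in range(N)]
strategies = ["C", "D"] * (N // 2)
birth_death_step(strategies, neighbors, donation_payoff)
assert all(s in ("C", "D") for s in strategies)
```

Reversing the order of the two random draws (first a uniformly random death, then fitness-proportional competition among the deceased's neighbors) gives the corresponding death-birth sketch.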
The rules mentioned to this point define strategy updates via reproduction and inheritance; as such, we refer to them as genetic update rules.

Another popular class of update rules is based on revisions to the existing players' strategy choices. We refer to rules falling into this class as cultural update rules. Examples include imitation updating, in which a player is selected at random to evaluate his or her strategy and then probabilistically compares this strategy to those of his or her neighbors (Ohtsuki et al., 2006). A more localized version of this update rule is known as pairwise comparison updating, in which a player chooses a random neighbor for comparison rather than looking at the entire neighborhood (Szabó and Tőke, 1998; Traulsen et al., 2007). Under best response dynamics, an individual adopts the strategy that performs best given the current strategies of his or her neighbors (Ellison, 1993). In each of these cultural processes, the strategy of a player can change, but the underlying genotype is always the same, which suggests that baseline genotype and strategy need to be treated separately.

Genotypic asymmetry needs to be handled more carefully if the update rule is genetic since the nature of genotype transmission affects the dynamics of the process. In contrast to cultural processes, the genotype and strategy of a player at a given location may both change if the update rule is genetic: genotype may be inherited but not imitated. We will see that this property results in cultural and genetic processes behaving completely differently in the presence of genotypic asymmetry. Phenotype may have both genetic and environmental components (Mahner and Kary, 1997; Baye et al., 2011), and after treating the genetic (genotypic) and environmental components separately, these two forms of asymmetry may be combined in order to get a model in which the asymmetry is derived from varying baseline phenotypes.
Thus, with a theory of both ecological asymmetry and genotypic asymmetry based on inherited genotypes, one can account for more complicated forms of asymmetry appearing in biological populations.

3.2 Results

3.2.1 Ecological asymmetry

Here we develop a framework for ecologically asymmetric games in which the payoffs depend on the locations of the players as well as their strategies. We assume that all of the players have the same set of strategies (or “actions”) available to them, $\{A_1, \ldots, A_n\}$. The payoff matrix for a player at vertex $i$ against a player at vertex $j$ is
$$M_{ij} = \bordermatrix{ & A_1 & A_2 & \cdots & A_n \cr A_1 & a^{ij}_{11},\ a^{ji}_{11} & a^{ij}_{12},\ a^{ji}_{21} & \cdots & a^{ij}_{1n},\ a^{ji}_{n1} \cr A_2 & a^{ij}_{21},\ a^{ji}_{12} & a^{ij}_{22},\ a^{ji}_{22} & \cdots & a^{ij}_{2n},\ a^{ji}_{n2} \cr \vdots & \vdots & \vdots & \ddots & \vdots \cr A_n & a^{ij}_{n1},\ a^{ji}_{1n} & a^{ij}_{n2},\ a^{ji}_{2n} & \cdots & a^{ij}_{nn},\ a^{ji}_{nn} } \tag{3.3}$$
That is, a player at vertex $i$ using strategy $A_r$ against an opponent at vertex $j$ using strategy $A_s$ realizes a payoff of $a^{ij}_{rs}$, whereas his opponent receives $a^{ji}_{sr}$. Since $a^{ij}_{rs}$ depends on $i$ and $j$, these payoff matrices capture the asymmetry of the game.

In the simpler setting of symmetric games, the pair approximation method has been used successfully to describe the dynamics of evolutionary processes on networks (Matsuda et al., 1992; Bollobás, 2001; Ohtsuki et al., 2006; Vukov et al., 2006; Ohtsuki and Nowak, 2006). For each $r \in \{1, \ldots, n\}$, this method approximates the frequency of strategy $A_r$, which we denote by $p_r$, using the frequencies of strategy pairs in the population. Pair approximation is expected to be accurate on large random regular networks (Bollobás, 2001; Ohtsuki et al., 2006), so we assume that the network is regular (of degree $k > 2$) and that $N$ is sufficiently large. (For $k = 2$, the network is just a cycle, which we do not treat here.) We also take $\beta \ll 1$, meaning that selection is weak, which results in a separation of timescales: the local configurations equilibrate quickly, while the global strategy frequencies change much more slowly.
This separation allows us to get an explicit expression for the expected change, $E[\Delta p_r]$, in the frequency of strategy $A_r$ for each $r$. Incidentally, weak selection happens to be quite reasonable from a biological perspective since each trait is expected to have only a small effect on the overall fitness of a player (Wu et al., 2010; Tarnita et al., 2011; Wu et al., 2013a).

Interestingly, for two genetic and two cultural update rules, weak selection reduces ecological asymmetry to a symmetric game derived from the spatial average of the payoff matrices:

Theorem 5. In the limit of weak selection, the dynamics of the ecologically asymmetric death-birth, birth-death, imitation, and pairwise comparison processes on a large, regular network may be approximated by the dynamics of a symmetric game with the same update rule and payoff matrix $\overline{M} := \frac{1}{kN} \sum_{i,j=1}^{N} w_{ij} M_{ij}$, i.e.
$$\overline{M} = \bordermatrix{ & A_1 & A_2 & \cdots & A_n \cr A_1 & \overline{a}_{11},\ \overline{a}_{11} & \overline{a}_{12},\ \overline{a}_{21} & \cdots & \overline{a}_{1n},\ \overline{a}_{n1} \cr A_2 & \overline{a}_{21},\ \overline{a}_{12} & \overline{a}_{22},\ \overline{a}_{22} & \cdots & \overline{a}_{2n},\ \overline{a}_{n2} \cr \vdots & \vdots & \vdots & \ddots & \vdots \cr A_n & \overline{a}_{n1},\ \overline{a}_{1n} & \overline{a}_{n2},\ \overline{a}_{2n} & \cdots & \overline{a}_{nn},\ \overline{a}_{nn} } \tag{3.4}$$
where $\overline{a}_{st} := \frac{1}{kN} \sum_{i,j=1}^{N} w_{ij} a^{ij}_{st}$ for each $s$ and $t$.

For a proof of Theorem 5, see Methods. In Methods, we derive explicit formulas for $E[\Delta p_r]$ for each $r$ (where $p_r$ is the frequency of strategy $A_r$ and $E[\Delta p_r]$ is the expected change in $p_r$ in one step of the process) and show that these expectations depend on $\overline{M}$ in the limit of weak selection. If we choose an appropriate time scale and make the approximation
$$\dot{p}_r := \frac{dp_r}{dt} = \frac{E[\Delta p_r]}{\Delta t}, \tag{3.5}$$
then the dynamics of an ecologically asymmetric process may also be described in terms of the replicator equation (on graphs) of Ohtsuki and Nowak (2006): if $\phi := \sum_{s,t=1}^{n} p_s p_t \overline{a}_{st}$, then
$$\dot{p}_r = p_r \left( \sum_{s=1}^{n} p_s \big( \overline{a}_{rs} + \overline{b}_{rs} \big) - \phi \right), \tag{3.6}$$
where $\overline{b}_{rs}$ is a function of $\overline{M}$, $k$, and the update rule. (For each of the four processes, the explicit expression for $\overline{b}_{rs}$ is provided in Methods.) The matrix $(\overline{b}_{rs})_{r,s=1}^{n}$ accounts for local competition resulting from the population structure (see Ohtsuki and Nowak, 2006).
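The averaging in Theorem 5 amounts to a single weighted sum over the edges of the graph. A minimal numerical sketch (assuming NumPy; the location-dependent Donation Game payoffs below are hypothetical):

```python
import numpy as np

def averaged_game(w, A):
    """Spatially averaged payoff matrix of Theorem 5.

    w : (N, N) adjacency matrix of a k-regular graph
    A : (N, N, n, n) array with A[i, j, r, s] = a^{ij}_{rs}
    Returns the (n, n) matrix with entries
    a_bar[s, t] = (1/(k*N)) * sum_{i,j} w[i, j] * a^{ij}_{st}.
    """
    N = w.shape[0]
    k = int(w.sum(axis=1)[0])          # degree of the regular graph
    return np.einsum("ij,ijst->st", w, A) / (k * N)

# Hypothetical check: a 4-cycle (k = 2) with Donation Game payoffs in which
# the cost depends on the donor's location.
N, b = 4, 3.0
c = np.array([0.5, 1.5, 0.5, 1.5])
w = np.zeros((N, N))
for i in range(N):
    w[i, (i + 1) % N] = w[i, (i - 1) % N] = 1
A = np.zeros((N, N, 2, 2))
for i in range(N):
    for j in range(N):
        A[i, j] = [[b - c[i], -c[i]], [b, 0.0]]   # payoff to a player at i vs. j
abar = averaged_game(w, A)
# The average matches the symmetric Donation Game with the mean cost,
# independently of the particular edge set of the graph.
assert np.allclose(abar, [[b - c.mean(), -c.mean()], [b, 0.0]])
```

In this example the payoffs decompose by location, so the averaged matrix depends only on the mean cost and not on which vertices are linked; for general payoffs, the edge weights $w_{ij}$ do enter the result.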
In particular, the Ohtsuki-Nowak transform,
$$\big( \overline{a}_{rs} \big)_{r,s=1}^{n} \longrightarrow \big( \overline{a}_{rs} + \overline{b}_{rs} \big)_{r,s=1}^{n}, \tag{3.7}$$
which transforms the classical replicator equation into the replicator equation on graphs, also applies to evolutionary games with ecological asymmetry.

Even though interactions are now governed by a symmetric game, Theorem 5 states that, in general, the dynamics depend on the particular network configuration, $(w_{ij})_{1 \le i,j \le N}$; that is, the symmetric payoffs defined by $\overline{M}$ still depend on the network structure or, equivalently, on the distribution of ecological resources within the population. However, somewhat surprisingly, there is a broad class of games for which this dependence vanishes:

Definition 4. If $a^{ij}_{rs} = x^{i}_{rs} + y^{j}_{rs}$ for each $r$ and $s$, then $M_{ij}$ is called a spatially additive payoff matrix. If $M_{ij}$ is spatially additive for each $i$ and $j$, then the game is said to be spatially additive.

A game is spatially additive if the payoff for an interaction between any two members of the population can be decomposed as a sum of two components, one from each player's location. Note that spatial additivity is different from the “equal gains from switching” property (Nowak and Sigmund, 1990) in that neither implies the other. However, spatial additivity is an analogue in the following sense: if two players at different locations use the same strategy against a common opponent, then the difference in these two players' payoffs for this interaction is independent of the location of the opponent. Interchanging “location” and “strategy,” one obtains the equal gains from switching property. The importance of spatially additive games is due to the following corollary to Theorem 5:

Corollary 2. If $M_{ij}$ is spatially additive for each $i$ and $j$, then the expected change in the frequency of strategy $A_r$, $E[\Delta p_r]$, is independent of $(w_{ij})_{1 \le i,j \le N}$ for each $r$.
In particular, the dynamics of the process do not depend on the particular network configuration.

As an example, the asymmetric Donation Game is spatially additive and possesses the equal gains from switching property, which greatly simplifies the analysis of its dynamics:

Example 12 (Donation Game with ecological asymmetry). The asymmetric Donation Game with payoff matrices defined by Eq. (3.2) is spatially additive and satisfies
$$\overline{M} = \bordermatrix{ & C & D \cr C & \overline{b} - \overline{c},\ \overline{b} - \overline{c} & -\overline{c},\ \overline{b} \cr D & \overline{b},\ -\overline{c} & 0,\ 0 }, \tag{3.8}$$
where $\overline{b} = \frac{1}{N} \sum_{i=1}^{N} b_i$ and $\overline{c} = \frac{1}{N} \sum_{i=1}^{N} c_i$. Therefore, the dynamics of the asymmetric game are the same as those of its symmetric counterpart with benefit, $\overline{b}$, and cost, $\overline{c}$, regardless of network configuration or resource distribution. Under death-birth (resp. imitation) updating, this result implies that cooperation is expected to increase if and only if $\overline{b}/\overline{c} > k$ (resp. $\overline{b}/\overline{c} > k + 2$), where $k$ is the degree of the (regular) network (Ohtsuki et al., 2006). Fig. 3.1(A) compares the predicted result obtained from $\overline{M}$ to simulation data for imitation updating when benefit and cost values are distributed according to Gaussian random variables.

Example 13 (Snowdrift Game with ecological asymmetry). In order to illustrate when Corollary 2 fails, we turn to cooperation in the Snowdrift Game (Hauert and Doebeli, 2004; Doebeli and Hauert, 2005). In this game, two drivers find themselves on either side of a snowdrift. If both cooperate in clearing the snowdrift, they share the cost, $c$, equally, and both receive the benefit of being able to pass, $b$. If one player cooperates and the other defects, both players receive $b$, but the cooperator pays the full cost, $c$. If both players defect, each receives no benefit and pays no cost. In order to incorporate ecological asymmetry, we assume that the benefits are all the same since they are derived from being able to pass in the absence of a snowdrift.
On the other hand, the cost a player pays to clear the snowdrift may depend on his or her location: the snowdrift may appear on an incline, for example, in which case one player shovels with the gradient and the other player against it. Moreover, when two cooperators meet, they might clear unequal shares of the snowdrift. Thus, the payoff matrix for a player at location $i$ against a player at location $j$ should be of the form
$$M_{ij}(\alpha_{ij}) := \bordermatrix{ & C & D \cr C & b - \alpha_{ij} c_i,\ b - \alpha_{ji} c_j & b - c_i,\ b \cr D & b,\ b - c_j & 0,\ 0 }, \tag{3.9}$$
where $0 \le \alpha_{ij} \le 1$ and $\alpha_{ij} + \alpha_{ji} = 1$ (Du et al., 2009). Intuitively, when two cooperators face one another, they each begin to clear the snowdrift and stop once they meet; the quantity $\alpha_{ij}$ indicates the fraction of the snowdrift a cooperator at location $i$ clears before meeting the cooperator at location $j$. A natural choice for $\alpha_{ij}$ is
$$\alpha_{ij} = \frac{c_j}{c_i + c_j}, \tag{3.10}$$
which is the unique value that gives $\alpha_{ij} c_i = \alpha_{ji} c_j$ for each $i$ and $j$, ensuring that the game is fair, i.e. that the cooperator with the higher cost clears a smaller portion of the snowdrift than the one with the lower cost. Averaging the payoff to one cooperator against another over all possible locations gives
$$\frac{1}{kN} \sum_{i,j=1}^{N} w_{ij} \big( b - \alpha_{ij} c_i \big) = b - \frac{1}{kN} \sum_{i,j=1}^{N} w_{ij} \left( \frac{c_i c_j}{c_i + c_j} \right), \tag{3.11}$$
which is the upper-left entry of $\overline{M}$. In contrast, the remaining three entries of $\overline{M}$ do not depend on $(w_{ij})_{1 \le i,j \le N}$. Therefore, provided there are at least two locations with distinct cost values, the dynamics of an evolutionary process depend on the particular network configuration (Theorem 5). This network dependence is illustrated in Fig. 3.2.

Suppose now that we set $\alpha_{ij} \equiv 1/2$ to model ecological asymmetry in the Snowdrift Game; that is, if two cooperators meet, they each clear exactly half of the snowdrift.
If there are two cost values in the population, $c_1$ and $c_2$, with $c_1 < b < c_2 < 2b$, then a player who incurs a cost of $c_1$ finds it beneficial to cooperate against a defector, but a player who incurs a cost of $c_2$ would rather defect in this situation. Thus, based on the social dilemma implied by the ranking of the payoffs, a player who incurs a cost of $c_1$ for cooperating is always playing a Snowdrift Game, while a player who incurs a cost of $c_2$ is always playing a Prisoner's Dilemma. It follows that ecological asymmetry can account for multiple social dilemmas being played within a single population, even if the players all use the same set of strategies ($C$ and $D$). The payoff matrices of this particular game are spatially additive, so, by Corollary 2, the dynamics do not depend on the network configuration. If $q$ is the fraction of vertices with cost value $c_1$, then $\overline{c} = q c_1 + (1 - q) c_2$ is the average cost of cooperation for a particular location, and the dynamics are the same as those of the symmetric Snowdrift Game in which the cost of clearing a snowdrift is $\overline{c}$ (see Fig. 3.1(B)). Fig. 3.3 demonstrates that this result does not extend to stronger selection strengths, so Theorem 5 is unique to weak selection.

Based on Theorem 5 and the relative rank of payoffs, the social dilemma defined by the asymmetric game (3.9) (for general $\alpha_{ij}$) is a Prisoner's Dilemma if $b < \overline{c}$ and a Snowdrift Game if $b > \overline{c}$ when selection is weak. That is, microscopically, there is a mixture of Prisoner's Dilemmas and Snowdrift Games, but, macroscopically, the process behaves like just one of these social dilemmas.
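The per-location classification just described follows directly from the payoff ranking; a small sketch with hypothetical values satisfying $c_1 < b < c_2 < 2b$:

```python
def dilemma_type(b, c):
    """Classify the game faced by a cooperator paying cost c when the two
    cooperators clear half the snowdrift each: R = b - c/2, S = b - c, T = b, P = 0."""
    R, S, T, P = b - c / 2, b - c, b, 0.0
    if T > R > S > P:
        return "Snowdrift"
    if T > R > P > S:
        return "Prisoner's Dilemma"
    return "other"

b, c1, c2 = 5.0, 2.0, 7.0          # hypothetical values with c1 < b < c2 < 2b
assert dilemma_type(b, c1) == "Snowdrift"           # cost below the benefit
assert dilemma_type(b, c2) == "Prisoner's Dilemma"  # cost above the benefit
```

With $c_1 < b$ the sucker's payoff $b - c_1$ stays positive, preserving the Snowdrift ranking; with $b < c_2 < 2b$ it turns negative while $R = b - c_2/2$ remains positive, producing the Prisoner's Dilemma ranking.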
Consequently, although the dynamics of this evolutionary process may depend on the network configuration, the type of social dilemma implied by this game does not.

Figure 3.1: Average change in the frequency of cooperators, $\Delta p_C$, as a function of the frequency of cooperators, $p_C$, in (A) an asymmetric Donation Game and (B) asymmetric Snowdrift Games. The update rules are (A) imitation and (B) death-birth, and each process has a selection intensity $\beta = 0.01$. In both figures, the network is a random regular graph of size $N = 500$ and degree $k = 3$. In (A), benefits and costs of cooperation vary across vertices according to a Gaussian distribution with mean 3.5, variance 1.0 for benefits and mean 0.5, variance 0.25 for costs. In (B), the benefit is $b = 5.0$ for all vertices, and the costs are either low, $c_1 = 34/13$, or high, $c_2 = 70/13$, which actually recovers the payoff ranking of the Prisoner's Dilemma because $c_2 > b$. The costs are the same for all vertices ($c_1$, blue and $c_2$, green) or mixed at equal proportions (red). (B) confirms that the average change in cooperators in the mixed Snowdrift Game/Prisoner's Dilemma (red) may be obtained by averaging these changes for the Snowdrift Game (blue) and the Prisoner's Dilemma (green). The small, systematic deviations between simulation data and analytical predictions (solid lines) are explained in Methods (where it is also shown that $\Delta p_C$ is linear in $\beta$ for $\beta \ll 1$).

3.2.2 Genotypic asymmetry

Another form of asymmetry is based on the genotypes of the players rather than their locations. Each player in the population has one of $\ell$ possible genotypes, and these genotypes are enumerated by the set $\{1, \ldots, \ell\}$. For an $n$-strategy game, the payoff matrix for a player whose genotype is $u$ against a player whose genotype is $v$ is
$$M_{uv} := \bordermatrix{ & A_1 & A_2 & \cdots & A_n \cr A_1 & a^{uv}_{11},\ a^{vu}_{11} & a^{uv}_{12},\ a^{vu}_{21} & \cdots & a^{uv}_{1n},\ a^{vu}_{n1} \cr A_2 & a^{uv}_{21},\ a^{vu}_{12} & a^{uv}_{22},\ a^{vu}_{22} & \cdots & a^{uv}_{2n},\ a^{vu}_{n2} \cr \vdots & \vdots & \vdots & \ddots & \vdots \cr A_n & a^{uv}_{n1},\ a^{vu}_{1n} & a^{uv}_{n2},\ a^{vu}_{2n} & \cdots & a^{uv}_{nn},\ a^{vu}_{nn} } \tag{3.12}$$

Figure 3.2: Average change in the frequency of cooperators, $\Delta p_C$, as a function of the frequency of cooperators, $p_C$, for a spatially non-additive Snowdrift Game, Eq. (3.9), with selection intensity $\beta = 0.01$. The blue and green data are obtained using pairwise comparison updating and differ only in the configuration of the underlying network, which in both cases is a random regular graph of size $N = 500$ and degree $k = 3$. Every vertex has a benefit value of $b = 4.0$, and the cost values are split equally, with half of the vertices having $c_1 = 0.5$ and the remaining half having $c_2 = 5.5$. The average payoff for mutual cooperation, Eq. (3.11), is 3.069 (blue) and 2.961 (green), which suggests that the former arrangement is more attractive for cooperation. The analytical predictions (solid lines) are obtained from Eq. (3.48) in Methods (and are linear in $\beta$ for $\beta \ll 1$).

Figure 3.3: The Snowdrift Games of Fig. 3.1(B) with the stronger selection strengths $\beta = 0.1$ (A) and $\beta = 0.5$ (B). For each of the three games (with benefit $b = 5.0$ and costs $c_1$, $c_2$, and half $c_1$/half $c_2$, respectively), the simulation results differ from the prediction of pair approximation already for $\beta = 0.1$ (A). Moreover, for $\beta = 0.5$, (B) makes it clear that Theorem 5 no longer holds since the average change in cooperators in the game with mixed costs (red) differs from the average (grey) of these changes for the games with costs $c_1$ only (blue) and $c_2$ only (green). Thus, Theorem 5 is peculiar to weak selection.

We explore genotypic asymmetry for cultural and genetic processes separately:

Cultural updating

If genotypic asymmetry is incorporated into a cultural process, then the genotypes of the players never change; only the strategies of the players are updated. In a structured population, it follows that each player's genotype may be associated with his or her location, and this association is an invariant of the process. Thus, if $u(i)$ denotes the genotype of the player at location $i$, then we may apply Theorem 5 to the matrices defined by $M_{ij} = M_{u(i)u(j)}$ for each $i$ and $j$. In this sense, genotypic asymmetry may be “reduced” to ecological asymmetry in evolutionary games with cultural update rules. Note that, unlike ecological asymmetry, genotypic asymmetry does not require a structured population. However, one can always think of a population as structured (even in the well-mixed case), and doing so allows one to make sense of the “locations” of the players and to apply Theorem 5 to cultural processes with genotypic asymmetry.

Example 14 (Donation Game with genotypic asymmetry and cultural updating). In the Donation Game, a cooperator of genotype $u$ donates $b_u$ at a cost of $c_u$. Defectors contribute no benefit and pay no cost, irrespective of genotype. Consider imitation updating on a large, regular network of degree $k$, and let $u(i)$ denote the genotype of the player at location $i$ (henceforth “player $i$”). Suppose that player $i$ is a cooperator, player $j$ is a defector, and that player $i$ imitates player $j$ and becomes a defector. Despite this strategy change, the genotype of player $i$ is still $u(i)$, and the payoff matrix for player $i$ against player $j$ is still $M_{u(i)u(j)}$.
On the other hand, consider the same process but with the genotypic asymmetry replaced by ecological asymmetry (and with $M_{ij} := M_{u(i)u(j)}$ as the payoff matrix for the player at location $i$ against the player at location $j$). Since the genotype of a player at a given location never changes in an imitation process, the process with ecological asymmetry is well-defined; that is, $M_{ij}$ is independent of the dynamics of the process for each $i$ and $j$. Therefore, we may instead study the evolution of cooperation in the process with ecological asymmetry, and we already know from Example 12 that, in the limit of weak selection, the frequency of cooperators in this Donation Game is expected to increase if and only if $(k + 2) \sum_{i=1}^{N} c_{u(i)} < \sum_{i=1}^{N} b_{u(i)}$.

In contrast, for genetic update rules, the asymmetry present due to differing genotypes can be removed completely if the genotypes of offspring are determined by genetic inheritance:

Genetic updating

Genetic update rules are defined by the ability of players to propagate their offspring to other locations in the population by means of births and deaths. In other words, there is a reproductive step in which genetic information is passed from parent(s) to child. Both the death-birth and birth-death processes have genetic update rules, but reproduction need not be clonal for the update rule to be genetic. If the genotypes of offspring are determined by genetic inheritance, then the strategy and genotype at each location are updated simultaneously: if the offspring of a player whose genotype is $u$ and whose strategy is $A_r$ replaces a player whose genotype is $v$ and whose strategy is $A_s$, then $v$ is updated to $u$ and $A_s$ is updated to $A_r$ synchronously. Therefore, rather than treating genotypes and strategies separately, we may consider them together in the form of pairs, $(u, A_r)$, linking genotype and strategy.
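This genotype-strategy pairing amounts to flattening the collection of asymmetric payoff arrays into one symmetric matrix over composite strategies; a minimal sketch (the flattening convention below is illustrative, not the thesis's notation):

```python
def composite_game(a):
    """Build the symmetric composite-strategy payoff matrix from a collection
    of asymmetric games, where a[u][v][r][s] is the payoff to a genotype-u
    player using strategy r against a genotype-v player using strategy s.

    Composite strategy (u, r) is flattened to index u * n + r, giving an
    (l*n) x (l*n) matrix M with M[u*n + r][v*n + s] = a[u][v][r][s].
    """
    l, n = len(a), len(a[0][0])
    M = [[0.0] * (l * n) for _ in range(l * n)]
    for u in range(l):
        for v in range(l):
            for r in range(n):
                for s in range(n):
                    M[u * n + r][v * n + s] = a[u][v][r][s]
    return M

# Hypothetical check with l = n = 2: the payoff to (u, r) against (v, s)
# is drawn from the genotype-pair array a[u][v], mirroring the bimatrix form.
a = [[[[u * 10 + v + r + s for s in range(2)] for r in range(2)]
      for v in range(2)] for u in range(2)]
M = composite_game(a)
assert M[1 * 2 + 0][0 * 2 + 1] == a[1][0][0][1]
```

Because the composite game is symmetric, any machinery for symmetric games (such as pair approximation) can be applied to it directly, which is the point of the construction.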
These pairs may be thought of as composite strategies of a larger evolutionary game whose payoff matrix, $\widetilde{M}$, is defined by
$$\widetilde{M}_{(u, A_r), (v, A_s)} := a^{uv}_{rs} \tag{3.13}$$
for genotypes, $u$ and $v$, and strategies, $A_r$ and $A_s$. The map
$$\big\{ M_{uv} \big\}_{u,v=1}^{\ell} \longrightarrow \widetilde{M} \tag{3.14}$$
resolves a collection of $n \times n$ asymmetric payoff matrices with a single symmetric payoff matrix, $\widetilde{M}$, of size $\ell n \times \ell n$. This argument holds for any population structure, so evolutionary processes with genotypic asymmetry that are based on genetic update rules can be studied in any setting in which there is a theory of symmetric games. For example, we may use the results from pair approximation on large, regular networks to study the Donation Game with genotypic asymmetry and genetic updating:

Example 15 (Donation Game with genotypic asymmetry and genetic updating). As in Example 14, a cooperator of genotype $u$ in the Donation Game donates $b_u$ at a cost of $c_u$. Defectors contribute no benefit and pay no cost, irrespective of genotype. For the death-birth and birth-death update rules, defectors may be modeled as cooperators whose benefit and cost are both 0. In the larger symmetric game defined by (3.14), it follows that there are $\ell + 1$ distinct composite strategies: $(1, C), (2, C), \ldots, (\ell, C)$, and $D := (\ell + 1, C)$. For death-birth updating on a large, regular network of degree $k$, cooperators of genotype $u \in \{1, \ldots, \ell\}$ are expected to increase if and only if
$$k \left( c_u - \sum_{v=1}^{\ell} c_v p_v \right) < b_u - \sum_{v=1}^{\ell} b_v p_v, \tag{3.15}$$
where, for each $v \in \{1, \ldots, \ell\}$, $p_v$ denotes the frequency of cooperators of genotype $v$ (i.e. the frequency of strategy $(v, C)$ in the larger symmetric game). The terms $\sum_{v=1}^{\ell} b_v p_v$ and $\sum_{v=1}^{\ell} c_v p_v$ are the average population benefit and cost values, respectively. Therefore, the condition for the expected increase in cooperators of a particular genotype depends on the average level of cooperation within the population. Eq. (3.15) may be thought of as an analogue of the '$b/c > k$' rule of Ohtsuki et al.
(2006) with b replaced by the “benefitpremium,” bu ´ř`v“1 bvpv, and c replaced by the “cost premium,” cu ´ř`v“1 cvpv.In the birth-death process, on the other hand, cooperators of genotype u P t1, . . . , `u are expected toincrease if and only ifcu ăÿ`v“1cvpv. (3.16)Interestingly, this condition is independent of the benefit values and says that cooperators of genotypeu P t1, . . . , `u increase in abundance if they incur, on average, smaller costs for cooperating than the othercooperators.Eqs. (3.15) and (3.16) are obtained by noticing that the expected change in the frequency of cooperatorsof genotype u, E r∆pus, is a positive multiple of bu ´ ř`v“1 bvpv ´ k ´cu ´ř`v“1 cvpv¯ in the death-birthprocess and ofř`v“1 cvpv ´ cu in the birth-death process (see Eqs. (3.33) and (3.36) in Methods). In thebirth-death process, it follows that the expected change in the frequency of cooperators of genotype u isclose to 0 if pu is close to 1, hence increases in cooperators who pay nonzero costs are necessarily transient.3.3 DiscussionAsymmetric games naturally separate standard evolutionary update rules into cultural and genetic classes.This distinction is important because it captures biological differences that are not always apparent in models59of evolution based on symmetric games. For example, consider a model player whose offspring replaces afocal player and a model player whose strategy is imitated by a focal player. For symmetric games, processesbased on these two types of updates are mathematically identical; if asymmetry is present, then the factthat one update is genetic (replacement) and the other is cultural (imitation) becomes important. 
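The genotype-dependent increase conditions (3.15) and (3.16) are straightforward to evaluate. A minimal sketch (the benefit, cost, and frequency values below are hypothetical, chosen only for illustration):

```python
# Evaluate the increase conditions (3.15) and (3.16) for cooperators of
# each genotype. All numerical values here are hypothetical examples.

def db_increase(u, b, c, p, k):
    """Condition (3.15): death-birth updating on a k-regular network.
    Genotype-u cooperators increase iff
    k*(c_u - avg cost) < b_u - avg benefit."""
    avg_b = sum(bv * pv for bv, pv in zip(b, p))
    avg_c = sum(cv * pv for cv, pv in zip(c, p))
    return k * (c[u] - avg_c) < b[u] - avg_b

def bd_increase(u, b, c, p, k):
    """Condition (3.16): birth-death updating. Genotype-u cooperators
    increase iff they pay below-average costs (b and k are unused; the
    signature is kept parallel to db_increase)."""
    avg_c = sum(cv * pv for cv, pv in zip(c, p))
    return c[u] < avg_c

# Two cooperator genotypes with different benefits and costs.
b = [3.0, 5.0]   # benefits b_1, b_2
c = [0.5, 1.5]   # costs c_1, c_2
p = [0.4, 0.2]   # frequencies of cooperators of each genotype
k = 3            # degree of the regular network

for u in range(2):
    print(u + 1, db_increase(u, b, c, p, k), bd_increase(u, b, c, p, k))
```

With these numbers, genotype 1 pays exactly the average cost and provides an above-average benefit, so it increases under death-birth but not birth-death updating.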
Thus, asymmetric games can highlight fundamental differences in evolutionary processes that are based on distinct update rules but happen to behave similarly when the underlying game is symmetric.

In order to incorporate into evolutionary games the asymmetries commonly studied in classical game theory, our focus has been on games with asymmetric payoffs. Games with asymmetric payoffs arise naturally from different forms of interaction heterogeneity. Dependence of payoffs on the environment is a reasonable assumption when considering ecological variation (Maciejewski and Puleo, 2014). Certain patches may provide resources or have drawbacks that influence a player's success when using a particular strategy (Kun and Dieckmann, 2013). Asymmetric interactions may also be the result of heterogeneity in the sizes or strengths of players (Maynard Smith and Parker, 1976; Hauser et al., 2014). Whether the source of asymmetry is the environment or the players themselves, our model effectively resolves a collection of microscopically asymmetric interactions with a macroscopically symmetric game in the limit of weak selection. Figs. 3.1 and 3.2 illustrate this result for three common update rules.

Similar forms of asymmetry have been studied previously in evolutionary game theory: Szolnoki and Szabó (2007) consider asymmetry appearing in the update rule that results in "attractive" and "repulsive" players in the pairwise comparison process. For games with population structures defined by two graphs ("interaction" and "dispersal" graphs), Ohtsuki et al. (2007a,b) show that the evolution of cooperation can be inhibited by asymmetry arising from differences in these two graphs. On the other hand, Pacheco et al. (2009) show that heterogeneous population structures can promote the evolution of cooperation by effectively transforming a collection of microscopic social dilemmas into a global coordination game.
This result is reminiscent of our Theorem 5, which relates the microscopic interactions to the global behavior of a process. Such heterogeneous population structures can result in asymmetric interactions even if the underlying game is symmetric (Maciejewski et al., 2014). These models, although somewhat different from ours, demonstrate that asymmetry (in its many forms) has a remarkable effect on evolutionary dynamics.

Although genotypic asymmetry can always be reduced to a (larger) symmetric game under genetic update rules, this symmetric game can be of independent interest. For example, Eq. (3.16) shows that if cooperators vary in size or strength, then certain cooperators may increase in the Donation Game even under birth-death updating. In contrast, cooperation never increases in the absence of cooperator variation (Ohtsuki et al., 2006). Though defectors still eventually outcompete cooperators, the transient increase in cooperators suggests that other evolutionary processes with this form of asymmetry can behave in novel ways.

If both ecological and genotypic asymmetries are present, they can be handled separately: genotypic asymmetry is reduced to either (i) ecological asymmetry (if the update rule is cultural) or (ii) a symmetric game with more strategies (if the update rule is genetic). In either case, an evolutionary game with both ecological and genotypic asymmetries can be reduced to a game with ecological asymmetry only, and hence Theorem 5 applies. Our framework handles asymmetry resulting from varying baseline traits due to both environment and genotype, which could be referred to as phenotypic asymmetry.

The presence of ecological or genotypic asymmetry in an evolutionary process does not necessarily depend on the selection strength or update rule; these forms of asymmetry may be incorporated into many evolutionary processes.
Theorem 5, which effectively reduces a game with ecological asymmetry to a particular symmetric game, is stated for four common update rules in evolutionary game theory. Fig. 3.3 demonstrates (using the asymmetric Snowdrift Game) that this theorem is specific to weak selection. That selection is weak is often a reasonable assumption when using evolutionary games to study populations of organisms with many traits. However, our study of the asymmetric Snowdrift Game for stronger selection strengths suggests that the behavior of asymmetric games is more complicated if selection is strong. Though more difficult to treat analytically, asymmetric games under strong selection are worthy of further investigation.

Asymmetry is omnipresent in nature, and any framework that is used to model evolution should take into account possible sources of asymmetry. We have formally introduced ecological and genotypic asymmetries into evolutionary game theory and have studied these asymmetries in the limit of weak selection. Asymmetry has a natural place in the Donation Game and the Snowdrift Game, but our results are applicable to any general $n$-strategy matrix game. Our treatment of asymmetry highlights important differences between models of cultural and genetic evolution that are not apparent in the traditional setting of symmetric games. Ecological and genotypic asymmetries cover a wide variety of background variation observed in biological populations, and, as such, our framework enhances the modeling capacity of evolutionary games.

3.4 Methods

For the two genetic processes (death-birth and birth-death) and the two cultural processes (imitation and pairwise comparison) we consider, we treat ecologically asymmetric games on a large, regular network using pair approximation (Matsuda et al., 1992; Ohtsuki et al., 2006). We assume here that the degree of the network, $k$, is at least $3$. For $k = 2$, the network is just a cycle, and we do not treat this case here.
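The pair-approximation results below are compared against simulations on random regular networks. A minimal sketch of one standard way to sample such a graph (a configuration-model pairing with rejection of self-loops and multi-edges; this particular construction is an illustrative assumption, not necessarily the generator used for the thesis figures):

```python
import random

def random_regular_graph(n, k, rng=random.Random(1)):
    """Sample a random k-regular graph on n vertices (n*k must be even)
    via the configuration model: pair up vertex 'stubs' uniformly at
    random and resample whenever a self-loop or repeated edge appears."""
    assert (n * k) % 2 == 0, "n*k must be even"
    while True:
        stubs = [v for v in range(n) for _ in range(k)]
        rng.shuffle(stubs)
        pairs = list(zip(stubs[::2], stubs[1::2]))
        edges = {frozenset(e) for e in pairs}
        # Reject matchings containing self-loops or multi-edges.
        if all(a != b for a, b in pairs) and len(edges) == len(pairs):
            return edges

edges = random_regular_graph(12, 3)
degree = [0] * 12
for e in edges:
    for v in e:
        degree[v] += 1
print(all(d == 3 for d in degree))  # True: every vertex has degree k = 3
```

Rejection sampling is wasteful for large degrees but is simple and exact; for the small fixed degree $k = 3$ used in this chapter it terminates after a handful of attempts on average.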
The detailed steps of each calculation are omitted, but we include the main setups to allow for reconstruction of the reported results. We begin by recalling the way in which these four processes are defined (see e.g. Ohtsuki and Nowak (2006)):

(DB) In the death-birth process, a player is selected uniformly at random from the population for death. A neighbor of the focal individual is then selected to reproduce with probability proportional to relative fitness, and the resulting offspring replaces the deceased player;

(BD) In the birth-death process, an individual is selected from the population for reproduction with probability proportional to relative fitness, and the offspring replaces a neighbor at random;

(IM) In the imitation process, an individual is chosen uniformly at random to evaluate his or her strategy. This focal individual either adopts a strategy of a neighbor (with probability proportional to that neighbor's relative fitness) or retains his or her original strategy (with probability proportional to own relative fitness);

(PC) In the pairwise comparison process, a focal individual is selected uniformly at random from the population to evaluate his or her strategy. A model individual is then chosen uniformly at random from the neighbors of the focal individual as a basis for comparison, and the focal player adopts the strategy of the model player with probability proportional to the model player's relative fitness.

3.4.1 Notation and general remarks

Let $S = \{A_1, \dots, A_n\}$ be the set of pure strategies available to each player and suppose that there are $N$ players on a regular network of size $N$ (i.e. every node is occupied). A strategy pair $(A_r, A_s)$ means a choice of a player using strategy $A_r$ who has as a neighbor a player using strategy $A_s$. Let
$$p_r := \text{frequency of players using strategy } A_r; \qquad (3.17a)$$
$$p_{rs} := \text{frequency of strategy pairs } (A_r, A_s); \qquad (3.17b)$$
$$q_{s|r} := \text{conditional probability of finding an } A_s\text{-player next to an } A_r\text{-player}. \qquad (3.17c)$$
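The quantities in (3.17) can be estimated directly from a strategy configuration on a network. A small illustrative sketch (the 6-cycle and strategy assignment are hypothetical, chosen so that the counts are easy to verify by hand):

```python
from collections import Counter

def pair_statistics(neighbors, strategy, n_strats):
    """Estimate p_r, p_{rs} and q_{s|r} of Eq. (3.17) from a configuration.
    neighbors[i] lists the neighbors of vertex i; strategy[i] is the
    index of the strategy used by the player at vertex i."""
    N = len(strategy)
    counts = Counter(strategy)
    p = [counts[r] / N for r in range(n_strats)]
    # Count ordered strategy pairs (A_r, A_s) over all directed edges.
    pair_counts = Counter()
    total = 0
    for i, nbrs in enumerate(neighbors):
        for j in nbrs:
            pair_counts[(strategy[i], strategy[j])] += 1
            total += 1
    p_pair = {rs: c / total for rs, c in pair_counts.items()}
    # q_{s|r} = p_{rs} / p_r  (conditional neighbor frequency)
    q = {(s, r): p_pair.get((r, s), 0.0) / p[r]
         for r in range(n_strats) for s in range(n_strats) if p[r] > 0}
    return p, p_pair, q

# A 6-cycle with strategies 0 and 1 arranged in two blocks of three.
neighbors = [[(i - 1) % 6, (i + 1) % 6] for i in range(6)]
strategy = [0, 0, 0, 1, 1, 1]
p, p_pair, q = pair_statistics(neighbors, strategy, 2)
print(p)                                  # [0.5, 0.5]
print(round(q[(0, 0)] + q[(1, 0)], 6))    # conditionals given r = 0 sum to 1
```

Note that counting ordered pairs over directed edges makes $p_{rs} = p_{sr}$ automatic, matching the symmetry assumed in (3.18b) below.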
We will make repeated use of the following properties of these quantities:
$$\sum_{r=1}^{n} p_r = \sum_{s=1}^{n} q_{s|r} = 1; \qquad (3.18a)$$
$$p_s q_{r|s} = p_{rs} = p_{sr} = p_r q_{s|r}. \qquad (3.18b)$$
Strictly speaking, the equalities $p_s q_{r|s} = p_{rs} = p_{sr} = p_r q_{s|r}$ need not hold in general. As a pathological example, one may consider the network with two nodes and a single undirected link between these nodes. If the player on the first node uses $A_r$, the player on the second node uses $A_s$, and $r \neq s$, then $p_{rs} = 1$ but $p_s = 1/2$, which gives $q_{r|s} = 2$. However, for large random regular graphs (Bollobás, 2001), condition (3.18b) holds approximately, and we will take this equality as given in what follows.

For $X \in \{p_r, p_{rs}, q_{s|r}\}_{1 \leq r,s \leq n}$, let $\mathbb{E}[\Delta X]$ denote the expected change in $X$ in one step of the process. A pair $(A_r, i)$ denotes a player on vertex $i$ using strategy $A_r$. Given pairs $(A_r, i)$ and $(A_s, j)$, we denote by $\pi_{(A_s,j)}(A_r, i)$ the expected payoff to a player at vertex $j$ playing strategy $A_s$ given that they have as a neighbor an individual playing strategy $A_r$ at vertex $i$. If $\beta \geq 0$ is a parameter representing the intensity of selection, then payoff, $\pi$, is converted to fitness, $f_\beta(\pi)$, via
$$f_\beta(\pi) := \exp\{\beta\pi\}. \qquad (3.19)$$
When defined in this way, fitness is always positive.

The main theorem we prove is the following:

Theorem 5. In the limit of weak selection, the dynamics of the ecologically asymmetric death-birth, birth-death, imitation, and pairwise comparison processes on a large, regular network may be approximated by the dynamics of a symmetric game with the same update rule and payoff matrix $\overline{M} := \frac{1}{kN} \sum_{i,j=1}^{N} w_{ij} M_{ij}$, i.e.
$$\overline{M} = \begin{pmatrix} & A_1 & A_2 & \cdots & A_n \\ A_1 & \overline{a}_{11}, \overline{a}_{11} & \overline{a}_{12}, \overline{a}_{21} & \cdots & \overline{a}_{1n}, \overline{a}_{n1} \\ A_2 & \overline{a}_{21}, \overline{a}_{12} & \overline{a}_{22}, \overline{a}_{22} & \cdots & \overline{a}_{2n}, \overline{a}_{n2} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ A_n & \overline{a}_{n1}, \overline{a}_{1n} & \overline{a}_{n2}, \overline{a}_{2n} & \cdots & \overline{a}_{nn}, \overline{a}_{nn} \end{pmatrix}, \qquad (3.20)$$
where $\overline{a}_{st} := \frac{1}{kN} \sum_{i,j=1}^{N} w_{ij} a_{st}^{ij}$ for each $s$ and $t$.

Theorem 5 is established for each of these four update rules separately:

3.4.2 Death-birth updating

If an individual is playing strategy $A_r$ at node $i$, $A_s$ at $j$, and if $w_{ij} \neq 0$, then
$$\pi_{(A_s,j)}(A_r, i) = a_{sr}^{ji} + \sum_{m \neq i} w_{jm} \sum_{t=1}^{n} a_{st}^{jm} q_{t|s}. \qquad (3.21)$$
Suppose that an $(A_r, i)$ individual is selected for death. The probability that $(A_s, j)$ replaces this focal individual is proportional to $f_\beta\left(\pi_{(A_s,j)}(A_r, i)\right)$. For each $i$, let $(i_1, \dots, i_k)$ be an enumeration of the indices $j$ with $w_{ij} \neq 0$ (say, in increasing order) and let $s_\ell$ be the strategy used by the player at vertex $i_\ell$. If $(A_r, i)$ is chosen for death, then the probability that it is replaced by $(A_{s_\ell}, i_\ell)$ is
$$\frac{f_\beta\left(\pi_{(A_{s_\ell}, i_\ell)}(A_r, i)\right)}{\sum_{j=1}^{k} f_\beta\left(\pi_{(A_{s_{i_j}}, i_j)}(A_r, i)\right)}. \qquad (3.22)$$
The Taylor expansion of this term for small $\beta$ is
$$\frac{f_\beta\left(\pi_{(A_{s_\ell}, i_\ell)}(A_r, i)\right)}{\sum_{j=1}^{k} f_\beta\left(\pi_{(A_{s_{i_j}}, i_j)}(A_r, i)\right)} = \frac{1}{k} + \beta \left( \frac{k \, \pi_{(A_{s_\ell}, i_\ell)}(A_r, i) - \sum_{j=1}^{k} \pi_{(A_{s_{i_j}}, i_j)}(A_r, i)}{k^2} \right) + O(\beta^2). \qquad (3.23)$$
This expansion will be used frequently in the displays that follow.

Approximation of the expected change in strategy frequencies

Let $\delta_{x,y}$ be the Kronecker delta (defined to be $1$ if $x = y$ and $0$ otherwise). The probability of choosing the player on vertex $i$ for death is $1/N$. The chance that this player is using strategy $A_h$ is $p_h$. Suppose that $(A_{s_{i_1}}, \dots, A_{s_{i_k}})$ is a $k$-tuple of strategies. If the focal player at vertex $i$ uses strategy $A_h$, then the probability that the player on vertex $i_\ell$ uses strategy $A_{s_{i_\ell}}$ for each $\ell = 1, \dots, k$ is $q_{s_{i_1}|h} \cdots q_{s_{i_k}|h}$. Thus,
$$\begin{aligned} \mathbb{E}[\Delta p_r] &= \frac{1}{N} \sum_{i=1}^{N} \sum_{h \neq r} p_h \sum_{s_{i_1}, \dots, s_{i_k} = 1}^{n} q_{s_{i_1}|h} \cdots q_{s_{i_k}|h} \sum_{\ell=1}^{k} \delta_{s_{i_\ell}, r} \left( \frac{f_\beta\left(\pi_{(A_r, i_\ell)}(A_h, i)\right)}{\sum_{j=1}^{k} f_\beta\left(\pi_{(A_{s_{i_j}}, i_j)}(A_h, i)\right)} \right) \left( \frac{1}{N} \right) \\ &\quad + \frac{1}{N} \sum_{i=1}^{N} p_r \sum_{s_{i_1}, \dots, s_{i_k} = 1}^{n} q_{s_{i_1}|r} \cdots q_{s_{i_k}|r} \sum_{h \neq r} \sum_{\ell=1}^{k} \delta_{s_{i_\ell}, h} \left( \frac{f_\beta\left(\pi_{(A_h, i_\ell)}(A_r, i)\right)}{\sum_{j=1}^{k} f_\beta\left(\pi_{(A_{s_{i_j}}, i_j)}(A_r, i)\right)} \right) \left( -\frac{1}{N} \right) \end{aligned} \qquad (3.24)$$
for each strategy, $A_r$.
The Taylor expansion to first order yields
$$\mathbb{E}[\Delta p_r] \approx \beta \left( \frac{(k-1) p_r}{k^2 N^2} \right) \Big( (A) - (B) - (C) + (D) \Big) + O(\beta^2), \qquad (3.25)$$
where
$$(A) = \sum_{h \neq r} q_{h|r} \sum_{i=1}^{N} \sum_{\ell=1}^{k} \sum_{s_{i_\ell}=1}^{n} q_{s_{i_\ell}|r} \, \pi_{(A_{s_{i_\ell}}, i_\ell)}(A_r, i); \qquad (3.26a)$$
$$(B) = \sum_{h \neq r} q_{h|r} \sum_{i=1}^{N} \sum_{\ell=1}^{k} \sum_{s_{i_\ell}=1}^{n} q_{s_{i_\ell}|h} \, \pi_{(A_{s_{i_\ell}}, i_\ell)}(A_h, i); \qquad (3.26b)$$
$$(C) = \sum_{h \neq r} q_{h|r} \sum_{i=1}^{N} \sum_{\ell=1}^{k} \pi_{(A_h, i_\ell)}(A_r, i); \qquad (3.26c)$$
$$(D) = \sum_{h \neq r} q_{h|r} \sum_{i=1}^{N} \sum_{\ell=1}^{k} \pi_{(A_r, i_\ell)}(A_h, i). \qquad (3.26d)$$

Approximation of the expected change in pair frequencies

If $r \neq s$, then
$$\begin{aligned} \mathbb{E}[\Delta p_{rs}] &= \frac{1}{N} \sum_{i=1}^{N} \sum_{h \neq r,s} p_h \sum_{s_{i_1}, \dots, s_{i_k}=1}^{n} q_{s_{i_1}|h} \cdots q_{s_{i_k}|h} \sum_{\ell=1}^{k} \delta_{s_{i_\ell}, r} \left( \frac{f_\beta\left(\pi_{(A_r, i_\ell)}(A_h, i)\right)}{\sum_{j=1}^{k} f_\beta\left(\pi_{(A_{s_{i_j}}, i_j)}(A_h, i)\right)} \right) \left( \frac{2 \sum_{\alpha=1}^{k} \delta_{s_{i_\alpha}, s}}{kN} \right) \\ &\quad + \frac{1}{N} \sum_{i=1}^{N} \sum_{h \neq r,s} p_h \sum_{s_{i_1}, \dots, s_{i_k}=1}^{n} q_{s_{i_1}|h} \cdots q_{s_{i_k}|h} \sum_{\ell=1}^{k} \delta_{s_{i_\ell}, s} \left( \frac{f_\beta\left(\pi_{(A_s, i_\ell)}(A_h, i)\right)}{\sum_{j=1}^{k} f_\beta\left(\pi_{(A_{s_{i_j}}, i_j)}(A_h, i)\right)} \right) \left( \frac{2 \sum_{\alpha=1}^{k} \delta_{s_{i_\alpha}, r}}{kN} \right) \\ &\quad + \frac{1}{N} \sum_{i=1}^{N} p_r \sum_{s_{i_1}, \dots, s_{i_k}=1}^{n} q_{s_{i_1}|r} \cdots q_{s_{i_k}|r} \sum_{\ell=1}^{k} \delta_{s_{i_\ell}, s} \left( \frac{f_\beta\left(\pi_{(A_s, i_\ell)}(A_r, i)\right)}{\sum_{j=1}^{k} f_\beta\left(\pi_{(A_{s_{i_j}}, i_j)}(A_r, i)\right)} \right) \left( \frac{2 \sum_{\alpha=1}^{k} \left(\delta_{s_{i_\alpha}, r} - \delta_{s_{i_\alpha}, s}\right)}{kN} \right) \\ &\quad + \frac{1}{N} \sum_{i=1}^{N} p_r \sum_{s_{i_1}, \dots, s_{i_k}=1}^{n} q_{s_{i_1}|r} \cdots q_{s_{i_k}|r} \sum_{h \neq r,s} \sum_{\ell=1}^{k} \delta_{s_{i_\ell}, h} \left( \frac{f_\beta\left(\pi_{(A_h, i_\ell)}(A_r, i)\right)}{\sum_{j=1}^{k} f_\beta\left(\pi_{(A_{s_{i_j}}, i_j)}(A_r, i)\right)} \right) \left( \frac{-2 \sum_{\alpha=1}^{k} \delta_{s_{i_\alpha}, s}}{kN} \right) \\ &\quad + \frac{1}{N} \sum_{i=1}^{N} p_s \sum_{s_{i_1}, \dots, s_{i_k}=1}^{n} q_{s_{i_1}|s} \cdots q_{s_{i_k}|s} \sum_{\ell=1}^{k} \delta_{s_{i_\ell}, r} \left( \frac{f_\beta\left(\pi_{(A_r, i_\ell)}(A_s, i)\right)}{\sum_{j=1}^{k} f_\beta\left(\pi_{(A_{s_{i_j}}, i_j)}(A_s, i)\right)} \right) \left( \frac{2 \sum_{\alpha=1}^{k} \left(\delta_{s_{i_\alpha}, s} - \delta_{s_{i_\alpha}, r}\right)}{kN} \right) \\ &\quad + \frac{1}{N} \sum_{i=1}^{N} p_s \sum_{s_{i_1}, \dots, s_{i_k}=1}^{n} q_{s_{i_1}|s} \cdots q_{s_{i_k}|s} \sum_{h \neq r,s} \sum_{\ell=1}^{k} \delta_{s_{i_\ell}, h} \left( \frac{f_\beta\left(\pi_{(A_h, i_\ell)}(A_s, i)\right)}{\sum_{j=1}^{k} f_\beta\left(\pi_{(A_{s_{i_j}}, i_j)}(A_s, i)\right)} \right) \left( \frac{-2 \sum_{\alpha=1}^{k} \delta_{s_{i_\alpha}, r}}{kN} \right). \end{aligned} \qquad (3.27)$$
On the other hand,
$$\begin{aligned} \mathbb{E}[\Delta p_{rr}] &= \frac{1}{N} \sum_{i=1}^{N} \sum_{h \neq r} p_h \sum_{s_{i_1}, \dots, s_{i_k}=1}^{n} q_{s_{i_1}|h} \cdots q_{s_{i_k}|h} \sum_{\ell=1}^{k} \delta_{s_{i_\ell}, r} \left( \frac{f_\beta\left(\pi_{(A_r, i_\ell)}(A_h, i)\right)}{\sum_{j=1}^{k} f_\beta\left(\pi_{(A_{s_{i_j}}, i_j)}(A_h, i)\right)} \right) \left( \frac{2 \sum_{\alpha=1}^{k} \delta_{s_{i_\alpha}, r}}{kN} \right) \\ &\quad + \frac{1}{N} \sum_{i=1}^{N} p_r \sum_{s_{i_1}, \dots, s_{i_k}=1}^{n} q_{s_{i_1}|r} \cdots q_{s_{i_k}|r} \sum_{h \neq r} \sum_{\ell=1}^{k} \delta_{s_{i_\ell}, h} \left( \frac{f_\beta\left(\pi_{(A_h, i_\ell)}(A_r, i)\right)}{\sum_{j=1}^{k} f_\beta\left(\pi_{(A_{s_{i_j}}, i_j)}(A_r, i)\right)} \right) \left( \frac{-2 \sum_{\alpha=1}^{k} \delta_{s_{i_\alpha}, r}}{kN} \right). \end{aligned} \qquad (3.28)$$
The zeroth-order Taylor expansion yields
$$\mathbb{E}[\Delta p_{rs}] \approx \frac{4 p_r}{kN} \left( -k q_{s|r} + (k-1) \sum_{h=1}^{n} q_{s|h} q_{h|r} \right) + O(\beta) \qquad (3.29)$$
if $r \neq s$, and
$$\mathbb{E}[\Delta p_{rr}] \approx \frac{2 p_r}{kN} \left( 1 - k q_{r|r} + (k-1) \sum_{h=1}^{n} q_{r|h} q_{h|r} \right) + O(\beta). \qquad (3.30)$$
Therefore, $\mathbb{E}[\Delta p_r] = O(\beta)$ (by Eq. (3.25)) and $\mathbb{E}[\Delta p_{rs}] = O(1)$ (by Eqs. (3.29) and (3.30)) for each $r$ and $s$, which results in a separation of timescales between the strategy frequencies and the pair frequencies. In particular, the pair frequencies will reach their equilibrium much more quickly than the strategy frequencies will, so we can examine the expression for $\mathbb{E}[\Delta p_r]$ under the assumption that the pair frequencies have reached their equilibrium (Ohtsuki et al., 2006).

Weak-selection dynamics

Assuming that each update takes place in one unit of time, we can approximate the dynamics by the deterministic systems $\dot{p}_r = \mathbb{E}[\Delta p_r]$ and $\dot{p}_{rs} = \mathbb{E}[\Delta p_{rs}]$ for each $r$ and $s$ (Ohtsuki et al., 2006; Ohtsuki and Nowak, 2006). Since $\beta$ is small, we see that the latter system will reach equilibrium much more quickly than the former. When the pair frequencies have reached equilibrium (i.e. $\mathbb{E}[\Delta p_{rs}] = 0$), we have
$$k q_{s|r} = \delta_{s,r} + (k-1) \sum_{h=1}^{n} q_{s|h} q_{h|r}. \qquad (3.31)$$
Ohtsuki and Nowak (2006) show that this equation implies that
$$q_{r|s} = p_r + \left( \frac{1}{k-1} \right) \left( \delta_{s,r} - p_r \right). \qquad (3.32)$$
Assuming the system has reached this local equilibrium, we then have
$$\begin{aligned} \mathbb{E}[\Delta p_r] &\approx \beta \left( \frac{(k-1) p_r}{k^2 N^2} \right) \left( (k+1) \sum_{i,j=1}^{N} w_{ij} \sum_{s=1}^{n} a_{rs}^{ij} q_{s|r} - k \sum_{i,j=1}^{N} w_{ij} \sum_{s,t=1}^{n} a_{st}^{ij} q_{t|s} q_{s|r} - \sum_{i,j=1}^{N} w_{ij} \sum_{s,t=1}^{n} a_{st}^{ij} q_{s|t} q_{t|r} \right) + O(\beta^2) \\ &= \beta \left( \frac{(k-1) p_r}{kN} \right) \left( (k+1) \sum_{s=1}^{n} \overline{a}_{rs} q_{s|r} - k \sum_{s,t=1}^{n} \overline{a}_{st} q_{t|s} q_{s|r} - \sum_{s,t=1}^{n} \overline{a}_{st} q_{s|t} q_{t|r} \right) + O(\beta^2) \\ &= \beta \left( \frac{(k-2) p_r}{k(k-1)N} \right) \left( -(k-2)(k+1) \sum_{s,t=1}^{n} \overline{a}_{st} p_s p_t + \left(k^2 - k - 1\right) \sum_{s=1}^{n} \overline{a}_{rs} p_s - \sum_{s=1}^{n} \overline{a}_{sr} p_s - (k+1) \sum_{s=1}^{n} \overline{a}_{ss} p_s + (k+1) \overline{a}_{rr} \right) + O(\beta^2) \end{aligned} \qquad (3.33)$$
as long as $\beta$ is small. Therefore, if we choose an appropriate time scale and set
$$\dot{p}_r = \mathbb{E}[\Delta p_r] / \Delta t; \qquad (3.34a)$$
$$b_{rs} = \frac{\overline{a}_{rr} + \overline{a}_{rs} - \overline{a}_{sr} - \overline{a}_{ss}}{k-2}; \qquad (3.34b)$$
$$\phi = \sum_{s,t=1}^{n} p_s p_t \overline{a}_{st}, \qquad (3.34c)$$
then $\dot{p}_r = p_r \left( \sum_{s=1}^{n} p_s \left( \overline{a}_{rs} + b_{rs} \right) - \phi \right)$, recovering the replicator equation of Ohtsuki and Nowak (2006). It follows that the dynamics depend on $\overline{M}$, proving Theorem 5 for death-birth updating.

3.4.3 Birth-death updating

In the birth-death process, an individual is selected for reproduction with probability proportional to relative fitness. The offspring of the selected player then replaces a random neighbor.
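The equilibrium conditionals (3.32) can be checked numerically against condition (3.31) for arbitrary strategy frequencies. A quick sanity-check sketch (the frequencies and degree below are arbitrary):

```python
# Verify that q_{r|s} = p_r + (delta_{s,r} - p_r)/(k - 1)  (Eq. (3.32))
# satisfies the local-equilibrium condition (3.31):
#   k q_{s|r} = delta_{s,r} + (k - 1) * sum_h q_{s|h} q_{h|r}.

def q_eq(r, s, p, k):
    """Equilibrium conditional frequency q_{r|s} from Eq. (3.32)."""
    delta = 1.0 if s == r else 0.0
    return p[r] + (delta - p[r]) / (k - 1)

p = [0.2, 0.3, 0.5]   # arbitrary strategy frequencies (sum to 1)
k = 4                 # degree of the regular network
n = len(p)

for r in range(n):
    for s in range(n):
        lhs = k * q_eq(s, r, p, k)                 # k q_{s|r}
        delta = 1.0 if s == r else 0.0
        rhs = delta + (k - 1) * sum(q_eq(s, h, p, k) * q_eq(h, r, p, k)
                                    for h in range(n))
        assert abs(lhs - rhs) < 1e-12
print("Eq. (3.31) holds for all r, s")
```

The identity holds exactly (not just approximately) whenever the frequencies sum to one, which is why (3.32) can be substituted directly into the expansions that follow.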
Rather than trying to approximate the total fitness of the population, we will simply denote this value by $f_{\mathrm{pop}}$. Since this value is positive, it does not influence the sign of the expectation values, and as such we will largely ignore it. We have
$$\begin{aligned} \mathbb{E}[\Delta p_r] &= \frac{1}{f_{\mathrm{pop}}} N p_r \left( \frac{1}{N} \right) \sum_{i=1}^{N} \sum_{s_{i_1}, \dots, s_{i_k}=1}^{n} q_{s_{i_1}|r} \cdots q_{s_{i_k}|r} \, f_\beta\left( \sum_{\ell=1}^{k} a_{r s_{i_\ell}}^{i i_\ell} \right) \sum_{h \neq r} \left( \frac{\sum_{j=1}^{k} \delta_{s_{i_j}, h}}{k} \right) \left( \frac{1}{N} \right) \\ &\quad + \frac{1}{f_{\mathrm{pop}}} \sum_{h \neq r} N p_h \left( \frac{1}{N} \right) \sum_{i=1}^{N} \sum_{s_{i_1}, \dots, s_{i_k}=1}^{n} q_{s_{i_1}|h} \cdots q_{s_{i_k}|h} \, f_\beta\left( \sum_{\ell=1}^{k} a_{h s_{i_\ell}}^{i i_\ell} \right) \left( \frac{\sum_{j=1}^{k} \delta_{s_{i_j}, r}}{k} \right) \left( -\frac{1}{N} \right). \end{aligned} \qquad (3.35)$$
The local equilibrium conditions for birth-death updating turn out to be the same as those for death-birth updating (Eq. (3.32)). These local equilibrium conditions do not take selection into account as long as $\beta$ is close to $0$, so they are essentially based on a neutral process in which at most one strategy is updated at each time step. Therefore, it is perhaps not surprising that these conditions are the same for different processes based on one strategy update in each time step.

In the following expressions, by $x \propto y$ we mean that $x$ is proportional to $y$ with a positive constant of proportionality. Letting $\beta \to 0$ and using the local equilibrium conditions (as well as the same separation-of-timescales argument we used in §3.4.2), we find that
$$\begin{aligned} \mathbb{E}[\Delta p_r] &\propto \beta p_r \left( k \sum_{i,j=1}^{N} w_{ij} \sum_{s=1}^{n} a_{rs}^{ij} q_{s|r} - (k-1) \sum_{i,j=1}^{N} w_{ij} \sum_{s,t=1}^{n} a_{st}^{ij} q_{t|s} q_{s|r} - \sum_{i,j=1}^{N} w_{ij} \sum_{s=1}^{n} a_{sr}^{ij} q_{s|r} \right) + O(\beta^2) \\ &\propto \beta p_r \left( k \sum_{s=1}^{n} \overline{a}_{rs} q_{s|r} - (k-1) \sum_{s,t=1}^{n} \overline{a}_{st} q_{t|s} q_{s|r} - \sum_{s=1}^{n} \overline{a}_{sr} q_{s|r} \right) + O(\beta^2) \\ &\propto \beta p_r \left( -(k-2) \sum_{s,t=1}^{n} \overline{a}_{st} p_s p_t + (k-1) \sum_{s=1}^{n} \overline{a}_{rs} p_s - \sum_{s=1}^{n} \overline{a}_{sr} p_s - \sum_{s=1}^{n} \overline{a}_{ss} p_s + \overline{a}_{rr} \right) + O(\beta^2). \end{aligned} \qquad (3.36)$$
Just as we saw with the death-birth process, after choosing an appropriate time scale and letting
$$b_{rs} = \frac{(k+1)\overline{a}_{rr} + \overline{a}_{rs} - \overline{a}_{sr} - (k+1)\overline{a}_{ss}}{(k-2)(k+1)}; \qquad (3.37a)$$
$$\phi = \sum_{s,t=1}^{n} p_s p_t \overline{a}_{st}, \qquad (3.37b)$$
we have $\dot{p}_r = p_r \left( \sum_{s=1}^{n} p_s \left( \overline{a}_{rs} + b_{rs} \right) - \phi \right)$, proving Theorem 5 for birth-death updating.

3.4.4 Imitation updating

In the imitation process, an individual is selected uniformly at random from the population to evaluate his or her strategy.
The chosen player then compares his or her fitness with the fitness of each neighbor and either adopts a new strategy or retains his or her current strategy (with probability proportional to relative fitness). Suppose that an individual at vertex $i$, playing $A_r$, is selected to evaluate his or her strategy. If $s \neq r$, then the probability that he or she adopts strategy $A_s$ is
$$\frac{\sum_{\ell=1}^{k} \delta_{s_\ell, s} \, f_\beta\left(\pi_{(A_{s_\ell}, i_\ell)}(A_r, i)\right)}{\sum_{j=1}^{k} f_\beta\left(\pi_{(A_{s_{i_j}}, i_j)}(A_r, i)\right) + f_\beta\left(\sum_{j=1}^{k} a_{r s_{i_j}}^{i i_j}\right)} \qquad (3.38)$$
and the probability that his or her strategy remains unchanged is
$$\frac{\sum_{\ell=1}^{k} \delta_{s_\ell, r} \, f_\beta\left(\pi_{(A_{s_\ell}, i_\ell)}(A_r, i)\right) + f_\beta\left(\sum_{j=1}^{k} a_{r s_{i_j}}^{i i_j}\right)}{\sum_{j=1}^{k} f_\beta\left(\pi_{(A_{s_{i_j}}, i_j)}(A_r, i)\right) + f_\beta\left(\sum_{j=1}^{k} a_{r s_{i_j}}^{i i_j}\right)}. \qquad (3.39)$$
We let $\pi_{(A_s,j)}(A_r, i)$ be the same as it was for death-birth updating. For small $\beta$,
$$\frac{f_\beta\left(\pi_{(A_{s_\ell}, i_\ell)}(A_r, i)\right)}{\sum_{j=1}^{k} f_\beta\left(\pi_{(A_{s_{i_j}}, i_j)}(A_r, i)\right) + f_\beta\left(\sum_{j=1}^{k} a_{r s_{i_j}}^{i i_j}\right)} \approx \frac{1}{k+1} + \beta \left( \frac{(k+1)\,\pi_{(A_{s_\ell}, i_\ell)}(A_r, i) - \sum_{j=1}^{k} \pi_{(A_{s_{i_j}}, i_j)}(A_r, i) - \sum_{j=1}^{k} a_{r s_{i_j}}^{i i_j}}{(k+1)^2} \right) + O(\beta^2). \qquad (3.40)$$

Approximation of the expected change in strategy frequencies

For $r \in \{1, \dots, n\}$,
$$\begin{aligned} \mathbb{E}[\Delta p_r] &= \frac{1}{N} \sum_{i=1}^{N} \sum_{h \neq r} p_h \sum_{s_{i_1}, \dots, s_{i_k}=1}^{n} q_{s_{i_1}|h} \cdots q_{s_{i_k}|h} \sum_{\ell=1}^{k} \delta_{s_{i_\ell}, r} \left( \frac{f_\beta\left(\pi_{(A_r, i_\ell)}(A_h, i)\right)}{\sum_{j=1}^{k} f_\beta\left(\pi_{(A_{s_{i_j}}, i_j)}(A_h, i)\right) + f_\beta\left(\sum_{j=1}^{k} a_{h s_{i_j}}^{i i_j}\right)} \right) \left( \frac{1}{N} \right) \\ &\quad + \frac{1}{N} \sum_{i=1}^{N} p_r \sum_{s_{i_1}, \dots, s_{i_k}=1}^{n} q_{s_{i_1}|r} \cdots q_{s_{i_k}|r} \sum_{h \neq r} \sum_{\ell=1}^{k} \delta_{s_{i_\ell}, h} \left( \frac{f_\beta\left(\pi_{(A_h, i_\ell)}(A_r, i)\right)}{\sum_{j=1}^{k} f_\beta\left(\pi_{(A_{s_{i_j}}, i_j)}(A_r, i)\right) + f_\beta\left(\sum_{j=1}^{k} a_{r s_{i_j}}^{i i_j}\right)} \right) \left( -\frac{1}{N} \right). \end{aligned} \qquad (3.41)$$
The local equilibrium conditions are exactly the same as they were for the death-birth process. Assuming that the system has reached this local equilibrium, the separation-of-timescales argument we used in §3.4.2 gives
$$\begin{aligned} \mathbb{E}[\Delta p_r] &\approx \beta \left( \frac{p_r}{(k+1)^2 N^2} \right) \left( \left(k^2 + 2k - 1\right) \sum_{i,j=1}^{N} w_{ij} \sum_{s=1}^{n} a_{rs}^{ij} q_{s|r} - \left(k^2 + k - 2\right) \sum_{i,j=1}^{N} w_{ij} \sum_{s,t=1}^{n} a_{st}^{ij} q_{t|s} q_{s|r} \right. \\ &\qquad \left. - (k-1) \sum_{i,j=1}^{N} w_{ij} \sum_{s,t=1}^{n} a_{ts}^{ij} q_{t|s} q_{s|r} - 2 \sum_{i,j=1}^{N} w_{ij} \sum_{s=1}^{n} a_{sr}^{ij} q_{s|r} \right) + O(\beta^2) \\ &= \beta \left( \frac{k p_r}{(k+1)^2 N} \right) \left( \left(k^2 + 2k - 1\right) \sum_{s=1}^{n} \overline{a}_{rs} q_{s|r} - \left(k^2 + k - 2\right) \sum_{s,t=1}^{n} \overline{a}_{st} q_{t|s} q_{s|r} - (k-1) \sum_{s,t=1}^{n} \overline{a}_{ts} q_{t|s} q_{s|r} - 2 \sum_{s=1}^{n} \overline{a}_{sr} q_{s|r} \right) + O(\beta^2) \\ &= \beta \left( \frac{k(k-2) p_r}{(k-1)(k+1)^2 N} \right) \left( -(k-2)(k+3) \sum_{s,t=1}^{n} \overline{a}_{st} p_s p_t + \left(k^2 + k - 3\right) \sum_{s=1}^{n} \overline{a}_{rs} p_s - 3 \sum_{s=1}^{n} \overline{a}_{sr} p_s - (k+3) \sum_{s=1}^{n} \overline{a}_{ss} p_s + (k+3) \overline{a}_{rr} \right) + O(\beta^2). \end{aligned} \qquad (3.42)$$
With $b_{rs} = \frac{(k+3)\overline{a}_{rr} + 3\overline{a}_{rs} - 3\overline{a}_{sr} - (k+3)\overline{a}_{ss}}{(k-2)(k+3)}$ and $\phi = \sum_{s,t=1}^{n} p_s p_t \overline{a}_{st}$, we have
$$\dot{p}_r = p_r \left( \sum_{s=1}^{n} p_s \left( \overline{a}_{rs} + b_{rs} \right) - \phi \right), \qquad (3.43)$$
which establishes Theorem 5 for imitation updating.

3.4.5 Pairwise comparison updating

In the pairwise comparison process, a focal individual is selected uniformly at random from the population. A model individual is then chosen uniformly at random from the neighbors of the focal individual. If $\pi_f$ and $\pi_m$ denote the payoffs to the focal and model individuals, respectively, then the focal player will adopt the strategy of the model player with probability
$$\frac{1}{1 + e^{\beta(\pi_f - \pi_m)}} = \frac{f_\beta(\pi_m)}{f_\beta(\pi_m) + f_\beta(\pi_f)}, \qquad (3.44)$$
where $\beta \geq 0$ is a real parameter representing the intensity of selection. In addition to the expected payoff $\pi_{(A_s,j)}(A_r, i)$ (defined in the same way as for death-birth updating), we let
$$\pi_{(A_s,i)} := \sum_{j=1}^{k} a_{s s_{i_j}}^{i i_j} \qquad (3.45)$$
if $(A_s, i)$ has as a neighborhood $(A_{s_{i_1}}, \dots, A_{s_{i_k}})$. With this notation in place, we have
$$\begin{aligned} \mathbb{E}[\Delta p_r] &= \frac{1}{N} \sum_{i=1}^{N} \sum_{h \neq r} p_h \sum_{s_{i_1}, \dots, s_{i_k}=1}^{n} q_{s_{i_1}|h} \cdots q_{s_{i_k}|h} \sum_{\ell=1}^{k} \left( \frac{1}{k} \right) \delta_{s_{i_\ell}, r} \left( \frac{f_\beta\left(\pi_{(A_r, i_\ell)}(A_h, i)\right)}{f_\beta\left(\pi_{(A_r, i_\ell)}(A_h, i)\right) + f_\beta\left(\pi_{(A_h, i)}\right)} \right) \left( \frac{1}{N} \right) \\ &\quad + \frac{1}{N} \sum_{i=1}^{N} p_r \sum_{s_{i_1}, \dots, s_{i_k}=1}^{n} q_{s_{i_1}|r} \cdots q_{s_{i_k}|r} \sum_{h \neq r} \sum_{\ell=1}^{k} \left( \frac{1}{k} \right) \delta_{s_{i_\ell}, h} \left( \frac{f_\beta\left(\pi_{(A_h, i_\ell)}(A_r, i)\right)}{f_\beta\left(\pi_{(A_h, i_\ell)}(A_r, i)\right) + f_\beta\left(\pi_{(A_r, i)}\right)} \right) \left( -\frac{1}{N} \right). \end{aligned} \qquad (3.46)$$
As $\beta \to 0$, we have
$$\frac{f_\beta(x)}{f_\beta(x) + f_\beta(y)} \approx \frac{1}{2} + \beta \left( \frac{x - y}{4} \right) + O(\beta^2). \qquad (3.47)$$
Consequently, in the limit of weak selection,
$$\begin{aligned} \mathbb{E}[\Delta p_r] &\approx \beta \frac{p_r}{2kN^2} \left( k \sum_{i,j=1}^{N} w_{ij} \sum_{s=1}^{n} a_{rs}^{ij} q_{s|r} - (k-1) \sum_{i,j=1}^{N} w_{ij} \sum_{s,t=1}^{n} a_{st}^{ij} q_{t|s} q_{s|r} - \sum_{i,j=1}^{N} w_{ij} \sum_{s=1}^{n} a_{sr}^{ij} q_{s|r} \right) + O(\beta^2) \\ &= \beta \frac{p_r}{2N} \left( k \sum_{s=1}^{n} \overline{a}_{rs} q_{s|r} - (k-1) \sum_{s,t=1}^{n} \overline{a}_{st} q_{t|s} q_{s|r} - \sum_{s=1}^{n} \overline{a}_{sr} q_{s|r} \right) + O(\beta^2) \\ &= \beta \left( \frac{(k-2) p_r}{2(k-1)N} \right) \left( -(k-2) \sum_{s,t=1}^{n} \overline{a}_{st} p_s p_t + (k-1) \sum_{s=1}^{n} \overline{a}_{rs} p_s - \sum_{s=1}^{n} \overline{a}_{sr} p_s - \sum_{s=1}^{n} \overline{a}_{ss} p_s + \overline{a}_{rr} \right) + O(\beta^2). \end{aligned} \qquad (3.48)$$
The local equilibrium conditions are exactly the same as they were for the other processes, but in this case they are not needed to arrive at this last expression for $\mathbb{E}[\Delta p_r]$. With $b_{rs} = \frac{\overline{a}_{rr} + \overline{a}_{rs} - \overline{a}_{sr} - \overline{a}_{ss}}{k-2}$ and $\phi = \sum_{s,t=1}^{n} p_s p_t \overline{a}_{st}$, we have $\dot{p}_r = p_r \left( \sum_{s=1}^{n} p_s \left( \overline{a}_{rs} + b_{rs} \right) - \phi \right)$.
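For every update rule treated above, the weak-selection dynamics collapse to a replicator equation on the averaged payoffs $\overline{a}_{rs}$ with a rule-specific correction $b_{rs}$. A minimal sketch integrating this equation by Euler steps, using the transform $b_{rs} = (\overline{a}_{rr} + \overline{a}_{rs} - \overline{a}_{sr} - \overline{a}_{ss})/(k-2)$ shared by death-birth and pairwise comparison updating (the payoff matrix below is hypothetical):

```python
def replicator_step(p, a, k, dt=0.01):
    """One Euler step of dp_r/dt = p_r (sum_s p_s (a_rs + b_rs) - phi),
    with b_rs = (a_rr + a_rs - a_sr - a_ss)/(k - 2) (the death-birth and
    pairwise-comparison transform) and phi = sum_{s,t} p_s p_t a_st."""
    n = len(p)
    b = [[(a[r][r] + a[r][s] - a[s][r] - a[s][s]) / (k - 2)
          for s in range(n)] for r in range(n)]
    phi = sum(p[s] * p[t] * a[s][t] for s in range(n) for t in range(n))
    dp = [p[r] * (sum(p[s] * (a[r][s] + b[r][s]) for s in range(n)) - phi)
          for r in range(n)]
    return [x + dt * d for x, d in zip(p, dp)]

# Hypothetical 2-strategy averaged payoff matrix on a 3-regular network
# (Donation-Game-like: index 0 is C with b = 3, c = 1; index 1 is D).
a = [[2.0, -1.0],
     [3.0,  0.0]]
p = [0.5, 0.5]
for _ in range(2000):
    p = replicator_step(p, a, k=3)
print([round(x, 3) for x in p])  # defectors take over under this matrix
```

One can check that the Euler step preserves $\sum_r p_r$ exactly, since $\sum_{r,s} p_r p_s b_{rs} = 0$ by the antisymmetric structure of $b_{rs}$.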
It follows that the dynamics of the pairwise comparison process depend on $\overline{M}$, which completes the proof of Theorem 5.

Finally, we show that the dynamics of each process are independent of the particular network configuration if the asymmetric game is spatially additive:

Definition 4. If $a_{rs}^{ij} = x_{rs}^{i} + y_{rs}^{j}$ for each $r$ and $s$, then $M_{ij}$ is called a spatially additive payoff matrix. If $M_{ij}$ is spatially additive for each $i$ and $j$, then the game is said to be spatially additive.

Corollary 2. If $M_{ij}$ is spatially additive for each $i$ and $j$, then the expected change in the frequency of strategy $A_r$, $\mathbb{E}[\Delta p_r]$, is independent of $(w_{ij})_{1 \leq i,j \leq N}$ for each $r$. In particular, the dynamics of the process do not depend on the particular network configuration.

Proof. If $a_{rs}^{ij} = x_{rs}^{i} + y_{rs}^{j}$ for each $r$, $s$, $i$, $j$, then
$$\overline{a}_{rs} = \frac{1}{kN} \sum_{i,j=1}^{N} w_{ij} a_{rs}^{ij} = \frac{1}{N} \sum_{i=1}^{N} x_{rs}^{i} + \frac{1}{N} \sum_{j=1}^{N} y_{rs}^{j}, \qquad (3.49)$$
which is independent of $(w_{ij})_{1 \leq i,j \leq N}$. The corollary then follows directly from Theorem 5.

3.4.6 Computer simulations

In each simulation, a random $k$-regular network (with $k = 3$) of $N = 500$ vertices is generated. The selection intensity is $\beta = 0.01$ for Figs. 3.1 and 3.2, $\beta = 0.1$ for Fig. 3.3(A), and $\beta = 0.5$ for Fig. 3.3(B). The figures are generated based on data collected from a number of cycles: In each cycle, the network is given an initial configuration of cooperators by first choosing a density, $d$, uniformly at random from the interval $[0, 1]$, and then placing a cooperator (resp. defector) at each vertex with probability $d$ (resp. $1 - d$). The update rule is applied until either $C$ or $D$ fixates. (The absorption time depends on a number of factors, including the game, selection strength, and initial configuration of the population.) Let $p_C(t)$ denote the frequency of cooperators at time $t$; $p_C(0)$ is just the initial frequency of cooperators. The frequency $p_C(t+1)$ is obtained from $p_C(t)$ by adding to it the change in the frequency of cooperators over the next $N$ ($= 500$) updates.
For each $t$, the quantity $p_C(t+1) - p_C(t)$ is associated with $p_C(t)$. Once $p_C \in \{0, 1\}$, a new initial configuration of cooperators is chosen and the process is repeated. After each possible value of $p_C$ has at least $10^5$ associated data points (changes in cooperator frequency), these changes are averaged, and this resulting quantity, $\Delta p_C$, is paired with the corresponding value of $p_C$. These pairs are then plotted to obtain Figs. 3.1, 3.2, and 3.3. The results from pair approximation apply to the expected change over one update, but we can easily get a predicted result over $N$ updates (i.e. one Monte Carlo step) by scaling the expressions for $\mathbb{E}[\Delta p_C]$ by a factor of $N$.

Small deviations from the expected results are seen in each of the figures, and these deviations are due to the effects of a finite selection parameter ($\beta$) and the finiteness of the set of possible values of $p_C$ ($\Delta p_C$ is a multiple of $1/N$). As an example of how these properties can give rise to small deviations, consider the Donation Game under imitation updating in Fig. 3.1(A). Eq. (3.42) predicts that $\mathbb{E}[\Delta p_C]$ is always positive, yet we observe in Fig. 3.1(A) that this change becomes negative as $p_C \to 0, 1$. If $p_C = (N-1)/N$ and $\beta > 0$, then the only defector in the population has a higher payoff than all of the other cooperators. Let $f_\beta^{(j)}$ denote the fitness of the player at location $j$. Thus, with just a single defector (at location $i$) in a population of cooperators, we have $f_\beta^{(i)} \geq f_\beta^{(j)}$ for each $j \neq i$, with equality if and only if $\beta = 0$. The expected change in the frequency of cooperators in the next time step is
$$\mathbb{E}[\Delta p_C] = \left( \frac{1}{N} \right) \left( \frac{1}{N} \right) \left( 1 - \frac{f_\beta^{(i)}}{f_\beta^{(i)} + \sum_{\{j \,:\, w_{ij} = 1\}} f_\beta^{(j)}} \right) - \left( \frac{1}{N} \right) \sum_{\{j \,:\, w_{ij} = 1\}} \left( \frac{1}{N} \right) \left( \frac{f_\beta^{(i)}}{f_\beta^{(j)} + \sum_{\{l \,:\, w_{jl} = 1\}} f_\beta^{(l)}} \right). \qquad (3.50)$$
The first (resp. second) summation runs over all of the neighbors of $i$ (resp. $j$). For each $j \neq i$,
$$\frac{f_\beta^{(i)}}{f_\beta^{(i)} + \sum_{\{j \,:\, w_{ij} = 1\}} f_\beta^{(j)}} \geq \frac{1}{k+1}; \qquad (3.51a)$$
$$\frac{f_\beta^{(i)}}{f_\beta^{(j)} + \sum_{\{l \,:\, w_{jl} = 1\}} f_\beta^{(l)}} \geq \frac{1}{k+1}, \qquad (3.51b)$$
both with equality if and only if $\beta = 0$.
Therefore, we see that
$$\mathbb{E}[\Delta p_C] \leq \left( \frac{1}{N} \right) \left( \frac{1}{N} \right) \left( 1 - \frac{1}{k+1} \right) - \left( \frac{1}{N} \right) \left( \frac{k}{N} \right) \left( \frac{1}{k+1} \right) = 0 \qquad (3.52)$$
with equality if and only if $\beta = 0$. The same argument explains the negative average changes as $p_C \to 0$. Since $p_C$ can only take on finitely many values for a given population size, similar arguments explain the small discrepancies between the actual and expected results for intermediate values of $p_C$ (see Fig. 3.1).

Chapter 4

Structural symmetry in evolutionary games

In evolutionary game theory, an important measure of a mutant trait (strategy) is its ability to invade and take over an otherwise-monomorphic population. Typically, one quantifies the success of a mutant strategy via the probability that a randomly occurring mutant will fixate in the population. However, in a structured population, this fixation probability may depend on where the mutant arises. Moreover, the fixation probability is just one quantity by which one can measure the success of a mutant; fixation time, for instance, is another. We define a notion of homogeneity for evolutionary games that captures what it means for two single-mutant states, i.e. two configurations of a single mutant in an otherwise-monomorphic population, to be "evolutionarily equivalent" in the sense that all measures of evolutionary success are the same for both configurations. Using asymmetric games, we argue that the term "homogeneous" should apply to the evolutionary process as a whole rather than to just the population structure. For evolutionary matrix games in graph-structured populations, we give precise conditions under which the resulting process is homogeneous. Finally, we show that asymmetric matrix games can be reduced to symmetric games if the population structure possesses a sufficient degree of symmetry.

4.1 Introduction

One of the most basic models of evolution in finite populations is the Moran process (Moran, 1958).
In the Moran process, a population consisting of two types, a mutant type and a wild type, is continually updated via a birth-death process until only one type remains. The mutant and wild types are distinguished by only their reproductive fitness, which is assumed to be an intrinsic property of a player. A mutant type has fitness $r > 0$ relative to the wild type (whose fitness relative to itself is $1$), and in each step of the process an individual is selected for reproduction with probability proportional to fitness. Reproduction is clonal, and the offspring of a reproducing individual replaces another member of the population who is chosen for death uniformly at random. Eventually, this population will end up in one of the monomorphic absorbing states: all mutant type or all wild type. In this context, a fundamental metric of the success of the mutant type is its ability to invade and replace a population of wild-type individuals (Nowak, 2006a).

In a population of size $N$, the probability that a single mutant in a wild-type population will fixate in the Moran process is
$$\rho = \frac{1 - r^{-1}}{1 - r^{-N}}. \qquad (4.1)$$
In this version of a birth-death process, the members of the population are distinguished by only their types; in particular, there is no notion of spatial arrangement, i.e. the population is well-mixed. Lieberman et al. (2005) extend the classical Moran process to graph-structured populations, which are populations with links between the players that indicate who is a neighbor of whom. In this structured version of the Moran process, reproduction happens with probability proportional to fitness, but the offspring of a reproducing individual can replace only a neighbor of the parent. Since individuals are now distinguished by both their types (mutant or wild) and locations within the population, a natural question is whether or not the fixation probability of a single mutant type depends on where this mutant appears in the population. Lieberman et al. (2005) show that this fixation probability is independent of the location of the mutant if everyone has the same number of neighbors, i.e. the graph is regular (Bollobás, 2001). In fact, remarkably, the fixation probability of a single mutant on a regular graph is the same as that of Eq. (4.1), an observation first made in a special case by Maruyama (1974). This result, known as the Isothermal Theorem, is independent of the number of neighbors the players have (i.e. the degree of the graph).

The Moran process is frequency-independent in the sense that the fitness of an individual is determined by type and is not influenced by the rest of the population. However, the Moran model can be easily extended to account for frequency-dependent fitness. A standard way in which to model frequency-dependence is through evolutionary games (Taylor and Jonker, 1978; Hofbauer and Sigmund, 1998; Nowak et al., 2004). In the classical setup, each player in the population has one of two strategies, $A$ and $B$, and receives an aggregate payoff from interacting with the rest of the population. This aggregate payoff is usually calculated from a sequence of pairwise interactions whose payoffs are described by a payoff matrix of the form
$$\begin{pmatrix} & A & B \\ A & a & b \\ B & c & d \end{pmatrix}. \qquad (4.2)$$
Each player's aggregate payoff is then translated into fitness, and the strategies in the population are updated based on these fitness values. Since a player's payoff depends on the strategies of the other players in the population, so does that player's fitness. Traditionally, this population is assumed to be infinite, in which case the dynamics of the evolutionary game are governed deterministically by the replicator equation of Taylor and Jonker (1978). More recently, evolutionary games have been considered in finite populations (Nowak et al., 2004; Taylor et al., 2004), where the dynamics are no longer deterministic but rather stochastic.
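Eq. (4.1) is easy to check by simulation. A sketch comparing the closed form with a Monte Carlo estimate of the well-mixed Moran process (the parameter values are arbitrary):

```python
import random

def moran_fixation_exact(r, N):
    """Fixation probability of a single mutant with relative fitness r
    in a well-mixed population of size N (Eq. (4.1))."""
    return (1 - r**-1) / (1 - r**-N)

def moran_fixation_mc(r, N, trials, rng=random.Random(0)):
    """Monte Carlo estimate: in each step, one individual reproduces with
    probability proportional to fitness and one dies uniformly at random."""
    fixed = 0
    for _ in range(trials):
        m = 1  # current number of mutants
        while 0 < m < N:
            birth_mut = rng.random() < m * r / (m * r + (N - m))
            death_mut = rng.random() < m / N
            if birth_mut and not death_mut:
                m += 1
            elif death_mut and not birth_mut:
                m -= 1
        fixed += (m == N)
    return fixed / trials

r, N = 2.0, 10
print(moran_fixation_exact(r, N))     # ~0.5005 for these parameters
print(moran_fixation_mc(r, N, 1000))  # close to the exact value
```

The embedded jump chain moves up or down with probability ratio $1/r$ per step, which is exactly why the estimate converges to the closed form.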
In order to restrict who interacts with whom in the population, these populations can also be given structure. Popular types of structured populations are graphs (Lieberman et al., 2005; Ohtsuki et al., 2006; Szabó and Fáth, 2007), sets (Tarnita et al., 2009a), and demes (Taylor et al., 2001; Hauert and Imhof, 2012).

We focus here on evolutionary games in graph-structured populations that proceed in discrete time steps. Such processes define discrete-time Markov chains, either with or without absorbing states (depending on mutation rates). Typically, in evolutionary game theory, one starts with a population of players and repeatedly updates the population based on some update rule such as birth-death (Nowak et al., 2004), death-birth (Ohtsuki et al., 2006; Zukewich et al., 2013), imitation (Ohtsuki and Nowak, 2006), pairwise comparison (Szabó and Tőke, 1998; Traulsen et al., 2007), or Wright-Fisher (Ewens, 2004; Imhof and Nowak, 2006). These update rules can be split into two classes: cultural and genetic (see McAvoy and Hauert, 2015b). Cultural update rules involve strategy imitation, while genetic update rules involve reproduction and inheritance. Without mutations, an update rule may be seen as giving a probability distribution over a number of strategy-acquisition scenarios: a player inherits a new strategy through imitation (cultural rules) or is born with a strategy determined by the parent(s) (genetic rules). Mutation rates disrupt these scenarios by introducing a small probability that a player takes on a novel strategy. The way in which strategy mutation rates are incorporated into an evolutionary process depends on both the class of the update rule and the specifics of the update rule itself. In a general sense, we say that strategy mutations are homogeneous if they depend on neither the players themselves nor the locations of the players.
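As a concrete illustration of the preceding paragraph, a single birth-death update with a homogeneous mutation rate (the same ε for every player and every location) might be sketched as follows. This is a hypothetical implementation of one standard rule; the chapter itself treats update rules abstractly:

```python
import random

def birth_death_step(strategies, fitnesses, neighbors, strategy_set, eps, rng):
    """One birth-death update: a parent is chosen proportionally to fitness, and
    its offspring replaces a uniformly chosen neighbor.  The mutation rate eps is
    homogeneous: it depends on neither the player nor the player's location."""
    N = len(strategies)
    parent = rng.choices(range(N), weights=fitnesses, k=1)[0]
    victim = rng.choice(neighbors[parent])
    if rng.random() < eps:
        strategies[victim] = rng.choice(strategy_set)  # novel strategy (mutation)
    else:
        strategies[victim] = strategies[parent]        # clonal inheritance
    return strategies

# A well-mixed population of three players with neutral (equal) fitness.
rng = random.Random(1)
s = ["A", "B", "B"]
nbrs = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
s = birth_death_step(s, [1.0, 1.0, 1.0], nbrs, ["A", "B"], eps=0.0, rng=rng)
assert len(s) == 3 and set(s) <= {"A", "B"}
```

A heterogeneous rule would replace the single `eps` with a per-location rate, which is exactly the distinction exploited later in this chapter.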
This notion of homogeneous strategy mutations is analogous to that of a symmetric game, which is a game for which the payoffs depend on the strategies played but are independent of the identities and locations of the players.

The Isothermal Theorem seems to indicate that populations structured by regular graphs possess a significant degree of homogeneity, meaning that different locations within the population appear to be equivalent for the purposes of evolutionary dynamics. However, it is important to note that (i) fixation probability is just one metric of evolutionary success and (ii) the Moran process is only one example of an evolutionary process. For example, in addition to the probability of fixation, one could look at the absorption time, which is the average number of steps until one of the monomorphic absorbing states is reached. Moreover, one could consider frequency-dependent processes, possibly with different update rules, in which fitness is no longer an intrinsic property of an individual but is also influenced by the other members of the population. We show that the Isothermal Theorem does not extend to arbitrary frequency-dependent processes such as evolutionary games. Furthermore, we show that this theorem does not apply to fixation times; that is, even for the Moran process on a regular graph, the average number of updates until a monomorphic absorbing state is reached can depend on the initial placement of the mutant.

Given that the Isothermal Theorem does not extend to other processes defined on regular graphs, the next natural question is the following: what is the meaning of a spatially-homogeneous population in evolutionary game theory? In fact, we argue using asymmetric games (McAvoy and Hauert, 2015b) that the term "homogeneous" should apply to an evolutionary process as a whole rather than to just the population structure.
Even for populations that appear to be spatially homogeneous, such as populations on complete graphs, a non-uniform distribution of resources within the population can result in heterogeneity of the overall process. Similarly, for symmetric games, heterogeneity can be introduced into the dynamics of an evolutionary process through strategy mutations. Therefore, a notion of homogeneity of an evolutionary game should take into account at least (i) population structure, (ii) payoffs, and (iii) strategy mutations.

If the strategy-mutation rates are minuscule, then the population spends most of its time in monomorphic states. With small mutation rates, one can define an embedded Markov chain on the monomorphic states and use this chain to study the success of each strategy (Fudenberg and Imhof, 2006; Wu et al., 2011). That is, when a mutation occurs, the population is assumed to return to a monomorphic state before another mutant arises. Thus, the states of interest are the monomorphic states and the states consisting of a single mutant in an otherwise-monomorphic population. We say that an evolutionary game is homogeneous if any two states consisting of a single mutant (A-player) in a wild-type population (B-players) are mathematically equivalent. We make precise what we mean by "mathematically equivalent" in §4.2, but, informally, this equivalence means that any two such states are the same up to relabeling. In particular, all metrics such as fixation probability, absorption time, etc. are the same for any two states consisting of a single A-mutant in a B-population. We show that an evolutionary game in a graph-structured population is homogeneous if the graph is vertex-transitive ("looks the same" from each vertex), the payoffs are symmetric, and the strategy mutations are homogeneous. This result holds for any update rule and selection intensity.

Finally, we explore the effects of population structure on asymmetric evolutionary games.
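Before moving on, the small-mutation-limit construction sketched above can be made concrete: the embedded chain on monomorphic states is assembled directly from pairwise fixation probabilities. The two-strategy numbers below are illustrative placeholders, not values from the text:

```python
def embedded_chain(fix, n):
    """Embedded chain on monomorphic states (cf. Fudenberg & Imhof, 2006):
    from all-i, a rare j-mutant arises (uniformly over the n - 1 other
    strategies) and takes over with fixation probability fix[(j, i)]."""
    T = []
    for i in range(n):
        row = [fix[(j, i)] / (n - 1) if j != i else 0.0 for j in range(n)]
        row[i] = 1.0 - sum(row)
        T.append(row)
    return T

def stationary(T, iters=2000):
    """Stationary distribution by power iteration (adequate for tiny chains)."""
    n = len(T)
    mu = [1.0 / n] * n
    for _ in range(iters):
        mu = [sum(mu[i] * T[i][j] for i in range(n)) for j in range(n)]
    return mu

# Two strategies: a B-mutant fixates in an A-population w.p. 1/2, an A-mutant
# in a B-population w.p. 1/4; the chain then spends 1/3 of its time at all-A.
T = embedded_chain({(1, 0): 0.5, (0, 1): 0.25}, n=2)
mu = stationary(T)
assert abs(mu[0] - 1 / 3) < 1e-9 and abs(mu[1] - 2 / 3) < 1e-9
```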
In the weak-selection limit, we show that asymmetric matrix games with homogeneous strategy mutations can be reduced to symmetric games if the population structure is arc-transitive ("looks the same" from each edge in the graph). This result is a finite-population analogue of the main result of McAvoy and Hauert (2015b), which states that a similar reduction to symmetric games is possible in sufficiently large populations. Thus, we establish that this reduction applies to any population size if the graph possesses a sufficiently high degree of symmetry. Our explorations, both for symmetric and asymmetric games, clearly demonstrate the effects of population structure, payoffs, and strategy mutations on symmetries in evolutionary games.

4.2 Markov chains and evolutionary equivalence

4.2.1 General Markov chains

The evolutionary processes we consider here define discrete-time Markov chains on finite state spaces. The notions of symmetry and evolutionary equivalence that we aim to introduce for evolutionary processes can actually be stated quite succinctly at the level of the Markov chain. We first work with general Markov chains, and later we apply these ideas to evolutionary games.

Definition 5 (Symmetry of states). Suppose that X = {X_n}_{n≥0} is a Markov chain on a (finite) state space, S, with transition matrix T. An automorphism of X is a bijection φ : S → S such that T_{s,s′} = T_{φ(s),φ(s′)} for each s, s′ ∈ S. Two states s, s′ ∈ S are said to be symmetric if there exists φ ∈ Aut(X) such that φ(s) = s′.

Definition 5 says that the states of the chain can be relabeled in such a way that the transition probabilities are preserved. This relabeling may affect the long-run distribution of the chain since it need not fix absorbing states, so we make one further refinement in order to ensure that if two states are symmetric, then they behave in the same way:

Definition 6 (Evolutionary equivalence).
States s and s′ are evolutionarily equivalent if there exists an automorphism of the Markov chain, φ ∈ Aut(X), such that

(i) φ(s) = s′;
(ii) if µ is a stationary distribution of X, then φ(µ) = µ.

For a Markov chain with absorbing states, the notions of symmetry and evolutionary equivalence of states need not coincide (see Example 17 of §2.6). However, if the Markov chain has a unique stationary distribution (as would be the case if it were irreducible), then symmetry implies evolutionary equivalence:

Proposition 5. If X has a unique stationary distribution, then two states are symmetric if and only if they are evolutionarily equivalent.

We show in §4.6 that a Markov chain symmetry preserves the set of stationary distributions (Lemma 3), so if there is a unique stationary distribution, then condition (ii) of Definition 6 is satisfied automatically by any symmetry. Proposition 5 is then an immediate consequence of this result.

If s and s′ are evolutionarily equivalent, then it is clear for absorbing processes that the probability of fixating in a given absorbing state is the same when starting from s as when starting from s′ (and similarly for fixation times). If the process has a unique stationary distribution, then the symmetry of s and s′ implies that this distribution puts the same mass on s and s′. These properties follow at once from the fact that the states s and s′ are equivalent up to relabeling.

4.2.2 Markov chains defined by evolutionary games

Our focus is on evolutionary games on fixed population structures. If S is a finite set of strategies (or "actions") available to each player, and if the population size is N, then the state space of the Markov chain defined by an evolutionary game in such a population is S^N. For evolutionary games without random strategy mutations, the absorbing states of the chain are the monomorphic states, i.e. the strategy profiles consisting of just a single unique strategy.
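Returning briefly to Definitions 5 and 6, both conditions can be checked mechanically on a small chain. In the toy example below (a symmetric random walk on a 3-cycle, our own illustration), a cyclic relabeling preserves every transition probability and fixes the unique, uniform stationary distribution, so any two states of this walk are evolutionarily equivalent:

```python
def is_automorphism(T, states, phi):
    """Definition 5: T[s][t] == T[phi(s)][phi(t)] for all pairs of states."""
    return all(T[s][t] == T[phi[s]][phi[t]] for s in states for t in states)

def pushforward(mu, phi):
    """The image measure phi(mu), assigning mass mu(s) to the state phi(s)."""
    return {phi[s]: p for s, p in mu.items()}

# Symmetric random walk on a 3-cycle.
states = [0, 1, 2]
T = {s: {t: (0.5 if t in ((s + 1) % 3, (s - 1) % 3) else 0.0) for t in states}
     for s in states}
rotate = {0: 1, 1: 2, 2: 0}          # relabel the states cyclically
mu = {s: 1 / 3 for s in states}      # the unique stationary distribution

assert is_automorphism(T, states, rotate)   # condition of Definition 5
assert pushforward(mu, rotate) == mu        # condition (ii) of Definition 6
```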
Thus, states s and s′ are evolutionarily equivalent if they are symmetric relative to the monomorphic states. On the other hand, evolutionary processes with strategy mutations are typically irreducible (and have unique stationary distributions); in these processes, the notions of symmetry and evolutionary equivalence coincide by Proposition 5.

In order to state the definition of a homogeneous evolutionary process, we first need some notation. For s, s′ ∈ S, denote by s_{(s′,i),s} the state in S^N whose ith coordinate is s′ and whose jth coordinate for j ≠ i is s; that is, all players are using strategy s except for player i, who is using s′.

Definition 7 (Homogeneous evolutionary process). An evolutionary process on S^N is homogeneous if for each s, s′ ∈ S, the states s_{(s′,i),s} and s_{(s′,j),s} are evolutionarily equivalent for each i, j = 1, . . . , N. An evolutionary process is heterogeneous if it is not homogeneous.

In other words, an evolutionary process is homogeneous if, at the level of the Markov chain it defines, any two states consisting of a single mutant in an otherwise-monomorphic population appear to be relabelings of one another. As noted in §4.2.1, all quantities with which one could measure evolutionary success are the same for these single-mutant states if the process is homogeneous.

4.3 Evolutionary games on graphs

We consider evolutionary games in graph-structured populations. Unless indicated otherwise, a "graph" means a directed, weighted graph on N vertices. A directed graph is one in which the edges have orientations, meaning there may be an edge from i to j but not from j to i. Moreover, the edges carry weights, which we assume are nonnegative real numbers. A directed, weighted graph is equivalent to a nonnegative N × N matrix, D, where there is an edge from i to j if and only if D_{ij} ≠ 0. If there is such an edge, then the weight of this edge is simply D_{ij}.
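The matrix encoding just described, together with the strong-connectivity requirement imposed below, can be sketched as follows (helper names are ours):

```python
from collections import deque

def reachable(D, start):
    """Vertices reachable from `start` along edges with nonzero weight."""
    seen, queue = {start}, deque([start])
    while queue:
        i = queue.popleft()
        for j, w in enumerate(D[i]):
            if w != 0 and j not in seen:
                seen.add(j)
                queue.append(j)
    return seen

def strongly_connected(D):
    """Strongly connected iff vertex 0 reaches every vertex and vice versa."""
    n = len(D)
    Dt = [[D[j][i] for j in range(n)] for i in range(n)]  # edge-reversed graph
    return len(reachable(D, 0)) == n and len(reachable(Dt, 0)) == n

# A weighted directed 3-cycle (D[i][j] is the weight of the edge i -> j).
cycle = [[0, 2, 0], [0, 0, 2], [2, 0, 0]]
path = [[0, 1, 0], [0, 0, 1], [0, 0, 0]]  # 0 -> 1 -> 2, but no way back
assert strongly_connected(cycle)
assert not strongly_connected(path)
```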
Since there is a one-to-one correspondence between directed, weighted graphs on N vertices and nonnegative N × N real matrices, we refer to graphs and matrices using the same notation, describing D as a graph but using the matrix notation D_{ij} to indicate the weight of the edge from vertex i to vertex j. Every graph considered here is assumed to be (strongly) connected, which means that for any two vertices, i and j, there is a (directed) path from i to j. This assumption is not that restrictive in evolutionary game theory since one can always partition a graph into its strongly-connected components and study the behavior of an evolutionary process on each of these components. Moreover, for evolutionary processes on graphs that are not strongly connected, it is possible to have both (i) recurrent non-monomorphic states in processes without mutations and (ii) multiple stationary distributions in processes with mutations. Some processes (such as the death-birth process) may not even be defined on graphs that are not strongly connected. Therefore, we focus on strongly-connected graphs and make no further mention of the term "connected."

Since our goal is to discuss symmetry in the context of evolutionary processes, we first describe several notions of symmetry for graphs. The three types of graphs we treat here are regular, vertex-transitive, and symmetric. Informally speaking, a regular graph is one in which each vertex has the same number of neighboring vertices (and this number is known as the degree of the graph). A vertex-transitive graph is one that looks the same from any two vertices; based on the graph structure alone, a player cannot tell if he or she has been moved from one location to another. A symmetric (or arc-transitive) graph is one that looks the same from any two edges. That is, if two players are neighbors and are both moved to another pair of neighboring vertices, then they cannot tell that they have been moved based on the structure of the graph alone.
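For small graphs, these notions can be tested by brute force over permutations. A sketch (feasible only for tiny N; the 4-cycle example is ours):

```python
from itertools import permutations

def is_regular(D):
    """Every vertex has the same out-degree and the same in-degree."""
    n = len(D)
    out = {sum(1 for w in D[i] if w != 0) for i in range(n)}
    inn = {sum(1 for i in range(n) if D[i][j] != 0) for j in range(n)}
    return len(out) == 1 and len(inn) == 1

def automorphisms(D):
    """Permutations pi with D[pi(i)][pi(j)] == D[i][j] for all i, j."""
    n = len(D)
    return [p for p in permutations(range(n))
            if all(D[p[i]][p[j]] == D[i][j] for i in range(n) for j in range(n))]

def is_vertex_transitive(D):
    """Some automorphism carries vertex 0 to each other vertex."""
    return {p[0] for p in automorphisms(D)} == set(range(len(D)))

# The undirected 4-cycle is regular and vertex-transitive.
C4 = [[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]]
assert is_regular(C4) and is_vertex_transitive(C4)
```

Regularity is cheap to verify, but vertex-transitivity requires exhibiting automorphisms, which is why regular-but-not-vertex-transitive graphs such as the Frucht graph (below) are the natural counterexamples.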
We recall in detail the formal definitions of these terms in §4.6. The relationships between these three notions of symmetry, as well as some examples, are illustrated in Fig. 4.1.

Figure 4.1: Three different levels of symmetry for connected graphs. Regular graphs have the property that the degrees of the vertices are all the same. Vertex-transitive graphs look the same from each vertex and are necessarily regular. Symmetric (arc-transitive) graphs look the same from any two (directed) edges. Each of these containments is strict; there exist graphs that are regular but not vertex-transitive (Fig. 4.2) and vertex-transitive but not symmetric (Fig. 4.7(A)).

We turn now to evolutionary processes in graph-structured populations:

4.3.1 The Moran process

Consider the Moran process on a graph, D. Lieberman et al. (2005) show that if D is regular, then the fixation probability of a randomly placed mutant is given by Eq. (4.1), the fixation probability of a single mutant in the classical Moran process. This result (known as the Isothermal Theorem) proves that, in particular, this fixation probability does not depend on the initial location of the mutant. (We refer to this latter statement as the "weak" version of the Isothermal Theorem.) Our definition of homogeneity in the context of evolutionary processes (Definition 7) is related to this independence of initial location and has nothing to do with fixation probabilities in the classical Moran process. Naturally, the Isothermal Theorem raises the question of whether or not this location independence extends to absorption times (average number of steps until an absorbing state is reached) when D is regular.

Suppose that D is the Frucht graph of Fig. 4.2. The Frucht graph is an undirected, unweighted, regular (but not vertex-transitive) graph of size 12 and degree 3 (Frucht, 1939). The fixation probabilities and absorption times of a single mutant in a wild-type population are given in Fig.
4.3 as a function of the initial location of the mutant. The fixation probabilities do not depend on the initial location of the mutant, as predicted by the Isothermal Theorem, but the absorption times do depend on where the mutant arises. In fact, the absorption time is distinct for each different initial location of the mutant. The details of these calculations are in §4.7. Therefore, even the weak form of the Isothermal Theorem fails to hold for absorption times. In particular, the Moran process on a regular graph need not define a homogeneous evolutionary process.

Figure 4.2: A single mutant (cooperator) at vertex 11 of the Frucht graph. In the Snowdrift Game, the probability that cooperators fixate depends on the initial location of this mutant on the Frucht graph (even if the intensity of selection is weak).

This setup, involving two types of players, frequency-independent interactions, and a population structure defined by a single graph, can be generalized considerably:

4.3.2 Symmetric games

A powerful version of evolutionary graph theory uses two graphs to define relationships between the players: an interaction graph, E, and a dispersal graph, D (Ohtsuki et al., 2007a; Taylor et al., 2007; Ohtsuki et al., 2007b; Pacheco et al., 2009; Débarre et al., 2014). These graphs both have nonnegative weights. As an example of how these two graphs are used to define an evolutionary process, we consider a birth-death process based on two-player, symmetric interactions:

Figure 4.3: Fixation probability (A) and absorption time (B) versus initial vertex of mutant for the Moran process on the Frucht graph. In both figures, the mutant has fitness r = 2 relative to the wild type.
As predicted by the Isothermal Theorem, the fixation probability does not depend on the initial location of the mutant. The absorption time (measured in number of updates) is different for each initial placement of the mutant on the Frucht graph. The precise values of these fixation probabilities and absorption times are in §4.7.

Example 16. Consider a symmetric matrix game with n strategies, A_1, . . . , A_n, and payoff matrix

\bordermatrix{
 & A_1 & A_2 & \cdots & A_n \cr
A_1 & a_{11} & a_{12} & \cdots & a_{1n} \cr
A_2 & a_{21} & a_{22} & \cdots & a_{2n} \cr
\vdots & \vdots & \vdots & \ddots & \vdots \cr
A_n & a_{n1} & a_{n2} & \cdots & a_{nn}
}. \qquad (4.3)

If (s_1, . . . , s_N) ∈ {1, . . . , n}^N, then the total payoff to player i is

u_i(s_1, \ldots, s_N) := \sum_{j=1}^{N} E_{ij}\, a_{s_i s_j}. \qquad (4.4)

If β ≥ 0 is the intensity of selection, then the fitness of player i is

f_\beta\big(u_i(s_1, \ldots, s_N)\big) := \exp\big\{\beta\, u_i(s_1, \ldots, s_N)\big\}. \qquad (4.5)

In each time step, a player (say, player i) is chosen for reproduction with probability proportional to fitness. With probability ε ≥ 0, the offspring of this player adopts a novel strategy uniformly at random from {A_1, . . . , A_n}; with probability 1 − ε, the offspring inherits the strategy of the parent. Next, another member of the population is chosen for death, with the probability of player j dying proportional to D_{ij}. The offspring then fills the vacancy created by the deceased neighbor, and the process repeats. E is called the "interaction" graph since it governs payoffs based on encounters, and D is called the "dispersal" graph since it is involved in strategy propagation.

Heterogeneous evolutionary games

We now explore the ways in which population structure and strategy mutations can introduce heterogeneity into an evolutionary process. Consider the Snowdrift Game with strategies C (cooperate) and D (defect) and payoff matrix

\bordermatrix{ & C & D \cr C & 5 & 3 \cr D & 7 & 0 }. \qquad (4.6)

Suppose that E and D are both the (undirected, unweighted) Frucht graph (see Fig. 4.2).
If the selection intensity is β = 1, then the fixation probability of a single cooperator in a population of defectors in a death-birth process depends on the initial location of the cooperator (Fig. 4.4). Since the Frucht graph is regular (but not vertex-transitive), this example demonstrates that the Isothermal Theorem does not extend to frequency-dependent games. In particular, symmetric games on regular graphs can be heterogeneous, and regularity of the graph does not imply that the "fixation probability of a randomly placed mutant" is well-defined. This dependence of the fixation probability on the initial location of the mutant is not specific to the Snowdrift Game or the death-birth update rule; one can show that it also holds for the Donation Game in place of the Snowdrift Game or the birth-death rule in place of the death-birth rule, for instance.

With β = 1, the selection intensity is fairly strong, which raises the question of whether or not these fixation probabilities still differ if selection is weak. In fact, our observation for this value of β is not an anomaly: Suppose that s and s′ are states (indicating some non-monomorphic initial configuration of strategies), and that s_i and s_j are monomorphic absorbing states (indicating states in which each player uses the same strategy). Let ρ_{s,i} denote the probability that state i is reached after starting in state s, and let t_s denote the average number of updates required for the process to reach an absorbing state after starting in

Figure 4.4: Fixation probability (A) and absorption time (B) versus initial vertex of mutant (cooperator) for a death-birth process on the Frucht graph.
In both figures, the game is a Snowdrift Game whose payoffs are described by payoff matrix (4.6), and the selection intensity is β = 1. Unlike the Moran process, this process is frequency-dependent, and it is evident that both fixation probabilities and absorption times (measured in number of updates) depend on the initial placement of the mutant. See §4.7 for details. Notably, the three vertices (7, 11, and 12) for which both fixation probability and absorption time are highest are the only vertices in the Frucht graph not appearing in a three-cycle.

state s. Each of ρ_{s,i} and t_s may be viewed as functions of β, and we have the following result:

Proposition 6. Each of the equalities

\rho_{s,i} = \rho_{s',j}; \qquad (4.7a)

t_s = t_{s'} \qquad (4.7b)

holds for either (i) every β ≥ 0 or (ii) at most finitely many β ≥ 0. Thus, if one of these equalities fails to hold for even a single value of β, then it fails to hold for all sufficiently small β > 0.

For a proof of Proposition 6, see §4.5. This result allows one to conclude that if there are differences in fixation probabilities or times for large values of β (where these differences are more apparent), then there are corresponding differences in the limit of weak selection.

Even if a symmetric game is played in a well-mixed population, heterogeneous strategy mutations may result in heterogeneity of the evolutionary process. Consider, for example, the pairwise comparison process (Szabó and Tőke, 1998; Traulsen et al., 2007) based on the symmetric Snowdrift Game, (4.6), in a well-mixed population with N = 3 players. We model this well-mixed population using a complete, undirected, unweighted graph of size 3 for each of E and D (see Fig. 4.5). For i ∈ {1, 2, 3}, let ε_i ∈ [0, 1] be the strategy-mutation ("exploration") rate for player i. These strategy mutations are incorporated into the process as follows: At each time step, a focal player (player i) is chosen uniformly at random to update his or her strategy.
A neighbor (one of the two remaining players) is then chosen randomly as a model player. If β is the selection intensity, π_f is the payoff of the focal player, and π_m is the payoff of the model player, then the focal player imitates the strategy of the model player with probability

\frac{1 - \varepsilon_i}{1 + e^{-\beta(\pi_m - \pi_f)}} \qquad (4.8)

and chooses to retain his or her strategy with probability

\frac{1 - \varepsilon_i}{1 + e^{-\beta(\pi_f - \pi_m)}}. \qquad (4.9)

With probability ε_i, the focal player adopts a new strategy uniformly at random from the set {C, D}, irrespective of the current strategy. Provided at least one of ε_1, ε_2, and ε_3 is positive, the Markov chain on {C, D}^3 defined by this process is irreducible and has a unique stationary distribution, µ. Let ε_1 = 0.01 and ε_2 = ε_3 = 0. Since the mutation rate depends on the location, i, the strategy mutations are heterogeneous. If the selection intensity is β = 1, then a direct calculation (to four significant figures) gives

\mu(C, D, D) = 0.005812 \neq 0.0004897 = \mu(D, C, D), \qquad (4.10)

where µ(C, D, D) (resp. µ(D, C, D)) is the mass µ places on the state (C, D, D) (resp. (D, C, D)). Therefore, by Proposition 5 and Definition 7, this evolutionary process is not homogeneous, despite the fact that the population is well-mixed and the game is symmetric. This result is not particularly surprising, but it clearly illustrates the effects of heterogeneous strategy-mutation rates on symmetries of the overall process.

Homogeneous evolutionary games

The behavior of an evolutionary process sometimes depends heavily on the choice of update rule. As a result, a particular problem in evolutionary game theory is often stated (such as the evolution of cooperation) and subsequently explored separately for a number of different update rules. For example, consider the Donation Game (an instance of the Prisoner's Dilemma) in which cooperators pay a cost, c, in order to provide the opponent with a benefit, b. Defectors pay no costs and provide no benefits.
On a large regular graph of degree k, Ohtsuki et al. (2006) show that selection favors cooperation in the death-birth process if b/c > k, but selection never favors cooperation in the birth-death process. Therefore, the approach of exploring a problem in evolutionary game theory separately for several update rules has its merits. On the other hand, one might expect that high degrees of symmetry in the population structure, payoffs, and strategy mutations induce symmetries in an evolutionary game for a variety of update rules.

Figure 4.5: Two states consisting of a single cooperator (mutant) among defectors (the wild type) in a well-mixed population of size N = 3. Despite the spatial symmetry of this population, an evolutionary game on this graph can be heterogeneous as a result of heterogeneous strategy mutations or asymmetric payoffs.

Before stating our main theorem for symmetric matrix games, we must first understand the basic components that make up an evolutionary game. Evolutionary games generally have two timescales: interactions and updates. In each (discrete) time step, every player in the population has a strategy, and this strategy profile determines the state of the population. Neighbors then interact (quickly) and receive payoffs based on these strategies and the game(s) being played. The total payoff to a player determines his or her fitness. In the update step of the process, the strategies of the players are updated stochastically based on the fitness profile of the population, the population structure, and the strategy mutations. Popular examples of evolutionary update rules are birth-death, death-birth, imitation, pairwise comparison, and Wright-Fisher. Since interactions happen much more quickly than updates, there is a separation of timescales.

The most difficult part of an evolutionary game to describe in generality is the update step.
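The interaction step just described follows Example 16: total payoffs accumulate over the interaction graph (Eq. (4.4)) and map to fitness exponentially (Eq. (4.5)). A small sketch using the Snowdrift payoffs of matrix (4.6):

```python
import math

def total_payoff(i, s, E, a):
    """Eq. (4.4): u_i(s) = sum_j E[i][j] * a[s_i][s_j]."""
    return sum(E[i][j] * a[s[i]][s[j]] for j in range(len(s)))

def fitness(u, beta):
    """Eq. (4.5): f_beta(u) = exp(beta * u)."""
    return math.exp(beta * u)

# Snowdrift payoffs from matrix (4.6); strategy 0 is C, strategy 1 is D.
a = [[5, 3], [7, 0]]
# Complete (unweighted) interaction graph on three players, no self-loops.
E = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
s = [0, 1, 1]  # a lone cooperator among defectors

payoffs = [total_payoff(i, s, E, a) for i in range(3)]
assert payoffs == [6, 7, 7]           # C earns 3 + 3; each D earns 7 + 0
assert fitness(0.0, beta=1.0) == 1.0  # zero payoff maps to unit fitness
```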
If S is the strategy set of the game and N is the population size, then a state of the population is simply an element of S^N, i.e. a specification of a strategy for each member of the population. Implicit in the state space of the population being S^N is an enumeration of the players. That is, if s ∈ S^N is an N-tuple of strategies, then this profile indicates that player i uses strategy s_i. For our purposes, we need only one property to be satisfied by the update rule of the process, which we state here as an axiom of an evolutionary game:

Axiom. The update rule of an evolutionary game is independent of the enumeration of the players.

Remark 4. As an example of what this axiom means, consider a death-birth process in which a player is selected uniformly at random for death and is replaced by the offspring of a neighbor. A neighbor is chosen for reproduction with probability proportional to fitness, and the offspring of this neighbor inherits the strategy of the parent with probability 1 − ε and takes on a novel strategy with probability ε for some ε > 0. If all else is held constant (fitness, mutations, etc.), the fact that a player is referred to as the player at location i is irrelevant: Let S_N be the symmetric group on N letters. If π ∈ S_N is a permutation that relabels the locations of the players by sending i to π^{−1}(i), then the strategy of the player at location π^{−1}(i) after the relabeling is the same as the strategy of the player at location i before the relabeling. In particular, if s ∈ S^N is the state of the population before the relabeling, then π(s) ∈ S^N is the state of the population after the relabeling, where π(s)_i = s_{π(i)}. The probability that player π^{−1}(i) is selected for death and replaced by the offspring of player π^{−1}(j) after the relabeling is the same as the probability that player i is selected for death and replaced by the offspring of player j before the relabeling.
Thus, for this death-birth process, the probability of transitioning between states s and s′ before the relabeling is the same as the probability of transitioning between states π(s) and π(s′) after the relabeling. In this sense, a relabeling of the players induces an automorphism of the Markov chain defined by the process (in the sense of Definition 5), and the axiom states that this phenomenon should hold for any evolutionary update rule.

In order to state our main result for symmetric games, we note that an evolutionary graph, Γ, in this setting consists of two graphs: E and D. We say that Γ is regular if both E and D are regular. For vertex-transitivity of Γ (resp. symmetry of Γ), we require slightly more than each of E and D being vertex-transitive (resp. symmetric); we require them to be simultaneously vertex-transitive (resp. symmetric). First of all, we need to define what an automorphism of Γ is. For π ∈ S_N, let πE be the graph defined by (πE)_{ij} := E_{π(i)π(j)} for each i and j. Using this action, we define an automorphism of an evolutionary graph as follows:

Definition 8 (Automorphism of an evolutionary graph). An automorphism of Γ = (E, D) is an action, π ∈ S_N, such that πE = E and πD = D. We denote by Aut(Γ) the set of automorphisms of Γ.

We now have the definitions of vertex-transitive and symmetric evolutionary graphs:

Definition 9. Γ = (E, D) is vertex-transitive if for each i and j, there exists π ∈ Aut(Γ) such that π(i) = j.

Definition 10. Γ = (E, D) is symmetric if E = D and E is a symmetric graph.

Finally, using the notion of an automorphism of Γ, we have our main result:

Theorem 6. Consider an evolutionary matrix game on a graph, Γ = (E, D), with symmetric payoffs and homogeneous strategy mutations. If π ∈ Aut(Γ), then the states with a single mutant at vertex i and π(i), respectively, in an otherwise-monomorphic population, are evolutionarily equivalent.
That is, in the notation of Definition 7, the states s_{(s′,i),s} and s_{(s′,π(i)),s} are evolutionarily equivalent for each s, s′ ∈ S.

The proof of Theorem 6 may be found in §4.6. The proof relies on the observation that the hypotheses of the theorem imply that any two states consisting of a single A-player in a population of B-players can be obtained from one another by relabeling the players. Thus, in light of the argument in Remark 4, relabeling the players induces an automorphism on the Markov chain defined by the evolutionary game. Since any relabeling of the players leaves the monomorphic states fixed, there is an evolutionary equivalence between any two such states in the sense of Definition 6. Note that this theorem makes no restrictions on the selection strength or the update rule.

Corollary 3. An evolutionary game on a vertex-transitive graph with symmetric payoffs and homogeneous strategy mutations is itself homogeneous.

Remark 5. By Theorem 6, two mutants appearing on a graph might define evolutionarily equivalent states even if the graph is not vertex-transitive. For example, the Tietze graph (see Bondy and Murty, 2008), like the Frucht graph, has 12 vertices and is regular but not vertex-transitive. However, unlike the Frucht graph, the Tietze graph has some nontrivial automorphisms. By Theorem 6, any two vertices in the Tietze graph that are related by an automorphism have the property that the two corresponding single-mutant states are indistinguishable. An example of two evolutionarily equivalent states on this graph is given in Fig. 4.6. In §4.7, for the Snowdrift Game with death-birth updating, we give the fixation probabilities and absorption times for all configurations of a single cooperator among defectors, which further illustrates the effects of graph symmetries on an evolutionary process.

Remark 6. For a given population size, N, and network degree, k, there may be many vertex-transitive graphs of size N with degree k.
For each such graph, the fixation probability of a randomly occurring mutant is independent of where on the graph it occurs, by Theorem 6. However, this fixation probability depends on more than just N and k; it also depends on the configuration of the network. For example, Fig. 4.7 gives two vertex-transitive graphs of size N = 6 and degree k = 3. As an illustration, consider the Snowdrift Game (with payoff matrix (4.6)) on these graphs with birth-death updating. If the selection intensity is β = 0.1, then the fixation probability of a single cooperator in a population of defectors is 0.1632 in (A) and 0.1722 in (B) (both rounded to four significant figures). These two fixation probabilities differ for all but finitely many β ≥ 0 by Proposition 6.

Figure 4.6: The Tietze graph with two different initial configurations. Like the Frucht graph, the Tietze graph is regular of degree k = 3 (but not vertex-transitive) with N = 12 vertices. Unlike the Frucht graph, the Tietze graph possesses nontrivial automorphisms. In (A), a cooperator is at vertex 6 and all other players are defectors. In (B), a cooperator is at vertex 11 and, again, the other players are defectors. Despite the fact that the Tietze graph is not vertex-transitive, the single-mutant states defined by (A) and (B) are evolutionarily equivalent. Graphically, this result is clear since one obtains (A) from (B) by flipping the graph (i.e. applying an automorphism), and such a difference between the two states does not affect fixation probabilities, times, etc. However, it is not true that any two single-mutant states are evolutionarily equivalent. For example, in the Snowdrift Game with β = 0.1 and death-birth updating, the single-mutant state with a cooperator at vertex 1 (resp. vertex 6) has a fixation probability of 0.3777 (resp. 0.4186).
Therefore, the two single-mutant states with cooperators at vertices 1 and 6, respectively, are not evolutionarily equivalent, so this process is not homogeneous.

Figure 4.7: Undirected, unweighted, vertex-transitive graphs of degree k = 3 with N = 6 vertices. (B) is a symmetric (arc-transitive) graph and (A) is not.

Until this point, our focus has been on states consisting of just a single mutant in an otherwise-monomorphic population. One could also inquire as to when any two states consisting of two (or three, four, etc.) mutants are evolutionarily equivalent. It turns out that the answer to this question is simple: in general, the population must be well-mixed in order for any two m-mutant states to be evolutionarily equivalent if m > 1. The proof that this equivalence holds in well-mixed populations follows from the argument given to establish Theorem 6 (see §4.6). On the other hand, if the population is not well-mixed, then one can find a pair of states with the first state consisting of two mutants on neighboring vertices and the second state consisting of two mutants on non-neighboring vertices. In general, the mutant type will have different fixation probabilities in these two states. For example, in the Snowdrift Game on the graph of Fig. 4.7(B), consider the two states, s and s′, where s consists of cooperators on vertices 1 and 2 only and s′ consists of cooperators on vertices 1 and 3 only. If β = 0.1, then the fixation probability of cooperators under death-birth updating when starting at s (resp. s′) is 0.3126 (resp. 0.2607). Therefore, despite the arc-transitivity of this graph, it is not true that any two states consisting of exactly two mutants are evolutionarily equivalent. Only in well-mixed populations are we guaranteed that any two such states are equivalent.

4.3.3 Asymmetric games

One particular form of payoff asymmetry appearing in evolutionary game theory is ecological asymmetry (McAvoy and Hauert, 2015b).
Ecological asymmetry can arise as a result of an uneven distribution of resources. For example, in the Donation Game, a cooperator at location i might provide a benefit to his or her opponent based on some resource derived from the environment. Both this resource and the cost of donating it could depend on i, which means that different players have different payoff matrices. These payoff matrices depend on both the location of the focal player and the locations of the opponents. Thus, payoffs for a player at location i against an opponent at location j in an n-strategy "bimatrix" game (Hofbauer, 1996; Ohtsuki, 2010b; McAvoy and Hauert, 2015b) are given by the asymmetric payoff matrix

         A_1                          A_2                          ...   A_n
  A_1  ( a^{ij}_{11}, a^{ji}_{11} )  ( a^{ij}_{12}, a^{ji}_{21} )  ...  ( a^{ij}_{1n}, a^{ji}_{n1} )
  A_2  ( a^{ij}_{21}, a^{ji}_{12} )  ( a^{ij}_{22}, a^{ji}_{22} )  ...  ( a^{ij}_{2n}, a^{ji}_{n2} )
  ...
  A_n  ( a^{ij}_{n1}, a^{ji}_{1n} )  ( a^{ij}_{n2}, a^{ji}_{2n} )  ...  ( a^{ij}_{nn}, a^{ji}_{nn} )     =: M_{ij}.   (4.11)

Similar to Eq. (4.4), the total payoff to player i for strategy profile (s_1, ..., s_N) ∈ {1, ..., n}^N is

  u_i(s_1, ..., s_N) := Σ_{j=1}^N E_{ij} a^{ij}_{s_i s_j}.   (4.12)

We saw in §4.3.2 an example of a heterogeneous evolutionary game in a well-mixed population with symmetric payoffs. Rather than looking at a symmetric game with heterogeneous strategy mutations, we now look at an asymmetric game with homogeneous strategy mutations. Consider the ecologically asymmetric Donation Game on the graph of Fig. 4.5 (both E and D) with a death-birth update rule. In this asymmetric Donation Game, a cooperator at location i donates b_i at a cost of c_i; defectors donate nothing and incur no costs. If β = 0.1, b_1 = b_2 = b_3 = 4, c_1 = 1, and c_2 = c_3 = 3, then the fixation probability of a single cooperator at location 1 (see Fig. 4.5(A)) is 0.2232, while the fixation probability of a single cooperator at location 2 (see Fig. 4.5(B)) is 0.1842 (both rounded to four significant figures).
Therefore, even in a well-mixed population with homogeneous strategy mutations (none, in this case), asymmetric payoffs can prevent an evolutionary game from being homogeneous.

Asymmetric matrix games in large populations reduce to symmetric games if selection is weak (McAvoy and Hauert, 2015b). In the limit of weak selection, McAvoy and Hauert (2016a) establish a selection condition for asymmetric matrix games in finite graph-structured populations that extends the condition (for symmetric games) of Tarnita et al. (2011):

Theorem 7 (McAvoy and Hauert (2016a)). There exists a set of structure coefficients, {σ^{ij}_1, σ^{ij}_2, σ^{ij}_3}_{i,j}, independent of payoffs, such that weak selection favors strategy r ∈ {1, ..., n} if and only if

  Σ_{i,j=1}^N E_{ij} ( σ^{ij}_1 (a^{ij}_{rr} − a^{ij}_{**}) + σ^{ij}_2 (a^{ij}_{r*} − a^{ij}_{*r}) + σ^{ij}_3 (a^{ij}_{r*} − ā^{ij}) ) > 0,   (4.13)

where a^{ij}_{**} = (1/n) Σ_{s=1}^n a^{ij}_{ss}, a^{ij}_{r*} = (1/n) Σ_{s=1}^n a^{ij}_{rs}, a^{ij}_{*r} = (1/n) Σ_{s=1}^n a^{ij}_{sr}, and ā^{ij} = (1/n²) Σ_{s,t=1}^n a^{ij}_{st}.

Strictly speaking, Theorem 7 is established for E and D undirected, unweighted, and satisfying E = D. However, the proof of Theorem 7 extends immediately to the case with E and D directed, weighted, and possibly distinct, so we make no restrictive assumptions on E and D in the statement of this theorem here. In the simpler case n = 2, condition (4.13) takes the form

  Σ_{i,j=1}^N E_{ij} ( τ^{ij}_1 (a^{ij}_{11} − a^{ij}_{22}) + τ^{ij}_2 (a^{ij}_{12} − a^{ij}_{21}) ) > 0   (4.14)

for some collection {τ^{ij}_1, τ^{ij}_2}_{i,j}. For the death-birth process with E and D the graph of Fig. 4.7(A), we calculate exact values for all of these structure coefficients (see §4.7). In particular, we find that τ^{12}_1 = 707905/9315552 and τ^{14}_1 = 16291/194074, so vertex-transitivity does not guarantee that the structure coefficients are independent of i and j. For the same process on the graph in Fig. 4.7(B), we find that τ^{ij}_1 = τ^{ij}_2 = 2189/27728 for each i and j, so these coefficients do not depend on i and j. (In general, even for well-mixed populations, τ_1 and τ_2 need not be the same; for the same process studied here but on the graph of Fig.
4.5, τ^{ij}_1 = 33/1616 and τ^{ij}_2 = 99/1616 for each i and j.) This lack of dependence on i and j is due to the fact that the graph of Fig. 4.7(B) is symmetric, and it turns out to be a special case of a more general result:

Theorem 8. Suppose that an asymmetric matrix game with homogeneous strategy mutations is played on an evolutionary graph, Γ = (E, D). For each π ∈ Aut(Γ), k ∈ {1, 2, 3}, and i, j ∈ {1, ..., N},

  σ^{ij}_k = σ^{π(i)π(j)}_k.   (4.15)

The proof of Theorem 8 may be found in §4.6. The following corollary is an immediate consequence of Theorem 8:

Corollary 4. If E = D and E is a symmetric graph (i.e. Γ is a symmetric evolutionary graph), then the structure coefficients are independent of i and j.

Since symmetric graphs are also regular, we have:

Corollary 5. If E = D and E is a symmetric graph (i.e. Γ is a symmetric evolutionary graph), then strategy r is favored in the limit of weak selection if and only if

  σ_1 (ā_{rr} − ā_{**}) + σ_2 (ā_{r*} − ā_{*r}) + σ_3 (ā_{r*} − ā) > 0,   (4.16)

where M̄ = (ā_{st})_{1≤s,t≤n} is the spatial average of the matrices M_{ij}, i.e.

  M̄ := (1/(kN)) Σ_{i,j=1}^N E_{ij} M_{ij},   (4.17)

where k is the degree of the graph, Γ.

Remark 7. Eq. (4.16) is just the selection condition of Tarnita et al. (2011) for symmetric matrix games. It follows from Corollary 5 that asymmetric matrix games on arc-transitive (symmetric) graphs can be reduced to symmetric games in the limit of weak selection.

4.4 Discussion

Evolutionary games in finite populations may be split into two classes: those with absorbing states ("absorbing processes") and those without absorbing states. In absorbing processes, the notion of fixation probability has played a crucial role in quantifying evolutionary outcomes, but fixation probabilities are far from the only measure of evolutionary success. Much of the literature on evolutionary games with absorbing states has neglected other metrics, such as the time to absorption or the time to fixation conditioned on fixation occurring ("conditional fixation time").
This bias toward fixation probabilities has resulted in certain evolutionary processes appearing more symmetric than they actually are. We have illustrated this phenomenon using the frequency-independent Moran process on graphs: the Isothermal Theorem guarantees that, on regular graphs, a single mutant cannot distinguish between initial locations in the graph if the only metric under consideration is the probability of fixation. However, certain initial placements of the mutant may result in faster absorption times than others if the graph is regular but not vertex-transitive, and the Frucht graph exemplifies this claim. The same phenomenon also holds for conditional fixation times.

The Frucht graph, which is a regular structure with no nontrivial symmetries (there are no two vertices from which the graph "looks" the same), also allowed us to show that the Isothermal Theorem of Lieberman et al. (2005) does not extend to frequency-dependent evolutionary games. That is, on regular graphs, the probability of fixation of a single mutant may depend on the initial location of the mutant if fitness is frequency-dependent. This claim was illustrated via a death-birth process on the Frucht graph, in which the underlying evolutionary game was a Snowdrift Game. For β = 1 (strong selection), the fixation probability of a cooperator at vertex 11 was nearly 14% larger than the fixation probability of a cooperator at vertex 4. Moreover, we showed that if the fixation probabilities of two initial configurations differ for a single value of β, then they are the same for at most finitely many values of β. In particular, these fixation probabilities differ for almost every selection strength, so our observation for β = 1 was not an anomaly.
Similar phenomena are observed for frequency-dependent birth-death processes on the Frucht graph, for example, and even for frequency-dependent games with the "equal gains from switching" property, such as the Donation Game.

Theorem 6 is an analogue of the Isothermal Theorem that applies to a broader class of games and update rules. The Isothermal Theorem is remarkable since regularity of the population structure implies that the fixation probabilities are not only independent of the initial location of the mutant, they are the same as those of the classical Moran process. Our treatment of homogeneous evolutionary processes is focused on when different single-mutant states are equivalent, not on when they are equivalent to the corresponding states in the classical Moran process. Even if the fixation probability of a single mutant does not depend on the mutant's location, other factors (such as birth and death rates) may affect whether or not this fixation probability is the same as the one in a well-mixed population (Komarova, 2006; Kaveh et al., 2015). Remark 6, which compares the fixation probabilities for the Snowdrift Game on two different vertex-transitive graphs of the same size and degree, shows that the fixation probability of a single mutant, even if independent of the mutant's location, can depend on the configuration of the graph. In light of these results, the symmetry phenomena for the Moran process guaranteed by the Isothermal Theorem do not generalize and should be thought of as properties of the frequency-independent Moran process and not of evolutionary processes in general.

Theorem 6, and indeed most of our discussion of homogeneity, focused on symmetries of states consisting of just a single mutant. In many cases, mutation rates are sufficiently small that a mutant type, when it appears, will either fixate or go extinct before another mutation occurs (Fudenberg and Imhof, 2006; Wu et al., 2011).
Thus, with small mutation rates, one need not consider symmetries of states consisting of more than one mutant. However, if mutation rates are larger, then these multi-mutant states become relevant. Our definition of evolutionary equivalence (Definition 6) applies to these states, but, as expected, the symmetry conditions on the population structure guaranteeing that any two multi-mutant states are equivalent are much stronger. In fact, as we argued in §4.3.2, the population must in general be well-mixed even for any pair of states consisting of two mutants to be evolutionarily equivalent. Consequently, our focus on single-mutant states allowed us to simultaneously treat biologically relevant configurations (assuming mutation rates are small) and obtain non-trivial conditions guaranteeing homogeneity of an evolutionary process.

The counterexamples presented here could be defined on sufficiently small population structures, and thus all calculations (fixation probabilities, structure coefficients, etc.) are exact. However, these quantities need not always be explicitly calculated in order to prove useful: in our study of asymmetric games, we concluded that an asymmetric game on an arc-transitive (symmetric) graph can be reduced to a symmetric game in the limit of weak selection. (The graph of Fig. 4.7(A) demonstrates that vertex-transitivity alone does not guarantee that an asymmetric game can be reduced to a symmetric game in this way.) This result was obtained by examining the qualitative nature of the structure coefficients in the selection condition (4.13), but it did not require explicit calculations of these coefficients.
Therefore, despite the difficulty of actually calculating these coefficients, they can still be used to glean qualitative insight into the dynamics of evolutionary games.

On large random regular graphs, the dynamics of an asymmetric matrix game are equivalent to those of a certain symmetric game obtained as a "spatial average" of the individual asymmetric games (McAvoy and Hauert, 2015b). Corollary 5 is highly reminiscent of this type of reduction to a symmetric game. For large populations, this result is obtained by observing that large random regular graphs approximate a Bethe lattice (Bollobás, 2001) and then using the pair approximation method (Matsuda et al., 1992) to describe the dynamics. The pair approximation method is exact for a Bethe lattice (Ohtsuki et al., 2006), so, from this perspective, Corollary 5 is not that surprising since Bethe lattices are arc-transitive. Of course, a Bethe lattice has infinitely many vertices, and Corollary 5 is a finite-population analogue of this result.

The term "homogeneous" is used in the literature to refer to several different kinds of population structures. This term has been used to describe well-mixed populations (Assaf and Mobilia, 2012). For graph-structured populations, "homogeneous graph" sometimes refers to vertex-transitive graphs (Taylor et al., 2007; Tarnita and Taylor, 2014). In algebraic graph theory, however, the term "homogeneous graph" implies a much higher degree of symmetry than does vertex-transitivity (see Beineke et al., 2004). "Homogeneous" has also been used to describe graphs in which each vertex has the same number of neighbors, i.e. regular graphs (Roca et al., 2009; Hindersin and Traulsen, 2014; Cheng et al., 2015). In between regular and vertex-transitive graphs, "homogeneous graph" has also referred to large, random regular graphs (Traulsen et al., 2009b).
As we noted, large, random regular graphs approximate Bethe lattices (which are infinite, arc-transitive graphs), but these approximations need not themselves be even vertex-transitive.

In many of the various uses of the term "homogeneous," a common aim is to study the fixation probability of a randomly placed mutant. Our definition of a homogeneous evolutionary game formally captures what it means for two single-mutant states to be equivalent, and our explorations of the Frucht graph (in conjunction with Theorem 6) show that vertex-transitivity, and not regularity, is what the term "homogeneous" in graph-structured populations should indicate. We also demonstrated the effects of payoffs and strategy mutations on the behavior of these single-mutant states and concluded that the term "homogeneous" should apply to the entire process rather than to just the population structure. The homogeneity (Theorem 6) and symmetry (Theorem 8) results given here do not depend on the update rule, in contrast with results such as the symmetry of conditional fixation times in the Moran process of Taylor et al. (2006) or the Isothermal Theorem of Lieberman et al. (2005). We now know that games on regular graphs need not be homogeneous, and we know precisely under which conditions the "fixation probability of a randomly placed mutant" is well-defined. These results provide a firmer foundation for evolutionary game theory in finite populations and a basis for defining the evolutionary success of the strategies of a game.

4.5 Methods: fixation and absorption

Using a method inspired by a technique of Press and Dyson (2012), we derive explicit expressions (in terms of the transition matrix) for fixation probabilities and absorption times.
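The determinant formulas derived in this section (Eqs. (4.23) and (4.25) below) can be evaluated directly once the transition matrix is written in the block form of Eq. (4.18). The following is a minimal sketch on a fair gambler's-ruin chain; the chain is chosen purely for illustration and is not an example from the thesis.

```python
import numpy as np

# Fair gambler's ruin on {0, 1, 2, 3}.  States are ordered with the K = 2
# absorbing states (0 and 3) first, matching T = [[I_K, 0], [S1, S2]] (Eq. (4.18)).
T = np.array([
    [1.0, 0.0, 0.0, 0.0],   # state 0 (absorbing)
    [0.0, 1.0, 0.0, 0.0],   # state 3 (absorbing)
    [0.5, 0.0, 0.0, 0.5],   # state 1: down to 0 or up to 2
    [0.0, 0.5, 0.5, 0.0],   # state 2: up to 3 or down to 1
])
K, n = 2, 4

# M = -[[I_K, 0], [S1, S2 - I]]  (Eq. (4.20)), i.e. diag(0, I_{n-K}) - T.
M = np.diag([0.0] * K + [1.0] * (n - K)) - T

def fixation_prob(s, i):
    """rho_{s,i} = -det M(s, e_i) / det M  (Eq. (4.23))."""
    Ms = M.copy()
    Ms[:, s] = np.eye(n)[i]              # replace column s of M by e_i
    return -np.linalg.det(Ms) / np.linalg.det(M)

def absorption_time(s):
    """t_s = sum_{j > K} det M(s, e_j) / det M  (Eq. (4.25))."""
    total = 0.0
    for j in range(K, n):
        Ms = M.copy()
        Ms[:, s] = np.eye(n)[j]
        total += np.linalg.det(Ms) / np.linalg.det(M)
    return total

print(fixation_prob(2, 1))   # reach state 3 from state 1: 1/3
print(absorption_time(2))    # expected absorption time from state 1: 2.0
```

For this chain the formulas recover the classical gambler's-ruin answers: starting one step from ruin, the chain reaches 3 with probability 1/3 and is absorbed after 2 steps on average.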
Subsequently, we prove a simple lemma that says that Markov chain symmetries preserve the set of a chain's stationary distributions.

4.5.1 Fixation probabilities

Suppose that {X_n}_{n≥0} is a discrete-time Markov chain on a finite state space, S, that has exactly K (≥ 1) absorbing states, s_1, ..., s_K. Moreover, suppose that the non-absorbing states are transient (see Fudenberg and Imhof, 2006). The transition matrix for this chain, T, may be written in the block form

  T = [ I_K  0 ; S_1  S_2 ],   (4.18)

where I_K is the K × K identity matrix and 0 is the matrix of zeros (in this case, of dimension K × (|S| − K), where |S| is the number of states in S). This chain will eventually end up in one of the K absorbing states, and we denote by ρ_{s,i} the probability that state s_i is reached when the chain starts in state s ∈ S. Let P be the |S| × K matrix of fixation probabilities, i.e. P_{s,i} = ρ_{s,i} for each s and i. This matrix satisfies

  TP = P,   (4.19)

which is just the matrix form of the recurrence relation satisfied by fixation probabilities (obtained from a "first-step" analysis of the Markov chain). Consider the matrix, M = M(T), defined by

  M := −[ I_K  0 ; S_1  S_2 − I_{|S|−K} ].   (4.20)

Since ρ_{j,i} = δ_{i,j} for i, j ∈ {1, ..., K}, we see that

  −MP = TP − [ 0  0 ; 0  I_{|S|−K} ] P = [ I_K ; 0 ].   (4.21)

Moreover, the matrix M must have full rank since the non-absorbing states are transient; that is,

  det M = (−1)^{|S|} det(S_2 − I_{|S|−K}) ≠ 0.   (4.22)

Therefore, by Cramer's rule,

  ρ_{s,i} = −det M(s, e_i) / det M,   (4.23)

where the notation M(s, e_i) means the matrix obtained by replacing the column corresponding to state s with the i-th standard basis vector, e_i. Thus, Eq. (4.23) gives explicit formulas for the fixation probabilities.

4.5.2 Absorption times

Let t be the |S|-vector indexed by S whose entry t_s is the expected time until the process fixates in one of the absorbing states when started in state s ∈ S. This vector satisfies t_i = 0 for i = 1, ..., K as well as the recurrence relation T(t + 1) = t + Σ_{i=1}^K e_i.
Therefore,

  Mt = Σ_{j=K+1}^{|S|} e_j,   (4.24)

so, by Cramer's rule,

  t_s = Σ_{j=K+1}^{|S|} det M(s, e_j) / det M.   (4.25)

We now turn to Markov chains defined by evolutionary games. Before proving Proposition 6, we make two assumptions:

(i) The payoff-to-fitness mapping is of the form f_β(π) = exp{βπ}, f_β(π) = βπ, f_β(π) = 1 + βπ, or f_β(π) = 1 − β + βπ, where β denotes the intensity of selection and π denotes payoff. (Of course, fitness can be defined in one of the latter three ways only if the payoffs are such that f_β(π) ≥ 0.)

(ii) The update probabilities are rational functions of the fitness profile of the population.

Remark 8. Assumptions (i) and (ii) are not at all restrictive in evolutionary game theory. Any process in which selection occurs with probability proportional to fitness will satisfy this rationality condition, and indeed all of the standard evolutionary processes (birth-death, death-birth, imitation, pairwise comparison, Wright-Fisher, etc.) have this property. The four payoff-to-fitness mappings are standard as well.

With assumptions (i) and (ii) in mind, we have:

Proposition 6. Each of the equalities

  ρ_{s,i} = ρ_{s′,j};   (4.26a)
  t_s = t_{s′}   (4.26b)

holds for either (i) every β ≥ 0 or (ii) at most finitely many β ≥ 0. Thus, if one of these equalities fails to hold for even a single value of β, then it fails to hold for all sufficiently small β > 0.

Proof. Suppose that s, s′ ∈ S and that s_i and s_j are absorbing states. By Eq. (4.23),

  ρ_{s,i} = ρ_{s′,j} ⟺ det M(s, e_i)/det M = det M(s′, e_j)/det M ⟺ det M(s, e_i) = det M(s′, e_j).   (4.27)

Similarly, by Eq. (4.25),

  t_s = t_{s′} ⟺ Σ_{j=K+1}^{|S|} det M(s, e_j)/det M = Σ_{j=K+1}^{|S|} det M(s′, e_j)/det M ⟺ Σ_{j=K+1}^{|S|} det M(s, e_j) = Σ_{j=K+1}^{|S|} det M(s′, e_j).   (4.28)

Assuming (i) and (ii), Eqs. (4.27) and (4.28) are equivalent to polynomial equations in either β or exp{β}. Either way, since nonzero polynomial equations have at most finitely many solutions, we see that the equalities ρ_{s,i} = ρ_{s′,j} and t_s = t_{s′} each hold for either (i) every β or (ii) finitely many values of β.
Thus, if ρ_{s,i} ≠ ρ_{s′,j} (resp. t_s ≠ t_{s′}) for even a single selection intensity, then these fixation probabilities (resp. absorption times) differ for almost every selection intensity. In particular, they differ for all sufficiently small β.

4.6 Methods: symmetry and evolutionary equivalence

4.6.1 Symmetries of graphs

Here we recall some standard notions of symmetry for graphs. Although we treat directed, weighted graphs in general, throughout the main text we give several examples of undirected and unweighted graphs, which are defined as follows:

Definition 11 (Undirected graph). A graph, D, is undirected if D_{ij} = D_{ji} for each i and j.

Definition 12 (Unweighted graph). A graph, D, is unweighted if D_{ij} ∈ {0, 1} for each i and j.

Since our goal is to discuss symmetry in the context of evolutionary processes, we first describe several notions of symmetry for graphs. In a graph, D, the indegree and outdegree of vertex i are Σ_{j=1}^N D_{ji} and Σ_{j=1}^N D_{ij}, respectively. With these definitions in mind, we recall the definition of a regular graph:

Definition 13 (Regular graph). D is regular if and only if there exists k ∈ R such that

  Σ_{j=1}^N D_{ji} = Σ_{j=1}^N D_{ij} = k   (4.29)

for each i. If D is regular, then k is called the degree of D.

Let S_N denote the symmetric group on N letters; that is, S_N is the set of all bijections π : {1, ..., N} → {1, ..., N}. Each π ∈ S_N extends to a relabeling action on the set of directed, weighted graphs defined by (πD)_{ij} = D_{π(i)π(j)}. In other words, any relabeling of the set of vertices results in a corresponding relabeling of the graph. The automorphism group of D, written Aut(D), is the set of all π ∈ S_N such that πD = D. We now recall a condition slightly stronger than regularity, known as vertex-transitivity:

Definition 14 (Vertex-transitive graph). D is vertex-transitive if for each i and j, there exists π ∈ Aut(D) such that π(i) = j.

Informally, a graph is vertex-transitive if and only if it "looks the same" from every vertex.
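For small graphs, these definitions can be checked by brute force over S_N. The sketch below does exactly this for the automorphism and vertex-transitivity conditions; the 4-cycle and 4-path used here are illustrative examples, not graphs from the text.

```python
from itertools import permutations

def automorphisms(D):
    """All pi in S_N with D_{pi(i)pi(j)} = D_{ij} for every i, j (i.e. piD = D)."""
    N = len(D)
    return [pi for pi in permutations(range(N))
            if all(D[pi[i]][pi[j]] == D[i][j]
                   for i in range(N) for j in range(N))]

def is_vertex_transitive(D):
    """Definition 14: some automorphism carries vertex i to vertex j, for all i, j."""
    N = len(D)
    auts = automorphisms(D)
    return all(any(pi[i] == j for pi in auts)
               for i in range(N) for j in range(N))

# The 4-cycle "looks the same" from every vertex; the 4-path does not,
# since no automorphism can send an endpoint to an interior vertex.
C4 = [[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]]
P4 = [[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]]
print(is_vertex_transitive(C4), is_vertex_transitive(P4))   # True False
```

The same brute-force approach is only feasible for very small N (it enumerates all N! permutations), but it suffices to verify the symmetry claims made for the small example graphs in this chapter.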
If a graph is vertex-transitive, then it is necessarily regular. The strongest form of symmetry for graphs that we consider here is the following:

Definition 15 (Symmetric graph). D is symmetric (or arc-transitive) if for each i, j with D_{ij} ≠ 0 and i′, j′ with D_{i′j′} ≠ 0, there exists π ∈ Aut(D) such that π(i) = i′ and π(j) = j′.

A graph is symmetric if it "looks the same" from any two directed edges. Arc-transitivity is typically defined for unweighted graphs, i.e. graphs satisfying D ∈ {0, 1}^{N×N}. For the more general class of weighted graphs, we require that Aut(D) act transitively on the set of edges of D, where "edge" means a pair (i, j) with D_{ij} ≠ 0. Thus, all of the edges in a symmetric, weighted graph have the same weight: otherwise, if (i, j) and (i′, j′) are edges but D_{ij} ≠ D_{i′j′}, then there would exist no permutation, π, sending i to i′, j to j′, and preserving the weights of the graph. Therefore, since the weights of a symmetric graph take one of two values (0 or else the only nonzero weight), such a graph is essentially unweighted.

4.6.2 Symmetries of evolutionary processes

In §4.2.1, we defined two states, s and s′, to be evolutionarily equivalent if (i) there exists an automorphism of the Markov chain, φ ∈ Aut(X), such that φ(s) = s′, and (ii) this automorphism satisfies φ(μ) = μ for each stationary distribution, μ, of the chain. Condition (i), which means that s and s′ are symmetric, alone is not quite strong enough to guarantee that s and s′ have the same long-run behavior. To give an example of a symmetry of states that is not an evolutionary equivalence, we consider the neutral Moran process in a well-mixed population of size N = 3:

Example 17. In a well-mixed population of size N = 3, consider the (frequency-independent) Moran process with two types of players: a mutant type and a wild type. Suppose that the mutant type is neutral with respect to the wild type; that is, the fitness of the mutant relative to the wild type is 1.
Since the population is well-mixed, the state of the population is given by the number of mutants it contains, i ∈ {0, 1, 2, 3} =: S. Consider the map φ : S → S defined by φ(i) = 3 − i. States 0 and 3 are absorbing, and, for i ∈ {1, 2}, the transition probabilities of this process are as follows:

  T_{i,i−1} = T_{i,i+1} = (i/3)((3 − i)/3);   (4.30a)
  T_{i,i} = (i/3)² + ((3 − i)/3)².   (4.30b)

It follows at once that φ preserves these transition probabilities, so φ is an automorphism of the Markov chain. Let ρ_i be the probability that mutants fixate given an initial abundance of i mutants. The states 1 and 2 are symmetric since φ(1) = 2, but it is not true that ρ_1 = ρ_2, since ρ_1 = 1/3 and ρ_2 = 2/3. The reason for this difference in fixation probabilities is that states 1 and 2, although symmetric, are not evolutionarily equivalent, since φ swaps the two absorbing states of the process.

In contrast to Example 17, processes with unique stationary distributions have the property that every symmetry of the Markov chain is an evolutionary equivalence (Proposition 5). The following lemma establishes Proposition 5:

Lemma 3. If φ : S → S is a symmetry of a Markov chain and μ is a stationary distribution of this chain, then φ(μ) is also a stationary distribution. In particular, if μ is unique, then φ(μ) = μ.

Proof. If T is the transition matrix of this Markov chain, then

  [φ(μ)ᵀ T]_s = Σ_{s′∈S} φ(μ)_{s′} T_{s′,s} = Σ_{s′∈S} μ_{φ(s′)} T_{φ(s′),φ(s)} = Σ_{s′∈S} μ_{s′} T_{s′,φ(s)} = [μᵀ T]_{φ(s)} = μ_{φ(s)} = φ(μ)_s,   (4.31)

so φ(μ)ᵀ T = φ(μ)ᵀ, which completes the proof.

We turn now to the proofs of our main results (Theorems 6 and 8):

Theorem 6. Consider an evolutionary matrix game on a graph, Γ = (E, D), with symmetric payoffs and homogeneous strategy mutations. If π ∈ Aut(Γ), then the states with a single mutant at vertex i and π(i), respectively, in an otherwise-monomorphic population, are evolutionarily equivalent. That is, in the notation of Definition 7, the states s_{(s′,i),s} and s_{(s′,π(i)),s} are evolutionarily equivalent for each s, s′ ∈ S.

Proof.
The state space of the Markov chain defined by this evolutionary game is Sᴺ, where S is the strategy set and N is the population size. For π ∈ Aut(Γ), let π act on the state of the population by changing the strategy of player i to that of player π⁻¹(i) for each i = 1, ..., N. In other words, π sends s ∈ Sᴺ to πs ∈ Sᴺ, which is defined by (πs)_i = s_{π⁻¹(i)} for each i. Therefore, for s, s′ ∈ S, we have

  π s_{(s′,i),s} = s_{(s′,π(i)),s}   (4.32)

for each i = 1, ..., N. Since π is an automorphism of the evolutionary graph, Γ, we have πE = E and πD = D. Moreover, π preserves the strategy mutations since they are homogeneous. Since the payoffs are symmetric, π just rearranges the fitness profile of the population: the payoff of player i becomes the payoff of player π⁻¹(i) (see Eq. (4.4)), so the same is true of the fitness values. Therefore, applying the map π to Sᴺ is equivalent to applying the map on Sᴺ obtained by simply relabeling the players. Since any such relabeling of the players results in an automorphism of the Markov chain on Sᴺ that preserves the monomorphic absorbing states, it follows that s_{(s′,i),s} and s_{(s′,π(i)),s} are evolutionarily equivalent.

Theorem 8. Suppose that an asymmetric matrix game with homogeneous strategy mutations is played on an evolutionary graph, Γ = (E, D). For each π ∈ Aut(Γ), k ∈ {1, 2, 3}, and i, j ∈ {1, ..., N},

  σ^{ij}_k = σ^{π(i)π(j)}_k.   (4.33)

Proof. Let T be the transition matrix for the Markov chain defined by this process. Since there are nonzero strategy mutations, this chain has a unique stationary distribution, μ. The matrix T defines a directed, weighted graph on |Sᴺ| = |S|ᴺ vertices that has an edge from vertex s to vertex s′ if and only if T_{s,s′} ≠ 0. If there is an edge from s to s′, then the weight of this edge is simply T_{s,s′}. The (outdegree) Laplacian matrix of this graph, L = L(T), is defined by L = I − T (see Chung, 1996).
In terms of this Laplacian matrix, Press and Dyson (2012) show that, for any vector, ν, the stationary distribution satisfies

  μ · ν = det L(s, ν) / det L(s, 1),   (4.34)

for each state, s, where L(s, ν) denotes the matrix obtained from L by replacing the column corresponding to state s by ν. Thus, if ψ_r is the vector indexed by Sᴺ with ψ_r(s) being the frequency of strategy r in state s, then the average abundance of strategy r is

  F_r := μ · ψ_r = det L(s, ψ_r) / det L(s, 1).   (4.35)

Since T is a function of the payoffs, a = (a^{ij}_{st})_{s,t,i,j}, we may write F_r = F_r(a). (Here, a is just an ordered tuple collecting the payoff values a^{ij}_{st} for each s, t, i, and j.) Moreover, since the entries of T are assumed to be smooth functions of a (see Tarnita et al., 2011), F_r is also a smooth function of a by Eq. (4.35) and the definition of L. We will show that, for each s and t,

  ∂F_r/∂a^{ij}_{st} |_{a=0} = ∂F_r/∂a^{π(i)π(j)}_{st} |_{a=0}   (4.36)

for each i and j. The theorem will then follow from the derivations of σ^{ij}_1, σ^{ij}_2, and σ^{ij}_3 in McAvoy and Hauert (2016a), since it is shown there that each σ^{ij}_k is a function of the elements in the set

  { ∂F_r/∂a^{ij}_{st} |_{a=0} }_{s,t=1}^n.   (4.37)

For s, t, i, and j fixed and a ∈ R, let F^{s,t,i,j}_r(a) be the value of F_r at the payoff vector with a at entry a^{ij}_{st} and 0 in all other entries. Symbolically, if δ_{x,y} is defined as 1 if x = y and 0 otherwise, and

  a^{(s,t,i,j)}_0(a) := (a δ_{s,s′} δ_{t,t′} δ_{i,i′} δ_{j,j′})_{s′,t′,i′,j′},   (4.38)

then

  F^{s,t,i,j}_r(a) := F_r(a^{(s,t,i,j)}_0(a)).   (4.39)

Let π ∈ S_N and suppose that π ∈ Aut(Γ); that is, πE = E and πD = D. π induces a map on the payoffs, a, defined by π(a^{ij}_{st})_{s,t,i,j} = (a^{π(i)π(j)}_{st})_{s,t,i,j}. Let orb_{S_N}(a) denote the orbit of a under this action, and consider the enlarged state space S′ := Sᴺ × orb_{S_N}(a). Using the Markov chain on Sᴺ coming from the evolutionary process, we obtain a Markov chain on Sᴺ × orb_{S_N}(a) via the transition matrix, T′, defined by

  T′_{(s,b),(s′,b′)} := δ_{b,b′} T_{s,s′}(b)   (4.40)

for s, s′ ∈ Sᴺ and b, b′ ∈ orb_{S_N}(a).
(We write T(b) to indicate the transition matrix as a function of the payoff values of the game.) π extends to a map on 𝒮′ defined by π(s, b) = (πs, πb). Since π preserves E, D, and the strategy mutations (since they are homogeneous), it follows that the induced map π : 𝒮′ → 𝒮′ is an automorphism of the Markov chain on 𝒮′ defined by T′. If μ′ is a stationary distribution for the chain T′, then, for each s ∈ S^N and b ∈ orb_{S_N}(a),

  μ′_{(s,b)} = Σ_{s′∈S^N} Σ_{b′∈orb_{S_N}(a)} μ′_{(s′,b′)} T′_{(s′,b′),(s,b)}
            = Σ_{s′∈S^N} Σ_{b′∈orb_{S_N}(a)} μ′_{(s′,b′)} δ_{b′,b} T_{s′,s}
            = Σ_{s′∈S^N} μ′_{(s′,b)} T_{s′,s}.    (4.41)

It then follows from the uniqueness of μ = μ(b) that there exists c_{μ′}(b) ≥ 0 such that μ′_{(s,b)} = c_{μ′}(b) μ_s. If μ′ is such a stationary distribution, then, by Lemma 3, πμ′ = μ″ for some other stationary distribution, μ″, of the chain on 𝒮′. This equation implies that c_{μ″}(b) = c_{μ′}(πb) for each b ∈ orb_{S_N}(a). Therefore,

  πμ(πb)_s = μ(πb)_{πs} = μ(b)_s    (4.42)

for each s ∈ S^N and b ∈ orb_{S_N}(a). Consequently, since πψ_r = ψ_r,

  F_r^{s,t,i,j}(a) = F_r( a_0^{(s,t,i,j)}(a) )
                  = μ( a_0^{(s,t,i,j)}(a) ) · ψ_r
                  = πμ( π a_0^{(s,t,i,j)}(a) ) · ψ_r
                  = πμ( a_0^{(s,t,π(i),π(j))}(a) ) · πψ_r
                  = μ( a_0^{(s,t,π(i),π(j))}(a) ) · ψ_r
                  = F_r( a_0^{(s,t,π(i),π(j))}(a) )
                  = F_r^{s,t,π(i),π(j)}(a).    (4.43)

As a result, we have

  ∂F_r/∂a_{st}^{ij} |_{a=0} = d/da|_{a=0} F_r^{s,t,i,j} = d/da|_{a=0} F_r^{s,t,π(i),π(j)} = ∂F_r/∂a_{st}^{π(i)π(j)} |_{a=0},    (4.44)

so Eq. (4.36) holds, which completes the proof.

  initial vertex of mutant   absorption time
  1                          238.1836
  2                          237.0596
  3                          234.5982
  4                          235.8447
  5                          236.5967
  6                          234.5792
  7                          231.6988
  8                          238.0375
  9                          233.6122
  10                         235.1514
  11                         230.1340
  12                         228.7114

Table 4.1: The absorption times of the 12 initial configurations of a single mutant in a wild-type population for the Moran process on the Frucht graph. The fitness of the mutant relative to the wild type is r = 2.

4.7 Methods: explicit calculations

We now perform explicit calculations using Eqs. (4.23) and (4.25) to show that the Isothermal Theorem extends to neither absorption times nor frequency-dependent games.
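In the text these quantities are obtained from Eqs. (4.23) and (4.25), which are not reproduced here. As a rough, self-contained illustration of what such an explicit calculation involves, the sketch below solves the full 2^n-state Moran birth-death chain on a small graph by linear algebra; the function name `single_mutant_stats` and the complete-graph test case are illustrative assumptions, not part of the thesis. The Frucht-graph values of Table 4.1 come from the same kind of computation over 2^12 states.

```python
import itertools
import numpy as np

def single_mutant_stats(adj, r):
    """Exact fixation probabilities and mean absorption times of a single
    mutant with relative fitness r under the Moran birth-death process on
    a graph (adjacency list adj).  Brute force over all 2^n strategy
    configurations, so only suitable for small n."""
    n = len(adj)
    states = list(itertools.product((0, 1), repeat=n))   # 1 = mutant
    idx = {s: k for k, s in enumerate(states)}
    T = np.zeros((2 ** n, 2 ** n))
    for s in states:
        fit = [r if x == 1 else 1.0 for x in s]
        tot = sum(fit)
        for i in range(n):          # i reproduces with probability fit/tot
            for j in adj[i]:        # a uniformly chosen neighbour j dies
                t = list(s)
                t[j] = s[i]         # offspring inherits the parent's type
                T[idx[s], idx[tuple(t)]] += (fit[i] / tot) / len(adj[i])
    # first-step analysis on the transient (mixed) states
    trans = [k for k, s in enumerate(states) if 0 < sum(s) < n]
    A = np.eye(len(trans)) - T[np.ix_(trans, trans)]
    phi = np.linalg.solve(A, T[trans, idx[(1,) * n]])   # fixation probability
    tau = np.linalg.solve(A, np.ones(len(trans)))       # mean absorption time
    fix, time = np.zeros(n), np.zeros(n)
    for v in range(n):
        k = trans.index(idx[tuple(1 if u == v else 0 for u in range(n))])
        fix[v], time[v] = phi[k], tau[k]
    return fix, time
```

On the complete graph K_4 with r = 2, this reproduces the well-mixed Moran fixation probability (1 − 1/r)/(1 − 1/r^N) at every vertex; on a vertex-transitive graph the absorption times also coincide, so detecting the asymmetry of Table 4.1 requires a graph, like the Frucht graph, without nontrivial automorphisms.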
We also calculate the structure coefficients for the death-birth process on the graph of Fig. 4.7(A) to show that Corollary 5 does not necessarily hold for graphs that are vertex-transitive but not symmetric.

4.7.1 The Moran process

Consider the Moran process on the Frucht graph (Fig. 4.2), and suppose that the mutant type has fitness r > 0 relative to the wild type. By the Isothermal Theorem of Lieberman et al. (2005), the fixation probability of a fixed number of mutants is independent of the configuration of those mutants on the graph. For r = 2, the absorption times (of configurations of a single mutant in a wild-type population) are listed in Table 4.1. The fixation probability of a single mutant is (1 − 1/2)/(1 − 1/2^12) ≈ 0.5001 for every vertex. Thus, unlike fixation probabilities, absorption times depend on the initial location of the mutant. (Some of the absorption times are similar in this case, but no two are the same.)

4.7.2 Frequency-dependent games

Symmetric games

Consider the instance of the Snowdrift Game that has for a payoff matrix (4.6). For this game, Table 4.2 gives the fixation probabilities and the absorption times (rounded to four digits after the decimal point) for the death-birth process on the Frucht graph (Fig. 4.2) with β = 1.

  initial vertex of mutant   fixation probability   absorption time
  1                          0.6505                 116.0959
  2                          0.6471                 115.4026
  3                          0.6469                 115.7302
  4                          0.6448                 115.7348
  5                          0.6463                 116.0100
  6                          0.6562                 117.4671
  7                          0.7299                 129.8609
  8                          0.6512                 116.6906
  9                          0.6545                 118.3795
  10                         0.6551                 117.8995
  11                         0.7344                 131.6681
  12                         0.7326                 131.9634

Table 4.2: The fixation probabilities and absorption times of the 12 initial configurations of a single cooperator among defectors for the death-birth process on the Frucht graph. Payoffs are frequency-dependent and derived from the Snowdrift Game, (4.6). The intensity of selection is β = 1.

Similarly, for the same game (and update rule) but on the Tietze graph (Fig. 4.6) with β = 0.1, Table 4.3 and Fig.
4.8 give the fixation probabilities and absorption times for all possible configurations of a single cooperator among defectors.

Asymmetric games

For the death-birth process on the graph in Fig. 4.7(A) with homogeneous strategy-mutation rate ε = 0.01, we calculate the complete collection of structure coefficients {τ_1^{ij}, τ_2^{ij}}_{i,j} (for r = 1) as follows: Let ψ_1 be the vector indexed by 𝒮 with ψ_1(s) being the frequency of strategy 1 in state s, and let 1 be the vector of ones. McAvoy and Hauert (2016a) show that, for any s, Eq. (4.13) is equivalent to

  (1/n) tr( ( [L(s, ψ_1)|_{β=0}]^{−1} − [L(s, 1)|_{β=0}]^{−1} ) d/dβ|_{β=0} L(s, 0) ) > 0,    (4.45)

where L = L(T) = I_{|𝒮|} − T is the (outdegree) Laplacian matrix of the graph defined by T. If k ∈ {1, 2}, i and j are fixed, and we choose the (two-strategy) asymmetric game so that

  a_{st}^{i′j′} = 1 if s = 1, t = k, i′ = i, and j′ = j, and a_{st}^{i′j′} = 0 otherwise,    (4.46)

  initial vertex of mutant   fixation probability   absorption time
  1                          0.3777                 70.7869
  2                          0.3777                 70.7869
  3                          0.3777                 70.7869
  4                          0.4141                 76.5048
  5                          0.4186                 77.3094
  6                          0.4186                 77.3094
  7                          0.4141                 76.5048
  8                          0.4186                 77.3094
  9                          0.4186                 77.3094
  10                         0.4141                 76.5048
  11                         0.4186                 77.3094
  12                         0.4186                 77.3094

Table 4.3: The fixation probabilities and absorption times of the 12 initial configurations of a single cooperator among defectors for the death-birth process on the Tietze graph. Payoffs are frequency-dependent and derived from the Snowdrift Game, (4.6). The intensity of selection is β = 0.1. These values are illustrated graphically in Fig. 4.8.

[Figure 4.8 appears here: panels (a) fixation probability and (b) absorption time, each plotted against the initial vertex of the mutant (1-12), with the population average indicated in each panel.]

Figure 4.8: Fixation probability (A) and absorption time (B) versus initial vertex of mutant (cooperator) for a death-birth process on the Tietze graph. In both figures, the game is a Snowdrift Game whose payoffs are described by payoff matrix (4.6), and the selection intensity is β = 0.1.
This example illustrates the single-mutant states that are not evolutionarily equivalent in the Tietze graph. Moreover, it happens to be the case that any two of these states with the same fixation probability (or absorption time) are evolutionarily equivalent.

  (i, j)   τ_1^{ij}           τ_2^{ij}
  (1, 2)   707905/9315552     32989/405024
  (1, 4)   16291/194074       57057/776296
  (1, 6)   707905/9315552     32989/405024
  (2, 1)   707905/9315552     32989/405024
  (2, 3)   16291/194074       57057/776296
  (2, 6)   707905/9315552     32989/405024
  (3, 2)   16291/194074       57057/776296
  (3, 4)   707905/9315552     32989/405024
  (3, 5)   707905/9315552     32989/405024
  (4, 1)   16291/194074       57057/776296
  (4, 3)   707905/9315552     32989/405024
  (4, 5)   707905/9315552     32989/405024
  (5, 3)   707905/9315552     32989/405024
  (5, 4)   707905/9315552     32989/405024
  (5, 6)   16291/194074       57057/776296
  (6, 1)   707905/9315552     32989/405024
  (6, 2)   707905/9315552     32989/405024
  (6, 5)   16291/194074       57057/776296

Table 4.4: The structure coefficients in Eq. (4.14) for the death-birth process on the vertex-transitive (but not symmetric) graph of Fig. 4.7(A) with homogeneous strategy-mutation rate ε = 0.01.

then

  τ_k^{ij} = (1/2) tr( ( [L(s, ψ_1)|_{β=0}]^{−1} − [L(s, 1)|_{β=0}]^{−1} ) d/dβ|_{β=0} L(s, 0) ).    (4.47)

Using this method, we obtain the structure coefficients for Fig. 4.7(A) listed in Table 4.4. For the same process on the symmetric graph of Fig. 4.7(B), we find that τ_1^{ij} and τ_2^{ij} are independent of i and j and are both equal to 2189/27728.

Chapter 5

Stochastic selection processes

We propose a mathematical framework for natural selection in finite populations. Traditionally, many of the selection-based processes used to describe cultural and genetic evolution (such as imitation and birth-death models) have been studied on a case-by-case basis. Over time, these models have grown in sophistication to include population structure, differing phenotypes, and various forms of interaction asymmetry, among other features.
Furthermore, many processes inspired by natural selection, such as evolutionary algorithms in computer science, possess characteristics that should fall within the realm of a "selection process," but so far there is no overarching theory encompassing these evolutionary processes. The framework of stochastic selection processes we present here provides such a theory and consists of three main components: a population state space, an aggregate payoff function, and an update rule. A population state space is a generalization of the notion of population structure, and it can include non-spatial information such as strategy-mutation rates and phenotypes. An aggregate payoff function allows one to generically talk about the fitness of traits without explicitly specifying a method of payoff accounting or even the nature of the interactions that determine payoff/fitness. An update rule is a fitness-based function that updates a population based on its current state, and it includes as special cases the classical update mechanisms (Moran, Wright-Fisher, etc.) as well as more complicated mechanisms involving chromosomal crossover, mutation, and even complex cultural syntheses of strategies of neighboring individuals. Our framework covers models with variable population size as well as with arbitrary, measurable trait spaces.

5.1 Introduction

Evolutionary game theory has proven itself extremely useful for modeling both cultural and genetic evolution (Maynard Smith, 1982; Hofbauer and Sigmund, 1998; Dugatkin, 2000; Nowak, 2006a). Traits, which are represented as strategies, determine the fitness of the players. An individual's fitness might be determined solely by his or her strategy (frequency-independent fitness) or it might also depend on the traits of the other players in the population (frequency-dependent fitness).
The population evolves via a fitness-basedupdate mechanism, and the long-run behavior of this process can be studied to determine which traits aremore successful than others.Evolutionary game theory was first used to study evolution in infinite populations via deterministicreplicator dynamics (Taylor and Jonker, 1978). More recently, the dynamics of evolutionary games have alsobeen studied in finite populations (Nowak et al., 2004; Taylor et al., 2004). Finite-population evolutionarydynamics typically have two timescales: one for interactions and one for updates. A player has a strategy(trait) and interacts with his or her neighbors in order to receive a payoff. In a birth-death process, forinstance, this payoff is converted to reproductive fitness and used to update the population as follows:First, a player is chosen from the population for reproduction with probability proportional to (relative)fitness. Next, a player is chosen uniformly at random from the population for death, and the offspring of thereproducing player replaces the deceased player. This birth-death process is a frequency-dependent versionof the classical Moran process (Moran, 1958; Nowak, 2006a).Whereas replicator dynamics are deterministic, finite-population models of evolution are inherentlystochastic and incorporate principles of both natural selection and genetic drift. All biological popula-tions are finite, and we focus here on stochastic evolutionary games in finite populations. In particular, wefocus on selection processes, which, informally, are evolutionary processes in which the update step dependson fitness. Selection processes typically resemble birth-death processes in that there is reproduction and re-placement, with “fitter” players more likely to reproduce than those with lower fitness. Of course, the orderand number of births and deaths may vary, and the update might instead be based on imitation instead ofreproduction. 
One of our goals here is to precisely define selection process in a way that captures all of thesalient features of the classical models of evolution in finite populations.Nowak et al. (2009) state that “There is (as yet) no general mathematical framework that would encom-pass evolutionary dynamics for any kind of population structure.” We propose here such a framework, whichwe term stochastic selection processes, to describe the evolutionary games used to model natural selection infinite populations. Stochastic selection processes model evolutionary processes with two timescales: one for112interactions (which determine fitness) and one for updates (selection, mutation, etc.). Our framework takesinto account arbitrary population structures, as well as non-spatial information about the population suchas phenotypes and strategy mutations. Moreover, this framework encompasses all types of strategy spaces,games (matrix, asymmetric, multiplayer, etc.), and fitness-based update rules.An example of ambiguity in evolutionary game theory is that classical games, such as two-player matrixgames, are often used to define evolutionary processes in populations, and in this context the term “game”can refer to either the classical game or to the evolutionary process. Moreover, classical multiplayer games,such as the public goods game, can result in processes in which each player in the population derives a payofffrom several multiplayer interactions, and some of these interactions might involve players who are notneighbors (see McAvoy and Hauert, 2016a). 
Even further, when a player is involved in multiple interactions,the total payoff to this player may be derived in more than one way: payoffs from individual encounters maybe accumulated (added) or averaged, for instance, and the nature of this accounting can strongly influencethe evolutionary process (Maciejewski et al., 2014).In order to accommodate the many ways of deriving “total payoff” from a classical game, one may distillfrom these methods a common feature: each player has a strategy and receives an aggregate payoff froma sequence of interactions. That is, if S is the strategy space available to each of the N players in thepopulation, then there is an aggregate payoff function, u : SN Ñ RN , such that the ith coordinate function,ui, is the payoff to player i for all of the (microscopic) interactions in which this player is involved. As anexample, consider the two-player game defined by the matrix¨˚˝A BA a bB c d‹˛‚. (5.1)Suppose that the population is well mixed so that each player interacts with every other player. Let S “tA,Bu and let s P SN be a strategy profile consisting of k ` 1 players using A and N ´ 1´ k players usingB (the ordering is not important since the population is well mixed). If player i is an A-player, then thepayoff to this player isuacci psq :“ ka` pN ´ 1´ kq b (5.2)113if payoffs are accumulated anduavei psq :“ ka` pN ´ 1´ kq bN ´ 1 (5.3)if payoffs are averaged. These two methods of calculating payoffs from pairwise interactions give essentiallyequivalent evolutionary dynamics since the population is well mixed, but this phenomenon need not hold formore complicated methods of payoff accounting or (spatial) population structures. 
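The two accounting rules in Eqs. (5.2) and (5.3) are easy to state in code. A minimal sketch (the function name `payoffs_well_mixed` is an illustrative assumption, not from the thesis) computing every player's aggregate payoff for the matrix game (5.1) in a well-mixed population:

```python
def payoffs_well_mixed(s, a, b, c, d, averaged=False):
    """Aggregate payoffs for the 2x2 matrix game (5.1) when each player
    interacts once with every other player.  s is a strategy profile over
    {'A', 'B'}; payoffs are accumulated (Eq. 5.2) or averaged (Eq. 5.3)."""
    n, nA = len(s), s.count('A')
    u = []
    for si in s:
        k = nA - (si == 'A')        # number of A-coplayers of this player
        total = (a * k + b * (n - 1 - k)) if si == 'A' \
            else (c * k + d * (n - 1 - k))
        u.append(total / (n - 1) if averaged else total)
    return u
```

For a profile in which k + 1 players use A, each A-player receives k·a + (N − 1 − k)·b, matching Eq. (5.2), and the averaged variant divides by N − 1 as in Eq. (5.3).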
Evolutionary dynamicsaside, this example illustrates how one can define an aggregate payoff function, u, from a classical game suchas a 2 ˆ 2 matrix game; different methods of obtaining total payoff from a series of interactions result indifferent aggregate payoff functions.A selection process in a finite population typically has for a state space the set of all strategy profiles, i.e.the set of all N -tuples of strategies (Allen and Tarnita, 2012). A strategy profile indicates a strategy for eachplayer in the population, and often an evolutionary process updates only these strategies. The populationstructure may be fixed (Lieberman et al., 2005; Szabo´ and Fa´th, 2007) or dynamic (Tarnita et al., 2009a;Wardil and Hauert, 2014). (In the latter case, one must also account for the population structure in thestate space of the process.) An aggregate payoff function, which takes into account both strategies andpopulation structure, assigns a payoff to each player in the population. The payoff to player i, ui, is thenconverted to fitness, fi “ f puiq, where f is some payoff-to-fitness function (e.g. f puiq “ exp tβuiu for someβ ě 0). A fitness-based update rule such as birth-death (Moran, 1958; Nowak et al., 2004); death-birth orimitation (Ohtsuki et al., 2006; Ohtsuki and Nowak, 2006); pairwise comparison (Szabo´ and To˝ke, 1998;Traulsen et al., 2007); or Wright-Fisher (Ewens, 2004; Imhof and Nowak, 2006) is then repeatedly applied tothe population, at each step updating the strategies of the players and (possibly) the population structure.Based on this pattern, it seems reasonable to define the state of an evolutionary process to be a pair, ps,Γq,where s is an N -tuple of strategies and Γ is a population structure. However, there may be more to the stateof a population than its spatial structure. For example, Antal et al. 
(2009) consider evolution in phenotypespace, a model in which each player has both a strategy and a phenotype, and phenotypes influence theeffects of strategies on interactions. In the Prisoner’s Dilemma, for instance, a cooperator (whose strategyis C) cooperates only with other players who are phenotypically similar and defects otherwise. In thisinstance, we show how one can consider phenotypes as a part of the “population state” in the sense thatthey contain information about the players in the population. In general, a state of the evolutionary processcan be represented by a pair, ps,Pq, where s is a strategy profile and P is a population state (which wemake mathematically precise). Notably, the population state is distinct from the strategies of the players; it114describes all of the non-strategy information about the players.When viewed from this perspective, the effects of phenotypes on strategies can be implemented directlyin the aggregate payoff function, u: a player facing a cooperator receives the payoff for facing a cooperatorif and only if they are phenotypically similar. Similarly, strategy mutations can also be considered as apart of the population state and accounted for directly in the update step of the process. As a part of ouranalysis, we formally define aggregate payoff function and update rule and show how they are influenced bythe components of the population state.This setup, which involves a series of interactions as the population transitions through various states,is reminiscent of a stochastic game (Shapley, 1953). A stochastic game is played in stages, with each stageconsisting of a normal-form game determined by some “state.” The game played in the subsequent stage isdetermined probabilistically by the current state as well as the strategies played in the current stage. Wewill see that, in general, selection processes are not necessarily stochastic games, and neither are stochasticgames necessarily selection processes. 
However, these two types of processes do share some common features,and we use some of the components of a stochastic game as inspiration for our framework.Many problems in evolution, such as how and why cooperation evolves, depend on the specifics of theupdate rule, population structure, mutation rates, etc. (Ohtsuki et al., 2006; Taylor et al., 2007; Traulsenet al., 2009a; De´barre et al., 2014; Rand et al., 2014). We clarify how these pieces (among others) fittogether. Our objective here is threefold: (1) to compare and contrast evolutionary and stochastic games,drawing inspiration from the latter to describe the former; (2) to propose a general mathematical frameworkencompassing natural selection models in finite populations; and (3) to examine the components of severalclassical evolutionary processes and demonstrate how they fit into our framework. Many (if not most) of theexisting models of evolution in finite populations involve a fixed population size. Therefore, our frameworkis first stated in terms of a fixed population size since this setting most readily allows for a comparison to thetheory of stochastic games and for illustrative examples placing several standard evolutionary processes intoa broader context. However, the assumption that the population size is fixed is not crucial to our theory,and we conclude by extending our framework to processes with variable population size.5.2 Stochastic gamesPrior to outlining the basic theory of stochastic games, we first need to recall some definitions and notation.A measurable space consists of a set, X, and a σ-algebra of sets, F pXq, on X. Often, we refer to X itself asa “measurable space” and suppress F pXq. If X is a measurable space, then we denote by ∆ pXq the space115of probability measures on X; that is, if M pXq is the space of all measures on pX,F pXqq, then∆ pXq :“ tµ PM pXq : µ pXq “ 1u . 
(5.4)For measurable spaces X and Y , denote by K pX,Y q the set of Markov kernels from X to Y ; that is, K pX,Y qis the set of functions κ : X ˆF pY q Ñ r0, 1s such that (i) κ px,´q : F pY q Ñ r0, 1s is in ∆ pY q for each x P Xand (ii) κ p´, Eq : X Ñ r0, 1s is measurable for each E P F pY q. We also write κ : X Ñ ∆ pY q to denote sucha kernel.Shapley (1953) considers a collection of normal-form games, together with a probabilistic rule for transi-tioning between these games, which he refers to as a stochastic game. A stochastic game is a generalizationof a repeated game that, formally, consists of the following components:(i) N players, labeled 1, . . . , N ;(ii) for each player, i, a measurable strategy space, Si;(iii) a measurable state space, P;(iv) a “single-period” payoff function, u : S ˆ P Ñ RN , where ui is the payoff to player i and S :“S1 ˆ ¨ ¨ ¨ ˆ SN is the set of all strategy profiles;(v) a transition kernel, T : Sˆ PÑ ∆ pPq.A Markov decision process is a stochastic game with one player (Puterman, 1994; Neyman, 2003), and arepeated game is a stochastic game whose state space, P, consists of just a single element (McMillan, 2001;Mertens et al., 2015).Examples of strategies for stochastic games are the following (see Neyman and Sorin, 2003):(1) Pure strategies: Let H denote the set of all possible histories, i.e.H :“ t∅u Yğtě1pSˆ Pqt , (5.5)where ∅ denotes the “null” history andŮtě1 pSˆ Pqt is the disjoint union of the spaces of t-tuples,pSˆ Pqt, for t ě 1. A pure strategy for player i is a mapsi : H ÝÑ Si, (5.6)116indicating an action in Si for each t and history ht P pSˆ Pqt Ď H. We denote by Map pH, Siq the setof all such maps, i.e. the set of player i’s pure strategies.(2) Mixed strategies: A mixed strategy for player i is a probability distribution over the set of purestrategies for player i, i.e. 
an element σi P ∆ pMap pH, Siqq.(3) Behavioral strategies: A behavioral strategy for player i is a mapσi : H ÝÑ ∆ pSiq , (5.7)indicating a distribution over Si for each t and history ht P pSˆ Pqt Ď H.(4) Markov strategies: A Markov strategy for player i is a behavioral strategy, σi, such that σi phtq “ σi pktqfor each t and ht, kt P pSˆ Pqt Ď H with htt “ ktt. In other words, a Markov strategy is a “memory-one” behavioral strategy, i.e. a behavioral strategy that depends on only the last strategy profile, state,and t.(5) Stationary strategies: A stationary strategy is a Markov strategy that is independent of t, i.e. abehavioral strategy that depends on only the last strategy profile and state.Of these five classes of strategies, behavioral strategies are the most general. Indeed, pure, mixed,Markov, and stationary strategies are all instances of behavioral strategies. In the context of repeatedgames, a memory-one strategy of the repeated game (Press and Dyson, 2012) is equivalent to a stationarystrategy of the stochastic game, and a longer-memory strategy of the repeated game (Hauert and Schuster,1997) is equivalent to a behavioral strategy.5.2.1 Evolutionary processes as stochastic gamesAt first glance, stochastic games seem to provide a reasonable framework for evolutionary games: a stochasticgame transitions through states in stages (“periods”), which could be population structures or states, andin each stage the players receive payoffs based on a single-period payoff function, u. However, it is in thedynamics that the differences between stochastic and evolutionary games become evident:The combination of a stochastic game and a strategy for each player defines a stochastic process on SˆP,although this stochastic process might or might not be a Markov chain. For example, let T be the transitionkernel for a stochastic game, and suppose that σi is a stationary strategy for player i for i “ 1, . . . , N . 
Let κ be the transition kernel on S × P defined by the product measure, i.e.

  κ : S × P → Δ(S × P) : (s, P) ↦ σ_1[s, P] × ⋯ × σ_N[s, P] × T[s, P].    (5.8)

Thus, the stochastic game, together with this profile of stationary strategies, defines a time-homogeneous Markov chain on S × P. In general, if these strategies are instead Markov strategies, then the resulting Markov chain might be time-inhomogeneous. If these strategies are pure, mixed, or behavioral, then the stochastic process on S × P defined by the game need not even be a Markov chain.

Evolutionary processes are typically defined as Markov chains on S × P, where P is chosen to be a state space appropriate for the evolutionary process (such as the space of population structures, mutation-rate profiles, or phenotype profiles). In light of the previous remarks, it is not unreasonable to expect that many of these processes are equivalent to stochastic games combined with stationary strategies. As it turns out, evolutionary processes generally possess correlations between updates of strategies and population states, forbidding an equivalence between an evolutionary process and a Markov chain constructed from a stochastic game via Eq. (5.8). These correlations are evident already in one of the most basic models of evolution in finite populations:

Example 18 (Moran process). Suppose that S = {A, B}, where strategy A represents the mutant type and strategy B represents the wild type. A mutant type has fitness r > 0 relative to the wild type (whose fitness relative to itself is just 1). Let m be a finite subset of [0, 1] consisting of a number of "mutation rates," and let P := m^N. In a well-mixed population of size N, the Moran process proceeds as follows: In each time step, an individual ("player") is chosen for reproduction with probability proportional to relative fitness. Another player (including the one chosen for reproduction) is then chosen uniformly at random from the population for death.
The offspring of the player chosen for reproduction replaces the deceased player. If player i is chosen for reproduction and ε_i ∈ m is this player's mutation rate, then the offspring of this player inherits the type of the parent with probability 1 − ε_i and takes on a type uniformly at random from {A, B} with probability ε_i. The mutation rate of the offspring is ε_i, which is inherited directly from the parent. Thus, a state of this process consists of a profile of types ("strategies"), s ∈ S^N, and a profile of mutation rates, ε ∈ P = m^N.

A transition between states (s, ε) and (s′, ε′) is possible only if there exists j such that s_ℓ = s′_ℓ and ε_ℓ = ε′_ℓ for each ℓ ≠ j. If player i is selected for reproduction and player j is chosen for death, then it must be the case that ε_i = ε′_j. If δ_{s,t} = 1 when s = t (and is 0 otherwise), then the probability that player i is selected for reproduction is

  (r δ_{s_i,A} + δ_{s_i,B}) / Σ_{ℓ=1}^{N} (r δ_{s_ℓ,A} + δ_{s_ℓ,B}).    (5.9)

The probability that player j is chosen for death is 1/N. If the offspring of player i inherits the strategy of the parent, then it must be true that s_i = s′_j. Otherwise, the offspring of player i "mutates" and adopts strategy s′_j with probability 1/2. Therefore, the probability of transitioning between states (s, ε) and (s′, ε′) is

  T_{(s,ε),(s′,ε′)} = Σ_{i,j=1}^{N} ( Π_{ℓ≠j} δ_{s_ℓ,s′_ℓ} δ_{ε_ℓ,ε′_ℓ} ) δ_{ε_i,ε′_j} ( (r δ_{s_i,A} + δ_{s_i,B}) / Σ_{ℓ=1}^{N} (r δ_{s_ℓ,A} + δ_{s_ℓ,B}) ) (1/N) [ (1 − ε_i) δ_{s_i,s′_j} + ε_i (1/2) ].    (5.10)

By Eq. (5.10), the distributions on S^N and P, respectively, are not independent.

More formally, consider a Markov chain on S × P defined by some evolutionary process such as the Moran process of Example 18, and let κ be its transition kernel. The projection maps Π_1 : S × P → S and Π_2 : S × P → P produce pushforward maps

  (Π_1)_* : Δ(S × P) → Δ(S) : μ ↦ μ ∘ Π_1^{−1};    (5.11a)
  (Π_2)_* : Δ(S × P) → Δ(P) : μ ↦ μ ∘ Π_2^{−1}.    (5.11b)

From κ, we obtain a transition kernel for a stochastic game, T, defined by

  T[s, P] := (Π_2)_* κ[s, P]    (5.12)

for each s ∈ S and P ∈ P.
Similarly, we obtain the (stationary) strategy profileσ rs,Ps :“ pΠ1q˚ κ rs,Ps . (5.13)119However, one typically loses information in passing from such a Markov chain to the combination of astochastic game and a profile of stationary strategies. First of all, the transition kernel κ generally cannotbe reconstructed from T and σ since κ rs,Ps need not be in ∆ pSq ˆ ∆ pPq; a priori, we know only thatκ rs,Ps P ∆ pSˆ Pq for s P S and P P P (see Eq. (5.10), for example). Moreover, σ need not be of theform pσ1, . . . , σN q for stationary strategies σ1, . . . , σN ; in particular, σ is a correlated stationary profile (seeAumann, 1987; Fudenberg and Tirole, 1991). In other words, whereas a sequence of independent strategychoices produce an element´σ1 rs,Ps , . . . , σN rs,Ps¯P ∆ pS1q ˆ ¨ ¨ ¨ ˆ∆ pSN q , (5.14)it might be the case that σ rs,Ps P ∆ pS1 ˆ ¨ ¨ ¨ ˆ SN q ´´∆ pS1q ˆ ¨ ¨ ¨ ˆ∆ pSN q¯.In §5.3, we present a framework for stochastic evolutionary processes used to model natural selection.These processes, which we call stochastic selection processes, illustrate more clearly the correlations betweendistributions on S and P arising in many evolutionary processes. We saw in Example 18 that, despite thesimilarities between stochastic games and evolutionary processes, there are important differences betweenthe two frameworks. However, our notion of a stochastic selection process draws inspiration from the theoryof stochastic games. Namely, we appropriate the concepts of (i) state space, P; (ii) single-period payofffunction, u; and (iii) update step.5.3 Stochastic selection processes with fixed population sizeHere we focus on a type of evolutionary process that we term a stochastic selection process. Stochasticselection processes seek to model processes with two timescales: one for interactions and one for selection.In the interaction step, players interact with one another and receive payoffs based on their strategies. 
In theselection step, the population is updated probabilistically based on the current population and the players’payoffs. Selection processes provide a general framework for the evolutionary games used to model processesbased on natural selection (Maynard Smith, 1982).Roughly speaking, the processes modeled by evolutionary game theory may be split into two classes:cultural and genetic (McAvoy and Hauert, 2015b). In cultural processes, there is a fixed set of playerswho repeatedly revise their strategies based on some update rule (such as imitation). In genetic processes,strategies are updated via reproduction and genetic inheritance. Naturally, a process need not be one orthe other; there may be both cultural and genetic components in an evolutionary process. Unlike purely120cultural processes, those with a genetic component have the property that the players themselves, as wellas the size of the population, may actually change via births and deaths, thus it does not make sense tospeak of a fixed population of players as in requirement (i) of a stochastic game. However, one can choosean arbitrary enumeration of the players in each step of the process and refer to the player labeled i at timen as “player i.” Of course, player i at time m and player i at time n ‰ m might be different, but a stochasticselection process is a Markov chain and the transition kernel can be defined for any enumeration of theplayers at a given time. The implicit property that natural selection does not depend on the enumeration ofthe players must be stated formally in terms of the update rule. Informally, the update rule for a stochasticselection process must satisfy a symmetry condition that guarantees the dynamics do not depend on theseenumerations.As an example, an evolutionary process based on a game with two-strategies in a well-mixed populationmay be modeled as a Markov chain whose state space is t0, 1, . . . , Nu (Nowak, 2006a). 
If $S=\{A,B\}$ is the strategy set, then the state of the population is determined by the number of $A$-players, which is just an integer $i\in\{0,1,\dots,N\}$. Alternatively, one may choose an arbitrary enumeration of the players and represent the state of the population as an element $(s_1,\dots,s_N)\in S^N$. Since the population is well mixed, evolutionary dynamics depend on only the frequency of each strategy in the population, so any two states $(s_1,\dots,s_N)$ and $(s'_1,\dots,s'_N)$ consisting of the same number of $A$-players should be indistinguishable. In other words, if $T$ is the transition matrix for an evolutionary game in this population, then $T_{\pi s,\tau s'}=T_{s,s'}$ for each $s,s'\in S^N$ and $\pi,\tau\in S_N$, where $S_N$ acts on $S^N$ by permuting the coordinates. Thus, the Markov chain is more naturally defined on the quotient space
\[
\mathcal{S} := S^N/S_N, \tag{5.15}
\]
which is isomorphic to $\{0,1,\dots,N\}$ when $S=\{A,B\}$. (Recall that an action of a group, $G$, on a set, $X$, gives an equivalence relation, $\sim$, on $X$, which is defined by $x\sim x'$ if and only if there exists $g\in G$ with $x'=gx$. The quotient space, $X/G$, is defined as the set of equivalence classes under $\sim$, and the class containing $x$ is denoted by "$x \bmod G$.") Of course, one may consider strategy spaces with more than two strategies: Suppose that $S=\{A_1,\dots,A_n\}$, and, for each $r=1,\dots,n$, let $\psi_r : S^N\to\{0,1,\dots,N\}$ be the map sending a strategy profile, $s\in S^N$, to the number of players using strategy $A_r$ in $s$. Since $\psi_r(\pi s)=\psi_r(s)$ for each $\pi\in S_N$ and $s\in S^N$, the map
\[
\Psi : S^N \longrightarrow \bigl\{(k_1,\dots,k_n)\in\{0,1,\dots,N\}^n : k_1+\cdots+k_n=N\bigr\} : s\longmapsto\bigl(\psi_1(s),\dots,\psi_n(s)\bigr) \tag{5.16}
\]
descends to an isomorphism
\[
\widetilde{\Psi} : S^N/S_N \longrightarrow \bigl\{(k_1,\dots,k_n)\in\{0,1,\dots,N\}^n : k_1+\cdots+k_n=N\bigr\}. \tag{5.17}
\]
Since an evolutionary update rule may be defined on the space of strategy-frequency profiles,
\[
\bigl\{(k_1,\dots,k_n)\in\{0,1,\dots,N\}^n : k_1+\cdots+k_n=N\bigr\}, \tag{5.18}
\]
we see once again that the Markov chain defined by an evolutionary process in this population naturally has $S^N/S_N$ for a state space. We show here that this phenomenon generalizes to arbitrary types of populations and update rules. In the process of establishing this general construction, we must formally define population state (§5.3.1) and update rule (§5.3.3).

We first assume that the population size, $N$, is fixed. This assumption allows us to place many of the classical (fixed population size) stochastic evolutionary processes into the context of our framework. After discussing the components of a selection process and giving several examples, we formally define stochastic selection process in its full generality (covering populations of variable size) in §5.4.

Just like a stochastic game, a stochastic selection process consists of a measurable state space, $\mathcal{P}$, and a strategy space for each player. We assume that
\[
S_1=S_2=\cdots=S_N=:S, \tag{5.19}
\]
so that the joint strategy space is $S^N$. This assumption that the players all have the same strategy space is not restrictive since the dynamics of the process define the evolution of strategies; one can just enlarge each player's strategy space if necessary and let the dynamics ensure that those strategies that are not available to a given player are never used. Before discussing these dynamics, we first need to explore the state space, $\mathcal{P}$:

5.3.1 Population states

We seek to appropriate the idea of a state space, $\mathcal{P}$, of a stochastic game in order to introduce population structure into an evolutionary process. In fact, a population's spatial structure (such as a graph) is just one component of this space; mutation rates or phenotypes may also be included in this space. Therefore, rather than declaring $\mathcal{P}$ to be a space of population structures, we say that $\mathcal{P}$ is the space of population states. A population state indicates properties of the players and relationships between the players.
(Note that "population state" in this context does not include the strategies of the players in the population.) If one enumerates these players differently, then there should be a corresponding "relabeling" of the population state so that these properties and relationships are preserved. Since changing the enumeration of the population amounts to applying an element of $S_N$ (the symmetric group on $N$ letters) to $\{1,\dots,N\}$, it follows that $\mathcal{P}$ must be equipped with a group action of $S_N$. Thus, if $s\in S^N$ is a strategy profile of the population and $\mathscr{P}\in\mathcal{P}$ is a population state, then the pair $(s,\mathscr{P})$ represents the same population of players as $(\pi s,\pi\mathscr{P})$ whenever $\pi\in S_N$. In other words, the population state space, $\mathcal{P}=\mathcal{P}_N$, which we write with a subscript to indicate the population size, is a measurable $S_N$-space. More formally:

Definition 16. A population state space for a set of $N$ players is a measurable space, $\mathcal{P}_N$, equipped with an action of $S_N$ in such a way that the map $\pi : \mathcal{P}_N\to\mathcal{P}_N$ is measurable for each $\pi\in S_N$. If $\mathcal{P}_N$ is a population state space for a set of $N$ players, then a population state for these players is simply an element $\mathscr{P}\in\mathcal{P}_N$.

Examples

Example 19 (Graphs). Consider the set of $N\times N$, nonnegative matrices over $\mathbb{R}$,
\[
\mathcal{P}^G_N := \bigl\{\Gamma\in\mathbb{R}^{N\times N} : \Gamma_{ij}\geq 0 \text{ for each } i,j=1,\dots,N\bigr\}, \tag{5.20}
\]
equipped with an action of $S_N$ defined by $(\pi\Gamma)_{ij}=\Gamma_{\pi(i)\pi(j)}$. An element $\Gamma\in\mathcal{P}^G_N$ defines a directed, weighted graph whose vertices are $\{1,\dots,N\}$, with an edge from $i$ to $j$ if and only if $\Gamma_{ij}\neq 0$ (the weight of the edge is then $\Gamma_{ij}$). $\Gamma$ is undirected if $\Gamma_{ij}=\Gamma_{ji}$ for each $i$ and $j$, and $\Gamma$ is unweighted if $\Gamma\in\{0,1\}^{N\times N}$.

Example 20 (Sets). A set-structured population consists of a finite number of sets, each containing some subset of the population, such that each player is in at least one set (Tarnita et al., 2009a). Set-structured populations may be modeled using relations:
\[
\mathcal{P}^S_N := \bigl\{R\subseteq\{1,\dots,N\}\times\{1,\dots,N\} : R \text{ is reflexive and symmetric}\bigr\}. \tag{5.21}
\]
That is, if $R\in\mathcal{P}^S_N$, then $(i,i)\in R$ for each $i$ ("reflexive") and $(i,j)\in R$ if and only if $(j,i)\in R$ ("symmetric"). $R\in\mathcal{P}^S_N$ defines a set-structure with $i$ and $j$ in a common set if and only if $(i,j)\in R$. There is a natural action of $S_N$ on $\mathcal{P}^S_N$ defined by
\[
(i,j)\in\pi R \iff (\pi i,\pi j)\in R, \tag{5.22}
\]
which makes $\mathcal{P}^S_N$ into a population state space.

Example 21 (Demes). A deme-structure on a population is a subdivision of the population into subpopulations, or "demes" (Taylor et al., 2001; Wakeley and Takahashi, 2004; Hauert and Imhof, 2012). Similar to set-structured populations, deme-structured populations may be modeled using relations (but with the stronger notion of equivalence relation):
\[
\mathcal{P}^D_N := \bigl\{R\subseteq\{1,\dots,N\}\times\{1,\dots,N\} : R \text{ is reflexive, symmetric, and transitive}\bigr\}. \tag{5.23}
\]
$\mathcal{P}^D_N\subseteq\mathcal{P}^S_N$, and whereas a set-structured population may have overlapping sets, a deme-structured population has disjoint sets. The additional transitivity requirement guarantees that these sets partition $\{1,\dots,N\}$. The action of $S_N$ on $\mathcal{P}^D_N$ is the one inherited from $\mathcal{P}^S_N$, making $\mathcal{P}^D_N$ into a population state space.

So far, we have considered population structures that describe spatial and qualitative relationships between the players. One could also associate to the players quantities such as mutation rates or phenotypes:

Example 22 (Mutation rates). Consider a process in which updates are based on births and deaths (such as a Moran or Wright-Fisher process). Moreover, suppose that the spatial structure of the population is a graph. If player $i$ reproduces, then with probability $\varepsilon_i$ the offspring adopts a novel strategy uniformly at random ("mutates"), and with probability $1-\varepsilon_i$ the offspring inherits the strategy of the parent. The mutation rate, $\varepsilon_i$, is passed on directly from parent to offspring.
The population state space for this process is then $\mathcal{P}_N := \mathcal{P}^G_N\times[0,1]^N$; a population state consists of (i) a graph, indicating the spatial relationships between the players, and (ii) a profile of mutation rates, $\varepsilon\in[0,1]^N$, with $\varepsilon_i$ indicating the probability that the offspring of player $i$ mutates. For update rules based on imitation, mutation rates might more appropriately be called "exploration rates," and they are implemented slightly differently. In general, mutation rates appear in different forms and help to distinguish cultural and genetic update rules; we give several examples in §5.3.3. The upshot of this discussion is that a population state may consist of a spatial structure, such as a graph in $\mathcal{P}^G_N$, as well as some extra information pertaining to the players, such as mutation rates.

Example 23 (Phenotype space). In addition to strategies, the players may also have phenotypes that affect interactions with other players in the population (Antal et al., 2009; Nowak et al., 2009). If the spatial structure of the population is a graph and the phenotype of each player is a one-dimensional discrete quantity, then one may define the population state space to be $\mathcal{P}^G_N\times\mathbb{Z}^N$. Just as with mutation rates, the rest of the process determines how these phenotypes affect the dynamics. In §5.3.2, we continue this example and go into the details of how the inclusion of phenotypes in the population state space affects the payoffs of the players, which can then be used to recover the model of evolution in phenotype space of Antal et al. (2009).

Symmetries of population states

The action of $S_N$ on $\mathcal{P}_N$ can be used to formally define a notion of population symmetry:

Definition 17 (Automorphism of population state). For a population state, $\mathscr{P}$, in a population state space, $\mathcal{P}_N$, an automorphism of $\mathscr{P}$ is an element $\pi\in S_N$ such that $\pi\mathscr{P}=\mathscr{P}$. The group of automorphisms of $\mathscr{P}$ is $\operatorname{Aut}(\mathscr{P}) := \operatorname{Stab}_{S_N}(\mathscr{P})$, where $\operatorname{Stab}_{S_N}(\mathscr{P})$ denotes the stabilizer of $\mathscr{P}$ under the group action of $S_N$ on $\mathcal{P}_N$.

If $\Gamma\in\mathcal{P}^G_N$, for example, then $\operatorname{Aut}(\Gamma)$ is the group of graph automorphisms of $\Gamma$ in the classical sense, i.e. the set of $\pi\in S_N$ such that $\Gamma_{\pi(i)\pi(j)}=\Gamma_{ij}$ for each $i$ and $j$. Such automorphisms have played an important role in the study of evolutionary games on graphs (see Taylor et al., 2007; Débarre et al., 2014). We discuss symmetries of population states further in §5.6.

Now that we have a formal definition of population state space, we explore how this space influences evolutionary dynamics. The processes we seek to model have two timescales: interactions and updates. In §5.3.2, we consider the influence of the population state on interactions, and in §5.3.3, we define update rule and show how the population state fits into the update step of an evolutionary process.

5.3.2 Aggregate payoff functions

Prior to stating the definition of a general payoff function for a stochastic selection process, we consider a motivating example that is based on a popular type of game used to model frequency-dependent fitness in evolutionary game theory:

Example 24. Consider the symmetric, two-player game whose payoff matrix is
\[
\bordermatrix{ & A_1 & A_2 \cr A_1 & a_{11} & a_{12} \cr A_2 & a_{21} & a_{22} \cr }. \tag{5.24}
\]
If the population structure is a graph and the population state space is $\mathcal{P}^G_N$, then one can construct a function $u : \{1,2\}^N\times\mathcal{P}^G_N\to\mathbb{R}^N$ by letting $u_i$ be defined as
\[
u_i\bigl((s_1,\dots,s_N),\Gamma\bigr) := \sum_{j=1}^{N}\Gamma_{ij}\,a_{s_is_j}. \tag{5.25}
\]
That is, $u_i$ is the "aggregate payoff" function for player $i$ since it produces the total payoff from player $i$'s interactions with all of his or her neighbors, weighted appropriately by the edge weights of the population structure. If $\pi\in S_N$, then
\[
u_i\bigl((s_{\pi(1)},\dots,s_{\pi(N)}),\pi\Gamma\bigr) = \sum_{j=1}^{N}\Gamma_{\pi(i)\pi(j)}\,a_{s_{\pi(i)}s_{\pi(j)}} = \sum_{j=1}^{N}\Gamma_{\pi(i)j}\,a_{s_{\pi(i)}s_j} = u_{\pi(i)}\bigl((s_1,\dots,s_N),\Gamma\bigr). \tag{5.26}
\]
Therefore, although $u_\Gamma := u(-,\Gamma) : \{1,2\}^N\to\mathbb{R}^N$ need not be symmetric in the sense that $u_\Gamma(\pi s)=\pi u_\Gamma(s)$ for each $s\in\{1,2\}^N$, it is symmetric in the sense that
\[
u(\pi s,\pi\Gamma) = \pi u(s,\Gamma) \tag{5.27}
\]
for each $s\in\{1,2\}^N$ and $\Gamma\in\mathcal{P}_N=\mathcal{P}^G_N$. In other words, all of the information that results in payoff asymmetry is contained in the population state space, $\mathcal{P}_N$.

Using the function $u$ and Eq. (5.27) of the previous example as motivation, we have:

Definition 18 (Aggregate payoff function). An aggregate payoff function is a map
\[
u : S^N\times\mathcal{P}_N \longrightarrow \mathbb{R}^N \tag{5.28}
\]
that satisfies $u(\pi s,\pi\mathscr{P})=\pi u(s,\mathscr{P})$ for each $\pi\in S_N$, $s\in S^N$, and $\mathscr{P}\in\mathcal{P}_N$.

The symmetry condition in Definition 18, $u(\pi s,\pi\mathscr{P})=\pi u(s,\mathscr{P})$, implies that an aggregate payoff function, $u$, is completely determined by the map $u_1 : S^N\times\mathcal{P}_N\to\mathbb{R}$. Indeed, if $u_1$ is known and $\pi\in S_N$ sends $1$ to $i$, then $u_i(s,\mathscr{P})=u_1(\pi s,\pi\mathscr{P})$ for each $s\in S^N$ and $\mathscr{P}\in\mathcal{P}_N$, which recovers the map $u : S^N\times\mathcal{P}_N\to\mathbb{R}^N$. This symmetry condition must hold even for individual encounters that are asymmetric (Example 25).

Examples

Example 25. In place of (5.24), one could consider a collection of bimatrices,
\[
M_{ij} := \bordermatrix{ & A_1 & A_2 \cr A_1 & a^{ij}_{11},\,a^{ji}_{11} & a^{ij}_{12},\,a^{ji}_{21} \cr A_2 & a^{ij}_{21},\,a^{ji}_{12} & a^{ij}_{22},\,a^{ji}_{22} \cr }, \tag{5.29}
\]
indexed by $i,j\in\{1,\dots,N\}$ (McAvoy and Hauert, 2015b). For each $i$ and $j$, $M_{ij}$ is the payoff matrix for player $i$ against player $j$, with the first coordinate of each entry denoting the payoff to player $i$ and the second coordinate denoting the payoff to player $j$. The collection $\{M_{ij}\}_{i,j=1}^{N}$ is equivalent to an element of $(\mathbb{R}^{2\times 2})^{N\times N}$, i.e. a $2\times 2$ real matrix indicating the payoff to player $i$ against player $j$ for each $i,j=1,\dots,N$. In this case, the population state space consists of more than spatial structures; it also includes the details of the payoff asymmetry appearing in individual encounters.
In other words, if the population is graph-structured, then a population state consists of a graph and a collection of payoff matrices of the form of Eq. (5.29), i.e.
\[
\mathcal{P}_N := \mathcal{P}^G_N\times\bigl(\mathbb{R}^{2\times 2}\bigr)^{N\times N}, \tag{5.30}
\]
where the action of $\pi\in S_N$ on $(\mathbb{R}^{2\times 2})^{N\times N}$ is $\pi\bigl(M_{ij}\bigr)_{i,j=1}^{N} := \bigl(M_{\pi(i)\pi(j)}\bigr)_{i,j=1}^{N}$. The aggregate payoff function, $u : S^N\times\mathcal{P}_N\to\mathbb{R}^N$, is defined by
\[
u_i\Bigl((s_1,\dots,s_N),\bigl(\Gamma,(M_{ij})_{i,j=1}^{N}\bigr)\Bigr) := \sum_{j=1}^{N}\Gamma_{ij}\,a^{ij}_{s_is_j}. \tag{5.31}
\]
For $\pi\in S_N$, we see that
\[
\begin{aligned}
u_i\Bigl((s_{\pi(1)},\dots,s_{\pi(N)}),\pi\bigl(\Gamma,(M_{ij})_{i,j=1}^{N}\bigr)\Bigr) &= \sum_{j=1}^{N}\Gamma_{\pi(i)\pi(j)}\,a^{\pi(i)\pi(j)}_{s_{\pi(i)}s_{\pi(j)}} = \sum_{j=1}^{N}\Gamma_{\pi(i)j}\,a^{\pi(i)j}_{s_{\pi(i)}s_j} \\
&= u_{\pi(i)}\Bigl((s_1,\dots,s_N),\bigl(\Gamma,(M_{ij})_{i,j=1}^{N}\bigr)\Bigr),
\end{aligned} \tag{5.32}
\]
so $u$ defines an aggregate payoff function in the sense of Definition 18.

Remark 9. For a fixed collection, $(M_{ij})_{i,j=1}^{N}$, we could have instead let $\mathcal{P}_N=\mathcal{P}^G_N$ and
\[
u_i\bigl((s_1,\dots,s_N),\Gamma\bigr) := \sum_{j=1}^{N}\Gamma_{ij}\,a^{ij}_{s_is_j}. \tag{5.33}
\]
However, for $\pi\in S_N$, we would then have
\[
u_i\bigl((s_{\pi(1)},\dots,s_{\pi(N)}),\pi\Gamma\bigr) = \sum_{j=1}^{N}\Gamma_{\pi(i)j}\,a^{i\pi^{-1}(j)}_{s_{\pi(i)}s_j}; \tag{5.34a}
\]
\[
u_{\pi(i)}\bigl((s_1,\dots,s_N),\Gamma\bigr) = \sum_{j=1}^{N}\Gamma_{\pi(i)j}\,a^{\pi(i)j}_{s_{\pi(i)}s_j}. \tag{5.34b}
\]
The only way (5.34a) and (5.34b) are the same for each $s\in\{1,2\}^N$ and $\Gamma\in\mathcal{P}^G_N$ is if $M_{ij}=M_{\pi(i)\pi(j)}$ for each $i,j=1,\dots,N$, and this equality need not hold (which would mean that $u$, when defined in this way, is not an aggregate payoff function in the sense of Definition 18). Therefore, by enlarging $\mathcal{P}_N$ via (5.30) and defining $u$ via (5.31), we can essentially "factor out" the asymmetry present in the payoff function defined by (5.33). In other words, $\mathcal{P}_N$ contains all of the non-strategy information that distinguishes the players' payoffs.

Due to the separation of timescales in the selection processes we consider here, it often happens that $u$ is independent of a portion of the population state space. More specifically, the population state space can be decomposed into an interaction state space, $\mathcal{E}_N$, and a dispersal state space, $\mathcal{D}_N$, such that $\mathcal{P}_N=\mathcal{E}_N\times\mathcal{D}_N$ and
\[
u\bigl(s,(\mathscr{E},\mathscr{D})\bigr) = u\bigl(s,(\mathscr{E},\mathscr{D}')\bigr) \tag{5.35}
\]
for each $s\in S^N$, $\mathscr{E}\in\mathcal{E}_N$, and $\mathscr{D},\mathscr{D}'\in\mathcal{D}_N$.
This decomposition generalizes models with separate interaction and dispersal graphs (Taylor et al., 2007; Ohtsuki et al., 2007a,b; Pacheco et al., 2009; Débarre et al., 2014). The interaction state, for instance, might consist of a population structure and other information (such as phenotypes):

Example 26 (Phenotype space, continued). Antal et al. (2009) study the evolution of cooperation in phenotype space. In terms of (5.24), each player has a strategy, $A_1$ ("cooperate") or $A_2$ ("defect"), as well as a one-dimensional phenotype, which is simply an integer. If an interaction state has a graph as its spatial structure, then the interaction state space is $\mathcal{E}_N := \mathcal{P}^G_N\times\mathbb{Z}^N$. Thus, an interaction structure consists of a graph, $\Gamma\in\mathcal{P}^G_N$, and an $N$-tuple of phenotypes, $r=(r_1,\dots,r_N)\in\mathbb{Z}^N$, where $r_i$ is the phenotype of player $i$ in the population. The phenotypes affect the strategies of the players as follows: cooperators cooperate with other neighbors with whom they share a phenotype, and they defect otherwise. Defectors always defect, regardless of phenotypic similarities. For each $i$, the payoff function, $u : \{1,2\}^N\times\mathcal{E}_N\to\mathbb{R}^N$, satisfies
\[
u_i\bigl((s_1,\dots,s_N),(\Gamma,r)\bigr) = \sum_{j=1}^{N}\Gamma_{ij}\Bigl(\delta_{r_i,r_j}\,a_{s_is_j}+\bigl(1-\delta_{r_i,r_j}\bigr)a_{22}\Bigr), \tag{5.36}
\]
where $\delta_{r_i,r_j}$ is $1$ if $r_i=r_j$ and $0$ otherwise. Therefore, one can directly implement the influence of phenotype on strategy using the interaction state space, $\mathcal{E}_N$, and $u$.

5.3.3 Update rules

We saw at the beginning of §5.3 that for a game with $n$ strategies in a well-mixed population, the state space for an evolutionary process is
\[
S^N/S_N \cong \bigl\{(k_1,\dots,k_n)\in\{0,1,\dots,N\}^n : k_1+\cdots+k_n=N\bigr\}. \tag{5.37}
\]
On the other hand, between update steps, one can simply fix some enumeration of the population and represent the state of the evolutionary process by an element of $S^N$, i.e. a representative of the state space, $S^N/S_N$.
In populations with spatial structure, mutation rates, phenotypic differences, etc., this representative contains more information than simply a strategy profile; it also contains information about the population state. In other words, at a fixed point in time, the state of the evolutionary process can be described by an element of $S^N\times\mathcal{P}_N$, where $\mathcal{P}_N$ is a population state space. Of course, the evolutionary dynamics of the process should not be affected by the choice of enumeration of the players in each time step, which means that the state space for the evolutionary process is naturally the quotient space
\[
\mathcal{S} := \bigl(S^N\times\mathcal{P}_N\bigr)/S_N, \tag{5.38}
\]
generalizing the state space of Eq. (5.37) to structured populations.

We now wish to describe the update step of an evolutionary process on $\mathcal{S}$. This update rule should not depend on how the players in the updated population are labeled. For example, if a player dies and is replaced by the offspring of another player, then the result of this death and replacement is a new element of $\mathcal{S}$. In other words, the new population does not lie in $S^N\times\mathcal{P}_N$ in a natural way; we must choose an enumeration of the players in order to get an element of $S^N\times\mathcal{P}_N$, and this enumeration may be arbitrary. Therefore, given the current strategy profile and population state, an update rule should give a probability distribution over the state space of the process, $\mathcal{S}$ (not $S^N\times\mathcal{P}_N$). On the other hand, in order to update the state of the population, one needs to speak of the likelihood that each player in the current state is updated. To do so, one may choose a representative of the current state of the process, $(s,\mathscr{P})\in S^N\times\mathcal{P}_N$, which is equivalent to choosing a labeling of the players at that point in time. Again, the distribution over $\mathcal{S}$ (conditioned on the current state of the process) should not depend on the labeling of the current state.
Finally, this distribution over $\mathcal{S}$ is a function of the fitness profile of the population; each player has a real-valued fitness, and the update rule depends on these values. Putting these components together, we have:

Definition 19 (Update rule). An update rule is a map,
\[
\mathcal{U} : \mathbb{R}^N \longrightarrow \mathcal{K}\bigl(S^N\times\mathcal{P}_N,\mathcal{S}\bigr), \tag{5.39}
\]
that satisfies the symmetry condition
\[
\mathcal{U}[\pi x]\bigl((\pi s,\pi\mathscr{P}),E\bigr) = \mathcal{U}[x]\bigl((s,\mathscr{P}),E\bigr) \tag{5.40}
\]
for each $\pi\in S_N$, $x\in\mathbb{R}^N$, $(s,\mathscr{P})\in S^N\times\mathcal{P}_N$, and $E\in\mathcal{F}(\mathcal{S})$, where $\mathcal{F}(\mathcal{S})$ is the quotient $\sigma$-algebra on $\mathcal{S}=\bigl(S^N\times\mathcal{P}_N\bigr)/S_N$ derived from $\mathcal{F}(S)$ and $\mathcal{F}(\mathcal{P}_N)$.

That is, an update rule is a family of Markov kernels,
\[
\bigl\{\mathcal{U}[x]\bigr\}_{x\in\mathbb{R}^N} \subseteq \mathcal{K}\bigl(S^N\times\mathcal{P}_N,\mathcal{S}\bigr), \tag{5.41}
\]
parametrized by the fitness profiles of the population, $x\in\mathbb{R}^N$, and satisfying Eq. (5.40). Eq. (5.40) says that the update does not depend on how the current population is represented. In other words, if $(s,\mathscr{P})$ and $(\pi s,\pi\mathscr{P})$ are two representatives of the same population at time $t$, then the update rule treats $(s,\mathscr{P})$ and $(\pi s,\pi\mathscr{P})$ as the same population. (If $x$ is the fitness profile corresponding to the representative $(s,\mathscr{P})$, then $\pi x$ is the fitness profile corresponding to the representative $(\pi s,\pi\mathscr{P})$.)

Together, an update rule and aggregate payoff function define a Markov chain on $\mathcal{S}$ whose kernel, $\kappa$, is constructed as follows: Let $f : \mathbb{R}\to\mathbb{R}$ be a payoff-to-fitness map, i.e. a function that converts a player's payoff to fitness. Consider the function
\[
F : \mathbb{R}^N \longrightarrow \mathbb{R}^N : (x_1,\dots,x_N) \longmapsto \bigl(f(x_1),\dots,f(x_N)\bigr), \tag{5.42}
\]
which converts payoff profiles to fitness profiles. If $u : S^N\times\mathcal{P}_N\to\mathbb{R}^N$ is an aggregate payoff function, then, for $(s,\mathscr{P})\in S^N\times\mathcal{P}_N$ and $E\in\mathcal{F}(\mathcal{S})$, we let
\[
\kappa\bigl((s,\mathscr{P}) \bmod S_N, E\bigr) := \mathcal{U}\Bigl[F\bigl(u(s,\mathscr{P})\bigr)\Bigr]\bigl((s,\mathscr{P}),E\bigr). \tag{5.43}
\]
$\kappa$ is well defined since, for each $\pi\in S_N$,
\[
\begin{aligned}
\kappa\bigl((\pi s,\pi\mathscr{P}) \bmod S_N, E\bigr) &= \mathcal{U}\Bigl[F\bigl(u(\pi s,\pi\mathscr{P})\bigr)\Bigr]\bigl((\pi s,\pi\mathscr{P}),E\bigr) \\
&= \mathcal{U}\Bigl[\pi F\bigl(u(s,\mathscr{P})\bigr)\Bigr]\bigl((\pi s,\pi\mathscr{P}),E\bigr) \\
&= \mathcal{U}\Bigl[F\bigl(u(s,\mathscr{P})\bigr)\Bigr]\bigl((s,\mathscr{P}),E\bigr) \\
&= \kappa\bigl((s,\mathscr{P}) \bmod S_N, E\bigr),
\end{aligned} \tag{5.44}
\]
where the second and third lines come from Eqs. (5.27) and (5.40), respectively.

Update pre-rules

Despite the fact that the evolutionary processes we seek to model here naturally have $\mathcal{S}=\bigl(S^N\times\mathcal{P}_N\bigr)/S_N$ for a state space, many evolutionary processes in the literature are defined directly on $S^N\times\mathcal{P}_N$ (see Allen and Tarnita, 2012). Update rules are sometimes cumbersome to write out explicitly, and defining a Markov chain on $S^N\times\mathcal{P}_N$ instead of on $\bigl(S^N\times\mathcal{P}_N\bigr)/S_N$ can simplify the presentation of the transition kernel. In this context, the notion of "update rule" still makes sense, but we instead call it an update pre-rule to distinguish it from the update rule of Definition 19:

Definition 20 (Update pre-rule). An update pre-rule is a map,
\[
\mathcal{U}_0 : \mathbb{R}^N \longrightarrow \mathcal{K}\bigl(S^N\times\mathcal{P}_N,S^N\times\mathcal{P}_N\bigr), \tag{5.45}
\]
such that for each $\pi\in S_N$, $x\in\mathbb{R}^N$, $(s,\mathscr{P})\in S^N\times\mathcal{P}_N$, and $E\in\mathcal{F}\bigl(S^N\times\mathcal{P}_N\bigr)$,
\[
\mathcal{U}_0[\pi x]\bigl((\pi s,\pi\mathscr{P}),E\bigr) = \mathcal{U}_0[x]\bigl((s,\mathscr{P}),\tau E\bigr) \tag{5.46}
\]
for some $\tau\in S_N$.

In many cases, the permutation $\tau$ is just $\pi^{-1}$, i.e.
\[
\mathcal{U}_0[\pi x]\bigl((\pi s,\pi\mathscr{P}),\pi E\bigr) = \mathcal{U}_0[x]\bigl((s,\mathscr{P}),E\bigr) \tag{5.47}
\]
for each $\pi\in S_N$. However, all that an evolutionary process on $S^N\times\mathcal{P}_N$ really requires is that if state $(\pi s,\pi\mathscr{P})$ is updated to $(s',\mathscr{P}')$, then state $(s,\mathscr{P})$ is updated to $(\tau s',\tau\mathscr{P}')$ for some $\tau$. To relate Definitions 19 and 20, consider the projection map,
\[
\Pi : S^N\times\mathcal{P}_N \longrightarrow \bigl(S^N\times\mathcal{P}_N\bigr)/S_N = \mathcal{S} : (s,\mathscr{P}) \longmapsto (s,\mathscr{P}) \bmod S_N. \tag{5.48}
\]
$\Pi$ gives rise to a pushforward map on measures,
\[
\Pi_* : \Delta\bigl(S^N\times\mathcal{P}_N\bigr) \longrightarrow \Delta(\mathcal{S}) : \mu \longmapsto \mu\circ\Pi^{-1}, \tag{5.49}
\]
which can be used to naturally derive an update rule from an update pre-rule:

Proposition 7. An update pre-rule canonically defines an update rule.

Proof. Let $\mathcal{U}_0$ be an update pre-rule and consider the map
\[
\Pi_*\mathcal{U}_0 : \mathbb{R}^N \longrightarrow \mathcal{K}\bigl(S^N\times\mathcal{P}_N,\mathcal{S}\bigr) : x \longmapsto \Bigl\{(s,\mathscr{P}) \mapsto \Pi_*\mathcal{U}_0[x]\bigl((s,\mathscr{P}),-\bigr)\Bigr\}. \tag{5.50}
\]
For $\pi\in S_N$, $x\in\mathbb{R}^N$, $(s,\mathscr{P})\in S^N\times\mathcal{P}_N$, and $E\in\mathcal{F}(\mathcal{S})$, there exists $\tau\in S_N$ such that
\[
\begin{aligned}
(\Pi_*\mathcal{U}_0)[\pi x]\bigl((\pi s,\pi\mathscr{P}),E\bigr) &= \mathcal{U}_0[\pi x]\bigl((\pi s,\pi\mathscr{P}),\Pi^{-1}E\bigr) \\
&= \mathcal{U}_0[x]\bigl((s,\mathscr{P}),\tau\Pi^{-1}E\bigr) \\
&= \mathcal{U}_0[x]\bigl((s,\mathscr{P}),\Pi^{-1}E\bigr) \\
&= (\Pi_*\mathcal{U}_0)[x]\bigl((s,\mathscr{P}),E\bigr), \tag{5.51}
\end{aligned}
\]
where the third equality holds because $\Pi^{-1}E$ is $S_N$-invariant, so $\tau\Pi^{-1}E=\Pi^{-1}E$. Thus, $\Pi_*\mathcal{U}_0$ is an update rule, which completes the proof.

In other words, an update pre-rule can be "pushed forward" to an update rule.
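The pushforward along the projection to the quotient can be illustrated with a small finite sketch (ours, not code from the thesis; the helper names `orbit_label` and `pushforward` are hypothetical). We assume the population state is a per-player profile, so that a canonical orbit representative is obtained by sorting; a graph-valued state would instead require canonicalization up to graph isomorphism:

```python
def orbit_label(s, P):
    """Canonical label for the S_N-orbit of (s, P) when the population
    state P is a per-player profile: sort the (strategy, state) pairs."""
    return tuple(sorted(zip(s, P)))

def pushforward(mu):
    """Push a distribution mu on S^N x P_N forward along the projection
    Pi: each orbit receives the total mass of its representatives,
    mirroring (Pi_* mu)(E) = mu(Pi^{-1} E)."""
    nu = {}
    for (s, P), p in mu.items():
        key = orbit_label(s, P)
        nu[key] = nu.get(key, 0.0) + p
    return nu

# A toy distribution on labeled states; the first two entries lie in the
# same orbit and are merged by the pushforward.
mu = {(("A", "B"), (0.1, 0.2)): 0.3,
      (("B", "A"), (0.2, 0.1)): 0.2,
      (("A", "A"), (0.1, 0.1)): 0.4,
      (("B", "B"), (0.1, 0.1)): 0.1}
nu = pushforward(mu)
assert abs(nu[orbit_label(("A", "B"), (0.1, 0.2))] - 0.5) < 1e-12
assert abs(sum(nu.values()) - 1.0) < 1e-12
```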
If $S$ and $\mathcal{P}_N$ are finite, then an update rule can also be "pulled back" to an update pre-rule:

Proposition 8. If $\mathcal{U}$ is an update rule and $S$ and $\mathcal{P}_N$ are finite, then there exists an update pre-rule, $\mathcal{U}_0$, such that $\Pi_*\mathcal{U}_0=\mathcal{U}$. That is, $\mathcal{U}$ can be "pulled back" to $\mathcal{U}_0$.

Proof. From $\mathcal{U}$, we define a map, $\mathcal{U}_0$, as follows:
\[
\mathcal{U}_0[x]\bigl((s,\mathscr{P}),(s',\mathscr{P}')\bigr) = \frac{1}{\bigl|\operatorname{orb}_{S_N}(s',\mathscr{P}')\bigr|}\,\mathcal{U}[x]\bigl((s,\mathscr{P}),(s',\mathscr{P}') \bmod S_N\bigr). \tag{5.52}
\]
For $\pi\in S_N$, we see from Eq. (5.40) that
\[
\begin{aligned}
\mathcal{U}_0[\pi x]\bigl((\pi s,\pi\mathscr{P}),(s',\mathscr{P}')\bigr) &= \frac{1}{\bigl|\operatorname{orb}_{S_N}(s',\mathscr{P}')\bigr|}\,\mathcal{U}[\pi x]\bigl((\pi s,\pi\mathscr{P}),(s',\mathscr{P}') \bmod S_N\bigr) \\
&= \frac{1}{\bigl|\operatorname{orb}_{S_N}(s',\mathscr{P}')\bigr|}\,\mathcal{U}[x]\bigl((s,\mathscr{P}),(s',\mathscr{P}') \bmod S_N\bigr) \\
&= \mathcal{U}_0[x]\bigl((s,\mathscr{P}),(s',\mathscr{P}')\bigr). \tag{5.53}
\end{aligned}
\]
Therefore, $\mathcal{U}_0$ satisfies Eq. (5.46) and defines an update pre-rule. Since
\[
\begin{aligned}
(\Pi_*\mathcal{U}_0)[x]\bigl((s,\mathscr{P}),(s',\mathscr{P}') \bmod S_N\bigr) &= \mathcal{U}_0[x]\Bigl((s,\mathscr{P}),\Pi^{-1}\bigl((s',\mathscr{P}') \bmod S_N\bigr)\Bigr) \\
&= \sum_{(s'',\mathscr{P}'')\in\operatorname{orb}_{S_N}(s',\mathscr{P}')} \mathcal{U}_0[x]\bigl((s,\mathscr{P}),(s'',\mathscr{P}'')\bigr) \\
&= \sum_{(s'',\mathscr{P}'')\in\operatorname{orb}_{S_N}(s',\mathscr{P}')} \frac{1}{\bigl|\operatorname{orb}_{S_N}(s'',\mathscr{P}'')\bigr|}\,\mathcal{U}[x]\bigl((s,\mathscr{P}),(s'',\mathscr{P}'') \bmod S_N\bigr) \\
&= \sum_{(s'',\mathscr{P}'')\in\operatorname{orb}_{S_N}(s',\mathscr{P}')} \frac{1}{\bigl|\operatorname{orb}_{S_N}(s',\mathscr{P}')\bigr|}\,\mathcal{U}[x]\bigl((s,\mathscr{P}),(s',\mathscr{P}') \bmod S_N\bigr) \\
&= \mathcal{U}[x]\bigl((s,\mathscr{P}),(s',\mathscr{P}') \bmod S_N\bigr), \tag{5.54}
\end{aligned}
\]
it follows that $\Pi_*\mathcal{U}_0=\mathcal{U}$, which completes the proof.

Remark 10. The proof of Proposition 8 requires that
\[
E_0\in\mathcal{F}\bigl(S^N\times\mathcal{P}_N\bigr) \implies \Pi E_0\in\mathcal{F}(\mathcal{S}). \tag{5.55}
\]
In the case that $S^N\times\mathcal{P}_N$ is finite, the singletons generate the canonical (i.e. discrete) $\sigma$-algebra on $S^N\times\mathcal{P}_N$, and the image $\Pi(s',\mathscr{P}')=(s',\mathscr{P}') \bmod S_N$ is measurable for each $(s',\mathscr{P}')\in S^N\times\mathcal{P}_N$. In general, the image of a measurable set need not be measurable. However, the purpose of introducing an update pre-rule is to provide an alternative way to obtain an update rule. An update rule defines a Markov chain on the true space of the evolutionary process, $\mathcal{S}$; pulling this chain back to $S^N\times\mathcal{P}_N$ is not necessary.

An update pre-rule defines a Markov chain on $S^N\times\mathcal{P}_N$ whose kernel, $\kappa_0$, satisfies
\[
\kappa_0\bigl((s,\mathscr{P}),E_0\bigr) := \mathcal{U}_0\Bigl[F\bigl(u(s,\mathscr{P})\bigr)\Bigr]\bigl((s,\mathscr{P}),E_0\bigr) \tag{5.56}
\]
for each $(s,\mathscr{P})\in S^N\times\mathcal{P}_N$ and $E_0\in\mathcal{F}\bigl(S^N\times\mathcal{P}_N\bigr)$. Denote by $\Pi_*\kappa_0$ the kernel of the Markov chain defined by $\Pi_*\mathcal{U}_0$. The stationary distribution(s) of $\kappa_0$ can be "pushed forward" to stationary distribution(s) of $\Pi_*\kappa_0$ via the pushforward map of Eq. (5.49):

Proposition 9. If $\mu$ is a stationary distribution of the Markov chain defined by $\kappa_0$, then $\Pi_*\mu$ is a stationary distribution of the Markov chain defined by $\Pi_*\kappa_0$.

Proof. Suppose that $\mu$ is a stationary distribution of $\kappa_0$, i.e.
\[
\mu(E_0) = \int_{s\in S^N\times\mathcal{P}_N} \kappa_0(s,E_0)\,d\mu(s) \tag{5.57}
\]
for each $E_0\in\mathcal{F}\bigl(S^N\times\mathcal{P}_N\bigr)$. For each $E\in\mathcal{F}(\mathcal{S})$, it follows that
\[
\begin{aligned}
\int_{s \bmod S_N\in\mathcal{S}} (\Pi_*\kappa_0)(s \bmod S_N,E)\,d(\Pi_*\mu)(s \bmod S_N) &= \int_{s\in S^N\times\mathcal{P}_N} (\Pi_*\kappa_0)(s \bmod S_N,E)\,d\mu(s) \\
&= \int_{s\in S^N\times\mathcal{P}_N} \kappa_0\bigl(s,\Pi^{-1}E\bigr)\,d\mu(s) \\
&= \mu\bigl(\Pi^{-1}E\bigr) \\
&= (\Pi_*\mu)(E) \tag{5.58}
\end{aligned}
\]
by the change of variables formula and Eq. (5.57), which completes the proof.

Thus, the passage from an update pre-rule to an update rule via $\Pi_*$ is compatible with the steady states of the chains defined by $\kappa_0$ and $\Pi_*\kappa_0$, respectively.

Examples

We now give several classical examples of update pre-rules and rules:

Example 27 (Death-birth process). Suppose that $S$ is finite and that $\mathcal{P}_N$ is the finite subset of $\mathcal{P}^G_N$ consisting of the undirected, unweighted graphs on $N$ vertices (with no other restrictions; this set contains regular graphs, scale-free networks, etc.). In each step of a death-birth process, a player is selected uniformly at random from the population for death. The neighbors (determined by a graph, $\Gamma\in\mathcal{P}_N$) then compete to fill the vacancy: a neighbor, say player $j$, is chosen for reproduction with probability proportional to relative fitness, $x_j$. The offspring of this player inherits the strategy of the parent and fills the vacancy left by the deceased player. The population state (i.e. the graph) is left unchanged by this process. We define an update pre-rule for this process by giving transition probabilities from $S^N\times\mathcal{P}_N$ to itself when the $N-1$ surviving players in each round retain their labels. That is, if $(s,\Gamma)$ is the state of the process and player $i$ is chosen for death, $\Gamma$ remains the same and only the $i$th coordinate of $s$ is updated in order to obtain a new state, $(s',\Gamma)$.
Thus, for two states, $(s,\Gamma),(s',\Gamma')\in S^N\times\mathcal{P}_N$, it must be the case that $\Gamma=\Gamma'$ for there to be a nonzero probability of transitioning from $(s,\Gamma)$ to $(s',\Gamma')$. The probability of choosing player $i$ for death is $1/N$, and, if this player is chosen for death, a transition is possible only if $s_j=s'_j$ for each $j\neq i$. The probability that player $i$ is replaced by the offspring of a player using $s'_i$ is
\[
\frac{\sum_{j\neq i}\delta_{s_j,s'_i}\,\Gamma_{ji}x_j}{\sum_{j\neq i}\Gamma_{ji}x_j}, \tag{5.59}
\]
where $\delta_{s_j,s'_i}$ is $1$ if $s_j=s'_i$ and $0$ otherwise. Thus, for $x\in\mathbb{R}^N$, the transition probability from $(s,\Gamma)$ to $(s',\Gamma')$ is given by the update pre-rule, $\mathcal{U}_0$, defined by
\[
\mathcal{U}_0[x]\bigl((s,\Gamma),(s',\Gamma')\bigr) := \delta_{\Gamma,\Gamma'}\sum_{i=1}^{N}\Biggl(\prod_{j\neq i}\delta_{s_j,s'_j}\Biggr)\left(\frac{1}{N}\right)\left(\frac{\sum_{j\neq i}\delta_{s_j,s'_i}\,\Gamma_{ji}x_j}{\sum_{j\neq i}\Gamma_{ji}x_j}\right). \tag{5.60}
\]
$\mathcal{U}_0$ is indeed an update pre-rule since, for each $\pi\in S_N$,
\[
\begin{aligned}
\mathcal{U}_0[\pi x]\bigl((\pi s,\pi\Gamma),(\pi s',\pi\Gamma')\bigr) &= \delta_{\pi\Gamma,\pi\Gamma'}\sum_{i=1}^{N}\Biggl(\prod_{j\neq i}\delta_{s_{\pi(j)},s'_{\pi(j)}}\Biggr)\left(\frac{1}{N}\right)\left(\frac{\sum_{j\neq i}\delta_{s_{\pi(j)},s'_{\pi(i)}}\,\Gamma_{\pi(j)\pi(i)}x_{\pi(j)}}{\sum_{j\neq i}\Gamma_{\pi(j)\pi(i)}x_{\pi(j)}}\right) \\
&= \delta_{\Gamma,\Gamma'}\sum_{i=1}^{N}\Biggl(\prod_{j\neq\pi(i)}\delta_{s_j,s'_j}\Biggr)\left(\frac{1}{N}\right)\left(\frac{\sum_{j\neq\pi(i)}\delta_{s_j,s'_{\pi(i)}}\,\Gamma_{j\pi(i)}x_j}{\sum_{j\neq\pi(i)}\Gamma_{j\pi(i)}x_j}\right) \\
&= \delta_{\Gamma,\Gamma'}\sum_{i=1}^{N}\Biggl(\prod_{j\neq i}\delta_{s_j,s'_j}\Biggr)\left(\frac{1}{N}\right)\left(\frac{\sum_{j\neq i}\delta_{s_j,s'_i}\,\Gamma_{ji}x_j}{\sum_{j\neq i}\Gamma_{ji}x_j}\right) \\
&= \mathcal{U}_0[x]\bigl((s,\Gamma),(s',\Gamma')\bigr). \tag{5.61}
\end{aligned}
\]
This example verifies in detail the symmetry condition of an update pre-rule, Eq. (5.46). Calculations for other processes are similar.

Example 28 (Wright-Fisher process). In contrast to the Moran process of Example 18 and the death-birth process of Example 27, one could also consider a process in which the entire population is updated synchronously. For example, in the Wright-Fisher process (Ewens, 2004; Imhof and Nowak, 2006), the population is updated as follows: A player, say player $i$, is first selected for reproduction with probability proportional to fitness. The offspring of this player inherits the strategy of the parent with probability $1-\varepsilon_i$ and takes on a novel strategy uniformly at random with probability $\varepsilon_i$. The mutation rate of the offspring is inherited from the parent (so that the offspring's offspring will also mutate with probability $\varepsilon_i$).
This process is then repeated until there are $N$ new offspring, and these offspring constitute the new population. Thus, one update step of the Wright-Fisher process involves updating the entire population.

Let $S$ be finite and let $m$ be some finite subset of $[0,1]$. The population state space for this version of the Wright-Fisher process is $\mathcal{P}_N := m^N$. That is, a population state is an $N$-tuple of strategy-mutation rates, $\varepsilon$, with $\varepsilon_i$ the mutation rate for player $i$. We define an update pre-rule, $\mathcal{U}_0$, as follows: for $x\in\mathbb{R}^N$ and $(s,\varepsilon),(s',\varepsilon')\in S^N\times\mathcal{P}_N$, let
\[
\mathcal{U}_0[x]\bigl((s,\varepsilon),(s',\varepsilon')\bigr) = \prod_{i=1}^{N}\sum_{j=1}^{N}\delta_{\varepsilon'_i,\varepsilon_j}\left(\frac{x_j}{x_1+\cdots+x_N}\right)\left[\delta_{s'_i,s_j}\bigl(1-\varepsilon_j\bigr)+\varepsilon_j\left(\frac{1}{n}\right)\right]. \tag{5.62}
\]
For each $\pi,\tau\in S_N$, $x\in\mathbb{R}^N$, and $(s,\varepsilon),(s',\varepsilon')\in S^N\times\mathcal{P}_N$, it is readily verified that
\[
\mathcal{U}_0[\pi x]\bigl((\pi s,\pi\varepsilon),(\tau s',\tau\varepsilon')\bigr) = \mathcal{U}_0[x]\bigl((s,\varepsilon),(s',\varepsilon')\bigr). \tag{5.63}
\]
Therefore, the resulting update rule, $\mathcal{U} := \Pi_*\mathcal{U}_0$, satisfies
\[
\mathcal{U}[x]\bigl((s,\varepsilon) \bmod S_N,(s',\varepsilon') \bmod S_N\bigr) = \bigl|\operatorname{orb}_{S_N}(s',\varepsilon')\bigr|\prod_{i=1}^{N}\sum_{j=1}^{N}\delta_{\varepsilon'_i,\varepsilon_j}\left(\frac{x_j}{x_1+\cdots+x_N}\right)\left[\delta_{s'_i,s_j}\bigl(1-\varepsilon_j\bigr)+\varepsilon_j\left(\frac{1}{n}\right)\right]. \tag{5.64}
\]
Consider the simple case $\varepsilon=\varepsilon'=\mathbf{0}$ (meaning there are no strategy mutations). If $k'_r$ denotes the frequency of strategy $r$ in state $s'$ for $r=1,\dots,n$, then $\bigl|\operatorname{Stab}_{S_N}(s',\mathbf{0})\bigr|=k'_1!\cdots k'_n!$, which means that
\[
\begin{aligned}
\mathcal{U}[x]\bigl((s,\mathbf{0}) \bmod S_N,(s',\mathbf{0}) \bmod S_N\bigr) &= \bigl|\operatorname{orb}_{S_N}(s',\mathbf{0})\bigr|\prod_{i=1}^{N}\sum_{j=1}^{N}\left(\frac{x_j}{x_1+\cdots+x_N}\right)\delta_{s'_i,s_j} \\
&= \frac{|S_N|}{\bigl|\operatorname{Stab}_{S_N}(s',\mathbf{0})\bigr|}\prod_{i=1}^{N}\sum_{j=1}^{N}\left(\frac{x_j}{x_1+\cdots+x_N}\right)\delta_{s'_i,s_j} \\
&= \binom{N}{k'_1,\dots,k'_n}\prod_{i=1}^{N}\sum_{j=1}^{N}\left(\frac{x_j}{x_1+\cdots+x_N}\right)\delta_{s'_i,s_j}, \tag{5.65}
\end{aligned}
\]
where the third line was obtained using the orbit-stabilizer theorem (see Knapp, 2006). Eq. (5.65) is just the classical formula for the transition probabilities of the Wright-Fisher process based on multinomial sampling (Kingman, 1980; Durrett, 2002; Der et al., 2011).

Example 29 (Pairwise comparison process). In each of our examples so far, both $S$ and $\mathcal{P}_N$ have been finite. Since our theory allows for these sets to be measurable, we now give an example of an evolutionary process whose strategy space is continuous. Let $S=[0,K]$ for some $K>0$.
This interval might be the strategy space for a public goods game, for instance, with $K$ the maximum amount any one player may contribute to the public good. As in Examples 18 and 28, let $\mathcal{P}_N := m^N$ for some finite subset, $m$, of $[0,1]$; an element of $\mathcal{P}_N$ is just a profile of mutation rates, $\varepsilon$.

In each step of a pairwise comparison process, a player, say player $i$, is selected uniformly at random from the population to evaluate his or her strategy. Another player, say player $j$, is then chosen uniformly at random from the rest of the population as a model player. With probability $1-\varepsilon_i$, the focal player takes into account the model player and probabilistically updates his or her strategy as follows: if $x_i$ (resp. $x_j$) is the fitness of the focal (resp. model) player, and if $\beta\geq 0$ is the selection intensity, then the focal player imitates the model player with probability
\[
\frac{1}{1+e^{-\beta(x_j-x_i)}} \tag{5.66}
\]
and retains his or her current strategy with probability
\[
\frac{1}{1+e^{-\beta(x_i-x_j)}} \tag{5.67}
\]
(Szabó and Tőke, 1998). On the other hand, with probability $\varepsilon_i$ the focal player ignores the model player completely. In this case, the focal player "explores" and adopts a new strategy from the interval $[0,K]$ probabilistically according to a truncated Gaussian distribution centered at $s_i$ (the current strategy of the focal player). For some specified variance, $\sigma^2$, this truncated Gaussian distribution has for a density function
\[
\varphi_{s_i}(x) := \left(\int_{0}^{K}\exp\left(-\frac{(y-s_i)^2}{2\sigma^2}\right)dy\right)^{-1}\exp\left(-\frac{(x-s_i)^2}{2\sigma^2}\right). \tag{5.68}
\]
The parameter $\sigma$ may be interpreted as a measure of how venturesome a player is, with cautious exploration corresponding to small $\sigma$ and risky exploration corresponding to large $\sigma$. The density function, $\varphi_{s_i}$, defines a probability measure,
\[
\Phi_{s_i} : \mathcal{F}(S) \longrightarrow [0,1] : E \longmapsto \int_{E}\varphi_{s_i}(x)\,dx. \tag{5.69}
\]
If player $i$ ignores the model player, then he or she adopts a strategy from $E\in\mathcal{F}(S)$ with probability $\Phi_{s_i}(E)$. Thus, a player who explores is more likely to adopt a strategy close to his or her current strategy than one farther away.
For β ě 0, letgβ pxq :“ 11` e´βx (5.70)be the logistic function. (In terms of this function, the probability that a focal player with fitness xi imitatesa model player with fitness xj is gβ pxj ´ xiq.) We assemble these components into an update pre-rule, U0,as follows: For x P RN , ps, εq P SN ˆPN , and a measurable rectangle, E1ˆ¨ ¨ ¨ˆEN ˆE1 P F pSqN ˆF pPN q,letU0 rxs´ps, εq , E1 ˆ ¨ ¨ ¨ ˆ EN ˆ E1¯:“ δε`E1˘ Nÿi“1ˆ1N˙˜źj‰iδsj pEjq¸ÿj‰iˆ1N ´ 1˙#εiΦsi pEiq` p1´ εiq”δsj pEiq gβ pxj ´ xiq ` δsi pEiq gβ pxi ´ xjqı+, (5.71)and extend this definition additively to disjoint unions of measurable rectangles. For each x P RN andps, εq P SN ˆ PN , one can verify that U0 rxs´ps, εq ,´¯extends to a measure on SN ˆ PN by the Hahn-Kolmogorov theorem, which we also denote by U0 rxs´ps, εq ,´¯. It is readily verified that U0 is an updatepre-rule, so U0 extends to an update rule, Π˚U0, by Proposition 7. This example illustrates how the strategymutations might themselves depend on the strategies (as opposed to simply being uniform random variableson S as they were in the previous examples).In Examples 18, 28, and 29, we considered processes with heterogeneous mutation rates (meaning εidepends on i). In Examples 18 and 28, there is a nonzero probability of transitioning from a state withheterogeneous mutation rates to a state with homogeneous mutation rates. For instance, in the Wright-Fisher process of Example 28, if ps, εq is a state such that ε` ‰ ε`1 for some ` and `1, and if ps1, ε1q is a statesatisfying s11 “ s12 “ ¨ ¨ ¨ “ s1N “ s` and ε11 “ ε12 “ ¨ ¨ ¨ “ ε1N “ ε`, then there is a nonzero probability oftransitioning from ps, εq to ps1, ε1q provided x` ą 0. Thus, the population state, which is simply of profileof mutation rates, can change from generation to generation. In contrast, the population state of Example29 cannot change from generation to generation since strategies are imitated and mutation rates are notinherited. 
In other words, much of the biological meaning behind the quantities appearing in the populationstate are encoded in the dynamics of the process via the update rule.In a more formal setting, let κ be the transition kernel obtained from an update rule via Eq. (5.43).139From the projection Π2 : SN ˆ PN Ñ PN , we obtain a maprΠ2 : `SN ˆ PN˘ {SN ÝÑ PN{SN: ps,Pq mod SN ÞÝÑP mod SN . (5.72)rΠ2 gives us a pushforward map, ´rΠ2¯˚ : ∆ pSq Ñ ∆ pPN{SN q, which we use to formalize the intuitionbehind “static” and “dynamic” population states:Definition 21 (Static and dynamic population states). A population state, P, in a population state space,PN , is static relative to κ if, for each s P SN ,´rΠ2¯˚ κ´ ps,Pq mod SN ,´¯ “ δP mod SN , (5.73)where δP mod SN denotes the Dirac measure on ∆ pPN{SN q centered at P mod SN . Otherwise, if κ is notstatic relative to κ, we say that P is dynamic relative to κ.In Examples 27 and 29, every population state is static. In Examples 18 and 28, only the populationstates (i.e. mutation profiles) with ε1 “ ε2 “ ¨ ¨ ¨ “ εN are static.5.4 Stochastic selection processes with variable population sizeSuppose now that the population size is dynamic, and let N Ď t0, 1, 2, . . . u be the set of admissible populationsizes. As in §5.3, let S be the strategy space for each player. Instead of having a single population statespace, we now require the existence of a population state space, PN , for each N P N. The state space forsuch a process isS :“ğ`PN`S` ˆ P`˘ {S`, (5.74)whereŮ`PN`S` ˆ P`˘ {S` denotes the disjoint union of the spaces `S` ˆ P`˘ {S`. Instead of a singleaggregate payoff function and update rule, we now require that there be an aggregate payoff function,uN : SN ˆ PN Ñ RN , and an update rule, UN : RN Ñ K`SN ˆ PN , S˘, for each admissible populationsize, N P N. If the population currently has size N , then uN determines the payoffs to the players in theinteraction step, and UN updates the population (possibly to one of a different size). 
Of course, for each140pi P SN , x P RN , s P SN , P P PN , and E P F pSq, these functions must satisfyuN ppis, piPq “ piuN ps,Pq ; (5.75a)UN rpixs´ppis, piPq , E¯“ UN rxs´ps,Pq , E¯(5.75b)just as they did in Definitions 18 and 19, respectively.Finally, we have the definition of a stochastic selection process in its full generality:Definition 22 (Stochastic selection process). A stochastic selection process consists of the following com-ponents:(1) a set of admissible population sizes, N Ď t0, 1, 2, . . . u;(2) a measurable strategy space, S;(3) for each N P N, a population state space, PN ;(4) for each N P N, an aggregate payoff function, uN : SN ˆ PN Ñ RN ;(5) a payoff-to-fitness function, f : RÑ R;(6) for each N P N, an update rule, UN : RN Ñ K `SN ˆ PN , S˘, whereS :“ğ`PN`S` ˆ P`˘ {S`. (5.76)The components of a stochastic selection process produce a Markov chain on S whose kernel, κ, is definedas follows: for N P N, ps,Pq P SN ˆ PN , and E P S,κ´ps,Pq mod SN , E¯“ UN”F´uN ps,Pq¯ı´ps,Pq , E¯. (5.77)It is readily verified that κ is well defined (see Eq. (5.44)).Remark 11. The payoff-to-fitness function, f , in requirement (5) of a stochastic selection process, is notstrictly necessary. It could instead be absorbed into either the payoff function (which would then be a fitnessfunction) or the update rule (which would then be a family of transition kernels parametrized by payoffprofiles rather than by fitness profiles). We include this function as a part of a stochastic selection processfor three reasons: (1) having an aggregate payoff function instead of an aggregate fitness function allows141for a more straightforward comparison to the theory of stochastic games; (2) having an update rule be afamily of transition kernels parametrized by fitness simplifies its presentation (see §5.3.3); and (3) payoff-to-fitness functions are often explicitly mentioned in models of evolutionary games in the literature. 
Tuning theselection strength of a process, for instance, amounts to modifying the payoff-to-fitness function, so includingthis function in a stochastic selection process allows one to more explicitly separate the various componentsof a selection process.Remark 12. The notion of update pre-rule also makes sense for populations of variable size, although onemust define the symmetry condition, Eq. (5.46), with greater care. If N is the set of admissible populationsizes, then we require–for each N P N–a mapUN0 : RN ÝÑ K˜SN ˆ PN ,ğ`PNS` ˆ P`¸. (5.78)For each N P N, there is an action of SN on Ů`PN S` ˆ P` defined bypi ps,Pq “$’’&’’%ppis, piPq ps,Pq P SN ˆ PN ;ps,Pq ps,Pq R SN ˆ PN .(5.79)From the set of groups tSNuNPN, one can construct the free product,SN :“ ˚NPNSN , (5.80)which is just the analogue of disjoint union in the category of groups (see Knapp, 2006). Collectively, theactions of SN ˆ PN on Ů`PN S` ˆ P` defined by Eq. (5.79) (over all N P N) result in a (measurable) actionof SN onŮ`PN S` ˆ P`. For UN0(NPN to define a collection of update pre-rules, we require that for eachN P N, pi P SN , x P RN , ps,Pq P SN ˆ PN and E0 P F`Ů`PN S` ˆ P`˘, there exists τ P SN such thatU0 rpixs´ppis, piPq , E0¯“ U0 rxs´ps,Pq , τE0¯. (5.81)The reason we require τ to be in the free product, SN, instead of just in SN for some N P N, is that it neednot hold that E0 P F`SN ˆ PN˘for some N P N. E0 could be some complicated measurable set consistingof elements of SN ˆPN for several N P N, so we need some way of relabeling elements SN ˆPN for severalvalues of N P N simultaneously. Extending the action of SN on SN ˆ PN to Ů`PN S` ˆ P` via (5.79), in142order to form the free product, SN, via Eq. 
(5.80), accomplishes this task.

Analogues of Propositions 7, 8, and 9 also hold in this context, but we do not go through the details here; the proofs are essentially the same as they were in §5.3.3.

5.5 Applications

Here we give some example applications of having a formal framework for selection processes.

5.5.1 Evolutionary games

We use the term "selection process" instead of "evolutionary game" in order to emphasize that the update step is based on the principles of natural selection. Several types of adaptive processes appearing in the economics literature have been referred to as evolutionary games. Best-response dynamics (Ellison, 1993) is a procedure in which, at each round, the players update their strategies based on the best responses to their opponents in the previous round. This process is known to converge to a Nash equilibrium of the game. Hart and Mas-Colell (2000) define a similar process, called regret matching, that leads to a correlated equilibrium of the game. These processes can be phrased as stochastic games (along with appropriate strategies), but they are not stochastic selection processes, as we now illustrate with best-response dynamics:

Suppose $N = 2$. Let $S = \{A, B\}$ and let $u : S^2 \to \mathbb{R}^2$ be the payoff function for a game between two players. If best-response dynamics in this population defines a stochastic selection process, then there exists a population state space, $P_2$, and an update rule, $U$, such that the transition kernel of the resulting Markov chain, $\kappa_u$, satisfies

$$\kappa_u\big((s, P), E\big) = U[u(s)]\big((s, P), E\big) \qquad (5.82)$$

for each $(s, P) \in S^2 \times P_2$ and $E \in \mathcal{F}(S)$. In other words, $\kappa_u$ depends on $u$ only through the payoff profile, $u(s) \in \mathbb{R}^2$. Consider the two payoff functions, $u, v : S^2 \to \mathbb{R}^2$, defined by

$$\begin{pmatrix} u(A,A) & u(A,B) \\ u(B,A) & u(B,B) \end{pmatrix} = \begin{pmatrix} 2 & 2 \\ 1 & 1 \end{pmatrix}; \qquad (5.83a)$$

$$\begin{pmatrix} v(A,A) & v(A,B) \\ v(B,A) & v(B,B) \end{pmatrix} = \begin{pmatrix} 2 & 0 \\ 1 & 1 \end{pmatrix}. \qquad (5.83b)$$

Since $u(B,B) = v(B,B) = 1$, it follows that for $s = (B,B)$ and any $P \in P_2$,

$$\kappa_u\big(((B,B), P), E\big) = U[u(B,B)]\big(((B,B), P), E\big) = U[v(B,B)]\big(((B,B), P), E\big) = \kappa_v\big(((B,B), P), E\big). \qquad (5.84)$$

Thus, the strategy profile $(B,B)$ must be updated by best-response dynamics in the same way for both functions, $u$ and $v$. However, best-response dynamics actually results in different updates of $(B,B)$ for these two games: for $u$, the profile $(B,B)$ is updated to $(A,A)$; for $v$, the profile $(B,B)$ is updated to $(B,B)$ (it is already a Nash equilibrium). The key observation is that, while the Markov chain defined by a stochastic selection process depends on the aggregate payoff function, $u$, the update rule is independent of $u$. In contrast, the update step in best-response dynamics clearly depends on $u$. In a stochastic selection process, the only role of the aggregate payoff function is to determine the fitness profile (which is then passed to the update rule).

5.5.2 Symmetries of evolutionary processes

A homogeneous evolutionary process is one in which any two states consisting of a single A-mutant in a monomorphic B-population are equivalent (McAvoy and Hauert, 2015a). More specifically, suppose that a population state consists of a graph and a profile of mutation rates; that is, $P_N = P_N^G \times m^N$ for some finite subset, $m$, of $[0,1]$. If $s, s' \in S^N$, and if $\pi \in S_N$ satisfies $\pi\Gamma = \Gamma$ and $\pi\varepsilon = \varepsilon$ for some $\Gamma \in P_N^G$ and $\varepsilon \in m^N$, then the states $(s, (\Gamma, \varepsilon))$ and $(\pi s, (\Gamma, \varepsilon))$ are evolutionarily equivalent in the sense that $\pi$ induces an automorphism of the Markov chain on $S^N \times P_N$ that sends $(s, (\Gamma, \varepsilon))$ to $(\pi s, (\Gamma, \varepsilon))$. This result is a special case of an observation that is completely obvious in the context of stochastic selection processes: if $\pi \in \mathrm{Aut}(P)$ for some $P$ in a population state space, $P_N$, then the representatives $(s, P)$ and $(\pi s, P) = (\pi s, \pi P)$ define exactly the same state in $S$.
Therefore, working on the true state space of anevolutionary process, S, helps to elucidate structural symmetries that are not as clear when working with aMarkov chain on SN ˆ PN that is defined by an update pre-rule.1445.6 DiscussionAn update rule in the classical sense, vaguely speaking, generally consists of information about births anddeaths (or imitation). Choosing to represent the strategies of the players in the population as an N -tuple,s P SN , is just a mathematical convenience; the update rule is independent of how the players are labeled.We defined the notion of update pre-rule (Definition 20) in order to relate stochastic selection processes tothe way in which evolutionary processes are frequently modeled–as Markov chains on SN (or, more generally,on SN ˆ PN ). In many cases, an update pre-rule is simpler to write down explicitly than an update rulesince one can choose a convenient enumeration of the players in each time step. We showed that an updatepre-rule can always be “pushed forward” to an update rule, so it is sufficient to give an update pre-rule inplace of an update rule in the definition of stochastic selection process. However, an update rule is the truemathematical formalization of an evolutionary update in this context since it is independent of the labelingof the players.Stochastic selection processes encompass existing models of selection such as the evolutionary gameMarkov chain of Wage (2010) and the evolutionary Markov chain of Allen and Tarnita (2012). Wage (2010)considers a Markov chain arising from probability distributions over a collection of inheritance rules. Aninheritance rule is a map, I : t1, . . . , Nu Ñ t1, . . . , N,mu, that designates the source of a player’s strategy:if I piq “ j ‰ m, then player i inherits his or her strategy from player j; if I piq “ m, then player i’s strategyis the result of a random mutation (assumed to be uniform over the strategy set). 
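An inheritance rule of this kind is simple to apply in code. The sketch below is mine, not code from Wage (2010); it is 0-indexed and uses an illustrative `MUT` marker for the mutation symbol $m$.

```python
import random

MUT = "m"  # the distinguished mutation symbol in an inheritance rule

def apply_inheritance_rule(rule, strategies, strategy_set, rng):
    """Apply an inheritance rule I : {0,...,N-1} -> {0,...,N-1} u {MUT}.

    Player i inherits the strategy of player rule[i]; if rule[i] == MUT,
    player i's new strategy is instead drawn uniformly at random from
    the strategy set.
    """
    return [rng.choice(strategy_set) if rule[i] == MUT else strategies[rule[i]]
            for i in range(len(strategies))]

rng = random.Random(2)
# Players 0 and 1 copy player 0; player 2's strategy mutates uniformly.
new = apply_inheritance_rule([0, 0, MUT], ["A", "B", "A"], ["A", "B"], rng)
```

A distribution over such rules (together with mutation) then induces the Markov chain on strategy profiles that Wage (2010) studies.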
Similarly, Allen andTarnita (2012) model evolution in populations with fixed size and structure using replacement events. Areplacement event is a pair, pR,αq, consisting of a collection, R Ď t1, . . . , Nu, of players who are replacedand a rule, α : R Ñ t1, . . . , Nu, indicating the parent of each offspring involved in the replacement. Theseframeworks provide good models for many classical evolutionary processes, but they do not account forgenetic processes with crossover, for instance. Moreover, one could imagine a cultural process in which aplayer updates his or her strategy based on some complicated synthesis of many strategies in the population.Our framework generalizes these models, taking into account arbitrary strategy spaces, payoff functions,population structures, and update rules.In the definition of a population state space (Definition 16), we require a measurable space, PN , alongwith a measurable action of SN on PN . Naturally, this setup raises the question of what types of SN -actionsone can put on PN while still retaining the desired properties of an evolutionary process. If one were totake an update rule, U, and then arbitrarily change the action of SN on PN , then U need not remain anupdate rule under this new action. For example, let Γ0 P PGN be the Frucht graph, which is an undirected,145unweighted, regular graph with N “ 12 vertices and no nontrivial symmetries (Frucht, 1939). In other words,if Γ0pipiqpipjq “ Γ0ij for each i, j “ 1, . . . , 12, then pi “ id. Instead of the standard action of SN on PGN , one couldinstead declare that SN acts trivially on PGN ; in particular, pi‹Γ “ Γ for each pi P SN and Γ P PGN . If U is theupdate rule for the death-birth process (see Example 27), then it follows that the representatives`s,Γ0˘and`pis, pi ‹ Γ0˘ “ `pis,Γ0˘ define the same point in the state space, S. 
For the (frequency-dependent) SnowdriftGame, all 12 states consisting of a single cooperator in a population of defectors give rise to different fixationprobabilities for cooperators (see McAvoy and Hauert, 2015a). Therefore, it cannot be the case that Udefines a Markov chain on S via Eq. (5.43); in particular, U is no longer a well-defined update rule underthe new action, ‹, of SN on PN .The dynamics of a stochastic selection process, which are obtained via its update rule, encode much ofthe biological meaning of the components that constitute the population state space. In both of Examples28 and 29, the population state space was PN :“ mN for some finite subset, m, of r0, 1s. On the other hand,the interpretations of these mutation rates were completely different in these two processes: in Example 28,the mutation applied to the offspring of reproducing players; in Example 29, the mutation was interpretedas “exploration” and applied to a player who was chosen to update his or her strategy. One could evenconsider different implementations of mutation rates in the same process: in a genetic process based onreproduction, a player’s strategy-mutation rate may be inherited from the parent (as in Examples 18 and28), or, alternatively, it may be determined by a player’s spatial location (see McAvoy and Hauert, 2015a).These details are encoded entirely in the update rule.Although the framework we present here is clearly aimed at evolutionary games used to describe naturalselection, related processes that are not technically “games” may also constitute stochastic selection pro-cesses. Evolutionary algorithms, for example, form an important subclass of stochastic selection processes.These algorithms seek to apply the principles of natural selection to solve search and optimization problems(Ba¨ck, 1996). 
Evolutionary algorithms typically do not have population state spaces, which, in our context,means that PN can be taken to be a singleton equipped with the trivial action of SN . A popular typeof evolutionary algorithm, known as a genetic algorithm, involves representing the elements of the searchspace, i.e. the genomes in S, as sequences of binary digits. Each genome is then assigned a fitness based onits viability as a solution to the problem at hand. (Unlike in biological populations, the fitness landscape,although complex, is inherently static and does not depend on the other members of the population.) Theupdate step, which is commonly designed to mimic sexual reproduction in nature, involves a combination ofselection, crossover, and mutation. A population of genomes is then repeatedly updated until a sufficiently fit146genome appears. Despite the fact that biological reproduction generally involves either one (asexual) or two(sexual) parents, evolutionary algorithms have been simulated using many parents (Chambers, 1998). Othercomponents of the update step in some algorithms, such as stochastic universal sampling (Baker, 1987),elitism (Baluja and Caruana, 1995), and tournament selection (Poli, 2005), are all readily incorporated intoour model of stochastic selection processes.In Example 29, we saw an evolutionary process with an uncountably infinite state space. A state spaceof this sort arises naturally in the public goods game, for instance, where there is a continuous range ofinvestment levels. This example also illustrates the more complicated ways in which strategy mutationscan be incorporated into an evolutionary process. If a player has a strategy, x P r0,Ks for some K ą 0,then it may be the case that this player is more likely to “explore” strategies close to x than he or she isto switch to strategies farther away. 
A truncated Gaussian random variable on r0,Ks, whose variance is ameasure of how venturesome a player is, captures this type of strategy exploration and is easily incorporatedinto the update rule of an evolutionary process. This type of mutation has appeared in the context ofadaptive dynamics (Doebeli et al., 2004) and, more recently, in a study of stochastic evolutionary branching(Wakano and Iwasa, 2012), but it has been largely ignored elsewhere in the literature on evolutionary gametheory, where strategy mutations typically involve switching between two strategies or else are governed bya uniform random variable over the strategy space. Further studies of the dynamics of processes with thesebiologically-relevant mutations are certainly warranted.Our framework makes no assumptions on the cardinality of S and PN ; all that is required is that thesespaces be measurable (and that PN be equipped with an action of SN ). Markov chains on continuousstate spaces have unique stationary distributions under certain circumstances (see Durrett, 2009), but, inthe generality of this framework, it need not be the case that a stationary distribution is unique. Evenif S is finite and there are nonzero strategy-mutation rates, the spatial structure of the population mightbe disconnected, resulting in multiple stationary distributions. Particular instances of stochastic selectionprocesses might have the property that nonzero strategy-mutation rates imply that the Markov chain definedby the process is irreducible (Fudenberg and Imhof, 2006, 2008; Allen and Tarnita, 2012) or at least has aunique stationary distribution (McAvoy, 2015a), but these phenomena need not hold in general. 
Our goalhere was not to study the dynamics of any particular subclass of stochastic selection processes, but ratherto formalize what these processes are from a mathematical viewpoint.Our general theory of stochastic selection processes provides a mathematical foundation for a broadclass of processes used to describe evolution by means of natural selection in finite populations. Stochastic147selection processes also provide a mathematical framework for processes with variable population size, atopic that has received surprisingly little attention in the literature. Although many biological interactionshave been modeled using classical games, the differences between stochastic games and stochastic selectionprocesses illustrate a fundamental distinction between classical and evolutionary game theory. There is still alot to discover about the dynamics of selection processes in finite populations (especially those with variablepopulation size), and our hope is that this framework elucidates the roles of the components of processesbased on natural selection and advances the effort to transform evolution into a mathematical theory.148Chapter 6Autocratic strategies for repeatedgames with simultaneous movesThe recent discovery of zero-determinant strategies for the iterated Prisoner’s Dilemma sparked a surgeof interest in the surprising fact that a player can exert unilateral control over iterated interactions.These remarkable strategies, however, are known to exist only in games in which players choose betweentwo alternative actions such as “cooperate” and “defect.” Here we introduce a broader class of autocraticstrategies by extending zero-determinant strategies to iterated games with more general action spaces.We use the continuous Donation Game as an example, which represents an instance of the Prisoner’sDilemma that intuitively extends to a continuous range of cooperation levels. 
Surprisingly, despite the fact that the opponent has infinitely many donation levels from which to choose, a player can devise an autocratic strategy to enforce a linear relationship between his or her payoff and that of the opponent, even when restricting his or her actions to merely two discrete levels of cooperation. In particular, a player can use such a strategy to extort an unfair share of the payoffs from the opponent. Therefore, although the action space of the continuous Donation Game dwarfs that of the classical Prisoner's Dilemma, players can still devise relatively simple autocratic and, in particular, extortionate strategies.

6.1 Introduction

Game theory provides a powerful framework to study interactions between individuals ("players"). Among the most interesting types of interactions are social dilemmas, which result from conflicts of interest between individuals and groups (Dawes, 1980; Van Lange et al., 2013). Perhaps the simplest and most well-studied model of a social dilemma is the Prisoner's Dilemma (Axelrod and Hamilton, 1981). A two-player game with actions, $C$ ("cooperate") and $D$ ("defect"), and payoff matrix (rows indicate the focal player's action, columns the opponent's),

$$\begin{pmatrix} R & S \\ T & P \end{pmatrix}, \qquad (6.1)$$

is said to be a Prisoner's Dilemma if $T > R > P > S$ (Axelrod, 1984). In a Prisoner's Dilemma, defection is the dominant action, yet the players can realize higher payoffs from mutual cooperation ($R$) than they can from mutual defection ($P$), resulting in a conflict of interest between the individual and the pair, which characterizes social dilemmas. Thus, in a one-shot game (i.e. a single encounter), two opponents have an incentive to defect against one another, but the outcome of mutual defection (the unique Nash equilibrium) is suboptimal for both players.

One proposed mechanism for the emergence of cooperation in games such as the Prisoner's Dilemma is direct reciprocity (Trivers, 1971; Nowak, 2006b), which entails repeated encounters between players and allows for reciprocation of cooperative behaviors.
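(For concreteness, the one-shot dilemma just described can be checked mechanically. The sketch below is illustrative; the values $R, S, T, P = 3, 0, 5, 1$ are the usual textbook choice, not taken from the text.)

```python
def best_response(payoffs, opponent_action):
    """Best response in the one-shot game; payoffs[(x, y)] is the focal
    player's payoff for playing x against an opponent playing y."""
    return max("CD", key=lambda x: payoffs[(x, opponent_action)])

# A Prisoner's Dilemma with T > R > P > S, here R, S, T, P = 3, 0, 5, 1.
pd = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}
# Defection is the best response to both C and D, so (D, D) is the unique
# Nash equilibrium, even though both players prefer (C, C) to (D, D).
```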
In an iterated game, a player might forgo the temptationto defect in the present due to the threat of future retaliation–“the shadow of the future”–or the possibilityof future rewards for cooperating (Axelrod, 1984; Delton et al., 2011), phenomena for which there is boththeoretical and empirical support (Heide and Miner, 1992; Dal Bo´, 2005). One example of a strategy for theiterated game is simply to copy the action of the opponent in the previous round (“tit-for-tat”) (Axelrod,1984). Alternatively, a player might choose to retain his or her action from the previous round if and onlyif the most recent payoff was R or T (“win-stay, lose-shift”) (Nowak and Sigmund, 1993). These examplesare among the simplest and most successful strategies for the iterated Prisoner’s Dilemma (Nowak, 2006a).In a landmark paper, Press and Dyson (2012) deduce the existence of strategies (known as zero-determinantstrategies) that allow a single player to exert much more control over this game than previously thoughtpossible. Since their introduction, these strategies have been extended to cover multiplayer social dilemmas(Pan et al., 2015; Hilbe et al., 2014b) and temporally-discounted games (Hilbe et al., 2015a). Moreover, zero-determinant strategies have been studied in the context of evolutionary game theory (Hilbe et al., 2013a;Adami and Hintze, 2013; Stewart and Plotkin, 2013; Hilbe et al., 2015b), adaptive dynamics (Hilbe et al.,2013b), and human behavioral experiments (Hilbe et al., 2014a). In each of these studies, the game is as-sumed to have only two actions: cooperate and defect. In fact, the qualifier of “zero-determinant” strategies150actually reflects this assumption because they force a matrix determinant to vanish for action spaces withonly two options. We show here that this assumption is unnecessary.Suppose that players X and Y interact repeatedly with no limit on the number of interactions. 
For games with two actions, $C$ and $D$, a memory-one strategy for player $X$ is a vector, $p = (p_{CC}, p_{CD}, p_{DC}, p_{DD})^{\mathsf{T}}$, where $p_{xy}$ is the probability that $X$ cooperates following an outcome in which $X$ plays $x$ and $Y$ plays $y$. Let $s_X = (R, S, T, P)^{\mathsf{T}}$ and $s_Y = (R, T, S, P)^{\mathsf{T}}$ be the payoff vectors for players $X$ and $Y$, respectively, and let $\alpha$, $\beta$, and $\gamma$ be fixed constants. Press and Dyson (2012) show that if there is a constant, $\phi$, for which

$$\widetilde{p} := \begin{pmatrix} p_{CC} - 1 \\ p_{CD} - 1 \\ p_{DC} \\ p_{DD} \end{pmatrix} = \phi\left(\alpha s_X + \beta s_Y + \gamma\right), \qquad (6.2)$$

then $X$ can unilaterally enforce the linear relationship $\alpha\pi_X + \beta\pi_Y + \gamma = 0$ on the average payoffs, $\pi_X$ and $\pi_Y$, by playing $p$. A strategy, $p$, that satisfies Eq. (6.2) is known as a "zero-determinant" strategy due to the fact that $\widetilde{p}$ causes a particular matrix determinant to vanish (Press and Dyson, 2012). However, what is important about these strategies is not that they cause some matrix determinant to vanish, but rather that they unilaterally enforce a linear relationship on expected payoffs. Therefore, we refer to these strategies and their generalization to arbitrary action spaces as "autocratic strategies." Of particular interest are extortionate strategies, which ensure that a player receives an unfair share of the payoffs exceeding the payoff at the Nash equilibrium (Stewart and Plotkin, 2012). Hence, if $P$ is the payoff for mutual defection in the Prisoner's Dilemma, then $p$ is an extortionate strategy for player $X$ if $p$ unilaterally enforces the equation $\pi_X - P = \chi(\pi_Y - P)$ for some extortion factor, $\chi \geq 1$.

Apart from finite action sets, perhaps the most common type of action space is an interval of the form $[0, K]$ for some $K > 0$.
An element x P r0,Ks represents a player’s investment or cooperation level (up tosome maximum, K), such as the amount a player invests in a public good (Doebeli and Hauert, 2005); thevolume of blood one vampire bat donates to another (Wilkinson, 1984); the amount of resources used bymicrobes to produce siderophores (West and Buckling, 2003); or the effort expended in intraspecies grooming(Hemelrijk, 1994; Akinyi et al., 2013). It is important to note that games with continuous action spacescan yield qualitatively different results than their discrete counterparts. For example, the strategy “raise-the-stakes” initially offers a small investment in Prisoner’s Dilemma interactions and subsequently raises thecontribution in discrete increments if the opponent matches the investment (Roberts and Sherratt, 1998).151However, in a continuous action space, raise-the-stakes evolves into defection due to the fact that anotherstrategy can be arbitrarily close–in terms of the initial investment and subsequent increases in contribution–yet exhibit qualitatively different behavior (Killingback and Doebeli, 1999). In particular, raise-the-stakessucceeds in a discrete strategy space but fails in a continuous one.Akin (2015) calls the vector, rp, of Eq. (6.2) a Press-Dyson vector. For continuous action spaces, thepayoff vectors, sX and sY , must be replaced by payoff functions, uX px, yq and uY px, yq. That is, ui px, yqdenotes the payoff to player i when X plays x and Y plays y. The analogue of the linear combinationαsX ` βsY ` γ is the function αuX ` βuY ` γ. Here, we formally define a Press-Dyson function thatextends the Press-Dyson vector to iterated games with arbitrary action spaces. This extension allows one todeduce the existence of strategies that unilaterally enforce linear relationships on the payoffs in more generaliterated games. In particular, autocratic (or zero-determinant strategies) are not peculiar to games with two(or even finitely many) actions. 
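Before passing to general action spaces, the two-action enforcement claim of Eq. (6.2) is easy to verify numerically. The sketch below is mine (variable names and parameter values are illustrative, not from the text): it builds the Markov chain over outcomes for a pair of memory-one strategies, computes long-run payoffs from the stationary distribution, and checks an extortionate relation $\pi_X - P = \chi(\pi_Y - P)$ against an arbitrary opponent.

```python
def long_run_payoffs(p, q, sX, sY, iters=20000):
    """Average payoffs when X plays memory-one strategy p and Y plays q.

    Outcomes are ordered (CC, CD, DC, DD) from X's perspective; q is
    specified from Y's perspective, so its CD and DC entries are swapped
    when building the transition matrix. The stationary distribution of
    the resulting 4-state Markov chain is found by power iteration
    (the chains used here are irreducible and aperiodic).
    """
    qs = [q[0], q[2], q[1], q[3]]
    M = [[p[k] * qs[k], p[k] * (1 - qs[k]),
          (1 - p[k]) * qs[k], (1 - p[k]) * (1 - qs[k])] for k in range(4)]
    v = [0.25] * 4
    for _ in range(iters):
        v = [sum(v[i] * M[i][j] for i in range(4)) for j in range(4)]
    return (sum(a * b for a, b in zip(v, sX)),
            sum(a * b for a, b in zip(v, sY)))

# Standard Prisoner's Dilemma values R, S, T, P = 3, 0, 5, 1.
sX, sY = [3, 0, 5, 1], [3, 5, 0, 1]
# Eq. (6.2) with alpha = 1, beta = -chi = -2, gamma = (chi - 1) * P = 1
# (gamma added coordinate-wise) and phi = 0.1 gives
# p_tilde = (-0.2, -0.9, 0.6, 0.0), i.e. the extortionate strategy below,
# which should enforce pi_X - P = 2 * (pi_Y - P) against any opponent.
p = [0.8, 0.1, 0.6, 0.0]
piX, piY = long_run_payoffs(p, [0.5, 0.4, 0.7, 0.2], sX, sY)
```

Repeating the calculation with other memory-one opponents leaves the enforced relation unchanged, which is precisely the unilateral control described above.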
Moreover, we show that, under mild conditions, autocratic strategies can bechosen based on only two elements in the action space. That is, player X can enforce a linear relationshipon expected payoffs by choosing a memory-one strategy that plays just two actions, despite the fact that theopponent may have an infinite number of actions from which to choose. We give examples of such autocraticstrategies in the continuous Donation Game, which represents an instance of the Prisoner’s Dilemma butwith an extended, continuous action space.6.2 Autocratic strategiesConsider a two-player iterated game with a general action space and payoff functions, uX px, yq and uY px, yq,for players X and Y , respectively. Players X and Y interact repeatedly (infinitely many times), derivinga payoff at each round based on uX and uY . We treat games with temporally-discounted payoffs, whichmeans that for some discounting factor λ with 0 ă λ ă 1, a payoff of 1 at time t ` τ is treated the sameas a payoff of λτ at time t (see Fudenberg and Tirole, 1991). Alternatively, one may interpret this gameas having a finite number of rounds, where in any given round λ denotes the probability of another round(Nowak, 2006b), which results in an expected game length of 1{ p1´ λq rounds.Suppose that, for each t ě 0, xt and yt are the actions used by players X and Y at time t. Then,irrespective of the interpretation of the discounting factor, the average payoff to player X ispiX “ p1´ λq8ÿt“0λtuX pxt, ytq . (6.3)152Figure 6.1: Reactive, memory-one strategies, σX rx, ys psq “ σX rys psq, for a game whose action space isthe interval r0,Ks. In (A), player X uses Y ’s action in the previous round to determine the probabilitieswith which she plays 0 and K in the next round. As Y ’s previous action (investment level), y, increases,so does the probability that X uses K in the subsequent round. Since X plays only these two actions, thisstrategy is called a two-point strategy. 
In (B), player X uses Y ’s action in the previous round to determinethe probability density function she uses to devise her next action. In contrast to (A), which depicts astrategy concentrated on just two actions, the strategy depicted in (B) is concentrated on an infinite rangeof actions in r0,Ks. As Y ’s previous action, y, increases, the mean of the density function governing X’s nextaction increases, which indicates that X is more willing reciprocate by increasing her investment (action) inresponse to Y increasing his investment.The payoff to player Y , piY , is obtained by replacing uX with uY in Eq. (6.3). If the strategies of X and Yare stochastic, then the payoffs are random variables with expectations, piX and piY (see §6.5). Of particularinterest are memory-one strategies, which are probabilistic strategies that depend on only the most recentoutcome of the game. If σX is a memory-one strategy for player X, then we denote by σX rx, ys psq theprobability that X plays s in the encounter after X plays x and Y plays y (see Fig. 6.1).The proofs of the existence of zero-determinant strategies (both in games with and without discounting)rely heavily on the fact that the action space is finite (Press and Dyson, 2012; Hilbe et al., 2015a; Akin, 2015).In particular, it remains unclear whether zero-determinant strategies are consequences of the finiteness of theaction space or instances of a more general concept. Here we introduce autocratic strategies as an extensionof zero-determinant strategies to discounted games with arbitrary (even uncountably infinite) action spaces.The traditional, undiscounted case is recovered in the limit λÑ 1.Theorem 9 (Autocratic strategies in arbitrary action spaces). Suppose that σX rx, ys is a memory-one153strategy for player X and let σ0X be player X’s initial action. 
If, for some bounded function, ψ, the equation

α u_X(x, y) + β u_Y(x, y) + γ = ψ(x) − λ ∫_{s∈S_X} ψ(s) dσ_X[x, y](s) − (1 − λ) ∫_{s∈S_X} ψ(s) dσ⁰_X(s) (6.4)

holds for each x ∈ S_X and y ∈ S_Y, then σ⁰_X and σ_X[x, y] together enforce the linear payoff relationship

α π_X + β π_Y + γ = 0 (6.5)

for any strategy of player Y. In other words, the pair (σ⁰_X, σ_X[x, y]) is an autocratic strategy.

Note that the initial action in Eq. (6.4), σ⁰_X, becomes irrelevant without discounting (λ → 1). The function ψ may be interpreted as a "scaling function" that is used to ensure σ_X[x, y] is a feasible memory-one strategy; that is, ψ plays the same role as the scalar φ in Eq. (6.2), which is chosen so that the entries of p are all between 0 and 1. In fact, Eq. (6.4) is the general-action-space analogue of Eq. (6.2) (see §6.7.1 for more details). We call ψ(x) − λ ∫_{s∈S_X} ψ(s) dσ_X[x, y](s) − (1 − λ) ∫_{s∈S_X} ψ(s) dσ⁰_X(s) a Press-Dyson function, which extends the Press-Dyson vector of Eq. (6.2) to arbitrary action spaces. Unlike in the case of an action space with two options ("cooperate" and "defect", for instance), for general action spaces (and, in fact, already for spaces with three options), autocratic strategies are defined only implicitly via Eq. (6.4). For each x and y, the integral, ∫_{s∈S_X} ψ(s) dσ_X[x, y](s), may be thought of as the weighted average (expectation) of ψ with respect to σ_X[x, y]. Since the integral is taken against σ_X[x, y], in general one cannot solve Eq. (6.4) explicitly for σ_X[x, y], so it is typically not possible to directly specify all pairs (σ⁰_X, σ_X[x, y]) that enforce the linear payoff relation of Eq. (6.5).

Interestingly, under mild conditions, σ_X can be chosen to be a remarkably simple "two-point" strategy, concentrated on just two actions, s_1 and s_2 (see Corollary 7 in §6.6). Player X can enforce Eq. (6.5) by playing either s_1 or s_2 in each round, with probabilities determined by the outcome of the previous round (see Fig. 6.1(A)).
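For a strategy concentrated on two points, the integral in Eq. (6.4) collapses to the weighted average ψ(s_1)(1 − p) + ψ(s_2) p, so the condition becomes an explicit linear equation for the reaction probability p(x, y). The sketch below solves that equation on a grid of (x, y) pairs and confirms that the resulting probabilities are feasible. The specific payoff functions and parameter values (a continuous Donation Game with b(s) = 5(1 − e^(−2s)), c(s) = 2s, extortion with χ = 2, λ = 0.95) are illustrative assumptions, not part of the theorem:

```python
import numpy as np

K, chi, lam, p0 = 2.0, 2.0, 0.95, 0.0
b = lambda s: 5 * (1 - np.exp(-2 * s))   # assumed benefit function
c = lambda s: 2 * s                       # assumed cost function
uX = lambda x, y: b(y) - c(x)
uY = lambda x, y: b(x) - c(y)
alpha, beta, gamma = 1.0, -chi, 0.0       # enforce pi_X = chi * pi_Y
psi = lambda s: -chi * b(s) - c(s)        # scaling function, evaluated on {0, K}
phi = 1.0 / (psi(0.0) - psi(K))

xs = np.array([0.0, K])[:, None]          # two-point support for X
ys = np.linspace(0, K, 101)[None, :]      # grid over Y's continuous actions
lhs = alpha * uX(xs, ys) + beta * uY(xs, ys) + gamma

# Solve Eq. (6.4) for the two-point reaction probability p(x, y)
p = (phi * (lhs - psi(xs) + psi(0.0)) - (1 - lam) * p0) / lam
assert np.all((0 <= p) & (p <= 1))        # a feasible memory-one strategy

# Recheck Eq. (6.4) with the two-point expectation of psi
rhs = (psi(xs)
       - lam * (psi(0.0) * (1 - p) + psi(K) * p)
       - (1 - lam) * (psi(0.0) * (1 - p0) + psi(K) * p0))
assert np.allclose(lhs, rhs)
```

For this choice of ψ, the solved p(x, y) turns out not to depend on x at all, i.e. the strategy is reactive in the sense of Fig. 6.1(A).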
Thus, a strategy of this form uses the (memory-one) history of previous play only to adjust the relative weights placed on s_1 and s_2, while s_1 and s_2 themselves remain unchanged. Unlike in the case of arbitrary σ_X, for fixed ψ, s_1, and s_2, it is possible to explicitly solve for all autocratic, two-point strategies on s_1 and s_2 satisfying Eq. (6.4) (see Remark 15 in §6.6). In a two-action game, every memory-one strategy is concentrated on two points, which explains why games like the classical Prisoner's Dilemma fail to capture the implicit nature of autocratic strategies.

6.3 Continuous Donation Game

In the classical Donation Game, cooperators pay a cost, c, to provide a benefit, b, to the opponent (Sigmund, 2010). Defectors make no donations and pay no costs. The payoff matrix for this game is

    ( R  S )   ( b − c   −c )
    ( T  P ) = ( b        0 ),   (6.6)

with rows and columns ordered (C, D). For b > c > 0 a social dilemma arises because the payoff for mutual defection (the Nash equilibrium) is strictly less than the payoff for mutual cooperation, yet both players are tempted to shirk donations, which represents an instance of the Prisoner's Dilemma. In the iterated (and undiscounted) version of the Donation Game, the main result of Press and Dyson (2012) implies that a memory-one strategy for player X, p = (p_CC, p_CD, p_DC, p_DD)^T, enforces π_X − κ = χ(π_Y − κ) for some κ and χ ≥ 1 whenever there exists a scalar, φ, for which

p_CC = 1 − φ(χ − 1)(b − c − κ); (6.7a)
p_CD = 1 − φ(χb + c − (χ − 1)κ); (6.7b)
p_DC = φ(b + χc + (χ − 1)κ); (6.7c)
p_DD = φ(χ − 1)κ. (6.7d)

The term κ is the baseline payoff and is interpreted as the payoff of p against itself (Hilbe et al., 2014a). For example, if κ = 0 (the payoff for mutual defection) and χ ≥ 1 is the extortion factor, then Eq. (6.7) defines an extortionate strategy, which unilaterally enforces π_X = χπ_Y as long as φ is sufficiently small. In this sense, φ acts as a scaling factor to ensure each coordinate of p falls between 0 and 1.

Instead of having discrete "levels" of cooperation (i.e.
cooperate or defect), in the continuous Donation Game a player may choose a cooperation level, s ∈ [0, K], with K > 0 indicating the maximum level of cooperation. The costs and benefits associated with s, denoted by c(s) and b(s), respectively, are nondecreasing functions of s and, in analogy to the discrete case, satisfy b(s) > c(s) for s > 0 and c(0) = b(0) = 0 (Killingback et al., 1999; Wahl and Nowak, 1999a,b; Killingback and Doebeli, 2002; Doebeli et al., 2004). The payoff matrix, Eq. (6.6), is replaced by payoff functions, with the payoffs to players X and Y for playing x against y being u_X(x, y) := b(y) − c(x) and u_Y(x, y) = u_X(y, x) = b(x) − c(y), respectively (i.e. the game is symmetric). For this natural extension of the classical Donation Game, we show the existence of autocratic and, in particular, extortionate strategies that play only x = 0 and x = K and ignore all other cooperation levels.

6.3.1 Two-point autocratic strategies

For the continuous Donation Game we can again ask the question: which autocratic strategies unilaterally enforce the relation π_X − κ = χ(π_Y − κ) for fixed χ and κ? Using Theorem 9, we show that for appropriate κ, χ, and λ, player X can enforce this equation by playing only two actions: x = 0 (defect) and x = K (fully cooperate). Conditioned on the fact that X plays only 0 and K, a memory-one strategy for player X is defined by a reaction function, p(x, y), which denotes the probability that X plays K (i.e. fully cooperates) following an outcome in which X plays x ∈ {0, K} and Y plays y ∈ [0, K]; 1 − p(x, y) is the probability that X plays 0 (i.e. defects). Moreover, if δ_s denotes the Dirac measure centered on s ∈ [0, K], then, for some p_0, the initial action of X is σ⁰_X := (1 − p_0)δ_0 + p_0δ_K, which means that X initially plays x = K with probability p_0 and x = 0 with probability 1 − p_0.

For a two-point strategy, X's action space may be restricted to S_X = {0, K}.
Therefore, the scaling function ψ: S_X → R of Theorem 9 is defined by two numbers, ψ(0) and ψ(K). Letting φ := 1/(ψ(0) − ψ(K)), we see by Corollary 7 in §6.6 that the function

p(x, y) := (1/λ)(1 − φ(χb(K) + c(K) − b(y) − χc(y) − (χ − 1)κ) − (1 − λ)p_0)   for x = K;
p(x, y) := (1/λ)(φ(b(y) + χc(y) + (χ − 1)κ) − (1 − λ)p_0)   for x = 0   (6.8)

gives well-defined reaction probabilities provided ψ (hence φ) is chosen so that 0 ≤ p(x, y) ≤ 1 for each x ∈ {0, K} and y ∈ [0, K]. For any such ψ, the memory-one strategy,

σ_X[x, y] := (1 − p(x, y))δ_0 + p(x, y)δ_K, (6.9)

together with σ⁰_X, defines an autocratic strategy that allows X to enforce the linear relationship

π_X − κ = χ(π_Y − κ). (6.10)

Note that Eq. (6.9) simply states formally that player X fully cooperates with probability p(x, y) and defects with probability 1 − p(x, y) following an outcome in which X plays x and Y plays y.

In the absence of discounting, i.e. in the limit λ → 1, we have

p(K, K) = 1 − φ(χ − 1)(b(K) − c(K) − κ); (6.11a)
p(K, 0) = 1 − φ(χb(K) + c(K) − (χ − 1)κ); (6.11b)
p(0, K) = φ(b(K) + χc(K) + (χ − 1)κ); (6.11c)
p(0, 0) = φ(χ − 1)κ. (6.11d)

Thus, setting b := b(K) and c := c(K), the general form, Eq. (6.11), recovers the discrete-action-space case, Eq. (6.7). In particular, the autocratic memory-one strategy in Eq. (6.9) is a direct generalization of zero-determinant strategies to the continuous Donation Game. However, the autocratic strategy contains much more information than the corresponding strategy for the classical Donation Game because it encodes X's play in response to Y's for every y ∈ [0, K]. Despite the fact that player Y has an uncountably infinite number of actions to choose from, player X can still ensure that Eq. (6.10) holds by playing only two actions.

As an example, consider the function ψ(s) := −χb(s) − c(s) and suppose that 0 ≤ κ ≤ b(K) − c(K).
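A Monte Carlo simulation can corroborate Eq. (6.10) for a two-point strategy of the form Eq. (6.8). The sketch below is illustrative only: the benefit and cost functions (b(s) = 5(1 − e^(−2s)), c(s) = 2s, as in Fig. 6.2), the opponent (who simply plays uniformly random investments each round), and the parameter values κ = 0, χ = 2, λ = 0.95 are assumptions made for the example, not requirements of the result:

```python
import numpy as np

rng = np.random.default_rng(0)
K, chi, lam, p0, kappa = 2.0, 2.0, 0.95, 0.0, 0.0
b = lambda s: 5 * (1 - np.exp(-2 * s))
c = lambda s: 2 * s
phi = 1.0 / (chi * b(K) + c(K))  # one feasible scaling, i.e. psi(s) = -chi*b(s) - c(s)

def p_react(x, y):
    # Eq. (6.8): probability that X plays K after the outcome (x, y)
    on_K = (1 - phi * (chi * b(K) + c(K) - b(y) - chi * c(y) - (chi - 1) * kappa)
            - (1 - lam) * p0) / lam
    on_0 = (phi * (b(y) + chi * c(y) + (chi - 1) * kappa) - (1 - lam) * p0) / lam
    return np.where(x == K, on_K, on_0)

N, T = 20000, 300  # N parallel matches, each truncated after T rounds
x = np.where(rng.random(N) < p0, K, 0.0)  # initial action sigma_X^0
y = rng.uniform(0, K, N)                  # an arbitrary (uniformly random) opponent
piX = np.zeros(N); piY = np.zeros(N)
for t in range(T):
    piX += lam**t * (b(y) - c(x))
    piY += lam**t * (b(x) - c(y))
    x = np.where(rng.random(N) < p_react(x, y), K, 0.0)
    y = rng.uniform(0, K, N)
piX *= (1 - lam); piY *= (1 - lam)

# Eq. (6.10) with kappa = 0: pi_X = chi * pi_Y should hold in expectation
assert abs(piX.mean() - chi * piY.mean()) < 0.1
```

The relation holds against this opponent, as it must against any strategy for Y; replacing the uniform opponent with any other rule leaves the enforced relationship unchanged.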
If player X's initial action is x = K with probability p_0 and x = 0 with probability 1 − p_0 then, for sufficiently weak discounting or, equivalently, sufficiently many rounds of interaction,

λ ≥ (b(K) + χc(K)) / (χb(K) + c(K)), (6.12)

and for p_0 falling within a feasible range (see Eq. (6.74)), the reaction function

p(x, y) = (1/λ) · (b(y) + χc(y) + (χ − 1)κ − (1 − λ)(χb(K) + c(K))p_0) / (χb(K) + c(K)) (6.13)

defines a memory-one strategy, σ_X[x, y], via Eq. (6.9) that, together with σ⁰_X = (1 − p_0)δ_0 + p_0δ_K, enforces the equation π_X − κ = χ(π_Y − κ). If there is no discounting (i.e. λ = 1), then the initial move is irrelevant and p_0 can be anything in the interval [0, 1]. Note that Eq. (6.13) represents a reactive strategy (Nowak and Sigmund, 1990) because X conditions her play on only the previous move of the opponent (see Fig. 6.2(B)).

For κ = 0 and χ ≥ 1, Eq. (6.13) defines an extortionate strategy, σ_X, which guarantees player X a fixed share of the payoffs over the payoff for mutual defection. If χ = 1 (and κ is arbitrary), then this strategy is fair since player X ensures the opponent has a payoff equal to her own (Hilbe et al., 2014b). On the other hand, if κ = b(K) − c(K) and χ ≥ 1, then Eq. (6.13) defines a generous (or "compliant") strategy

Figure 6.2: Extortion in the continuous Donation Game without discounting. (A) shows linear costs, c(s) = 2s, and saturating benefits, b(s) = 5(1 − e^(−2s)). Note that the benefit function may be phrased in terms of the cost function as b(c) = 5(1 − e^(−c)). In (B), the reaction function of player X, p(x, y) = p(y) (see Eq.
(6.13)), indicates the probability that player X plays K (as opposed to 0) after Y uses y and represents a reactive strategy because it depends solely on Y's previous move. In (C), the function ψ(s) = −χb(s) − c(s) is shown together with two dashed lines indicating ψ(x) and ∫_{s∈S_X} ψ(s) dσ_X[x, y](s), respectively, for x = 0.3, where σ_X[x, y] denotes the memory-one strategy for player X based on the reactive strategy in (A). The vertical distance between the dashed lines must equal αu_X(x, y) + βu_Y(x, y) + γ = u_X(x, y) − χu_Y(x, y) for each x, y ∈ [0, 2] to satisfy Theorem 9. Parameters: λ = 1, κ = 0, α = 1, β = −2, and γ = 0, yielding an extortion factor of χ := −β = 2.

(Stewart and Plotkin, 2012; Hilbe et al., 2013a). By playing a generous strategy, player X ensures that her payoff is at most that of her opponent. For each of these types of strategies, the probability that X reacts to y by cooperating is increasing as a function of y. In particular, X is most likely to cooperate after Y fully cooperates (y = K) and is most likely to defect after Y defects (y = 0). Moreover, this single choice of ψ(s) = −χb(s) − c(s) demonstrates the existence of each of these three classes of autocratic strategies for the continuous Donation Game provided discounting is sufficiently weak.

Similarly, if ψ(s) = b(s) (see Eq. (6.59) in §6.6), then the two-point reactive strategy defined by

p(x, y) = (1/λ) · (c(y) + γ − (1 − λ)b(K)p_0) / b(K), (6.14)

where σ⁰_X = (1 − p_0)δ_0 + p_0δ_K, allows player X to set Y's score to π_Y = γ for any γ satisfying

(1 − λ)b(K)p_0 ≤ γ ≤ λb(K) − c(K) + (1 − λ)b(K)p_0. (6.15)

Since 0 ≤ p_0 ≤ 1, it follows that, for any 0 ≤ γ ≤ b(K) − c(K), player X can choose σ⁰_X and σ_X[x, y] such that π_Y = γ and hence set the payoff of player Y to a particular value, which is known as an equalizer strategy (Boerlijst et al., 1997; Hilbe et al., 2013a).
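The equalizer strategy of Eq. (6.14) can be corroborated in the same way. Again the specific choices below (the cost and benefit functions of Fig. 6.2, λ = 0.95, p_0 = 0, target γ = 0.5 inside the feasible range of Eq. (6.15), and a uniformly random opponent) are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
K, lam, p0, gamma = 2.0, 0.95, 0.0, 0.5
b = lambda s: 5 * (1 - np.exp(-2 * s))
c = lambda s: 2 * s

# Eq. (6.15): feasibility of this target payoff gamma
assert (1 - lam) * b(K) * p0 <= gamma <= lam * b(K) - c(K) + (1 - lam) * b(K) * p0

def p_react(y):
    # Eq. (6.14): reaction depends only on Y's previous move
    return (c(y) + gamma - (1 - lam) * b(K) * p0) / (lam * b(K))

N, T = 20000, 300
x = np.where(rng.random(N) < p0, K, 0.0)
y = rng.uniform(0, K, N)   # an arbitrary opponent; the result holds for any Y
piY = np.zeros(N)
for t in range(T):
    piY += lam**t * (b(x) - c(y))
    x = np.where(rng.random(N) < p_react(y), K, 0.0)
    y = rng.uniform(0, K, N)
piY *= (1 - lam)

# Player Y's expected payoff is pinned to gamma, whatever Y does
assert abs(piY.mean() - gamma) < 0.05
```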
However, no autocratic strategy allows player X to set her own score via Eq. (6.4) (see §6.7.2). These results are consistent with the observations of Press and Dyson (2012) that, in the classical Prisoner's Dilemma without discounting, player X cannot set her own score but can set player Y's score to anything between the payoffs for mutual defection and mutual cooperation.

Figure 6.3: The region of feasible payoff pairs, (π_Y, π_X), for the repeated, continuous Donation Game when X takes advantage of the entire action space, [0, K] = [0, 2] (light blue), and when X uses only 0 and 2 (hatched). When both players together use the same action throughout the repeated game, the best outcome occurs at an investment level of m = (1/2) log 5 < 2, which results in a payoff of b(m) − c(m) > b(2) − c(2) to both players (yellow dot).

6.3.2 Deterministic autocratic strategies

One of the virtues of two-point autocratic strategies is that they allow a player to exert control over the payoffs of a repeated game while ignoring most of the action space. One of the drawbacks of two-point strategies, as shown in Fig. 6.3, is that they restrict the region of feasible payoffs that can result from the game. This shortcoming of two-point strategies leads one to the question of whether one can enforce linear relationships covering a greater portion of the feasible region of Fig. 6.3. To answer this question, we consider a new class of strategies called deterministic strategies.

A deterministic strategy, which is perhaps the simplest type of strategy next to a two-point strategy, requires a player to respond to a history of previous play by playing an action with certainty rather than probabilistically.

Figure 6.4: Two-point extortionate, generous, and equalizer strategies for the continuous Donation Game with action spaces S_X = {0, 2} and S_Y = [0, 2] and discounting factor λ = 0.95. In each panel, the simulation results were obtained by plotting the average payoffs for a fixed autocratic strategy against 1000 randomly chosen memory-one strategies. (A) depicts extortionate strategies enforcing π_X = χπ_Y with χ = 2 (black) and χ = 3 (blue). (B) illustrates generous strategies enforcing b(2) − c(2) − π_X = χ(b(2) − c(2) − π_Y) with χ = 2 (black) and χ = 3 (blue). (C) demonstrates equalizer strategies enforcing π_Y = γ with γ = 0 (black) and γ = b(2) − c(2) (blue). From (A) and (B), it is clear that neither mutual extortion nor mutual generosity is a Nash equilibrium since, when both players use extortionate or generous strategies, either player can single-handedly improve the payoffs of both players by deviating from his or her strategy.

For example, a memory-one deterministic strategy for player X is defined by (i) an initial action, x_0 ∈ S_X, and (ii) a reaction function, r_X: S_X × S_Y → S_X, such that X plays r_X(x, y) following an outcome in which X plays x and Y plays y. One well-known example of a deterministic strategy is tit-for-tat in the classical Prisoner's Dilemma, which is defined by x_0 = C (initially cooperate) and r_X(x, y) = y (do what the opponent did in the previous round). In addition to being deterministic, tit-for-tat is also an autocratic strategy since it enforces the fair relationship π_X = π_Y (Stewart and Plotkin, 2013).

For general memory-one deterministic strategies, the condition for the existence of autocratic strategies, Eq.
(6.4), becomes

αu_X(x, y) + βu_Y(x, y) + γ = ψ(x) − λψ(r_X(x, y)) − (1 − λ)ψ(x_0). (6.16)

If Eq. (6.16) holds for each x ∈ S_X and y ∈ S_Y, then the deterministic strategy defined by x_0 and r_X enforces απ_X + βπ_Y + γ = 0.

Figure 6.5: Deterministic extortionate, generous, and equalizer strategies for the continuous Donation Game with action spaces S_X = S_Y = [0, 2] and discounting factor λ = 0.95. As in Fig. 6.4, the simulation results in each panel were obtained by plotting the average payoffs for a fixed autocratic strategy against 1000 randomly chosen memory-one strategies. (A) and (B) demonstrate extortionate and generous strategies, respectively, with χ = 2 (black) and χ = 3 (blue). (C) shows equalizer strategies enforcing π_Y = γ with γ = 0 (black) and γ = b(2) − c(2) (blue). Since deterministic strategies allow player X to use a much larger portion of the action space than just 0 and K, we observe that the linear relationships enforced by deterministic autocratic strategies cover a greater portion of the feasible region than do two-point strategies.

Thus, in order to enforce the equation π_X − κ = χ(π_Y − κ) for 0 ≤ κ ≤ b(K) − c(K), one may choose ψ(s) := −χb(s) − c(s) as in §6.3.1 and use the reaction function

r_X(x, y) = ψ⁻¹( (−b(y) − χc(y) − (χ − 1)κ + (1 − λ)(χb(x_0) + c(x_0))) / λ ), (6.17)

where ψ⁻¹(···) denotes the inverse of the function ψ, provided λ satisfies Eq. (6.12) and x_0 falls within a feasible range (see Eq.
(6.77)).

Similarly, to enforce π_Y = γ for 0 ≤ γ ≤ b(K) − c(K), one may choose ψ(s) := b(s) and use the reaction function

r_X(x, y) = ψ⁻¹( (c(y) + γ − (1 − λ)b(x_0)) / λ ), (6.18)

provided λ ≥ c(K)/b(K) and x_0 falls within a feasible range (see Eq. (6.79)).

Examples of deterministic extortionate, generous, and equalizer strategies are given in Fig. 6.5, where it is evident that deterministic strategies allow a player to enforce linear payoff relationships covering a greater portion of the feasible region than do their two-point counterparts (Fig. 6.4).

6.4 Discussion

In games with two actions, zero-determinant strategies are typically defined via a technical condition such as Eq. (6.2) (Press and Dyson, 2012; Hilbe et al., 2015a). Defining zero-determinant strategies in this way makes it difficult to generalize the definition to games with larger action spaces since Eq. (6.2) makes sense only for two-action games. Therefore, we introduce the more general term autocratic strategy to refer to any strategy that unilaterally enforces a linear relationship on expected payoffs. Of course, this linear relationship is precisely what makes strategies satisfying Eq. (6.2) interesting in the first place. Since expected payoffs are meaningful in games with arbitrary action spaces, the definition used here is better suited to extend the notion of zero-determinant strategies to a broader class of iterated games.

Theorem 9 provides a condition for the existence of autocratic strategies with general action spaces. We illustrate this phenomenon with a continuous-action-space extension of the classical Donation Game, which represents a specific instance of the Prisoner's Dilemma. The existing literature on zero-determinant strategies for the classical Prisoner's Dilemma provides no way of treating this continuous extension of the Donation Game.
However, Theorem 9 makes no assumptions on the action space of the game and thus applies to the continuous Donation Game as well as its classical counterpart.

Surprisingly, in many cases a player can enforce a linear relationship on expected payoffs by playing only two actions, despite the fact that the opponent may have infinitely many actions available to use (Corollary 7 in §6.6). More specifically, we demonstrate that the conditions guaranteeing the existence of extortionate, generous, fair, and equalizer strategies in the continuous Donation Game are in fact similar to those of the two-action case. However, despite the simplicity of such two-point strategies, a player needs to know how to respond to every possible move of the opponent. In particular, knowledge of how to respond to just defection (y = 0) and full cooperation (y = K) does not suffice. Therefore, although a player using a two-point strategy might play only x = 0 and x = K, these strategies represent a departure from the classical Donation Game.

Another important difference is that, whereas in the classical Prisoner's Dilemma mutual generosity represents a symmetric Nash equilibrium (Hilbe et al., 2015a), this need not be the case in the continuous Donation Game. Instead, mutual generosity results in a payoff of b(K) − c(K) for each player, where K is the maximum investment level. We observed in §6.3 the existence of an intermediate level of cooperation, m ∈ (0, K), such that b(m) − c(m) > b(K) − c(K), i.e. both players fare better if they each invest m instead of K (fully cooperate). However, no player can enforce a generous relationship with baseline payoff κ = b(m) − c(m) since this value is outside of [0, b(K) − c(K)], the feasible range for κ (see §6.7.2).
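The intermediate optimum m is easy to check numerically for the cost and benefit functions used in the figures. The specific functions below (b(s) = 5(1 − e^(−2s)), c(s) = 2s, K = 2, taken from Fig. 6.2) are illustrative choices, not part of the general claim:

```python
import numpy as np

b = lambda s: 5 * (1 - np.exp(-2 * s))
c = lambda s: 2 * s
s = np.linspace(0, 2, 200001)
m = s[np.argmax(b(s) - c(s))]  # numerically maximize the net benefit b(s) - c(s)

# Setting d/ds [b(s) - c(s)] = 10 e^{-2s} - 2 = 0 gives m = (1/2) log 5 (cf. Fig. 6.3)
assert abs(m - 0.5 * np.log(5)) < 1e-4
# Mutual investment m beats mutual full cooperation K = 2
assert b(m) - c(m) > b(2.0) - c(2.0)
```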
Thus, the performance of a generous strategy as a response to itself depends critically on whether the game has two actions or a continuous range of actions.

Extortionate strategies for the iterated Prisoner's Dilemma are not evolutionarily stable (Adami and Hintze, 2013). Since mutual generosity in the continuous Donation Game need not be a Nash equilibrium, it follows that generous strategies also need not be evolutionarily stable. Moreover, against human opponents, extortioners are punished by a refusal to fully cooperate, while generous players provide their opponents with an incentive to cooperate and fare better in experiments (Hilbe et al., 2014a). Such behavior supports what one would expect from a player using a fair autocratic strategy enforcing π_X = π_Y (such as tit-for-tat): if the opponent ensures π_X − κ = χ(π_Y − κ) for some χ > 1, then both players get κ. In particular, fair strategies punish extortion and reward generosity. In light of our results on generous strategies in games with larger action spaces, it would be interesting to see if human experiments support the same conclusion for generosity in the continuous Donation Game. The performance of autocratic strategies in populations, however, is but one perspective on this recently-discovered class of strategies for repeated games.

In games with two discrete actions, our definition of a Press-Dyson function specializes to a multiple of the Press-Dyson vector, p̃, of Eq. (6.2) (see §6.7.1 for details). That is, any vector of the form cp̃ (with c a real constant) defines a Press-Dyson function. The Press-Dyson vector is recovered by normalizing the Press-Dyson function and eliminating the constant, c. However, in games with three or more actions, this function involves at least two constants, and these constants cannot, in general, be eliminated by normalization (see §6.7.1 for an example with three actions).
Therefore, based on Theorem 9, in two-action games it is perhaps more appropriate to define a Press-Dyson vector to be any vector of the form cp̃ for some constant, c, so that a Press-Dyson function reduces to a Press-Dyson vector, not just a multiple of the Press-Dyson vector. This distinction in games with two actions is minor, however, and does not change the fact that Theorem 9 covers all of the known results on the existence of zero-determinant strategies for repeated games.

More importantly, however, Theorem 9 uncovers a feature of autocratic strategies that is not captured by two-action games: in general, they are defined implicitly via an integral (Eq. (6.4)). Even in the case of an action space with just two elements, there may be infinitely many autocratic strategies that enforce a given linear relationship on expected payoffs (Press and Dyson, 2012). However, the simplistic nature of two-action games enables one to solve for these strategies explicitly. In particular, the analysis of iterated games with only two actions completely misses the fact that autocratic strategies are most naturally presented implicitly via Eq. (6.4). Our extension shows that, in general, (i) such autocratic strategies need not be unique and (ii) one cannot explicitly list all autocratic strategies that produce a fixed Press-Dyson function. Thus, for arbitrary action spaces (but already for games with more than two actions), the space of autocratic strategies, which enforce a given linear relationship on expected payoffs, is more sophisticated than two-action games suggest.
Notwithstanding the intrinsic difficulty in explicitly specifying all autocratic strategies, our results demonstrate that these strategies exist in general and are not simply consequences of the finiteness of the action space in games such as the Prisoner's Dilemma.

6.5 Methods: iterated games with two players and measurable action spaces

By "action space," we mean a measurable space, S, equipped with a σ-algebra, F(S) (although we suppress F(S) and refer to the space simply as S). Informally, S is the space of actions, decisions, investments, or options available to a player at each round of the iterated interaction and could be a finite set, a continuous interval, or something more complicated. Since the players need not have the same action space, we denote by S_X the space of actions available to player X and by S_Y the space of actions available to player Y. In what follows, all functions are assumed to be measurable and bounded.

In each encounter (i.e. "one-shot game"), the players receive payoffs based on a payoff function,

u = (u_X, u_Y): S_X × S_Y → R². (6.19)

The first and second coordinate functions, u_X and u_Y, give the payoffs to players X and Y, respectively. An iterated game between players X and Y consists of a sequence of these one-shot interactions. If, at time t, player X uses x_t ∈ S_X and player Y uses y_t ∈ S_Y, then the (normalized) payoff to player X for a sequence of T + 1 interactions (from time t = 0 to t = T) is

((1 − λ)/(1 − λ^{T+1})) Σ_{t=0}^T λ^t u_X(x_t, y_t), (6.20)

where λ is the discounting factor, 0 < λ < 1. The payoff to player Y is obtained by replacing u_X by u_Y in Eq. (6.20). Thus, the discounted payoffs, λ^t u_X(x_t, y_t), are simply added up and then normalized by a factor of (1 − λ)/(1 − λ^{T+1}) to ensure that the payoff for the repeated game is measured in the same units as the payoffs for individual encounters (Fudenberg and Tirole, 1991).
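Because of this normalization, the discounted payoff stays in per-round units and, for a well-behaved payoff stream, approaches the ordinary time-average as λ → 1⁻. A small numerical sketch (the periodic payoff stream below is an arbitrary illustration, not taken from the text):

```python
# Compare the discounted average (1 - lam) * sum(lam^t * u_t) with the
# time-average limit for a periodic payoff stream u_t cycling through 3, 0, 1.
cycle = [3.0, 0.0, 1.0]            # hypothetical per-round payoffs, period 3
cesaro = sum(cycle) / len(cycle)   # time-average limit = 4/3

def discounted(lam, T=200000):
    # Truncated discounted average; the tail beyond T is negligible here
    return (1 - lam) * sum(lam**t * cycle[t % 3] for t in range(T))

for lam in (0.9, 0.99, 0.999):
    print(lam, discounted(lam))    # approaches 4/3 as lam -> 1^-

assert abs(discounted(0.999) - cesaro) < 0.01
```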
Moreover, provided the series Σ_{t=0}^∞ u_X(x_t, y_t) is Cesàro summable, meaning lim_{T→∞} (1/(T+1)) Σ_{t=0}^T u_X(x_t, y_t) exists, we have

lim_{λ→1⁻} (1 − λ) Σ_{t=0}^∞ λ^t u_X(x_t, y_t) = lim_{T→∞} (1/(T+1)) Σ_{t=0}^T u_X(x_t, y_t) (6.21)

(see Korevaar, 2004). Therefore, payoffs in the undiscounted case may be obtained from the payoffs for discounted games in the limit λ → 1⁻, provided this limit exists (see Hilbe et al., 2015a).

Here, we consider stochastic strategies that condition on the history of play: both players observe the sequence of play up to the current period and use it to devise an action for the present encounter. In order to formally define such strategies, we first recall the notion of "history" in a repeated game: a history at time T is a sequence of action pairs,

h^T := ((x_0, y_0), …, (x_{T−1}, y_{T−1})), (6.22)

indicating the sequence of play leading up to the interaction at time T (Fudenberg and Tirole, 1991). In other words, a history at time T is an element of H^T := Π_{t=0}^{T−1} S_X × S_Y. Let H⁰ := {∅}, where ∅ denotes the "empty" history (which serves just to indicate that there has been no history of past play, i.e. that the game has not yet begun). The set of all possible histories is the disjoint union, H := ⊔_{T≥0} H^T. For h^T = ((x_0, y_0), …, (x_{T−1}, y_{T−1})) and t ≤ T − 1, let

h^T_t := (x_t, y_t); (6.23a)
h^T_{≤t} := ((x_0, y_0), …, (x_t, y_t)). (6.23b)

That is, h^T_t is the action pair played at time t, and h^T_{≤t} is the "sub-history" of h^T until time t ≤ T.

A pure strategy for the repeated game is a map, H → S, indicating an action in S (deterministically) for each history leading up to the current encounter. More generally, players could look at the history of past play and use this information to choose an action from S probabilistically (rather than deterministically). A strategy of this form is known as a behavioral strategy (Fudenberg and Tirole, 1991). In terms of H, a behavioral strategy for player X is a map

σ_X: H → Δ(S_X), (6.24)

where Δ(S_X) is the space of probability measures on S_X. An important type of behavioral strategy is a
An important type of behavioral strategy is aMarkov strategy, which is a behavioral strategy, σX , that satisfies σX“hT‰ “ σX “hTT´1‰. That is, a Markovstrategy depends on only the last pair of actions and not on the entire history of play. Note, however, thata Markov strategy may still depend on t. If σX is a Markov strategy that does not depend on t, then we saythat σX is a stationary (or memory-one) strategy.Suppose that σX and σY are behavioral strategies for players X and Y , respectively. Consider the map,165σ, defined by the product measure,σ :“ σX ˆ σY : H ÝÑ ∆ pSX ˆ SY q: ht ÞÝÑ σX“ht‰ˆ σY “ht‰ . (6.25)By the Hahn-Kolmogorov theorem, for each t ě 0 there exists a unique measure, µt, on Ht`1 such that foreach E1 P F pHtq and E P F pSX ˆ SY q,µt`E1 ˆ E˘ “ żhtPE1σ`ht, E˘dσ`htďt´2, htt´1˘ ¨ ¨ ¨ dσ `htď0, ht1˘ dσ `∅, ht0˘ , (6.26)where, for h P H and s P SX ˆ SY , dσ ph, sq denotes the differential of the measure σ ph,´q on SX ˆ SY .In the case t “ 0, this measure is simply the product of the two initial actions, i.e. µ0 “ σX r∅s ˆ σY r∅s.From these measures, we obtain a sequence of measures, tνtutě0 Ď ∆ pSX ˆ SY q, defined byνt pEq :“ µt`Ht ˆ E˘ . (6.27)Informally, νt pEq is the probability that the action pair at time t is in E Ď SX ˆ SY , averaged over allhistories that lead to E. The sequences, tµtutě0 and tνtutě0, admit a convenient format for the expectedpayoffs, piTX and piTY , to players X and Y , respectively. Before stating this result, we first formally defineexpected payoffs for the pT ` 1q-period game (where T ă 8):Definition 23 (Objective function for a finite game). If σX and σY are behavioral strategies for players Xand Y , respectively, and if σ “ σX ˆ σY (see Eq. (6.25)), then the objective function (or expected payoff)for player X in the pT ` 1q-period game ispiTX :“żhT`1PHT`1«1´ λ1´ λT`1Tÿt“0λtuX`hT`1t˘ffdσ`hT`1ďT´1, hT`1T˘ ¨ ¨ ¨ dσ `hT`1ď0 , hT`11 ˘ dσ `∅, hT`10 ˘ .(6.28)Using tνtutě0, we can write piTX differently:Lemma 4. 
For fixed σ_X and σ_Y generating {ν_t}_{t≥0}, we have

π^T_X = ((1 − λ)/(1 − λ^{T+1})) Σ_{t=0}^T λ^t ∫_{(x,y)∈S_X×S_Y} u_X(x, y) dν_t(x, y). (6.29)

As a consequence of Lemma 4, we see that lim_{T→∞} π^T_X exists since ν_t is a probability measure and

| ∫_{(x,y)∈S_X×S_Y} u_X(x, y) dν_t(x, y) | ≤ sup_{(x,y)∈S_X×S_Y} |u_X(x, y)| < ∞ (6.30)

by the fact that u_X is bounded. Thus, we define the objective function for an infinite game as follows:

Definition 24 (Objective function for an infinite game). If σ_X and σ_Y are behavioral strategies for players X and Y, respectively, and if σ = σ_X × σ_Y, then the objective function for player X in the infinite game is

π_X := lim_{T→∞} π^T_X = (1 − λ) Σ_{t=0}^∞ λ^t ∫_{(x,y)∈S_X×S_Y} u_X(x, y) dν_t(x, y). (6.31)

Remark 13. Classically, the objective function of a repeated game with infinitely many rounds is defined using a distribution over "infinite histories," which is generated by the players' strategies for the repeated game (Fudenberg and Tirole, 1991). That is, for H^∞ := Π_{t=0}^∞ S_X × S_Y and some measure, µ ∈ Δ(H^∞), the objective function of player X is defined by

∫_{h^∞∈H^∞} (1 − λ) Σ_{t=0}^∞ λ^t u_X(h^∞_t) dµ(h^∞). (6.32)

Using Eq. (6.31) as an objective function for player X, we do not need to worry about what µ is (or if it even exists for a general action space) since Eq. (6.32), whenever it is defined, must coincide with Eq. (6.31). To see why, suppose that there is a distribution, µ, on H^∞ that satisfies

µ(E × (S_X × S_Y) × (S_X × S_Y) × ⋯) = µ_T(E) (6.33)

for each E ∈ F(H^{T+1}). Then, by the dominated convergence theorem, Eq. (6.33), and Eq. (6.27),

∫_{h^∞∈H^∞} (1 − λ) Σ_{t=0}^∞ λ^t u_X(h^∞_t) dµ(h^∞)
  = (1 − λ) Σ_{t=0}^∞ λ^t ∫_{h^∞∈H^∞} u_X(h^∞_t) dµ(h^∞)
  = (1 − λ) Σ_{t=0}^∞ λ^t ∫_{h^{t+1}∈H^{t+1}} u_X(h^{t+1}_t) dµ_t(h^{t+1})
  = (1 − λ) Σ_{t=0}^∞ λ^t ∫_{(x,y)∈S_X×S_Y} u_X(x, y) dν_t(x, y). (6.34)

Therefore, assuming Lemma 4, the objective function for player X defined by π_X := lim_{T→∞} π^T_X is the same as the classical objective function for repeated games when the players' strategies produce a probability distribution over infinite histories.
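Remark 13 shows that π_X can be computed from the marginal measures ν_t alone, without constructing a distribution over infinite histories. For memory-one players on a finite action space, the ν_t evolve by a simple transition kernel on S_X × S_Y, and the two computations of π_X can be compared directly. A hedged sketch (the 2 × 2 payoffs and strategies below are arbitrary choices made for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
lam = 0.9
uX = np.array([[1.0, -1.0], [3.0, 0.0]])   # hypothetical payoffs u_X(x, y)
pX = np.array([[0.9, 0.3], [0.6, 0.2]])    # P(X plays action 1 | last outcome (x, y))
pY = np.array([[0.8, 0.5], [0.4, 0.1]])    # P(Y plays action 1 | last outcome (x, y))
qX, qY = 0.5, 0.7                          # initial probabilities of action 1

# (a) pi_X via the measures nu_t (Eq. (6.31), truncated at a large T)
nu = np.outer([1 - qX, qX], [1 - qY, qY])  # nu_0 on S_X x S_Y
piX_nu = 0.0
for t in range(500):
    piX_nu += lam**t * (nu * uX).sum()
    new = np.zeros((2, 2))
    for x in range(2):
        for y in range(2):
            px, py = pX[x, y], pY[x, y]
            new += nu[x, y] * np.outer([1 - px, px], [1 - py, py])
    nu = new
piX_nu *= (1 - lam)

# (b) direct Monte Carlo over histories
N, T = 40000, 200
x = (rng.random(N) < qX).astype(int)
y = (rng.random(N) < qY).astype(int)
piX_mc = np.zeros(N)
for t in range(T):
    piX_mc += lam**t * uX[x, y]
    x, y = ((rng.random(N) < pX[x, y]).astype(int),
            (rng.random(N) < pY[x, y]).astype(int))
piX_mc = (1 - lam) * piX_mc.mean()

assert abs(piX_nu - piX_mc) < 0.05
```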
Typically, the existence of such a distribution depends on $S$ being finite or the measures in $\left\{\mu_t\right\}_{t \geqslant 0}$ being inner regular (which allows one to deduce the existence of $\mu$ from $\left\{\mu_t\right\}_{t \geqslant 0}$ using the Kolmogorov extension theorem). In practice, these assumptions are often not unreasonable, but with Lemma 4, we do not need to worry about the existence of such a distribution.

In order to prove Lemma 4, we first need a simple technical result:

Lemma 5. Suppose that $\mathcal{X}$ and $\mathcal{Y}$ are measure spaces and $\sigma$ is a Markov kernel from $\mathcal{X}$ to $\mathcal{Y}$. Let $\mu$ be a probability measure on $\mathcal{X}$ and consider the measure on $\mathcal{Y}$ defined, for $E \in \mathcal{F}\left(\mathcal{Y}\right)$, by
\[
\nu\left(E\right) := \int_{x \in \mathcal{X}} \sigma\left(x, E\right)\, d\mu\left(x\right). \tag{6.35}
\]
For any bounded, measurable function, $f : \mathcal{Y} \to \mathbb{R}$, we have
\[
\int_{y \in \mathcal{Y}} f\left(y\right)\, d\nu\left(y\right) = \int_{x \in \mathcal{X}} \int_{y \in \mathcal{Y}} f\left(y\right)\, d\sigma\left(x, y\right)\, d\mu\left(x\right), \tag{6.36}
\]
where, for each $x \in \mathcal{X}$, $d\sigma\left(x, y\right)$ denotes the differential of the measure $\sigma\left(x, -\right)$ on $\mathcal{Y}$.

Proof. Since $f$ is bounded, there exists a sequence of simple functions, $\left\{f_n\right\}_{n \geqslant 1}$, such that $f_n \to f$ uniformly on $\mathcal{Y}$. For each $n \geqslant 1$, let $f_n = \sum_{i=1}^{N_n} c_i^n \chi_{E_i^n}$ for some $c_i^n \in \mathbb{R}$ and $E_i^n \in \mathcal{F}\left(\mathcal{Y}\right)$, where $\chi_{E_i^n}$ is the characteristic function of $E_i^n$ (meaning $\chi_{E_i^n}\left(x\right) = 1$ if $x \in E_i^n$ and $\chi_{E_i^n}\left(x\right) = 0$ if $x \notin E_i^n$). By uniform convergence,
\begin{align*}
\int_{y \in \mathcal{Y}} f\left(y\right)\, d\nu\left(y\right)
&= \lim_{n \to \infty} \int_{y \in \mathcal{Y}} f_n\left(y\right)\, d\nu\left(y\right) \\
&= \lim_{n \to \infty} \sum_{i=1}^{N_n} c_i^n \nu\left(E_i^n\right) \\
&= \lim_{n \to \infty} \sum_{i=1}^{N_n} c_i^n \int_{x \in \mathcal{X}} \sigma\left(x, E_i^n\right)\, d\mu\left(x\right) \\
&= \lim_{n \to \infty} \int_{x \in \mathcal{X}} \sum_{i=1}^{N_n} c_i^n \sigma\left(x, E_i^n\right)\, d\mu\left(x\right) \\
&= \lim_{n \to \infty} \int_{x \in \mathcal{X}} \int_{y \in \mathcal{Y}} f_n\left(y\right)\, d\sigma\left(x, y\right)\, d\mu\left(x\right) \\
&= \int_{x \in \mathcal{X}} \int_{y \in \mathcal{Y}} f\left(y\right)\, d\sigma\left(x, y\right)\, d\mu\left(x\right), \tag{6.37}
\end{align*}
which completes the proof.

Proof of Lemma 4. By Lemma 5 and the definitions of $\mu_t$ and $\nu_t$,
\begin{align*}
\pi_X^T &= \int_{h^{T+1} \in H^{T+1}} \left[\frac{1-\lambda}{1-\lambda^{T+1}} \sum_{t=0}^{T} \lambda^t u_X\left(h^{T+1}_t\right)\right] d\sigma\left(h^{T+1}_{\leqslant T-1}, h^{T+1}_T\right) \cdots d\sigma\left(h^{T+1}_{\leqslant 0}, h^{T+1}_1\right)\, d\sigma\left(\varnothing, h^{T+1}_0\right) \\
&= \frac{1-\lambda}{1-\lambda^{T+1}} \sum_{t=0}^{T} \lambda^t \int_{h^{T+1} \in H^{T+1}} u_X\left(h^{T+1}_t\right)\, d\sigma\left(h^{T+1}_{\leqslant T-1}, h^{T+1}_T\right) \cdots d\sigma\left(h^{T+1}_{\leqslant 0}, h^{T+1}_1\right)\, d\sigma\left(\varnothing, h^{T+1}_0\right) \\
&= \frac{1-\lambda}{1-\lambda^{T+1}} \sum_{t=0}^{T} \lambda^t \int_{h^{T+1} \in H^{T+1}} u_X\left(h^{T+1}_t\right)\, d\mu_T\left(h^{T+1}\right) \\
&= \frac{1-\lambda}{1-\lambda^{T+1}} \sum_{t=0}^{T} \lambda^t \int_{h^{t+1} \in H^{t+1}} u_X\left(h^{t+1}_t\right)\, d\mu_t\left(h^{t+1}\right) \\
&= \frac{1-\lambda}{1-\lambda^{T+1}} \sum_{t=0}^{T} \lambda^t \int_{h^{t+1}_t \in S_X \times S_Y} u_X\left(h^{t+1}_t\right)\, d\nu_t\left(h^{t+1}_t\right) \\
&= \frac{1-\lambda}{1-\lambda^{T+1}} \sum_{t=0}^{T} \lambda^t \int_{\left(x,y\right) \in S_X \times S_Y} u_X\left(x, y\right)\, d\nu_t\left(x, y\right), \tag{6.38}
\end{align*}
which completes the proof.

The objective function of Eq. (6.31), which is obtained using Lemma 4, eliminates the need to deal with histories when proving our main results for iterated games. With the background on expected payoffs now established, we turn our attention to the proofs of the results claimed in the main text:

6.6 Methods: detailed proofs of the main results

Before proving our main results, we state a technical lemma that generalizes Lemma 3.1 of Akin (2015), which Hilbe et al. (2014b) refer to as Akin's Lemma, and Lemma 1 of Hilbe et al. (2015a). This lemma relates the strategies of the two players, $\sigma_X$ and $\sigma_Y$, and the (discounted) sequence of play to the initial action of player $X$ when $\sigma_X$ is memory-one. Our proof of this lemma is essentially the same as theirs but in the broader setting of a measurable action space:

Lemma 6. For any memory-one strategy, $\sigma_X$, and $E \in \mathcal{F}\left(S_X\right)$,
\[
\sum_{t=0}^{\infty} \lambda^t \int_{\left(x,y\right) \in S_X \times S_Y} \left[\chi_E\left(x\right) - \lambda \sigma_X\left[x, y\right]\left(E\right)\right] d\nu_t\left(x, y\right) = \sigma_X^0\left(E\right), \tag{6.39}
\]
where $\sigma_X^0$ is the initial action of player $X$.

Proof. By the definition of $\nu_t$, we have
\begin{align*}
\int_{\left(x,y\right) \in S_X \times S_Y} \chi_E\left(x\right)\, d\nu_t\left(x, y\right) &= \nu_t\left(E \times S\right); \tag{6.40a} \\
\int_{\left(x,y\right) \in S_X \times S_Y} \sigma_X\left[x, y\right]\left(E\right)\, d\nu_t\left(x, y\right) &= \nu_{t+1}\left(E \times S\right). \tag{6.40b}
\end{align*}
Therefore, since $\nu_t$ is a probability measure (in particular, at most $1$ on any measurable set) for each $t$,
\begin{align*}
\sum_{t=0}^{\infty} \lambda^t \int_{\left(x,y\right) \in S_X \times S_Y} \left[\chi_E\left(x\right) - \lambda \sigma_X\left[x, y\right]\left(E\right)\right] d\nu_t\left(x, y\right)
&= \sum_{t=0}^{\infty} \lambda^t \left(\nu_t\left(E \times S\right) - \lambda \nu_{t+1}\left(E \times S\right)\right) \\
&= \nu_0\left(E \times S\right) - \lim_{t \to \infty} \lambda^{t+1} \nu_{t+1}\left(E \times S\right) \\
&= \nu_0\left(E \times S\right) \\
&= \sigma_X^0\left(E\right), \tag{6.41}
\end{align*}
which completes the proof.

By the definitions of $\pi_X$ and $\pi_Y$, we have
\[
\alpha \pi_X + \beta \pi_Y + \gamma = \left(1-\lambda\right) \sum_{t=0}^{\infty} \lambda^t \int_{\left(x,y\right) \in S_X \times S_Y} \left[\alpha u_X\left(x, y\right) + \beta u_Y\left(x, y\right) + \gamma\right] d\nu_t\left(x, y\right). \tag{6.42}
\]
Since our goal is to establish Theorem 9, which states that player $X$ can enforce the relation $\alpha \pi_X + \beta \pi_Y + \gamma = 0$ using some $\sigma_X\left[x, y\right]$ and $\sigma_X^0$, as a first step we show that $\sum_{t=0}^{\infty} \lambda^t \int_{\left(x,y\right) \in S_X \times S_Y} \varphi\left(x, y\right)\, d\nu_t\left(x, y\right) = \int_{s \in S_X} \psi\left(s\right)\, d\sigma_X^0\left(s\right)$ for a particular choice of $\varphi\left(x, y\right)$. We then deduce Theorem 9 by setting
\[
\alpha u_X\left(x, y\right) + \beta u_Y\left(x, y\right) + \gamma + \left(1-\lambda\right) \int_{s \in S_X} \psi\left(s\right)\, d\sigma_X^0\left(s\right) = \varphi\left(x, y\right) \tag{6.43}
\]
for this known function, $\varphi$.

Proposition 10.
If $\psi : S_X \to \mathbb{R}$ is a bounded, measurable function, then
\[
\sum_{t=0}^{\infty} \lambda^t \int_{\left(x,y\right) \in S_X \times S_Y} \left[\psi\left(x\right) - \lambda \int_{s \in S_X} \psi\left(s\right)\, d\sigma_X\left[x, y\right]\left(s\right)\right] d\nu_t\left(x, y\right) = \int_{s \in S_X} \psi\left(s\right)\, d\sigma_X^0\left(s\right), \tag{6.44}
\]
for any memory-one strategy, $\sigma_X$, where $\sigma_X^0$ is the initial action of player $X$.

Proof. Since $\psi$ is bounded, there exists a sequence of simple functions, $\left\{\psi_n\right\}_{n \geqslant 1}$, such that $\psi_n \to \psi$ uniformly on $S$. For each $n \geqslant 1$, let $\psi_n = \sum_{i=1}^{N_n} c_i^n \chi_{E_i^n}$. Using the uniform convergence of this sequence, together with the dominated convergence theorem and Lemma 6, we obtain
\begin{align*}
\sum_{t=0}^{\infty} &\lambda^t \int_{\left(x,y\right) \in S_X \times S_Y} \left[\psi\left(x\right) - \lambda \int_{s \in S_X} \psi\left(s\right)\, d\sigma_X\left[x, y\right]\left(s\right)\right] d\nu_t\left(x, y\right) \\
&= \sum_{t=0}^{\infty} \lambda^t \lim_{n \to \infty} \int_{\left(x,y\right) \in S_X \times S_Y} \left[\psi_n\left(x\right) - \lambda \int_{s \in S_X} \psi_n\left(s\right)\, d\sigma_X\left[x, y\right]\left(s\right)\right] d\nu_t\left(x, y\right) \\
&= \lim_{n \to \infty} \sum_{t=0}^{\infty} \lambda^t \int_{\left(x,y\right) \in S_X \times S_Y} \left[\psi_n\left(x\right) - \lambda \int_{s \in S_X} \psi_n\left(s\right)\, d\sigma_X\left[x, y\right]\left(s\right)\right] d\nu_t\left(x, y\right) \\
&= \lim_{n \to \infty} \sum_{t=0}^{\infty} \lambda^t \sum_{i=1}^{N_n} c_i^n \int_{\left(x,y\right) \in S_X \times S_Y} \left[\chi_{E_i^n}\left(x\right) - \lambda \int_{s \in S_X} \chi_{E_i^n}\left(s\right)\, d\sigma_X\left[x, y\right]\left(s\right)\right] d\nu_t\left(x, y\right) \\
&= \lim_{n \to \infty} \sum_{t=0}^{\infty} \lambda^t \sum_{i=1}^{N_n} c_i^n \int_{\left(x,y\right) \in S_X \times S_Y} \left[\chi_{E_i^n}\left(x\right) - \lambda \sigma_X\left[x, y\right]\left(E_i^n\right)\right] d\nu_t\left(x, y\right) \\
&= \lim_{n \to \infty} \sum_{i=1}^{N_n} c_i^n \sum_{t=0}^{\infty} \lambda^t \int_{\left(x,y\right) \in S_X \times S_Y} \left[\chi_{E_i^n}\left(x\right) - \lambda \sigma_X\left[x, y\right]\left(E_i^n\right)\right] d\nu_t\left(x, y\right) \\
&= \lim_{n \to \infty} \sum_{i=1}^{N_n} c_i^n \sigma_X^0\left(E_i^n\right) \\
&= \lim_{n \to \infty} \int_{s \in S_X} \psi_n\left(s\right)\, d\sigma_X^0\left(s\right) \\
&= \int_{s \in S_X} \psi\left(s\right)\, d\sigma_X^0\left(s\right), \tag{6.45}
\end{align*}
which completes the proof.

While Proposition 10 applies to discounted games with $\lambda < 1$, we can get an analogous statement for undiscounted games by multiplying both sides of Eq. (6.44) by $1 - \lambda$ and taking the limit $\lambda \to 1^-$:

Corollary 6. If $\psi : S_X \to \mathbb{R}$ is a bounded, measurable function, then, when the limit exists,
\[
\lim_{T \to \infty} \frac{1}{T+1} \sum_{t=0}^{T} \int_{\left(x,y\right) \in S_X \times S_Y} \left[\psi\left(x\right) - \int_{s \in S_X} \psi\left(s\right)\, d\sigma_X\left[x, y\right]\left(s\right)\right] d\nu_t\left(x, y\right) = 0 \tag{6.46}
\]
for any memory-one strategy, $\sigma_X$, where $\sigma_X^0$ is the initial action of player $X$.

Theorem 9 (Autocratic strategies in arbitrary action spaces). Suppose that $\sigma_X\left[x, y\right]$ is a memory-one strategy for player $X$ and let $\sigma_X^0$ be player $X$'s initial action. If, for some bounded function, $\psi$, the equation
\[
\alpha u_X\left(x, y\right) + \beta u_Y\left(x, y\right) + \gamma = \psi\left(x\right) - \lambda \int_{s \in S_X} \psi\left(s\right)\, d\sigma_X\left[x, y\right]\left(s\right) - \left(1-\lambda\right) \int_{s \in S_X} \psi\left(s\right)\, d\sigma_X^0\left(s\right) \tag{6.47}
\]
holds for each $x \in S_X$ and $y \in S_Y$, then $\sigma_X^0$ and $\sigma_X\left[x, y\right]$ together enforce the linear payoff relationship
\[
\alpha \pi_X + \beta \pi_Y + \gamma = 0 \tag{6.48}
\]
for any strategy of player $Y$. In other words, the pair $\left(\sigma_X^0, \sigma_X\left[x, y\right]\right)$ is an autocratic strategy.

Proof. If Eq. (6.47) holds, then by Eq. (6.44) in Proposition 10,
\begin{align*}
\alpha \pi_X + \beta \pi_Y + \gamma + \left(1-\lambda\right) \int_{s \in S_X} \psi\left(s\right)\, d\sigma_X^0\left(s\right)
&= \left(1-\lambda\right) \sum_{t=0}^{\infty} \lambda^t \int_{\left(x,y\right) \in S_X \times S_Y} \left[\psi\left(x\right) - \lambda \int_{s \in S_X} \psi\left(s\right)\, d\sigma_X\left[x, y\right]\left(s\right)\right] d\nu_t\left(x, y\right) \\
&= \left(1-\lambda\right) \int_{s \in S_X} \psi\left(s\right)\, d\sigma_X^0\left(s\right), \tag{6.49}
\end{align*}
and it follows at once that $\alpha \pi_X + \beta \pi_Y + \gamma = 0$.

6.6.1 Two-point autocratic strategies

Corollary 7. Let $\alpha, \beta, \gamma \in \mathbb{R}$ and suppose that there exist $s_1, s_2 \in S_X$ (i.e. two discrete actions) and $\varphi > 0$ such that
\[
\frac{-1 - \left(1-\lambda\right) p_0}{\varphi} \leqslant \alpha u_X\left(s_1, y\right) + \beta u_Y\left(s_1, y\right) + \gamma \leqslant -\left(1-\lambda\right) p
\]
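The enforcement mechanism of Theorem 9 can be checked numerically in the simplest two-action setting, in the spirit of these two-point strategies. The sketch below uses the donation game and purely illustrative parameter choices ($b$, $c$, $\lambda$, $\psi$, $\gamma$, $p_0$ are not values from the text): with two actions the integrals in Eq. (6.47) reduce to convex combinations of $\psi(C)$ and $\psi(D)$, so Eq. (6.47) can be solved for a memory-one strategy $\sigma_X[x,y]$ that pins $\pi_Y$ at $-\gamma$ no matter how $Y$ plays.

```python
# Numerical check of Theorem 9 for the donation game with actions C, D,
# where u_X = b*[y = C] - c*[x = C].  All parameters are illustrative.

ACTIONS = ("C", "D")
PAIRS = [(x, y) for x in ACTIONS for y in ACTIONS]
b, c = 2.0, 1.0
lam = 0.9
alpha, beta, gamma = 0.0, 1.0, -0.5  # enforce pi_Y = -gamma = 0.5
psi = {"C": 5.0, "D": 0.0}           # a bounded psi on S_X
p0 = 0.5                             # sigma_X^0: play C with probability p0

def u_X(x, y): return b * (y == "C") - c * (x == "C")
def u_Y(x, y): return b * (x == "C") - c * (y == "C")

# Solve Eq. (6.47) for p(x, y) = sigma_X[x, y]({C}).
E0 = p0 * psi["C"] + (1 - p0) * psi["D"]
def p_react(x, y):
    lhs = alpha * u_X(x, y) + beta * u_Y(x, y) + gamma
    p = (psi[x] - lam * psi["D"] - (1 - lam) * E0 - lhs) / (lam * (psi["C"] - psi["D"]))
    assert 0.0 <= p <= 1.0  # the chosen psi and gamma keep this a probability
    return p

def payoffs_vs(sigma_Y_C, T=600):
    """Truncation of Eq. (6.31): returns (pi_X, pi_Y).  sigma_Y_C maps the
    last action pair to Y's probability of playing C; Y opens with C with
    probability 1/2 (another arbitrary choice)."""
    nu = {(x, y): (p0 if x == "C" else 1 - p0) * 0.5 for (x, y) in PAIRS}
    piX = piY = 0.0
    for t in range(T):
        piX += (1 - lam) * lam**t * sum(m * u_X(x, y) for (x, y), m in nu.items())
        piY += (1 - lam) * lam**t * sum(m * u_Y(x, y) for (x, y), m in nu.items())
        new = dict.fromkeys(PAIRS, 0.0)
        for (x, y), m in nu.items():
            px, py = p_react(x, y), sigma_Y_C[(x, y)]
            for xp in ACTIONS:
                for yp in ACTIONS:
                    new[(xp, yp)] += (m * (px if xp == "C" else 1 - px)
                                        * (py if yp == "C" else 1 - py))
        nu = new
    return piX, piY

# Against unconditional cooperation and against a mostly-defecting strategy,
# pi_Y comes out at -gamma = 0.5, as Eq. (6.48) requires.
for sY in ({p: 1.0 for p in PAIRS}, {p: 0.25 for p in PAIRS}):
    piX, piY = payoffs_vs(sY)
    assert abs(piY - 0.5) < 1e-6
```

Here the truncation at $T = 600$ is harmless because the tail of the discounted sum is bounded by $\lambda^T \sup|u_Y|$, mirroring the boundedness argument of Eq. (6.30).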