{"Affiliation":[{"label":"Affiliation","value":"Science, Faculty of","attrs":{"lang":"en","ns":"http:\/\/vivoweb.org\/ontology\/core#departmentOrSchool","classmap":"vivo:EducationalProcess","property":"vivo:departmentOrSchool"},"iri":"http:\/\/vivoweb.org\/ontology\/core#departmentOrSchool","explain":"VIVO-ISF Ontology V1.6 Property; The department or school name within institution; Not intended to be an institution name."},{"label":"Affiliation","value":"Computer Science, Department of","attrs":{"lang":"en","ns":"http:\/\/vivoweb.org\/ontology\/core#departmentOrSchool","classmap":"vivo:EducationalProcess","property":"vivo:departmentOrSchool"},"iri":"http:\/\/vivoweb.org\/ontology\/core#departmentOrSchool","explain":"VIVO-ISF Ontology V1.6 Property; The department or school name within institution; Not intended to be an institution name."}],"AggregatedSourceRepository":[{"label":"AggregatedSourceRepository","value":"DSpace","attrs":{"lang":"en","ns":"http:\/\/www.europeana.eu\/schemas\/edm\/dataProvider","classmap":"ore:Aggregation","property":"edm:dataProvider"},"iri":"http:\/\/www.europeana.eu\/schemas\/edm\/dataProvider","explain":"A Europeana Data Model Property; The name or identifier of the organization who contributes data indirectly to an aggregation service (e.g. 
Europeana)"}],"Campus":[{"label":"Campus","value":"UBCV","attrs":{"lang":"en","ns":"https:\/\/open.library.ubc.ca\/terms#degreeCampus","classmap":"oc:ThesisDescription","property":"oc:degreeCampus"},"iri":"https:\/\/open.library.ubc.ca\/terms#degreeCampus","explain":"UBC Open Collections Metadata Components; Local Field; Identifies the name of the campus from which the graduate completed their degree."}],"Creator":[{"label":"Creator","value":"Jiang, Xin","attrs":{"lang":"en","ns":"http:\/\/purl.org\/dc\/terms\/creator","classmap":"dpla:SourceResource","property":"dcterms:creator"},"iri":"http:\/\/purl.org\/dc\/terms\/creator","explain":"A Dublin Core Terms Property; An entity primarily responsible for making the resource.; Examples of a Contributor include a person, an organization, or a service."}],"DateAvailable":[{"label":"DateAvailable","value":"2012-01-09T18:03:27Z","attrs":{"lang":"en","ns":"http:\/\/purl.org\/dc\/terms\/issued","classmap":"edm:WebResource","property":"dcterms:issued"},"iri":"http:\/\/purl.org\/dc\/terms\/issued","explain":"A Dublin Core Terms Property; Date of formal issuance (e.g., publication) of the resource."}],"DateIssued":[{"label":"DateIssued","value":"2011","attrs":{"lang":"en","ns":"http:\/\/purl.org\/dc\/terms\/issued","classmap":"oc:SourceResource","property":"dcterms:issued"},"iri":"http:\/\/purl.org\/dc\/terms\/issued","explain":"A Dublin Core Terms Property; Date of formal issuance (e.g., publication) of the resource."}],"Degree":[{"label":"Degree","value":"Doctor of Philosophy - PhD","attrs":{"lang":"en","ns":"http:\/\/vivoweb.org\/ontology\/core#relatedDegree","classmap":"vivo:ThesisDegree","property":"vivo:relatedDegree"},"iri":"http:\/\/vivoweb.org\/ontology\/core#relatedDegree","explain":"VIVO-ISF Ontology V1.6 Property; The thesis degree; Extended Property specified by UBC, as per https:\/\/wiki.duraspace.org\/display\/VIVO\/Ontology+Editor%27s+Guide"}],"DegreeGrantor":[{"label":"DegreeGrantor","value":"University of 
British Columbia","attrs":{"lang":"en","ns":"https:\/\/open.library.ubc.ca\/terms#degreeGrantor","classmap":"oc:ThesisDescription","property":"oc:degreeGrantor"},"iri":"https:\/\/open.library.ubc.ca\/terms#degreeGrantor","explain":"UBC Open Collections Metadata Components; Local Field; Indicates the institution where thesis was granted."}],"Description":[{"label":"Description","value":"In the last decade, there has been much research at the interface of computer science and game theory. One important class of problems at this interface is the computation of solution\nconcepts (such as Nash equilibrium or correlated equilibrium) of a finite game. In order to take advantage of the highly-structured utility functions in games of practical interest, it\nis important to design compact representations of games as well as efficient algorithms for computing solution concepts on such representations. In this thesis I present several novel contributions in this direction:\n\nThe design and analysis of Action-Graph Games (AGGs), a fully-expressive modeling language for representing simultaneous-move games.\nWe propose a polynomial-time algorithm for computing expected utilities given arbitrary mixed strategy profiles, and leverage the algorithm to achieve exponential speedups of existing algorithms for computing Nash equilibria. Designing efficient algorithms for computing pure-strategy Nash equilibria in AGGs. For symmetric AGGs with bounded treewidth our algorithm runs in polynomial time.\n\nExtending the AGG framework beyond simultaneous-move games. We propose Temporal Action-Graph Games (TAGGs) for representing dynamic games and\n Bayesian Action-Graph Games (BAGGs) for representing Bayesian games. For certain subclasses of TAGGs and BAGGs we gave efficient algorithms for equilibria that achieve exponential speedups over existing approaches.\n\nEfficient computation of correlated equilibria. 
In a landmark paper, Papadimitriou and Roughgarden described a polynomial-time algorithm (\"Ellipsoid Against Hope\") for computing sample correlated equilibria of compactly-represented games. Recently, Stein, Parrilo and Ozdaglar showed that this algorithm can fail to find an exact correlated equilibrium. We present a variant of the Ellipsoid Against Hope algorithm that guarantees the polynomial-time identification of exact correlated equilibrium.\nEfficient computation of optimal correlated equilibria. We show that the polynomial-time solvability of what we call the deviation-adjusted social welfare problem is a sufficient condition for the tractability of the optimal correlated equilibrium problem.","attrs":{"lang":"en","ns":"http:\/\/purl.org\/dc\/terms\/description","classmap":"dpla:SourceResource","property":"dcterms:description"},"iri":"http:\/\/purl.org\/dc\/terms\/description","explain":"A Dublin Core Terms Property; An account of the resource.; Description may include but is not limited to: an abstract, a table of contents, a graphical representation, or a free-text account of the resource."}],"DigitalResourceOriginalRecord":[{"label":"DigitalResourceOriginalRecord","value":"https:\/\/circle.library.ubc.ca\/rest\/handle\/2429\/39951?expand=metadata","attrs":{"lang":"en","ns":"http:\/\/www.europeana.eu\/schemas\/edm\/aggregatedCHO","classmap":"ore:Aggregation","property":"edm:aggregatedCHO"},"iri":"http:\/\/www.europeana.eu\/schemas\/edm\/aggregatedCHO","explain":"A Europeana Data Model Property; The identifier of the source object, e.g. the Mona Lisa itself. This could be a full linked open date URI or an internal identifier"}],"FullText":[{"label":"FullText","value":"Representing and Reasoning with Large Games by Xin Jiang B. Science, University of British Columbia, 2003 M. 
Science, University of British Columbia, 2006 A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF Doctor of Philosophy in THE FACULTY OF GRADUATE STUDIES (Computer Science) The University Of British Columbia (Vancouver) December 2011 \u00a9 Xin Jiang, 2011 Abstract In the last decade, there has been much research at the interface of computer science and game theory. One important class of problems at this interface is the computation of solution concepts (such as Nash equilibrium or correlated equilibrium) of a finite game. In order to take advantage of the highly-structured utility functions in games of practical interest, it is important to design compact representations of games as well as efficient algorithms for computing solution concepts on such representations. In this thesis I present several novel contributions in this direction: The design and analysis of Action-Graph Games (AGGs), a fully-expressive modeling language for representing simultaneous-move games. We propose a polynomial-time algorithm for computing expected utilities given arbitrary mixed strategy profiles, and leverage the algorithm to achieve exponential speedups of existing algorithms for computing Nash equilibria. Designing efficient algorithms for computing pure-strategy Nash equilibria in AGGs. For symmetric AGGs with bounded treewidth our algorithm runs in polynomial time. Extending the AGG framework beyond simultaneous-move games. We propose Temporal Action-Graph Games (TAGGs) for representing dynamic games and Bayesian Action-Graph Games (BAGGs) for representing Bayesian games. For certain subclasses of TAGGs and BAGGs we gave efficient algorithms for equilibria that achieve exponential speedups over existing approaches. Efficient computation of correlated equilibria. 
In a landmark paper, Papadimitriou and Roughgarden described a polynomial-time algorithm (\u201cEllipsoid Against Hope\u201d) for computing sample correlated equilibria of compactly-represented games. Recently, Stein, Parrilo and Ozdaglar showed that this algorithm can fail to find an exact correlated equilibrium. We present a variant of the Ellipsoid Against Hope algorithm that guarantees the polynomial-time identification of exact correlated equilibrium. Efficient computation of optimal correlated equilibria. We show that the polynomial-time solvability of what we call the deviation-adjusted social welfare problem is a sufficient condition for the tractability of the optimal correlated equilibrium problem. Preface Certain chapters of this thesis are based on publications (or submissions to publications) by my collaborators and me (under the name Albert Xin Jiang). Per requirement of UBC Faculty of Graduate Studies, I describe here the relative contributions of all collaborators. Chapter 3 is based on the article Action-Graph Games by Albert Xin Jiang, Kevin Leyton-Brown and Navin Bhat, published in Games and Economic Behavior, Volume 71, Issue 1, January 2011, Pages 141\u2013173, Elsevier. Navin and Kevin first proposed Action-Graph Games without function nodes (called AGG-\/0s in this thesis), proposed an algorithm for computing expected utility for the symmetric case, and proposed an approach for computing sample Nash equilibria in symmetric AGG-\/0s, by adapting Blum et al. [2006]\u2019s approach for speeding up Govindan and Wilson\u2019s [2003] global Newton method. 
My main contributions include: 1) extending the basic AGG-\/0 representation by introducing function nodes and additive structure, yielding the more general representations AGGs with Function Nodes (AGG-FNs) and AGG-FNs with Additive Structure (AGG-FNAs); 2) proposing and implementing an algorithm for computing expected utility for general AGGs, and proving that it runs in polynomial time; 3) implementing software packages for game-theoretic analysis using AGGs, including programs that speed up existing algorithms for sample Nash Equilibria [Govindan and Wilson, 2003, van der Laan et al., 1987] by leveraging the expected utility algorithm; 4) carrying out computational experiments; 5) preparation of the manuscript. Kevin has played a supervisory role throughout the project. Chapter 4 is based on the paper Computing Pure Nash Equilibria in Symmetric Action Graph Games by Albert Xin Jiang and Kevin Leyton-Brown, published in the Proceedings of AAAI, 2007, although the chapter contains a significant amount of new material. My main contributions include: 1) identification of the research problem and the design of the overall approach; 2) working out the details of our algorithm and proving its correctness and running time; 3) preparation of the manuscript. Kevin has played a supervisory role throughout the project. Chapter 5 is based on the paper Temporal Action-Graph Games: A New Representation for Dynamic Games by Albert Xin Jiang, Kevin Leyton-Brown and Avi Pfeffer, published in the Proceedings of UAI, 2009. The identification and design of the overall research program is done via joint discussions by all three co-authors. My other contributions include: 1) working out the details of the Temporal Action-Graph Game representation and our algorithm for computing expected utility, and proving their properties; 2) implementing our algorithm and carrying out computational experiments; 3) preparation of a majority of the text in the manuscript. 
Kevin has played a supervisory role throughout the project. Chapter 6 is based on the paper Bayesian Action-Graph Games, published in the Proceedings of NIPS, 2010. The identification and design of the overall research program is done via joint discussions by both co-authors. My other contributions include: 1) working out the details of the Bayesian Action-Graph Game representation, our algorithm for computing expected utility and our approach for computing Bayes-Nash equilibrium, and proving their properties; 2) implementing our algorithm and carrying out computational experiments; 3) preparation of the manuscript. Kevin has played a supervisory role throughout the project. Chapter 7 is based on the paper Polynomial-time Computation of Exact Correlated Equilibrium in Compact Games by Albert Xin Jiang and Kevin Leyton-Brown, published in the Proceedings of ACM-EC, 2011. My main contributions include: 1) identification of the research program; 2) design of our algorithm and analysis of its properties; 3) preparation of the manuscript. Kevin has played a supervisory role throughout the project. Chapter 8 is based on the manuscript A General Framework for Computing Optimal Correlated Equilibria in Compact Games by Albert Xin Jiang and Kevin Leyton-Brown, published in the Proceedings of the Seventh Workshop on Internet and Network Economics (WINE), 2011. My main contributions include: 1) identification of the research program; 2) design of our algorithm and analysis of its properties; 3) preparation of the manuscript. Kevin has played a supervisory role throughout the project. Table of Contents Abstract ii Preface iv Table of Contents vii List of Figures xii Acknowledgments 
xvi 1 Introduction 1 2 A Brief Survey on the Computation of Solution Concepts 10 2.1 Representations of Games 11 2.1.1 Representing Complete-information Static Games 11 2.1.2 Representing Dynamic Games 17 2.1.3 Representing Games of Incomplete Information 18 2.2 Computation of Game-theoretic Solution Concepts 19 2.2.1 Computing Sample Nash Equilibria for Normal-Form Games 20 2.2.2 Computing Sample Nash Equilibria for Compact Representations of Static Games 27 2.2.3 Computing Sample Bayes-Nash Equilibria for Incomplete-information Static Games 31 2.2.4 Computing Sample Nash Equilibria for Dynamic Games 33 2.2.5 Questions about the Set of All Nash Equilibria of a Game 35 2.2.6 Computing Pure-Strategy Nash Equilibria 35 2.2.7 Computing Correlated Equilibrium 38 2.2.8 Computing Other Solution Concepts 41 2.3 Software 42 3 Action-Graph Games 43 3.1 Introduction 43 3.1.1 Our Contributions 43 3.2 Action Graph Games 45 3.2.1 Basic Action Graph Games 45 3.2.2 AGGs with Function Nodes 51 3.2.3 AGG-FNs with Additive Structure 58 3.3 Further Examples 61 3.3.1 A Job Market 61 3.3.2 Representing Anonymous Games as AGG-FNs 
62 3.3.3 Representing Polymatrix Games as AGG-FNAs 63 3.3.4 Congestion Games with Action-Specific Rewards 64 3.4 Computing Expected Payoff with AGGs 66 3.4.1 Computing Expected Payoff for AGG-\/0s 66 3.4.2 Computing Expected Payoff with AGG-FNs 77 3.4.3 Computing Expected Payoff with AGG-FNAs 81 3.5 Computing Sample Equilibria with AGGs 82 3.5.1 Complexity of Finding a Nash Equilibrium 83 3.5.2 Computing a Nash Equilibrium: The Govindan-Wilson Algorithm 84 3.5.3 Computing a Nash Equilibrium: The Simplicial Subdivision Algorithm 88 3.5.4 Computing a Correlated Equilibrium 89 3.6 Experiments 90 3.6.1 Software Implementation and Experimental Setup 90 3.6.2 Representation Size 92 3.6.3 Expected Utility Computation 93 3.6.4 Computing Payoff Jacobians 94 3.6.5 Finding a Nash Equilibrium Using Govindan-Wilson 96 3.6.6 Finding a Nash Equilibrium Using Simplicial Subdivision 97 3.6.7 Visualizing Equilibria on the Action Graph 100 3.7 Conclusions 102 4 Computing Pure-strategy Nash Equilibria in Action-Graph Games 104 4.1 Introduction 104 4.2 Preliminaries 106 4.2.1 AGGs 106 4.2.2 Complexity of Computing PSNE 107 4.3 Computing PSNE in AGGs with Bounded Number of Action Nodes 108 4.4 Computing PSNE in Symmetric AGGs 
110 4.4.1 Restricted Games and Partial Solutions 110 4.4.2 Combining Partial Solutions 112 4.4.3 Dynamic Programming via Characteristics 113 4.4.4 Algorithm for Symmetric AGGs with Bounded Treewidth 120 4.4.5 Finding PSNE 125 4.4.6 Computing Optimal PSNE 126 4.5 Beyond symmetric AGGs 128 4.5.1 Algorithm for k-Symmetric AGG-\/0s 128 4.5.2 General AGG-\/0s and the Augmented Action Graph 129 4.6 Conclusions and Open Problems 134 5 Temporal Action-Graph Games: A New Representation for Dynamic Games 136 5.1 Introduction 136 5.2 Representation 138 5.2.1 Temporal Action-Graph Games 138 5.2.2 Strategies 143 5.2.3 Expected Utility 144 5.2.4 The Induced MAID of a TAGG 146 5.2.5 Expressiveness 147 5.3 Computing Expected Utility 148 5.3.1 Exploiting Causal Independence 149 5.3.2 Exploiting Temporal Structure 150 5.3.3 Exploiting Context-Specific Independence 153 5.4 Computing Nash Equilibria 154 5.5 Experiments 155 5.6 Conclusions 156 6 Bayesian Action-Graph Games 159 6.1 Introduction 
159 6.2 Preliminaries 161 6.2.1 Complete-information interpretations 162 6.3 Bayesian Action-Graph Games 163 6.3.1 BAGGs with Function Nodes 166 6.4 Computing a Bayes-Nash Equilibrium 168 6.4.1 Computing Expected Utility in BAGGs 170 6.5 Experiments 173 7 Polynomial-time Computation of Exact Correlated Equilibrium in Compact Games 176 7.1 Introduction 176 7.1.1 Recent Uncertainty About the Complexity of Exact CE 177 7.1.2 Our Results 178 7.2 Preliminaries 181 7.3 The Ellipsoid Against Hope Algorithm 182 7.4 Our Algorithm 185 7.4.1 The Purified Separation Oracle 185 7.4.2 The Simplified Ellipsoid Against Hope Algorithm 188 7.5 Uncoupled Dynamics with Polynomial Communication Complexity 192 7.6 Computing Extensive-form Correlated Equilibria 194 7.7 Conclusion 197 8 A General Framework for Computing Optimal Correlated Equilibria in Compact Games 199 8.1 Introduction 199 8.2 Problem Formulation 203 8.2.1 Correlated Equilibrium 203 8.3 The Deviation-Adjusted Social Welfare Problem 
204 8.3.1 The Weighted Deviation-Adjusted Social Welfare Problem 207 8.3.2 The Coarse Deviation-Adjusted Social Welfare Problem 208 8.4 The Deviation-Adjusted Social Welfare Problem for Specific Representations 209 8.4.1 Reduced Forms 209 8.4.2 Linear Reduced Forms 213 8.4.3 Representations with Action-Specific Structure 216 8.5 Conclusion and Open Problems 221 Bibliography 225 Appendix 237 A Software 238 A.1 File Formats 238 A.1.1 The AGG File Format 239 A.1.2 The BAGG File Format 241 A.2 Solvers for finding Nash Equilibria 242 A.3 AGG Graphical User Interface 243 A.4 AGG Generators in GAMUT 244 A.5 Software Projects Under Development 244 List of Figures Figure 3.1 AGG-\/0 representation of the Ice Cream Vendor game. 48 Figure 3.2 AGG-\/0 representation of a 3-player, 3-action graphical game. 51 Figure 3.3 A 5\u00d76 Coffee Shop game: Left: the AGG-\/0 representation without function nodes (looking at only the neighborhood of \u03b1). Middle: we introduce two function nodes, p\u2032 (bottom) and p\u2032\u2032 (top). Right: \u03b1 now has only 3 neighbors. 57 Figure 3.4 Left: a two-player congestion game with three facilities. The actions are shown as ovals containing their respective facilities. Right: the AGG-FNA representation of the same congestion game. 
60 Figure 3.5 AGG-\/0 representation of the Job Market game. 61 Figure 3.6 AGG-FN representation of a game with agent-specific utility functions. 63 Figure 3.7 AGG-FNA representation of a 3-player polymatrix game. Function node UAB represents player A\u2019s payoffs in his bimatrix game against B, UBA represents player B\u2019s payoffs in his bimatrix game against A, and so on. To avoid clutter we do not show the edges from the action nodes to the function nodes in this graph. Such edges exist from A and B\u2019s actions to UAB and UBA, from A and C\u2019s actions to UAC and UCA, and from B and C\u2019s actions to UBC and UCB. 64 Figure 3.8 Projection of the action graph. Left: action graph of the Ice Cream Vendor game. Right: projected action graph and action sets with respect to the action C1. 69 Figure 3.9 Representation sizes of coffee shop games. Top left: 5\u00d75 grid with 3 to 16 players (log scale). Top right: AGG only, 5\u00d75 grid with up to 80 players (log scale). Bottom left: 4-player r\u00d75 grid, r varying from 3 to 15 (log scale). Bottom right: AGG only, up to 80 rows. 93 Figure 3.10 Running times for payoff computation in the Coffee Shop game. Top left: 5\u00d75 grid with 3 to 16 players. Top right: AGG only, 5\u00d75 grid with up to 80 players. Bottom left: 4-player r\u00d75 grid, r varying from 3 to 15. Bottom right: AGG only, up to 80 rows. 95 Figure 3.11 Job Market games, varying numbers of players. Left: comparing representation sizes. Right: running times for computing 1000 expected utilities. 95 Figure 3.12 Govindan-Wilson algorithm; Coffee Shop game. Top row: 4\u00d74 grid, varying number of players. 
Bottom row: 4-player r\u00d74 grid, r varying from 3 to 12. For each row, the left figure shows ratio of running times; the right figure shows logscale plot of CPU times for the AGG-based implementation. The dashed horizontal line indicates the one day cutoff time. 98 Figure 3.13 Govindan-Wilson algorithm; Job Market games, varying numbers of players. Left: ratios of running times. Right: logscale plot of CPU times for the AGG-based implementation. 99 Figure 3.14 Ratios of running times of simplicial subdivision algorithms on Coffee Shop games. Left: 4\u00d74 grid with 3 to 4 players. Right: 3-player r\u00d73 grid, r varying from 4 to 7. 99 Figure 3.15 Simplicial subdivision algorithm; symmetric AGG-\/0s on small world graphs. Top row: 5 actions, varying number of players. Bottom row: 4 players, varying number of actions. The left figures show ratios of running times; the right figures show logscale plots of CPU times for the AGG-based implementation. The dashed horizontal line indicates the one day cutoff time. 100 Figure 3.16 Visualization of a Nash equilibrium of a 16-player Coffee Shop game on a 4\u00d74 grid. The function nodes and the edges of the action graph are not shown. The action node at the bottom corresponds to not entering the market. 101 Figure 3.17 Visualization of a Nash equilibrium of a Job Market game with 20 players. Left: expected configuration of the equilibrium. Right: two mixed equilibrium strategies. 102 Figure 3.18 Visualization of a Nash equilibrium of an Ice Cream Vendor game. 103 Figure 4.1 The road game with m = 8 and the action graph of its AGG representation. 111 Figure 4.2 Restricted game on the rightmost 6 actions. 
111 Figure 4.3 A partial solution on the rightmost 6 actions describes the configuration over these 8 actions. 112 Figure 4.4 Characteristic function chP,Q for the rightmost 6 actions with P = {T6,B6} and Q = {T5,T6,T7,B5,B6,B7}. 118 Figure 4.5 An action graph G. 120 Figure 4.6 The primal graph G\u2032. 120 Figure 4.7 Tree decomposition of und(G). 120 Figure 4.8 Tree decomposition of primal graph G\u2032, satisfying the conditions of Lemma 4.4.11. 120 Figure 5.1 Induced BN of the TAGG of Example 5.1.1, with 2 time steps, 3 lanes, and 3 players per time step. Squares represent behavior strategy variables, circles represent action count variables, diamonds represent utility variables and shaded diamonds represent decision-payoff variables. To avoid cluttering the graph, we only show utility variables at time step 2 and a decision-payoff variable for one of the decisions. 146 Figure 5.2 The transformed BN of the tollbooth game from Figure 5.1 with 3 lanes and 3 cars per time step. 150 Figure 5.3 Running times for expected utility computation. Triangle data points represent Approach 1 (induced BN), diamonds represent Approach 2 (transformed BN), squares represent Approach 3 (proposed algorithm). 155 Figure 6.1 Action graph for a symmetric Bayesian game with n players, 2 types, 2 actions per type. 166 Figure 6.2 BAGG representation for a Coffee Shop game with 2 types per player on a 1\u00d7k grid. 169 Figure 6.3 GW, varying players. 174 Figure 6.4 GW, varying locations. 
174 Figure 6.5 GW, varying types. 174 Figure 6.6 Simplicial subdivision. 174 Acknowledgments First and foremost I would like to thank my parents, for their unconditional love and support, for their wisdom, and for encouraging me to pursue my interests. I am the person I am because of them, and I am very lucky to have them as parents. Kevin Leyton-Brown has been my advisor since my MSc degree. He has introduced me to game theory, and mentored me through all the research projects described in this thesis. I am eternally grateful to him for being a great teacher and communicator, for showing me his research vision yet giving me the freedom to explore and find my research topics, for helping me refine my half-formed ideas, for giving me concrete advice and pushing me to be better at all aspects of being a researcher, and for the career opportunities he introduced me to. I can honestly say that I really enjoyed my Ph.D. experience. I would like to thank fellow members of Kevin\u2019s game theory group and my office mates, David Thompson, James Wright and Baharak Rastegari, for stimulating discussions on research and otherwise, and for the camaraderie. I have also had many enjoyable discussions with Chris Ryan while he was doing his Ph.D. in Operations Research at UBC, during which he introduced me to quite a few interesting mathematical concepts including algebraic geometry and generating functions. I would like to thank David Poole and Joel Friedman for serving on my supervisory committee, my university examiners Michael Friedlander and Sergei Severinov, and my external examiner David Parkes. They have given me very helpful feedbacks on my thesis. 
Many members of the algorithmic game theory research community have given me encouragement and help during my studies, I would like to especially mention Vince Conitzer, Christos Papadimitriou, Tim Roughgarden, Tuomas Sandholm, and Ted Turocy. I would also like to thank all my collaborators, some of which I have mentioned above: Kevin Leyton-Brown, Navin Bhat, Avi Pfeffer, Mohammad Ali Safari, Chris Ryan, Nando de Freitas, Michael Buro, David Thompson, James Wright, and Damien Bargiacchi. During my Ph.D. studies I was supported by UBC\u2019s University Graduate Fellowship for one year, the NSERC Canada Graduate Scholarship for three years, and partially by a Google Research Award \u201cAdvanced Computational Analysis of Position Auction Games\u201d. I would like to thank them for their financial support. Chapter 1 Introduction Game theory is a mathematical theory of games, interactions in which multiple autonomous agents, each with their own utility functions, act according to their own interests. Game theory has received a great deal of study, and is perhaps the dominant paradigm in microeconomics [e.g., Fudenberg and Tirole, 1991]. In the last decade, there has been much research at the interface of computer science and game theory [e.g., Nisan et al., 2007, Shoham and Leyton-Brown, 2009]. This interdisciplinary field has been named \u201calgorithmic game theory\u201d, \u201ccomputational economics\u201d, and \u201cmultiagent systems\u201d by various researchers. This recent interest in game theory by the computer science community has been partially motivated by the explosion in the popularity of the Internet, which is essentially a network of computers controlled by selfish agents. There is thus much recent effort to apply game theory to various subdomains of the Internet such as TCP\/IP routing, peer-to-peer sharing, auction environments including eBay and AdWords, and social networks. 
One fundamental class of computational problems in game theory is the computation of solution concepts of a finite game. Examples of solution concepts include Nash equilibrium and correlated equilibrium. Intuitively, these solution concepts are answers to the following type of question: what are the likely outcomes of the game, under certain models of the agents' rationality? Thus the task of computing these solution concepts can be understood, in the language of AI, as reasoning about the game. The goal is to be able to efficiently carry out such reasoning for real-world multiagent systems. One application of such game-theoretic reasoning is the development of autonomous agents that can act intelligently by taking into account the strategic behavior of other agents. Another application is to help the designer of a system predict its likely outcomes and optimize the parameters of the system to achieve preferred outcomes. Furthermore, some computer scientists argue that the complexity of these computational problems has implications for whether equilibria can be reached in practice. A famous quote by Kamal Jain is "if your laptop cannot find the equilibrium, neither can the market." The input to such computational problems is a description of the game. Most of the game theory literature presumes that simultaneous-action games will be represented in normal form. This is problematic because in many domains of interest the number of players and/or the number of actions per player is large. In the normal form representation, the game's payoff function is stored as a matrix with one entry for each player's payoff under each combination of all players' actions. As a result, the size of the representation grows exponentially with the number of players. A similar problem arises in dynamic games, for which the extensive form serves as the standard representation. For large games, it becomes infeasible to store the game in memory.
Even computations that require only time polynomial in the input size are impractical, because the input itself is exponentially large. Fortunately, most large games of practical interest have highly structured payoff functions, and thus it is possible to represent them compactly, by which we mean a representation that is exponentially smaller than its induced normal form. Intuitively, this helps to explain why people are able to reason about these games in the first place: we understand the payoffs in terms of simple relationships rather than in terms of enormous lookup tables. Of course, there are any number of ways of representing games compactly. For example, games of interest could be assigned short ID numbers. But we ultimately want to be able to compute solution concepts of the games, and we would like the running time of our algorithms to depend on the size of the compact representation rather than the size of the corresponding normal form. Can we design representations of games that are able to compactly encode a wide range of interesting games and are amenable to efficient computation? And how do we design efficient algorithms for computing solution concepts in these compactly represented games? These are the central questions I tackle in this thesis. Before discussing my contributions, I will first briefly summarize the relevant literature; I will give a more in-depth survey in Chapter 2. One thread of recent work in the literature has explored compact game representations (also called concise or succinct representations) that are able to succinctly describe games that exhibit certain types of structure. Examples of such representations for complete-information simultaneous-action games include anonymous games, graphical games [Kearns et al., 2001], and congestion games [Rosenthal, 1973]. Examples of structure include symmetry/anonymity, strict and action-specific independence, and additivity.
However, the existing representations either capture only a subset of these types of structure, or are able to represent only a subset of the games that exhibit a given structure. There is a lack of a general modeling language that is fully expressive (able to express arbitrary games) while also able to compactly encode utility functions exhibiting commonly encountered types of structure. Nash equilibrium (NE) is perhaps the most well-known and well-studied game-theoretic solution concept. There is a line of recent results from the computational complexity theory community on the hardness of various computational problems regarding Nash equilibria, perhaps most prominently the series of papers [Chen and Deng, 2006, Daskalakis et al., 2006b, Goldberg and Papadimitriou, 2006] establishing the PPAD-completeness of the problem of finding a sample mixed-strategy Nash equilibrium in normal-form games of two or more players. I take the view that although these hardness results are important for understanding the problems, they do not imply that practical algorithms cannot be built. For example, there have been great advances in the design and implementation of practical solvers for theoretically hard problems such as SAT and integer programming. In terms of algorithms for finding a Nash equilibrium, earlier literature from economics and operations research focused on algorithms for the normal form [e.g., Govindan and Wilson, 2003, van der Laan et al., 1987]. In the last decade, with more compact game representations being proposed, there have been more efforts from the computer science community on algorithms for compact representations. Such efforts can roughly be divided into two categories, "black-box" approaches and "special-purpose" approaches. A black-box algorithm requires certain subroutines provided by the representation in order to work, but otherwise treats the representation as a black box.
Examples include efforts to adapt algorithms designed for the normal form to compact representations [Bhat and Leyton-Brown, 2004, Blum et al., 2006]. The computation of expected utility has emerged as a key subtask required by many black-box algorithms. The ability to carry out this computation efficiently has become an important design criterion for compact representations. Fortunately, most existing representations admit polynomial-time algorithms for expected utility. The existing black-box approaches address the problem of finding a sample Nash equilibrium; while this problem is very important, we are often interested in questions regarding the set of equilibria, such as finding the optimal equilibrium. On the other hand, a special-purpose approach tries to exploit certain specific structure of the game, and is thus specific to the representation. Although not as general as the black-box approach, a special-purpose approach can often identify tractable subclasses of games even when the general case is hard; furthermore, it can sometimes compute a concise description of the set of equilibria, allowing us to, e.g., compute the optimal equilibrium. Examples include algorithms for computing pure-strategy Nash equilibria for tree graphical games [Daskalakis and Papadimitriou, 2006, Gottlob et al., 2005] and singleton congestion games [Ieong et al., 2005], and for computing mixed-strategy Nash equilibria for symmetric games [Papadimitriou and Roughgarden, 2005] and anonymous games [Daskalakis and Papadimitriou, 2007]. In terms of software implementations, the GAMBIT [McKelvey et al., 2006] package contains many of the existing algorithms for the normal form and the extensive form.
There is a relative lack of publicly available implementations of algorithms for compact representations, except for the Gametracer [Blum et al., 2002] package, which provides implementations of black-box adaptations of two of Govindan and Wilson's algorithms [Govindan and Wilson, 2003, 2004] for finding a sample Nash equilibrium. In summary, although there have been many advances in the theoretical understanding of how certain types of structure in games can be exploited for efficient computation, the lack of a general representation and of publicly available software implementations for structured games has meant that the computational analysis of large games has not become practical. Much of this thesis can be understood as my efforts to address these problems. Below I give an outline of my contributions, including the design of game representations that can capture a wide variety of computation-friendly structure, novel algorithms for computing sample equilibria as well as optimal equilibria in compact games, and software implementations of tools for modeling and reasoning about structured games. In Chapter 3 I present work (joint with Kevin Leyton-Brown and Navin Bhat) regarding action-graph games (AGGs), a compact representation of complete-information simultaneous-action games first proposed by Bhat and Leyton-Brown [2004]. We make several contributions that significantly extend Bhat and Leyton-Brown's [2004] original work. First, we extended the original definition of AGGs by introducing function nodes and additive utility functions, capturing a wider variety of utility structure.
The resulting AGG representation is a fully expressive modeling language that both extends and unifies previous approaches: it can compactly express games with structure such as strict or context-specific independence, anonymity, and additivity; it can be used to compactly encode all games that are compact when represented as graphical games, symmetric games, anonymous games, congestion games, and polymatrix games, as well as additional realistic games that would take exponential space to represent using these existing representations. Second, we gave a polynomial-time algorithm for the important task of computing expected utility for AGGs, which then allows us to speed up existing normal-form-based equilibrium-finding algorithms, including Govindan and Wilson's [2003] Global Newton Method and the simplicial subdivision algorithm of van der Laan et al. [1987]. Third, we implemented and made available software tools for constructing, visualizing, and reasoning with AGGs. We present results of experiments showing that using AGGs leads to a dramatic increase in the size of games accessible to computational analysis. Pure-strategy Nash equilibrium (PSNE) is a more restricted concept than Nash equilibrium, and has certain theoretically and practically attractive properties. In Chapter 4 I present work (joint with Kevin Leyton-Brown) on computing pure-strategy Nash equilibria for AGGs. Unlike our black-box approach in Chapter 3 for computing equilibria, here we use a special-purpose approach that exploits the graph-theoretic properties of the action graph. In particular, we propose a dynamic-programming algorithm that constructs equilibria of the game from equilibria of restricted games played on subgraphs of the action graph. If the game is symmetric and the action graph has bounded treewidth, our algorithm determines the existence of a pure-strategy Nash equilibrium in polynomial time. We also extend our approach to certain classes of asymmetric AGGs.
Just as AGGs unify and extend existing representations, our approach can be understood as a generalization of existing special-purpose approaches for representations including singleton congestion games [Ieong et al., 2005] and graphical games [Daskalakis and Papadimitriou, 2006, Gottlob et al., 2005]. So far we have focused on representing and reasoning with simultaneous-action games. On the other hand, many multi-agent interactions involve decisions made sequentially over time; such situations are modeled as dynamic games in game theory. The standard representation for dynamic games, the extensive form, is inefficient for large, structured games, while the state-of-the-art compact representation, multi-agent influence diagrams (MAIDs), captures only strict utility independence structure. In Chapter 5 I present work (joint with Kevin Leyton-Brown and Avi Pfeffer) in which we propose temporal action-graph games (TAGGs), an extension of AGGs that can compactly represent dynamic games exhibiting a wide range of structure, including anonymity and context-specific utility independencies. We also show that TAGGs can be understood as indirect MAID encodings in which many deterministic chance nodes are introduced. We provide an efficient algorithm for computing expected utility for TAGGs, and show both theoretically and empirically that our approach improves significantly on MAIDs. Games of incomplete information, or Bayesian games, are an important game-theoretic model in which players are uncertain about the utilities of the game. Despite their many applications in economics, there are relatively few results on the computational aspects of Bayesian games, such as compact representations and practical algorithms for computing solution concepts like Bayes-Nash equilibria. In Chapter 6 we extend AGGs to the incomplete-information setting and present Bayesian action-graph games (BAGGs), a compact representation for Bayesian games.
BAGGs can represent arbitrary Bayesian games, and furthermore can compactly express Bayesian games exhibiting commonly encountered types of structure, including symmetry, action- and type-specific utility independence, and probabilistic independence of type distributions. We provide an algorithm for computing expected utility in BAGGs, and discuss conditions under which the algorithm runs in polynomial time. Sample Bayes-Nash equilibria of BAGGs can be computed by adapting existing algorithms for complete-information normal-form games and leveraging our expected utility algorithm. First proposed by Aumann [1974, 1987], correlated equilibrium (CE) is another important solution concept. In a landmark paper, Papadimitriou and Roughgarden [2008] described a polynomial-time black-box algorithm ("Ellipsoid Against Hope") for computing sample correlated equilibria of concisely represented simultaneous-move games. Recently, Stein, Parrilo and Ozdaglar [2010] showed that this algorithm can fail to find an exact correlated equilibrium, but can be easily modified to efficiently compute approximate correlated equilibria. It had remained an open problem to determine whether the algorithm can be modified to compute an exact correlated equilibrium. In Chapter 7 we show that it can, presenting a variant of the Ellipsoid Against Hope algorithm that guarantees the polynomial-time identification of an exact correlated equilibrium. Moreover, our algorithm is the first to tractably compute correlated equilibria with polynomial-sized supports; such correlated equilibria are more natural solutions than the mixtures of product distributions produced previously, and have several advantages, including requiring fewer bits to represent, being easier to sample from, and being easier to verify. However, since in general there can be an infinite number of correlated equilibria in a game, finding an arbitrary one is of limited value.
In Chapter 8 we focus on the problem of computing a correlated equilibrium that optimizes some objective (e.g., social welfare). Papadimitriou and Roughgarden [2008] gave a sufficient condition for the tractability of the problem; however, it applies only to a subset of existing representations. We propose a different algorithmic approach for the optimal CE problem that applies to all compact representations, and give a sufficient condition that generalizes Papadimitriou and Roughgarden's condition. In particular, we reduce the optimal CE problem to the deviation-adjusted social welfare problem, a combinatorial optimization problem closely related to the optimal social welfare outcome problem. Our algorithm can be understood as an instance of the black-box approach, with the computation of the deviation-adjusted social welfare problem as the key subroutine provided by the game representation. This framework allows us to identify new classes of games for which the optimal CE problem is tractable, including graphical polymatrix games on tree graphs. We also study the problem of computing the optimal coarse correlated equilibrium, a solution concept closely related to CE. Using a similar approach we derive a sufficient condition for this problem, and use it to prove that the problem is tractable for singleton congestion games. In Appendix A I describe software packages we implemented and made available at http://agg.cs.ubc.ca. Taken together, this thesis presents several basic components of an algorithmic framework for the computational analysis of large games: compact representations for complete-information and incomplete-information simultaneous-action games as well as dynamic games, a collection of implemented algorithms for computing sample Nash and correlated equilibria given such games, and some theoretical foundations for computing PSNE and optimal correlated equilibria.
These are parts of a larger ongoing effort by our research group that aims to apply computational game-theoretic analysis to real-world systems, especially the design and analysis of market mechanisms such as auctions. Such mechanism design problems have traditionally been attacked via purely analytical means, but computational analysis allows us to tackle settings for which theoretical analysis is difficult or impossible. Position auctions for advertising slots, such as the Generalized Second-Price auction used by Google AdWords, have received much recent interest from computer scientists and economists. Thompson and Leyton-Brown [2009] were able to use AGGs to compactly represent complete-information position auctions and compute their Nash equilibria, which allowed them to analyze the economic properties of such auctions, such as revenue and efficiency. Building on their work, I am currently working with David and Kevin to extend this analysis to incomplete-information models of position auctions using BAGGs. Finally, I mention a couple of papers on related topics that I co-authored but do not include in this thesis. In [Jiang and Safari, 2010], Mohammad Ali Safari and I analyzed the problem of deciding the existence of pure-strategy Nash equilibria for graphical games on restricted classes of graphs, and showed that the problem is solvable in polynomial time if and only if the class of graphs has bounded treewidth (after iterated removal of sinks). We proved our result by applying Grohe's characterization of the complexity of homomorphism problems. This result illustrated a limitation of a class of graph-based special-purpose approaches that includes the algorithm of Chapter 4: it cannot be extended much beyond bounded-treewidth graphs. It influenced my later focus on more general approaches such as those in Chapters 7 and 8.
In [Ryan et al., 2010], Chris Ryan, Kevin Leyton-Brown and I analyzed the problem of computing pure-strategy Nash equilibria in symmetric games whose utilities are compactly represented, such that the number of players can be exponential in the representation size. We showed that if the utility functions are represented as piecewise-linear functions, there exist polynomial-time algorithms for finding a pure-strategy Nash equilibrium and for counting the number of equilibria. Our approach made use of the rational generating function method developed by Barvinok and Woods. I do not include these papers here because they do not fit the focus of the thesis.

Chapter 2 A Brief Survey on the Computation of Solution Concepts

In this chapter we give a brief survey of the economics and computer science literature on the computation of game-theoretic solution concepts, focusing on Nash equilibrium and correlated equilibrium. There have been several surveys on various aspects of this topic: von Stengel [2002] focused on two-player games; McKelvey and McLennan [1996] focused on algorithms for the normal form; Papadimitriou [2007] focused on complexity results. In this survey we give emphasis to the topics most relevant to this thesis, i.e., results that are relevant to large, structured games. The goal of this chapter is to present a bird's-eye view of the state of the art. We will largely follow the narrative outlined in Chapter 1. In Section 2.1 we look at representations of games and the types of structure they capture. In Section 2.2 we look at algorithmic and complexity-theoretic results, with emphasis on algorithms for compact representations. In Section 2.3 we survey software packages for game-theoretic modeling and computation.

2.1 Representations of Games

A game is a mathematical model of interaction among self-interested agents.
Informally, to specify a game we need to specify a set of agents (also known as players), a set of strategies for each agent, and a utility function for each agent that assigns a utility value (also known as a payoff) to each outcome of the game. Such models can be further divided into complete-information static games, incomplete-information static games and dynamic games. A game representation is a data structure that stores all information needed to specify a game. An instance of a representation is a game encoded in that representation. Thus it is often useful to think of a game representation as a class (or type) in the language of object-oriented software engineering, and an instance of a representation as an object of that class. Then the size of a representation is the amount of data required to specify a game instance (i.e., initialize an object) of that representation. In this section we survey the existing literature on representing games. Section 2.1.1 focuses on representing complete-information static games; Section 2.1.2 focuses on representing dynamic games; Section 2.1.3 focuses on representing incomplete-information games.

2.1.1 Representing Complete-information Static Games

In static games, also known as simultaneous-move games, each agent chooses a strategy simultaneously (e.g., Rock-Paper-Scissors). By complete information we mean that each agent knows the utility functions of all agents.

Definition 2.1.1. A complete-information static game is a tuple (N, {Ai}i∈N, {ui}i∈N) where
• N = {1, . . . , n} is the set of agents;
• for each agent i, Ai is the nonempty set of i's actions (or pure strategies). We denote by ai ∈ Ai one of agent i's actions. An action profile (or pure-strategy profile) a = (a1, . . . , an) ∈ ∏i∈N Ai is a tuple of actions of the n agents.
We also denote by a−i the (n−1)-tuple of actions taken by the agents other than i under the action profile a.[1]
• ui : ∏j∈N Aj → R is i's utility function, which specifies i's utility given any action profile.

A game representation is fully expressive if it can represent arbitrary games. We say a game representation has polynomial type [Daskalakis et al., 2006a] if the number of players and the number of actions for each player are bounded by polynomials of the representation size. For example, if the set of players and the sets of actions are encoded explicitly, then the representation has polynomial type. This is the case for all representations of static games discussed in this section.

Normal Form

A normal form representation of a game uses a multi-dimensional matrix Ui, with one entry per action profile in ∏j∈N Aj, to represent each utility function ui. The size of this representation is approximately n∏j∈N |Aj|, which is O(nm^n) where m = maxi∈N |Ai|. Two-player normal-form games are also called bimatrix games, since the utility functions of such a game can be specified by two |A1| × |A2| matrices. Although the normal form is fully expressive, the size of the representation grows exponentially in the number of players. As a result, the normal form is unsuitable for representing large systems. Although several computational tasks, such as finding pure Nash equilibria and computing expected payoffs under mixed strategies, take time polynomial in the size of the normal form representation, they are intractable for large games because the representation size itself is exponential.

Graphical Games

Fortunately, most real-world large games have structure that allows them to be represented compactly. A popular compact representation of games is graphical games, proposed by Kearns et al. [2001].
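Before turning to graphical games, the normal form's O(nm^n) cost can be made concrete in code. The following is a toy sketch of my own (the three-player game and all names are illustrative, not from the thesis): it tabulates every entry of every Ui exactly as the normal form requires.

```python
import itertools

# Toy 3-player game in normal form: one payoff entry per player per action
# profile, mirroring the multi-dimensional matrix U_i. (Illustrative only.)
actions = {1: ["a", "b"], 2: ["a", "b"], 3: ["a", "b"]}

def make_normal_form(actions, u):
    """Tabulate u(i, profile) for every player i and every action profile."""
    players = sorted(actions)
    tables = {i: {} for i in players}
    for profile in itertools.product(*(actions[i] for i in players)):
        for i in players:
            tables[i][profile] = u(i, profile)
    return tables

# Example utility: a player gets 1 if all players chose the same action.
tables = make_normal_form(actions, lambda i, p: 1.0 if len(set(p)) == 1 else 0.0)

# Total storage is n * m^n entries: here 3 * 2^3 = 24.
print(sum(len(t) for t in tables.values()))  # 24
```

Doubling the number of players to six would already require 6 · 2^6 = 384 entries; the table, not the rule generating it, is what grows exponentially.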
A game is associated with a graph whose nodes correspond to the players of the game and whose edges correspond to payoff influence between players. In other words, each player's payoffs depend only on his actions and those of his neighbors in the graph. We call this kind of structure strict utility independence.

[1] While in complete-information static games the concepts of actions and pure strategies coincide, we will see that this is no longer the case for incomplete-information games and dynamic games. For the cases where pure strategies are distinct from actions, we denote pure strategies by si ∈ Si and pure-strategy profiles by s ∈ S. For complete-information static games, both the a-based notation and the s-based notation are commonly used in the literature to denote pure strategies/actions [e.g., Fudenberg and Tirole, 1991, Shoham and Leyton-Brown, 2009].

Definition 2.1.2. A graphical game is a tuple (G, {Ui}i∈N) where
• G = (N, E) is a directed graph,[2] with the set of vertices corresponding to the set of agents. E is a set of ordered tuples corresponding to the arcs of the graph, i.e., (i, j) ∈ E means there is an arc from i to j. Vertex j is a neighbor of i if (j, i) ∈ E.
• for each i ∈ N, a local utility function Ui : ∏j∈ν(i) Aj → R, where ν(i) = {i} ∪ {j ∈ N | (j, i) ∈ E} is the neighborhood of i.

Each local utility function Ui is represented as a matrix of size ∏j∈ν(i) |Aj|. Since the size of the local utility functions dominates the size of the graph G, the total size of the representation is O(nm^(I+1)) where I is the maximum in-degree of G. A graphical game (G, {Ui}) specifies a game (N, {Ai}, {ui}) where each Ai is given by the domain of agent i in Ui, and for all i ∈ N and all action profiles a we have ui(a) ≡ Ui(aν(i)), where aν(i) = (aj)j∈ν(i).
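The savings from strict utility independence can be sketched in code. The toy game and its graph below are hypothetical (not taken from the thesis): players 1 and 3 influence player 2, and each Ui is tabulated only over the joint actions of ν(i).

```python
import itertools

# Toy graphical game on the graph 1 -> 2 <- 3: players 1 and 3 influence
# player 2, so nu(2) = {1, 2, 3} while nu(1) = {1} and nu(3) = {3}.
actions = {1: [0, 1], 2: [0, 1], 3: [0, 1]}
edges = {(1, 2), (3, 2)}

def neighborhood(i):
    """nu(i) = {i} union {j : (j, i) in E}."""
    return sorted({i} | {j for (j, k) in edges if k == i})

def local_table(i, U):
    """Tabulate the local utility U_i over joint actions of nu(i) only."""
    nu = neighborhood(i)
    return {prof: U(dict(zip(nu, prof)))
            for prof in itertools.product(*(actions[j] for j in nu))}

U1 = local_table(1, lambda a: float(a[1]))  # player 1 simply prefers action 1
U3 = local_table(3, lambda a: float(a[3]))  # so does player 3
U2 = local_table(2, lambda a: float(a[2] == a[1] or a[2] == a[3]))  # 2 matches a neighbor

# Storage: 2 + 8 + 2 = 12 entries, versus 3 * 2^3 = 24 for the normal form.
print(len(U1) + len(U2) + len(U3))  # 12
```

The gap widens exponentially as players with small neighborhoods are added: each new player contributes m^(|ν(i)|) entries rather than multiplying the size of every table.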
Graphical games are fully expressive: an arbitrary game can be represented as a graphical game on a complete graph.

[2] Kearns et al. [2001] originally defined graphical games on undirected graphs, while some later authors [e.g., Daskalakis and Papadimitriou, 2006, Gottlob et al., 2005] used the directed graph version given here. An undirected graphical game is equivalent to a directed graphical game in which each edge {i, j} of the undirected graph is replaced by two directed edges (i, j) and (j, i). Thus the directed graph version is more general.

Symmetric Games and Anonymous Games

A game is symmetric when all players are identical and interchangeable. Formally, a game is symmetric if each player has an identical set of actions and, for every permutation of players π : {1, . . . , n} → {1, . . . , n}, ui(a1, . . . , an) = uπ(i)(aπ(1), . . . , aπ(n)). Symmetric games have been studied since the beginning of noncooperative game theory. For example, Nash proved that symmetric games always have symmetric mixed Nash equilibria [Nash, 1951]. In a symmetric game, a player's utility depends only on the player's chosen action and the configuration, which is the vector of integers specifying the number of players choosing each of the actions. We say such a utility function exhibits anonymity. As a result, symmetric games can be represented more compactly than the normal form: we only need to specify a utility value for each action and each configuration. For a symmetric game with n players and m actions per player, the number of configurations is (n+m−1 choose m−1). For fixed m, this grows like n^(m−1), in which case Θ(n^(m−1)) numbers are required to specify the game. A straightforward generalization of symmetric games is k-symmetric games, in which there are k equivalence classes of players.
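The configuration count above is easy to check numerically. A quick sketch (the particular n and m are illustrative) comparing configuration-based storage with the normal form:

```python
from math import comb

def num_configurations(n, m):
    """Number of ways n interchangeable players can be spread over m actions:
    C(n + m - 1, m - 1), the count of configurations in a symmetric game."""
    return comb(n + m - 1, m - 1)

n, m = 20, 3
print(num_configurations(n, m))      # 231 configurations

# A symmetric game needs on the order of one utility value per
# (action, configuration) pair, while the normal form stores n * m^n entries:
print(m * num_configurations(n, m))  # 693
print(n * m ** n)                    # 69735688020
```

Even for this modest game of 20 players and 3 actions, the symmetric encoding is roughly eight orders of magnitude smaller.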
Nash's [1951] result applies to a very general notion of symmetry: roughly, if a game is invariant under a permutation group, then there exists a Nash equilibrium strategy profile that is invariant under the same group. Specialized to k-symmetric games, it implies that they always have k-symmetric Nash equilibria, in which strategies within each class are identical. Any game is a k-symmetric game with k = n. On the other hand, when k is small compared to n, k-symmetric games can be compactly represented by specifying utilities for each k-configuration, where a k-configuration is a tuple of k configurations, one for each equivalence class. There has also been research [e.g., Brandt et al., 2009, Daskalakis and Papadimitriou, 2007] on a generalization of symmetric games called anonymous games, in which a given player's utility depends on his identity as well as the action chosen and the configuration. Anonymous games can be compactly represented in a similar manner, requiring Θ(n^m) numbers for fixed m.

Polymatrix Games

Polymatrix games are a class of games in which each player's utility is the sum of the utilities resulting from her bilateral interactions with each of the n−1 other players. This can be represented by specifying, for each pair of players i and j, a bimatrix game (two-player normal-form game) with sets of actions Ai and Aj. When a utility function can be expressed as a sum of other functions, as in polymatrix games, we say it exhibits additive structure.

Congestion Games

A congestion game [Rosenthal, 1973] is a tuple (N, M, (Ai)i∈N, (Kjk)j∈M,k≤n), where N = {1, . . . , n} is the set of players; M = {1, . . . , m} is a set of facilities (or resources); Ai is player i's set of actions, where each action ai ∈ Ai is a subset of the facilities: ai ⊆ M. Kjk is the cost of using facility j when a total of k players have chosen actions that include facility j.
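A minimal sketch of this cost structure (the facilities, costs, and actions below are made up for illustration): a player's total cost is the sum, over the facilities in her chosen action, of each facility's cost at its current usage level.

```python
# Toy congestion game: 2 facilities, 2 players. K[j][k] is the cost of
# facility j when k players use it; an action is a set of facilities.
K = {1: {1: 1.0, 2: 3.0},
     2: {1: 2.0, 2: 2.5}}

def users(j, profile):
    """#(j, a): how many players' chosen actions include facility j."""
    return sum(1 for action in profile if j in action)

def cost(i, profile):
    """Player i's total cost: sum of K_j(#(j, a)) over facilities j in a_i."""
    return sum(K[j][users(j, profile)] for j in profile[i])

profile = ({1}, {1, 2})  # player 0 uses facility 1; player 1 uses both
print(cost(0, profile))  # facility 1 has 2 users: K[1][2] = 3.0
print(cost(1, profile))  # K[1][2] + K[2][1] = 3.0 + 2.0 = 5.0
```

Note that player 0's cost depends on player 1 only through the usage count of facility 1, which is exactly the anonymity-plus-additivity structure described above.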
For notational convenience we also define Kj(k) ≡ Kjk. Let #(j, a) be the number of players that chose facility j under the action profile a. The total cost (or disutility) of player i under pure-strategy profile a = (ai, a−i) is the sum of the costs on each of the facilities in ai:

Costi(ai, a−i) = −ui(ai, a−i) = ∑j∈ai Kj(#(j, a)).    (2.1.1)

Only nm numbers are needed to specify the costs (Kjk)j∈M,k≤n. The representation also needs to specify the ∑i∈N |Ai| actions, each of which is a subset of M. If we use an m-bit binary string to represent each of these subsets, the total size of the congestion game representation is O(mn + m∑i∈N |Ai|). From the above definition we can see that congestion games exhibit a specific combination of anonymity and additive structure, plus a type of utility independence which we call context-specific independence (CSI). This means that the independence structure of player i's utility function (i.e., which subset of players affect player i's utility) changes depending on the context, which is a certain feature of the players' strategies (in this case the facilities included in i's chosen action). This is a more general type of independence structure than the strict independencies captured by graphical games. On the other hand, congestion games are not fully expressive.

Local Effect Games

Local Effect Games (LEGs), proposed by Leyton-Brown and Tennenholtz [2003], were the first graphical representation of games that focused on actions. In an LEG, we have a graph whose nodes correspond to the actions of the game. Each player can choose any one of the nodes. Define configuration as in symmetric games, and let the configuration over node k, denoted c(k), be the number of players choosing node k. There is a node function Uk associated with each node k, which maps the configuration of node k to a real number.
There is an edge function U_{k,m} associated with each edge (k, m) of the graph, which maps the configuration over nodes k and m to a real number. The utility of a player i choosing node k is the sum of the node function U_k and all incoming edge functions, evaluated at the current configuration c:

U_k(c(k)) + ∑_{m∈ν(k)} U_{m,k}(c(m), c(k)).

Like congestion games, LEGs also exhibit a combination of anonymity, additivity and context-specific independence structure. In this case the context for player i's utility independence is the action chosen by i. We call such structure action-specific independence. Unfortunately, like congestion games, LEGs are also not fully expressive.

Action-Graph Games

We have seen representations that capture various types of structure such as strict and context-specific independence, anonymity, and additivity. However, the existing representations either capture only a subset of these types of structure (graphical games, symmetric/anonymous games, polymatrix games), or are only able to represent a subset of games (symmetric/anonymous games, polymatrix games, congestion games, local-effect games). Action-graph games (AGGs), proposed by Bhat and Leyton-Brown [2004] and extended by Jiang et al. [2011], are a compact representation of simultaneous-move games that extends and unifies these previous approaches. AGGs are fully expressive (able to represent arbitrary games), can compactly express games whose utility functions exhibit action-specific independence, anonymity or additivity, and furthermore have nice computational properties. Chapter 3 gives a detailed discussion of AGGs.

2.1.2 Representing Dynamic Games

In dynamic games, agents move sequentially. When agents are able to perfectly observe all moves, dynamic games are said to exhibit perfect information; otherwise, dynamic games exhibit imperfect information.
The standard representation for dynamic games is the extensive form, which is a tree whose edges represent moves of players. Thus each node of the tree corresponds to a unique sequence of moves. Utilities for all players are specified at each leaf of the tree. Each internal node is assigned to a player, who can choose among the edges below that node. Imperfect information is specified using information sets: each player's set of internal nodes is partitioned into information sets, and a player is unable to distinguish nodes in any of his information sets. Randomness in the environment can be represented as nodes for the Nature (also known as Chance) player, who randomizes over his actions according to some fixed distribution. See, e.g., [Shoham and Leyton-Brown, 2009] for a formal definition of the extensive form.

Each extensive-form game can be transformed to an induced normal form, where each pure strategy of a player prescribes an action for each of her information sets. The number of pure strategies can be exponential in the size of the extensive form, so transforming to the induced normal form entails an exponential blowup in representation size. In this sense the extensive form can be seen as a compact representation of dynamic games. However, this representation requires us to specify utilities for every possible sequence of moves; when the game exhibits more structure than this, a more compact representation is needed.

For imperfect-information dynamic games, the most influential compact representation is multiagent influence diagrams (MAIDs) [Koller and Milch, 2003], which generalize single-agent influence diagrams to multiple agents. A MAID is represented as a directed graph, consisting of decision nodes, chance nodes and utility nodes. Each chance node corresponds to a random variable, with its domain and its probability distribution conditioned on its parents (nodes with incoming edges) specified by input.
Each decision node represents a decision (over a finite number of choices) taken by some player, given her observations, which are the instantiated values of the decision node's parents. Each utility node represents the payoff to some player, as a function of the instantiated values of the node's parents. MAIDs are compact when players' utility functions exhibit strict independencies, but are unable to compactly represent utility functions with anonymity or action-specific independencies. In Chapter 5 we discuss temporal action-graph games (TAGGs), which are a generalization of AGGs to the dynamic setting, and are able to compactly represent dynamic games with anonymity or context-specific utility independencies.

2.1.3 Representing Games of Incomplete Information

In many multi-agent situations, players are uncertain about the game being played. Harsanyi [1967] proposed games of incomplete information (or Bayesian games) as a mathematical model of such interactions.

Definition 2.1.3. A Bayesian game is a tuple (N, {A_i}_{i∈N}, Θ, P, {u_i}_{i∈N}) where N = {1, . . . , n} is the set of players; each A_i is player i's action set, and A = ∏_i A_i is the set of action profiles; Θ = ∏_i Θ_i is the set of type profiles, where Θ_i is player i's set of types; P : Θ → R is the type distribution and u_i : A × Θ → R is the utility function for player i.

As in the complete-information case, we denote by a_i an element of A_i, and by a = (a_1, . . . , a_n) an action profile. Furthermore we denote by θ_i an element of Θ_i, and by θ a type profile. The game is played as follows. A type profile θ = (θ_1, . . . , θ_n) ∈ Θ is drawn according to the distribution P. Each player i observes her type θ_i and, based on this observation, chooses from her set of actions A_i. Each player i's utility is then given by u_i(a, θ), where a is the resulting action profile.
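The order of play just described can be sketched directly; the game below, with its types, strategies, and payoffs, is a hypothetical two-player example, and the function name is my own:

```python
import random

def play_bayesian_game(P, strategies, utilities, rng=random):
    """One play of a Bayesian game: draw a type profile from P, let each
    player's pure strategy map her observed type to an action, and return
    the resulting utilities.

    P: dict mapping type profiles (tuples) to probabilities.
    strategies: list of dicts; strategies[i][theta_i] is i's action.
    utilities: list of functions u_i(a, theta).
    """
    profiles = list(P)
    theta = rng.choices(profiles, weights=[P[t] for t in profiles])[0]
    a = tuple(s[theta[i]] for i, s in enumerate(strategies))
    return tuple(u(a, theta) for u in utilities)

# Hypothetical example: player 2's type is "strong" or "weak"; player 1
# always enters, and player 2 fights only when strong.
P = {("only", "strong"): 0.5, ("only", "weak"): 0.5}
strategies = [{"only": "enter"}, {"strong": "fight", "weak": "yield"}]
u1 = lambda a, th: -1.0 if a[1] == "fight" else 1.0
u2 = lambda a, th: 0.0
print(play_bayesian_game(P, strategies, [u1, u2]))
```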
Intuitively, player i's type represents her private information about the game.

Bayesian games can be encoded as dynamic games with an initial move by Nature. Thus dynamic game representations such as the extensive form can be used to represent Bayesian games. This is also why we do not discuss dynamic games of incomplete information here, as they too can be encoded using existing dynamic game representations. However, incomplete-information static games do have independent interest apart from their dynamic-game interpretation, as they are more similar to complete-information static games than to dynamic games.

In specifying a Bayesian game, the space bottlenecks are the type distribution and the utility functions. Without additional structure, we cannot do better than representing each utility function u_i : A × Θ → R as a table and the type distribution as a table as well. We call this representation the Bayesian normal form. The size of this representation is n × ∏_{i=1}^n (|Θ_i| × |A_i|) + ∏_{i=1}^n |Θ_i|.

A Bayesian game can be converted to its induced normal form, which is a complete-information game with the same set of n players, in which each player's set of actions is her set of pure strategies in the Bayesian game. Each player's utility under an action profile is defined to be equal to the player's expected utility under the corresponding pure strategy profile in the Bayesian game. Alternatively, a Bayesian game can be transformed to its agent form, where each type of each player in the Bayesian game is turned into one player in a complete-information game. The sizes of the normal forms for these two complete-information interpretations are both exponential in the size of the Bayesian normal form.

Singh et al. [2004] proposed an incomplete-information version of the graphical game representation. Gottlob et al. [2007] considered a similar extension of the graphical game representation.
Like graphical games, such representations are limited in that they can only exploit strict utility independencies. In Chapter 6 we discuss Bayesian Action-Graph Games (BAGGs), a fully-expressive compact representation for Bayesian games that can compactly express Bayesian games exhibiting commonly encountered types of structure, including symmetry, action- and type-specific utility independence, and probabilistic independence of type distributions.

2.2 Computation of Game-theoretic Solution Concepts

Being able to compactly represent structured games is necessary, but often not sufficient, for our purposes. We would like to efficiently reason about these games by computing game-theoretic solution concepts such as Nash equilibrium and correlated equilibrium.

2.2.1 Computing Sample Nash Equilibria for Normal-Form Games

In this subsection we survey the literature on computing Nash equilibria in games represented in normal form. We start with the definition of Nash equilibrium and some theoretical results on the complexity of finding a sample Nash equilibrium, then look at existing algorithms, focusing on approaches for games with more than two players. In summary, the problem of computing one Nash equilibrium is PPAD-complete: polynomial-time algorithms are unlikely to exist. Unsurprisingly, existing approaches all require exponential time in the size of the normal form.

In a simultaneous-move game, a player i plays a pure strategy when she deterministically chooses an action from her action set A_i. She can also randomize over her actions, in which case we say that she plays a mixed strategy. Formally, let ϕ(X) denote the set of all probability distributions over a set X. Define the set of mixed strategies for i as Σ_i ≡ ϕ(A_i); then a mixed strategy σ_i ∈ Σ_i is a probability distribution over A_i.
Define the set of all mixed strategy profiles as Σ ≡ ∏_{i∈N} Σ_i; then a mixed strategy profile σ ∈ Σ is a tuple of the n players' mixed strategies. The expected utility (also known as expected payoff) of player i under the mixed strategy profile σ, denoted by u_i(σ), is

u_i(σ) = ∑_{a∈A} u_i(a) ∏_{j∈N} σ_j(a_j),  (2.2.1)

where σ_j(a_j) denotes the probability that j plays a_j. The support of a mixed strategy σ_j is the set of actions with positive probability under the distribution σ_j. A support profile is a tuple of all players' supports.

Given σ_{−i}, a tuple of mixed strategies of players other than i, we define the best response set of i to be the set of i's mixed strategies that maximize her expected utility:

BR_i(σ_{−i}) = arg max_{σ_i} u_i(σ_i, σ_{−i}).

Given σ_{−i}, the expected utility of i playing mixed strategy σ_i is a convex combination of the expected utilities of playing pure strategies in A_i, so at least one of the pure strategies must be a best response. Thus to check whether σ_i is a best response, we just need to compare its expected utility against the expected utilities of playing each of i's pure strategies.

One of the central solution concepts in game theory is Nash equilibrium.

Definition 2.2.1 (Nash Equilibrium). A mixed strategy profile σ is a Nash equilibrium if for all i ∈ N, σ_i ∈ BR_i(σ_{−i}).

Intuitively, a Nash equilibrium is strategically stable: no player can profit by unilaterally deviating from her current mixed strategy. From the above discussion on best responses, an equivalent condition for Nash equilibrium is that for all i ∈ N and all a_i ∈ A_i, u_i(σ) ≥ u_i(a_i, σ_{−i}), where by a slight abuse of notation we denote by (a_i, σ_{−i}) the mixed strategy profile in which i plays pure strategy a_i and the other players play according to σ.
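The expected-utility formula (2.2.1) and the pure-strategy test for best responses can be sketched as follows (a minimal, unoptimized version; the function names are my own):

```python
from itertools import product

def expected_utility(u, sigma):
    """Expected utility (2.2.1): u maps an action profile (a tuple of
    action indices) to a payoff; sigma[j][a_j] is the probability that
    player j plays action a_j."""
    total = 0.0
    for a in product(*(range(len(s)) for s in sigma)):
        p = 1.0
        for j, aj in enumerate(a):
            p *= sigma[j][aj]  # probability of profile a under sigma
        total += p * u(a)
    return total

def is_best_response(i, u_i, sigma, eps=0.0):
    """True iff sigma[i] is an (eps-)best response to sigma[-i]: by the
    convexity argument above, comparing against pure deviations suffices."""
    base = expected_utility(u_i, sigma)
    for ai in range(len(sigma[i])):
        pure = [1.0 if a == ai else 0.0 for a in range(len(sigma[i]))]
        if expected_utility(u_i, sigma[:i] + [pure] + sigma[i + 1:]) > base + eps:
            return False
    return True

# Matching pennies: the uniform profile is a Nash equilibrium.
u0 = lambda a: 1.0 if a[0] == a[1] else -1.0
u1 = lambda a: -u0(a)
uniform = [[0.5, 0.5], [0.5, 0.5]]
print(is_best_response(0, u0, uniform) and is_best_response(1, u1, uniform))  # True
```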
One of the most famous results in game theory is Nash's proof that any finite game has a Nash equilibrium [Nash, 1951]. For a tutorial on Nash's proof (as well as a derivation of Brouwer's fixed-point theorem, which is used by his proof), see [Jiang and Leyton-Brown, 2007b]. Although a Nash equilibrium always exists, the existence proofs do not give an efficient algorithm for finding one. The central computational problem we consider here is the problem of finding a sample Nash equilibrium:

Problem 2.2.2 (NASH). Given a game represented in normal form, find one Nash equilibrium.

McKelvey and McLennan [1996] showed that this problem can be formulated as an instance of several other computational problems, e.g.,

• finding a fixed point of a continuous function;
• finding a global minimum of a continuous function;
• solving a system of polynomial equations and inequalities.

A frequently-used notion of approximation for Nash equilibrium is the so-called ε-Nash equilibrium:

Definition 2.2.3 (ε-Nash Equilibrium). A mixed strategy profile σ is an ε-Nash equilibrium for some ε ≥ 0 if for all i ∈ N and all a_i ∈ A_i, u_i(σ) + ε ≥ u_i(a_i, σ_{−i}).

Intuitively, no player can gain more than ε by deviating from her mixed strategy. When ε = 0, we recover Nash equilibrium.

Complexity

The NASH problem is different from the decision problems studied in complexity theory (e.g., SAT), which have a yes/no answer. Since a Nash equilibrium always exists, the decision problem asking about the existence of a Nash equilibrium can be solved by a trivial algorithm that always returns "yes". Instead, we are interested in finding a Nash equilibrium. This is an example of a function problem, which requires more complex answers than yes/no.
Because we can check whether a given mixed strategy profile is a Nash equilibrium by computing expected utilities, the NASH problem is in FNP, the function-problem version of NP. In fact it belongs to TFNP, the class of FNP problems whose solutions are guaranteed to exist.

Another issue is that a Nash equilibrium of a game with more than two players may require irrational probabilities, even if the game itself involves only rational payoffs. It is impossible to represent such a solution exactly using floating-point numbers. Instead, in such cases we look for algorithms that, given a game and an error tolerance ε represented in binary, compute an ε-Nash equilibrium. As always, we evaluate complexity as a function of the input size, which here includes ε.

A recent series of papers [Chen and Deng, 2006, Daskalakis et al., 2006b, Goldberg and Papadimitriou, 2006] established that the NASH problem is PPAD-complete for normal-form games, even if the game has only two players. The complexity class PPAD, introduced by Papadimitriou [1994], stands for Polynomial Parity Argument (Directed version). It is the class of TFNP problems whose solutions are guaranteed by a parity argument. It is widely believed that PPAD-complete problems are unlikely to be in P [e.g., Papadimitriou, 2007].

Although any Nash equilibrium is close to an ε-Nash equilibrium (in the space of mixed strategy profiles), a given ε-Nash equilibrium may be arbitrarily far from any Nash equilibrium of the game. Etessami and Yannakakis [2007] studied the complexity of the problem of finding an ε-Nash equilibrium close to some exact Nash equilibrium. They showed that the problem is at least as hard as the square-root sum problem, which is not known even to belong to NP.

Algorithms for Two-Player Games

A two-player game is zero-sum if for all action profiles a, we have u_1(a) + u_2(a) = 0.
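Zero-sum games also make convenient test cases for the iterative methods discussed later in this section. As one concrete sketch (my own minimal implementation, not from the text), fictitious play — introduced below — has empirical frequencies that converge to equilibrium in zero-sum games, and here it recovers the uniform equilibrium of matching pennies:

```python
def fictitious_play(U0, U1, steps=20000):
    """Fictitious play for a two-player game where U0[a][b] (resp. U1[a][b])
    is player 0's (resp. 1's) payoff when 0 plays a and 1 plays b.
    Returns each player's empirical action frequencies."""
    m, k = len(U0), len(U0[0])
    c0, c1 = [0] * m, [0] * k      # cumulative action counts
    a0, a1 = 0, 0                  # arbitrary initial actions
    for _ in range(steps):
        c0[a0] += 1
        c1[a1] += 1
        # each player best-responds to the opponent's empirical mixture
        a0 = max(range(m), key=lambda x: sum(U0[x][b] * c1[b] for b in range(k)))
        a1 = max(range(k), key=lambda y: sum(U1[x][y] * c0[x] for x in range(m)))
    return [c / steps for c in c0], [c / steps for c in c1]

# Matching pennies: zero-sum, with unique equilibrium (1/2, 1/2) for each player.
U0 = [[1, -1], [-1, 1]]
U1 = [[-1, 1], [1, -1]]
f0, f1 = fictitious_play(U0, U1)
print(f0, f1)  # both close to [0.5, 0.5]
```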
For zero-sum games, Nash equilibria can be computed in polynomial time by linear programming (see, e.g., [Shoham and Leyton-Brown, 2009, von Neumann and Morgenstern, 1944]).

For general two-player games, the NASH problem can be formulated as a linear complementarity problem (LCP). The canonical method for solving such games is the Lemke-Howson algorithm [Lemke and Howson, 1964]. Sets of labels are assigned to mixed-strategy profiles, and Nash equilibria are characterized as "completely labeled" mixed-strategy profiles. The algorithm uses pivoting techniques similar to those of the simplex algorithm to trace a path that ends at a completely labeled point (i.e., a Nash equilibrium). It is guaranteed to find a Nash equilibrium, but in the worst case may require exponential time [Savani and von Stengel, 2004]. Lemke's algorithm [Lemke, 1965] is a related method that uses similar pivoting techniques.

Lipton et al. [2003] used the probabilistic method to show that for any two-player game, there always exists an ε-equilibrium with logarithmic-sized support. Their result implies a quasi-polynomial algorithm for finding an ε-equilibrium. Another interesting property of two-player games is that if both payoff matrices have small rank (say k), then there exists a Nash equilibrium with small (size-k) support. Such a Nash equilibrium can be found efficiently by going through the small-sized support profiles. This was discussed by Lipton et al. [2003], though they mention that the result was known earlier. For bimatrix games in which the entry-wise sum of the two matrices has small rank, Kannan and Theobald [2009] proposed a polynomial-time algorithm for finding approximate Nash equilibria. More recently, Adsul et al. [2011] showed that if the rank of the sum of the two matrices is 1, a Nash equilibrium can be computed in polynomial time.

Fictitious Play

We now focus on algorithms for n-player games, where n > 2.
We start with fictitious play [e.g., Brown, 1951, Shoham and Leyton-Brown, 2009], which is well known in the study of learning in games but can also be used as an algorithm for finding Nash equilibria. It is an iterative process: at each step, each player i plays a best response, assuming each of the other players j chooses a mixed strategy corresponding to the empirical distribution of j's past actions. For certain classes of games (e.g., zero-sum games and potential games) the empirical distribution of this process converges to a Nash equilibrium. However, it is not guaranteed to converge for all games; hence it is only a heuristic for general games.

Simplicial Subdivision

One influential class of algorithms for computing Nash equilibria in n-player games is the class of simplicial subdivision algorithms, which are based on Scarf's algorithm [1967]. A modern version is due to van der Laan, Talman & van der Heyden [1987]. At a high level, the algorithm does the following:

1. The space of mixed strategy profiles Σ = ∏_i Σ_i is partitioned into a set of subsimplexes.
2. We assign labels to vertices of the subsimplexes, in such a way that a "completely labeled" subsimplex corresponds to an approximate Nash equilibrium.
3. The algorithm follows a path of "almost completely labeled" subsimplexes, and eventually reaches a "completely labeled" subsimplex.
4. The approximate equilibrium is refined by restarting the algorithm near the approximate equilibrium, but using a finer grid.

It can be proven (using Sperner's Lemma) that the algorithm will always find an ε-equilibrium for any given ε. However, the running time is exponential. In particular, the path could go through an exponential number of subsimplexes. Within each step of the path, one of the computational bottlenecks is the computation of labels of the subsimplex.
The computation of labels in turn depends on the computation of expected utilities under mixed strategy profiles.

Function Minimization

McKelvey and McLennan [1996] discussed formulating Nash equilibria as solutions of a function minimization problem. Given a mixed strategy profile σ, let g_{ij}(σ) be the amount player i could gain by deviating to action j (and 0 if j is worse). A Nash equilibrium then corresponds to a global minimum of the function

v(σ) = ∑_i ∑_j [g_{ij}(σ)]²,

subject to σ being a mixed strategy profile. Note that the global minimum of v(σ) is always 0, due to the existence of Nash equilibria. Standard function minimization techniques can then be applied. In order to find a global minimum, a good starting point is essential. According to McKelvey and McLennan [1996], this approach is "generally slower than other methods".

Homotopy Methods and the Global Newton Method

At a high level, a homotopy method starts with a game that has a simple solution, then continuously deforms the payoffs of the game until it ends at the original game of interest. Meanwhile, the method traces the path of Nash equilibria for these games, starting at a Nash equilibrium of the simple game and ending at a Nash equilibrium of the game of interest. Several homotopy methods for computing Nash equilibria have been proposed (a recent survey is [Herings and Peeters, 2009]). One such approach is Govindan and Wilson's [2003] global Newton method (also known as a continuation method [e.g., Blum et al., 2006]), which can be thought of as a generalization of the Lemke-Howson algorithm to the n-player case. It starts at a deformed game in which one action per player is given a large bonus, such that there exists a unique equilibrium. At each iteration, it computes the direction of the next step by following a gradient.
Since the path is nonlinear, the algorithm needs to periodically correct accumulated error using a local Newton method. One implementation of the algorithm is available in GameTracer [Blum et al., 2002]. The bottleneck of each iteration is the computation of the so-called payoff Jacobian matrix given a mixed strategy profile. Entries of the Jacobian correspond to the expected utility of player i when i plays action a, player i′ plays action a′, and all other players play according to the given mixed strategy profile.

Iterated Polymatrix Approximation

Iterated Polymatrix Approximation is another algorithm proposed by Govindan and Wilson [2004]. At a high level, the algorithm can be summarized as follows.

1. Start at some strategy profile σ⁰.
2. Consider the problem linearized at σ⁰: we get a polymatrix game, which (as we will see in Section 2.2.2) can be solved using a variant of the Lemke-Howson algorithm, to find an equilibrium σ¹. The payoffs of the polymatrix game correspond to entries of the payoff Jacobian.
3. Repeat with starting point σ¹.

If this process converges, it converges to a Nash equilibrium. However, the algorithm is not guaranteed to converge. Thus, like fictitious play, it belongs to the category of heuristics. In cases of non-convergence, the authors propose using the result of the algorithm as a starting point for the Govindan-Wilson global Newton method.

Support Enumeration

Porter et al. [2008] proposed an algorithm that finds Nash equilibria by searching through support profiles. The algorithm can be summarized as follows.

1. Enumerate all support profiles, starting with small support sizes.
2. Given a support profile, determine whether there exists a Nash equilibrium having that support profile.
   • For 2-player games, this involves solving a linear feasibility program.
   • For n-player games, this involves solving a system of polynomial equations and inequalities³ of degree n−1.
3. Stop when one equilibrium is found.

Since the number of possible support profiles is exponential in the size of the normal form, and for n-player games step 2 requires exponential time, the above algorithm has exponential worst-case complexity. Nevertheless, the motivation behind the algorithm is the observation that many games have small-support Nash equilibria. When such equilibria exist, the algorithm can quickly find them. Another effective speedup Porter et al.'s algorithm employs is to prune support profiles by eliminating dominated strategies conditioned on the current support profile.

2.2.2 Computing Sample Nash Equilibria for Compact Representations of Static Games

So far we have focused on the NASH problem for normal-form games. In this section we give an overview of the literature on the computation of Nash equilibria under compact representations. Overall, we will see that (1) for many representations the NASH problem is in PPAD, and is PPAD-complete for fully-expressive representations, and (2) algorithms for the NASH problem can roughly be divided into two categories: "black-box" approaches, which treat the representation as a black box, and "special-purpose" approaches, which are representation-specific algorithms that exploit the structure of the representation, such as symmetry and graph-theoretic properties.

³One may wonder why not just solve the system of polynomial equations and inequalities characterizing the Nash equilibria of the game (see Section 2.2.1). There are two reasons one might prefer to solve the support-profile-specific system here: (1) for small support profiles, the resulting systems are much smaller; (2) it is known that for generic games, the solution set of a support-profile-specific system minus all the inequality constraints has dimension zero, i.e., it consists of isolated points.
This means one method for solving this system is to solve the system minus all the inequality constraints (which is a system of polynomial equations), then check the solutions against the inequality constraints. Compared to the problem of solving systems of polynomial equations and inequalities, a wider variety of algorithms is available for solving systems of polynomial equations, including ones based on (complex) algebraic geometry, such as Groebner basis methods and polynomial homotopy continuation methods.

Complexity

Fully-expressive game representations such as graphical games and AGGs can encode arbitrary normal-form games. Therefore finding Nash equilibria for these representations is PPAD-hard; in other words, polynomial-time algorithms are unlikely to exist. On the other hand, Daskalakis et al. [2006a] proved the following result:

Theorem 2.2.4 ([Daskalakis et al., 2006a]). If a game representation satisfies the following properties: (1) the representation has polynomial type (defined in Section 2.1.1), and (2) expected utility can be computed using an arithmetic binary circuit of polynomial length, with nodes evaluating to constant values or performing addition, subtraction, or multiplication on their inputs, then the NASH problem for this representation can be polynomially reduced to the NASH problem for some two-player normal-form game.

Since the NASH problem is in PPAD for two-player normal-form games, the theorem implies that if the above properties hold, the NASH problem for such a compact game representation is in PPAD. Many of the existing representations satisfy these conditions. This is a positive result: since the NASH problem for such a compact representation reduces to NASH for a two-player game whose size is polynomial in the size of the compact representation, solving this two-player game can be much easier than solving the normal form of the original game.
The above result suggests that the computation of expected utility is of fundamental importance for the NASH problem. Another example of its importance is the observation that if we can compute expected utilities, we can verify a solution of the NASH problem. We will see more useful applications of expected utility computation throughout this survey.

Speeding up Existing Algorithms and the Black-box Approach

Quite a few of the existing algorithms for finding Nash equilibria of normal-form games use the computation of expected utility as a subroutine. Examples include Govindan and Wilson's global Newton method and Iterated Polymatrix Approximation, as well as the simplicial subdivision algorithm. For many compact representations (including all compact representations introduced in Section 2.1.1), there exist efficient algorithms for computing expected utility that scale polynomially in the representation size [e.g., Papadimitriou and Roughgarden, 2008]. Using these methods instead of normal-form-based methods for the expected-utility subroutine, we can achieve exponential speedups of these existing Nash equilibrium algorithms without introducing any change in the algorithms' behavior or output. Blum et al. [2006] were the first to propose such an approach, speeding up Govindan and Wilson's algorithms [2003, 2004] for graphical games and MAIDs. In Chapter 3 we discuss our work on speeding up Govindan and Wilson's global Newton method and the simplicial subdivision algorithm for AGGs.

From a software-engineering point of view, such algorithms have a nice modular structure: an algorithm calls certain subroutines provided by the representation that access information about the game, but is otherwise unaware of the internal structure of the representation. At the same time, the representation-specific subroutines do not need to know about the details of the calling algorithm. We call such algorithms black-box algorithms.
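The modularity just described can be sketched as an interface (class and method names here are illustrative, not from any actual codebase): solvers call an expected-utility subroutine and remain agnostic about how it is implemented, so a compact representation can substitute a polynomial-time implementation without any change to the solver.

```python
from abc import ABC, abstractmethod

class GameRepresentation(ABC):
    """The black-box boundary: solvers may call only these subroutines."""

    @abstractmethod
    def num_players(self): ...

    @abstractmethod
    def num_actions(self, i): ...

    @abstractmethod
    def expected_utility(self, i, sigma):
        """Expected utility of player i under mixed strategy profile sigma."""

class NormalFormGame(GameRepresentation):
    """Naive baseline that sums over all action profiles; a compact
    representation (e.g., an AGG) would override expected_utility with a
    polynomial-time algorithm, leaving any calling solver untouched."""

    def __init__(self, n_actions, payoffs):
        # payoffs: dict mapping action profiles (tuples) to payoff tuples
        self.n_actions, self.payoffs = n_actions, payoffs

    def num_players(self):
        return len(self.n_actions)

    def num_actions(self, i):
        return self.n_actions[i]

    def expected_utility(self, i, sigma):
        total = 0.0
        for a, payoff in self.payoffs.items():
            p = 1.0
            for j, aj in enumerate(a):
                p *= sigma[j][aj]
            total += p * payoff[i]
        return total
```

A solver written against `GameRepresentation` (for example, a best-response check, or the expected-utility subroutine of the global Newton method) then runs unchanged on any representation implementing the interface.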
Another example of the black-box approach is the very recent adaptation of the support-enumeration approach to AGGs and graphical games [Thompson et al., 2011]. Here there are several required subroutines; one is the formulation of the polynomial system given a support profile. The polynomial system contains expressions for expected utilities, the construction of which can be thought of as symbolic computation of expected utilities. Many techniques for the expected-utility problem in compact games translate to the symbolic problem. Another subroutine is the elimination of dominated strategies conditioned on a support profile.

The black-box approach is not limited to the problem of computing a sample Nash equilibrium. For example, in Section 2.2.7 we look at Papadimitriou and Roughgarden's [2008] algorithm for the problem of computing a correlated equilibrium, which requires a polynomial-time expected-utility subroutine. This is also an example of a black-box algorithm that is not a direct adaptation of an existing algorithm for the normal form.

On the other hand, specific representations may exhibit certain structure that can be exploited for efficient computation. We call these representation-specific algorithms special-purpose algorithms. Intuitively, black-box algorithms and special-purpose algorithms both exploit the compact representation's structure, albeit at different levels: a black-box algorithm exploits structure to speed up a subroutine of the algorithm, keeping the rest of the algorithm intact across different representations, while in a special-purpose approach the entire algorithm is designed with a specific representation in mind. We now go through several representations and their corresponding special-purpose algorithms.

Polymatrix Games

Yanovskaya [1968] showed that Nash equilibria of a polymatrix game are solutions of an LCP.
Such equilibria can be computed using a variant of the Lemke-Howson algorithm [Howson Jr, 1972].

Symmetric Games

As mentioned in Section 2.1.1, Nash [1951] proved that any symmetric game has a symmetric Nash equilibrium. The space of symmetric strategy profiles has lower dimension than the space of mixed strategy profiles, so one might expect the problem of finding symmetric Nash equilibria to be easier than NASH in the general case. Gale et al. [1950] showed that NASH for bimatrix games can be reduced to finding a symmetric Nash equilibrium of a symmetric bimatrix game. Therefore, the recent PPAD-completeness result for bimatrix games implies that finding a symmetric Nash equilibrium is also PPAD-complete. On the other hand, for symmetric games with a large number of players but a small number of actions, Papadimitriou and Roughgarden [2005] proposed a polynomial-time algorithm for finding a symmetric Nash equilibrium. The algorithm is based on the enumeration of all symmetric support profiles and the solution of a polynomial system for each support profile.

Anonymous Games

For anonymous games, the existence of symmetric equilibria is no longer guaranteed. Thus the above algorithm for symmetric games with a small number of actions does not apply. Nevertheless, in a series of papers Daskalakis and Papadimitriou [2007, 2008, 2009] proposed polynomial-time algorithms for finding approximate Nash equilibria in anonymous games having a constant number of actions per player.

Graphical Games

Kearns et al. [2001] presented a polynomial-time algorithm for finding approximate Nash equilibria in graphical games on tree graphs. The algorithm is based on a discretization of the mixed strategy space and a message-passing approach similar to probabilistic inference algorithms for Bayesian networks.
For computing approximate Nash equilibria in graphical games on general graphs, Ortiz and Kearns [2003] and Vickrey and Koller [2002] proposed several approaches based on similar ideas. Elkind et al. [2006] presented a polynomial-time algorithm for finding exact Nash equilibria for graphical games on path graphs. The problem of finding exact Nash equilibria for tree graphs is still open.

Symmetric AGGs

Besides the black-box algorithms that we discuss in Chapter 3, Daskalakis et al. [2009] presented a polynomial-time special-purpose algorithm for finding an approximate symmetric Nash equilibrium in symmetric AGGs on tree graphs. Their algorithm is based on a discretization of the space of symmetric mixed strategies and a message-passing/dynamic programming approach.

2.2.3 Computing Sample Bayes-Nash Equilibria for Incomplete-information Static Games

Bayes-Nash equilibrium is a solution concept for Bayesian games that is analogous to Nash equilibrium for complete-information games. Before we give its definition we first need to define strategies in Bayesian games. In a Bayesian game, player i can deterministically choose a pure strategy si, in which given each θi ∈ Θi she deterministically chooses an action si(θi). Player i can also randomize and play a mixed strategy σi, in which her probability of choosing ai given θi is σi(ai|θi). That is, given a type θi ∈ Θi, she plays according to distribution σi(·|θi) over her set of actions Ai. A mixed strategy profile σ = (σ1, . . . , σn) is a tuple of the players’ mixed strategies.

The expected utility of i given θi under a mixed strategy profile σ is the expected value of i’s utility under the resulting joint distribution of a and θ, conditioned on i receiving type θi:

ui(σ|θi) = ∑_{θ−i} P(θ−i|θi) ∑_a ui(a, θ) ∏_j σj(aj|θj).    (2.2.2)

A mixed strategy profile σ is a Bayes-Nash equilibrium if for all i, for all θi, and for all ai ∈ Ai, ui(σ|θi) ≥ ui(σ_{θi→ai}|θi), where σ_{θi→ai} is the mixed strategy profile that is identical to σ except that i plays ai with probability 1 given θi.

Computing Bayes-Nash Equilibria via Complete-information Interpretations

Harsanyi [1967] showed that a Bayesian game can be interpreted as either of two equivalent complete-information games, via the “induced normal form” and “agent form” interpretations. Specifically, the Nash equilibria of these complete-information games correspond to Bayes-Nash equilibria of the Bayesian game. (A detailed description of these correspondences is given in Chapter 6.) Thus one approach is to interpret a Bayesian game as a complete-information game, enabling the use of existing Nash-equilibrium-finding algorithms. However, as mentioned in Section 2.1.3, generating the normal form representations under both of these complete-information interpretations leads to an exponential blowup in representation size.

Howson and Rosenthal [1974] applied the agent form transformation to 2-player Bayesian games, resulting in a complete-information polymatrix game which (recall from Section 2.2.2) can be solved using a variant of the Lemke-Howson algorithm. Their approach was able to avoid the aforementioned exponential blowup because in this case the agent forms admit a more compact representation (as polymatrix games). However, for n-player Bayesian games the corresponding agent forms do not correspond to polymatrix games or any other known representation. Nevertheless, in Chapter 6 we propose a general approach for computing sample Bayes-Nash equilibria in n-player Bayesian games (and BAGGs in particular).
Specifically, our approach solves the agent form of the BAGG using black-box versions of the Global Newton Method [Govindan and Wilson, 2003] and the simplicial subdivision algorithm [van der Laan et al., 1987]; instead of explicitly constructing the normal form of the agent form, we use the BAGG as a compact representation of its agent form.

Special-purpose Approaches

Singh et al. [2004] proposed an incomplete-information version of the graphical game representation, and presented efficient algorithms for computing approximate Bayes-Nash equilibria in the case of tree games. Gottlob et al. [2007] considered a similar extension of the graphical game representation and analyzed the problem of finding a pure-strategy Bayes-Nash equilibrium. Oliehoek et al. [2010] proposed a heuristic search algorithm for common-payoff Bayesian games, which has applications to cooperative multi-agent problems.

2.2.4 Computing Sample Nash Equilibria for Dynamic Games

In perfect-information extensive-form games, all information sets contain a single node. As a result, each subtree of the extensive-form game tree forms a subgame which can be solved independently of the rest of the tree. The backward induction algorithm computes a Nash equilibrium of the game by solving subgames from the leaves to the root. The running time is linear in the size of the extensive form. Furthermore, when the game is zero-sum, it is possible to prune parts of the game tree that are not optimal. The canonical algorithm, Alpha-Beta pruning, has been influential in the design of high-performance game-playing systems for perfect-information games such as chess and checkers.

For extensive-form games with imperfect information, transforming to the induced normal form entails an exponential blowup in representation size.
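The backward-induction procedure described above can be sketched in a few lines. The nested-tuple encoding of the game tree below is an illustrative assumption, not the thesis's notation: a node is either a leaf holding a payoff vector, or a decision node naming the player to move and her subtrees.

```python
def backward_induction(node, path=()):
    """Solve a perfect-information game tree by backward induction.

    A node is either ("leaf", payoffs), where payoffs is a tuple of
    utilities indexed by player, or ("decide", player, children), where
    children maps action labels to subtrees.  Returns (strategy, payoffs):
    the action chosen at every decision node (keyed by the path of actions
    leading to it), and the resulting payoff vector at the root.
    """
    if node[0] == "leaf":
        return {}, node[1]
    _, player, children = node
    strategy, best = {}, None
    for action, child in children.items():
        sub, payoffs = backward_induction(child, path + (action,))
        strategy.update(sub)  # keep the choices made in every subgame
        if best is None or payoffs[player] > best[1][player]:
            best = (action, payoffs)
    strategy[path] = best[0]
    return strategy, best[1]
```

Because every subgame is solved exactly once, the running time is linear in the number of tree nodes, matching the claim above; for zero-sum games, alpha-beta pruning would additionally skip subtrees that cannot affect the root value.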
This exponential blowup is the main difficulty of the Nash equilibrium problem for dynamic games compared to the simultaneous-move case, and avoiding it is the focus of considerable existing literature.

One common assumption is perfect recall: roughly, that each player remembers all her decisions and observations. For dynamic games with perfect recall, there always exists a Nash equilibrium in behavior strategies, where a player independently chooses a distribution over actions at each of her information sets [Kuhn, 1953]. Computationally, behavior strategies are easier to work with, since representing a behavior strategy requires space linear in the extensive form, while representing a mixed strategy (i.e., a distribution over pure strategies) requires exponential space. For MAIDs, a behavior strategy for a player entails choosing, at each of her decision nodes and for each possible instantiation of the node’s parents, a probability distribution over her choices.

The sequence form formulation of Koller, Megiddo and von Stengel [1996] encodes a behavior strategy as a vector of “realization probabilities”. Using this formulation, the Nash equilibrium problem for zero-sum dynamic games can be formulated as a linear program of size polynomial in the extensive-form representation. For two-player general-sum dynamic games, using the sequence form the Nash equilibrium problem can be formulated as a linear complementarity problem (LCP) and solved using Lemke’s algorithm.

For n-player games, Govindan and Wilson [2002] proposed an extension of their Global Newton Method to perfect-recall extensive-form games. As with the sequence form, strategies are encoded as realization probabilities. Daskalakis et al. [2006a] showed that the problem of finding a Nash equilibrium in behavior strategies for perfect-recall extensive-form games is in PPAD.
For compact representations, existing approaches can again be divided into black-box and special-purpose ones. Koller and Milch [2001] proposed a special-purpose approach for decomposing a MAID into subgraphs, each of which can be solved independently. As in the simultaneous-move case, the computation of expected utility is again an important subtask used by many game-theoretic computations. For example, such a subroutine can be used to run fictitious play, although (as in the simultaneous-move case) it is not guaranteed to converge. Blum et al. [2006] proposed a black-box approach for adapting Govindan and Wilson’s Global Newton Method for extensive-form games to MAIDs, by speeding up the subtask of computing the Jacobian matrix using a MAID-specific subroutine. In Chapter 5 we show that this algorithm can also be adapted to TAGGs.

2.2.5 Questions about the Set of All Nash Equilibria of a Game

So far we have focused on finding one arbitrary Nash equilibrium. Since in general there can be more than one Nash equilibrium in a game, we are sometimes more interested in questions about the set of all Nash equilibria. Such problems include finding all Nash equilibria, counting the number of equilibria, and finding optimal Nash equilibria according to some objective such as social welfare, which is defined to be the sum of the players’ utilities. Unsurprisingly, such problems are usually intractable in the worst case (see, e.g., [Conitzer and Sandholm, 2008]).

For the problem of finding all Nash equilibria, Mangasarian [1964] proposed an algorithm for bimatrix games. More recently, Avis et al. [2010] described and implemented two algorithms for bimatrix games. Herings and Peeters [2005] proposed an algorithm that computes all Nash equilibria in an n-player normal form game by enumerating all support profiles.
Compared to the support-enumeration method for finding a sample Nash equilibrium discussed in Section 2.2.1, here the algorithm does not stop at a single Nash equilibrium: it keeps going until all support profiles have been visited. At each support profile, the corresponding polynomial system is solved by either polynomial homotopy continuation or Groebner basis methods. For the problem of computing optimal Nash equilibria, Sandholm et al. [2005] proposed and evaluated a practical approach for bimatrix games using mixed-integer programming.

2.2.6 Computing Pure-Strategy Nash Equilibria

A pure-strategy Nash equilibrium (PSNE), also known as a pure Nash equilibrium or pure equilibrium, is a pure strategy profile that is a Nash equilibrium. Equivalently:

Definition 2.2.5. An action profile a ∈ A is a pure-strategy Nash equilibrium (PSNE) of the game Γ if for all i ∈ N, for all a′i ∈ Ai, ui(ai, a−i) ≥ ui(a′i, a−i).

Unlike mixed-strategy Nash equilibria, PSNEs do not always exist in a game. Nevertheless, in many ways PSNE is a more attractive solution concept than mixed-strategy Nash equilibrium. First, PSNE can be easier to justify because it does not require the players to randomize. Second, it can be easier to analyze because of its discrete nature (see, e.g., [Brandt et al., 2009]). There are several versions of the problem of computing PSNEs: deciding if a PSNE exists, finding one, counting the number of PSNEs, enumerating them, and finding the optimal equilibrium according to some objective (e.g., social welfare). Unlike the NASH problem, for games in normal form these problems can be solved in time polynomial in the input size, by enumerating all pure strategy profiles. Of course, since the size of the normal form representation grows exponentially in the number of players, this is problematic in practice. We thus focus on the problem for compact representations.
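The brute-force enumeration just mentioned is easy to make concrete. In the sketch below (an illustration, not the thesis's code), a normal-form game is given by each player's action list and a table u mapping action profiles to payoff tuples; the inner check is exactly Definition 2.2.5.

```python
from itertools import product

def pure_nash_equilibria(action_sets, u):
    """Enumerate all pure-strategy Nash equilibria of a normal-form game.

    action_sets[i] lists player i's actions; u maps an action profile
    (a tuple) to the tuple of all players' utilities.  A profile is a
    PSNE iff no player can gain by a unilateral deviation.
    """
    equilibria = []
    for a in product(*action_sets):
        if all(
            u[a][i] >= u[a[:i] + (dev,) + a[i + 1:]][i]
            for i in range(len(action_sets))
            for dev in action_sets[i]
        ):
            equilibria.append(a)
    return equilibria
```

The running time is polynomial in the number of entries of the payoff table, which is exactly the sense in which the problem is easy for the normal form and becomes hard only once the game is given compactly.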
The problem is hard in the most general case, when utility functions are arbitrary efficiently-computable functions represented as circuits [Schoenebeck and Vadhan, 2006] or Turing machines [Alvarez et al., 2005]. This is in contrast to the NASH case, where the Nash problems for both the normal form and fully-expressive compact representations are PPAD-complete.

Iterated Best Response

Iterated best response is well known both as a learning dynamics and as a heuristic algorithm for PSNE [e.g., Shoham and Leyton-Brown, 2009]. It is an iterative process starting at some arbitrary pure strategy profile. At each step, if there exists a player that is not playing a best response to the current pure strategy profile, that player changes her strategy to a best response. The process stops when all players are playing best responses, in which case we have reached a PSNE. A related process is iterated better response, in which a deviating player only has to pick a pure strategy that is better than the current one. These processes can be carried out for all representations that provide efficient evaluation of utilities under arbitrary pure-strategy profiles. However, like fictitious play, these processes are not guaranteed to converge for games in general.

Graphical Games

Gottlob et al. [2005] were the first to analyze the existence problem for pure-strategy Nash equilibria in graphical games. They proved that while the problem is NP-complete in general, for games with graphs of bounded hypertree-width there exists a dynamic-programming algorithm that determines the existence of PSNE (and finds one if it exists) in time polynomial in the size of the representation. Daskalakis and Papadimitriou [2006] reduced the problem to a Markov Random Field (MRF), and then applied the standard clique tree algorithm to the resulting MRF.
Among their results they showed that for graphical games on graphs with logarithmic treewidth, bounded neighborhood size and a bounded number of actions per player, the existence of pure Nash equilibria can be decided in polynomial time.

Jiang and Safari [2010] analyzed the problem of deciding the existence of pure-strategy Nash equilibria for graphical games on restricted classes of graphs, and gave a complete characterization of hard and easy classes of graphical games with bounded indegree, showing that the only tractable classes of graphs are those with bounded treewidth (after iterated removal of sinks).

Daskalakis and Papadimitriou [2005] analyzed the complexity of finding pure and mixed Nash equilibria of graphical games on highly regular graphs (specifically, the d-dimensional grid) with identical local payoff functions for every player. Such games can be represented very compactly, as only the local payoff function at one neighborhood needs to be stored. They showed that finding pure-strategy Nash equilibria is tractable if d = 1 and NEXP-complete otherwise.

Symmetric Games

For symmetric games, questions about PSNE can be answered straightforwardly by checking all configurations, which requires time polynomial in the size of the representation, and polynomial in n when the number of actions is fixed. Indeed, Brandt et al. [2009] proved that the existence problem for PSNE of symmetric games with a constant number of actions is in the complexity class AC0, the set of problems that can be solved by polynomial-sized constant-depth circuits with unlimited-fanin AND and OR gates. For anonymous games, efficient algorithms for PSNE have also been proposed [Brandt et al., 2009, Daskalakis and Papadimitriou, 2007]. Ryan et al.
[2010] considered the problem of finding pure-strategy Nash equilibria in symmetric games whose utilities are very compactly represented, such that the number of players can be exponential in the representation size. They showed that if the utility functions are represented as piecewise-linear functions, there exist polynomial-time algorithms for finding a pure-strategy Nash equilibrium and for counting the number of equilibria.

Congestion Games

For congestion games, a PSNE always exists [Rosenthal, 1973]. Furthermore, iterated best-response dynamics always converge to a PSNE [Monderer and Shapley, 1996]. However, Fabrikant et al. [2004] showed that such dynamics may require an exponential number of steps to converge, and furthermore that the problem of finding a PSNE for congestion games is complete for the complexity class PLS (Polynomial Local Search), which implies that a polynomial-time algorithm is unlikely to exist. For singleton congestion games, where the game is symmetric and each action consists of choosing only a single resource, Ieong et al. [2005] presented a polynomial-time algorithm for finding an optimal PSNE.

AGGs

Since AGGs can compactly encode arbitrary graphical games, the existence problem is NP-complete for AGGs. Conitzer [pers. comm., 2004] and Daskalakis et al. [2009] showed that the problem is NP-complete even for symmetric AGGs. In Chapter 4 we present a dynamic programming approach for computing PSNE in AGGs. For symmetric AGGs with bounded treewidth, our algorithm determines the existence of PSNE (and returns one if any exists) in polynomial time. We also show that our approach can be extended to certain classes of asymmetric AGGs.

2.2.7 Computing Correlated Equilibrium

First proposed by Aumann [1974, 1987], correlated equilibrium (CE) is another important solution concept.
Whereas in a mixed-strategy Nash equilibrium players randomize independently, in a correlated equilibrium the players are allowed to coordinate their behavior based on signals from an intermediary. CE has interesting connections to the theory of online learning: the empirical distribution of no-internal-regret learning dynamics converges to the set of CE [e.g., Hart and Mas-Colell, 2000, Nisan et al., 2007].

A correlated equilibrium is defined as a distribution x over action profiles, such that when a trusted intermediary draws a strategy profile a from this distribution, privately announcing to each player i her own component ai, i will have no incentive to choose another strategy, assuming others follow the suggestions. This requirement can be written as a set of linear incentive constraints on x. Combining these with the constraints that x is a distribution, the set of correlated equilibria can be formulated as a linear feasibility program of size polynomial in the size of the normal form. (A detailed description of this formulation is given in Chapter 7.) Thus it takes time polynomial in the size of the normal form to compute one CE, and indeed to compute an optimal CE according to some linear objective function.

For compact representations, the same LP can have an exponential number of variables, since the input size can be exponentially smaller. Thus, the above approach is not efficient for compact representations. Another challenge is that even explicitly representing a solution vector x can take exponential space, so a compact representation for the distribution x is required. Furthermore, in order for the intermediary to be able to tractably implement such a correlated equilibrium, we also need an efficient algorithm for sampling from the distribution.
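The linear incentive constraints can be made concrete with a small checker (an illustrative sketch, not the formulation of Chapter 7): for each player i and each ordered pair of her actions (ai, a′i), it verifies that deviating from the recommendation ai to a′i yields no expected gain, conditional on ai having been drawn.

```python
def is_correlated_equilibrium(action_sets, u, x, tol=1e-9):
    """Check the CE incentive constraints on a distribution x.

    action_sets[i] lists player i's actions; u maps an action profile
    (a tuple) to the tuple of all players' utilities; x maps action
    profiles to probabilities.  For every player i and every pair of
    actions (a_i, dev) we require
        sum over {a : a_i = recommendation} of
            x(a) * (u_i(dev, a_-i) - u_i(a))  <=  0,
    i.e. switching from a_i to dev is never profitable in expectation.
    """
    for i in range(len(action_sets)):
        for ai in action_sets[i]:
            for dev in action_sets[i]:
                gain = sum(
                    p * (u[a[:i] + (dev,) + a[i + 1:]][i] - u[a][i])
                    for a, p in x.items() if a[i] == ai
                )
                if gain > tol:
                    return False
    return True
```

In the game of Chicken, for instance, the uniform distribution over the three profiles other than (Dare, Dare) passes this check even though it is not a product of independent mixed strategies, illustrating how CE strictly generalizes mixed-strategy Nash equilibrium.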
In a landmark paper, Papadimitriou and Roughgarden [2008] proposed a black-box algorithm for computing a sample CE, which runs in polynomial time when the game representation has polynomial type and there is a polynomial-time algorithm for computing expected utility given mixed strategy profiles. The solutions are represented as mixtures of product distributions. Recently, Stein, Parrilo and Ozdaglar [2010] showed that this algorithm can fail to find an exact correlated equilibrium, but can be (easily) modified to efficiently compute approximate correlated equilibria. In Chapter 7 we present a variant of the Ellipsoid Against Hope algorithm that guarantees the polynomial-time identification of an exact correlated equilibrium.

For the problem of computing the optimal CE, Papadimitriou and Roughgarden [2008] showed that the problem is NP-hard for many existing representations, and gave a sufficient condition for the problem to be tractable. They showed that symmetric games, anonymous games and graphical games on tree graphs satisfy this condition. In Chapter 8 we give a sufficient condition that generalizes Papadimitriou and Roughgarden’s condition. In particular, we reduce the optimal CE problem to the deviation-adjusted social welfare problem, a combinatorial optimization problem closely related to the optimal social welfare outcome problem. This framework allows us to identify new classes of games for which the optimal CE problem is tractable, including graphical polymatrix games on tree graphs. Our algorithm can be understood as a black-box algorithm, with the deviation-adjusted social welfare problem as the required subroutine.

A couple of special-purpose approaches have been proposed for graphical games. Kakade et al. [2003] proposed an algorithm for computing a CE with maximum entropy in tree graphical games in polynomial time. More recently, Kamisetty et al.
[2011] proposed a practical approach for approximating the optimal CE in graphical games.

Computing Coarse Correlated Equilibria

Coarse correlated equilibrium (CCE) [Hannan, 1957] is a solution concept closely related to CE. The difference between the two is the class of deviations they consider. Whereas CE requires that each player have no profitable deviation even if she takes into account the signal she receives from the intermediary, CCE only requires that each player have no profitable unconditional deviation. CCE is also related to online learning: the empirical distribution of no-external-regret learning dynamics converges to the set of CCE. As in the case of CE, the set of CCE can also be formulated as an LP; a formal description is given in Chapter 8.

A CE is also a CCE, and hence results for the polynomial-time computation of a sample CE also apply to the computation of a sample CCE. On the other hand, since the optimal CE problem is not always tractable, the optimal CCE problem could be easier than the optimal CE problem for some representations. In Chapter 8 we show that for singleton congestion games, the optimal CCE problem can be solved in polynomial time, while the complexity of the optimal CE problem for this class of games is unknown.

Computing Extensive-form Correlated Equilibria

Recently, von Stengel and Forges [2008] proposed extensive-form correlated equilibrium (EFCE), a solution concept for perfect-recall extensive-form games that is closely related to correlated equilibrium. Recall that in an extensive-form game, each pure strategy of a player prescribes a move for each of her information sets. Like correlated equilibria, an EFCE is a distribution over pure-strategy profiles.
Whereas in a CE of the induced normal form of the game the intermediary recommends a pure strategy to each player at the start of the game, in an EFCE the intermediary recommends a move to the player only when the corresponding information set is reached.

Huang and von Stengel [2008] described a polynomial-time algorithm for computing sample extensive-form correlated equilibria. Their algorithm follows a structure very similar to Papadimitriou and Roughgarden’s Ellipsoid Against Hope algorithm, and the flaws of the Ellipsoid Against Hope algorithm pointed out by Stein et al. [2010] also carry over. As a result, the algorithm can fail to find an exact EFCE. In Chapter 7 we extend our fix for Papadimitriou and Roughgarden’s Ellipsoid Against Hope algorithm to Huang and von Stengel’s algorithm, allowing it to compute an exact EFCE.

2.2.8 Computing Other Solution Concepts

Other solution concepts have been proposed in the economics literature to represent different notions of rational behavior. Computer scientists have studied the corresponding computational problems, including the computation of (iterated) elimination of dominated strategies [Conitzer and Sandholm, 2005], Stackelberg equilibrium [Conitzer and Sandholm, 2006, Paruchuri et al., 2008], closed under rational behavior (CURB) sets [Benisch et al., 2010], and sink equilibria [Goemans et al., 2005]. While these are interesting problems, they are not directly related to this thesis and we refer interested readers to the papers referenced above.

2.3 Software

GAMBIT [McKelvey et al., 2006] is a collection of software tools for game-theoretic analysis. It includes implementations of many of the existing algorithms for the normal form and the extensive form. It also provides a graphical user interface for creating normal form and extensive form games, running algorithms for computing Nash equilibria, and visualizing the resulting profiles.
It is available at http://www.gambit-project.org.

Gametracer [Blum et al., 2002] provides black-box adaptations of two of Govindan and Wilson’s algorithms for finding a sample Nash equilibrium: the Global Newton Method [Govindan and Wilson, 2003] and Iterated Polymatrix Approximation [Govindan and Wilson, 2004]. The algorithms are written as C++ functions that take an instance of “gnmgame”, an abstract class with an abstract method4 for computing expected utilities.5 As a result, in order to apply these algorithms to a specific game representation, one merely has to implement the representation as a subclass of gnmgame. The package itself only provides a subclass for the normal form representation. Gametracer’s source code is available for download at http://dags.stanford.edu/Games/gametracer.html. It has also been adapted and incorporated into GAMBIT.

GAMUT [Nudelman et al., 2004] is a suite of game instance generators. It includes many classes of games studied in the economics and computer science literature, and parameterization options for the dimensions of the game, the types of utility functions, and randomization. The stated purpose of GAMUT is the evaluation of game-theoretic algorithms. The main output format for GAMUT is normal form. GAMUT is available at http://gamut.stanford.edu.

In Appendix A we describe the software tools we implemented and make available at http://agg.cs.ubc.ca. They include command-line programs for finding sample Nash equilibria in AGGs and BAGGs, a graphical user interface for creating, editing and visualizing AGGs, and extensions of GAMUT that generate AGG instances.

4An abstract method in C++ means that only the interface of the method is given; any subclass that is not also abstract needs to provide an implementation of the method.
5Another abstract method is for computing payoff Jacobians (see Chapter 3 for the definition), which usually requires similar types of computations as expected utilities.

Chapter 3

Action-Graph Games

3.1 Introduction

In this chapter we focus on complete-information simultaneous-action games. An overview of the literature on compact representations and computation of solution concepts for such games is given in Chapter 2, specifically Sections 2.1.1, 2.2.2 and 2.2.7. As we summarized in Chapter 1, the existing representations either only capture a subset of the known types of structure (anonymity, strict and action-specific independence, and additivity), or are only able to represent a subset of games. Meanwhile, the computation of expected utility has emerged as a key subtask required by many black-box algorithms for computing solution concepts.

3.1.1 Our Contributions

Action-graph games (AGGs) are a general game representation that can be understood as offering the advantages of—and, indeed, unifying—existing representations including graphical games and congestion games. Like graphical games, AGGs can represent any game, and important game-theoretic computations can be performed efficiently when the AGG representation is compact. Hence, AGGs offer a general representational framework for game-theoretic computation. Like congestion games, AGGs compactly represent context-specific independence, anonymity, and additivity, though unlike congestion games they do not require any of these. Finally, AGGs can also compactly represent many games that are not compact as either graphical games or as congestion games.

We begin this chapter in Section 3.2 by defining action-graph games, including the basic representation and extensions with function nodes and additive utility functions, and characterizing their representation sizes. In Section 3.3 we provide several more examples of structured games which can be compactly represented as AGGs.
Then we turn from representational to computational issues. In Section 3.4 we present a dynamic programming algorithm for computing an agent’s expected utility under an arbitrary mixed-strategy profile, prove its complexity, and explore several elaborations. In Section 3.5 we show that (as a corollary of the polynomial complexity of our expected utility algorithm) the problem of finding an ε-Nash equilibrium of an AGG is in PPAD: this is a positive result, as AGGs can be exponentially smaller than normal-form games. We also show how to use our dynamic programming algorithm to speed up existing methods for computing sample ε-Nash and ε-correlated equilibria. Finally, in Section 3.6 we present the results of extensive experiments with some of these algorithms, demonstrating that AGGs can feasibly be used to reason about interesting games that were inaccessible to any previous techniques. The largest game that we tackled in our experiments had 20 agents and 13 actions per agent; we found its Nash equilibrium in 14.3 minutes. A normal form representation of this game would have involved 9.4×10^134 numbers, requiring an outrageous 7.5×10^126 gigabytes even to store.

Finally, let us describe the relationship between this chapter and past work on AGGs. Leyton-Brown and Tennenholtz [2003] introduced local-effect games, which can be understood as symmetric AGGs in which utility functions are required to satisfy a particular linearity property. Bhat and Leyton-Brown [2004] introduced the basic AGG representation and some of the computational ideas for reasoning with them. The dynamic programming algorithm was first proposed in Jiang and Leyton-Brown [2006], as was the idea of function nodes. An extended version of that paper appeared as Chapter 2 of the MSc thesis [Jiang, 2006].
The current chapter is based on the journal publication [Jiang et al., 2011], which substantially elaborates upon and extends the representations and methods from these earlier papers. Specifically, [Jiang et al., 2011] introduced the additive structure model and the encoding of congestion games, several of the examples, our computational methods for k-symmetric games and for additive structure, our speedup of the simplicial subdivision algorithm, and all experiments presented in this chapter (Section 3.6).

3.2 Action Graph Games

This section has three parts, each of which defines a different AGG variant. In Section 3.2.1 we define the basic AGG representation (which we dub AGG-∅), characterize its representation size, and show how it can be used to represent normal-form, graphical, and symmetric games. In Section 3.2.2 we introduce the idea of function nodes, show how AGGs with function nodes (AGG-FNs) can capture additional structure in several example games, and show how to represent anonymous games as AGG-FNs. In Section 3.2.3 we introduce AGG-FNs with additive structure (AGG-FNA), which compactly represent additive structure in the utility functions of AGGs, and show how congestion games can be succinctly written as AGG-FNAs.

3.2.1 Basic Action Graph Games

We begin with an intuitive description of basic action-graph games. Consider a directed graph with nodes A and edges E, and a set of agents N = {1, . . . , n}. Identical tokens are given to each agent i ∈ N. To play the game, each agent i simultaneously places her token on a node ai ∈ Ai, where Ai ⊆ A. Each node in the graph thus corresponds to an action choice that is available to one or more of the agents; this is where action-graph games get their name. Each agent’s utility is calculated according to an arbitrary function of the node she chose and the numbers of tokens placed on the nodes that neighbor that chosen node in the graph.
We will argue below that any simultaneous-move game can be represented in this way, and that action-graph games are often much more compact than games represented in other ways. We now turn to a formal definition of basic action-graph games. Let N = {1, . . . , n} be the set of agents. Central to our model is the action graph.

Definition 3.2.1 (Action graph). An action graph G = (A, E) is a directed graph where:

• A is the set of nodes. We call each node α ∈ A an action, and A the set of distinct actions. For each agent i ∈ N, let Ai be the set of actions available to i, with A = ⋃_{i∈N} Ai.1 We denote by ai ∈ Ai one of agent i’s actions. An action profile (or pure strategy profile) is a tuple a = (a1, . . . , an). Denote by A the set of action profiles. Then A = ∏_{i∈N} Ai, where ∏ is the Cartesian product.

• E is a set of directed edges, where self edges are allowed. We say α′ is a neighbor of α if there is an edge from α′ to α, i.e., (α′, α) ∈ E. Let the neighborhood of α, denoted ν(α), be the set of neighbors of α, i.e., ν(α) ≡ {α′ ∈ A | (α′, α) ∈ E}.

Given an action graph and a set of agents, we can further define a configuration, which is a feasible arrangement of agents across nodes in an action graph.

Definition 3.2.2 (Configuration). Given an action graph (A, E) and a set of action profiles A, a configuration c is a tuple of |A| non-negative integers (c(α))_{α∈A}, where c(α) is interpreted as the number of agents who chose action α ∈ A, and where there exists some a ∈ A that would give rise to c. Denote the set of all configurations as C. Let C : A → C be the function that maps from an action profile a to the corresponding configuration c. Formally, if c = C(a) then c(α) = |{i ∈ N : ai = α}| for all α ∈ A.
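The mapping C of Definition 3.2.2 is simply a per-node count of tokens; a minimal sketch (string action labels are an illustrative choice):

```python
from collections import Counter

def configuration(action_profile, nodes):
    """The mapping C: given an action profile (a tuple of the agents'
    chosen nodes), return c with c[alpha] = number of agents who chose
    the node alpha, for every node of the action graph."""
    counts = Counter(action_profile)
    return {alpha: counts[alpha] for alpha in nodes}
```

Note that the configuration forgets which agent placed which token, which is precisely the anonymity structure that AGGs exploit: utilities will depend only on these counts, not on the agents' identities.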
We can also restrict a configuration to a given node's neighborhood.

Definition 3.2.3 (Configuration over a neighborhood). Given a configuration c ∈ C and a node α ∈ 𝒜, let the configuration over the neighborhood of α, denoted c^(α), be the restriction of c to ν(α), i.e., c^(α) = (c(α′))α′∈ν(α). Similarly, let C^(α) denote the set of configurations over ν(α) in which at least one player plays α.² Let 𝒞^(α) : A → C^(α) be the function which maps from an action profile to the corresponding configuration over ν(α).

¹Different agents' action sets Ai, Aj may (partially or completely) overlap. The implications of this will become clear once we define the utility functions.

²If action α is in multiple players' action sets (say players i, j), and these action sets do not completely overlap, then it is possible that the set of configurations given that i played α (denoted C^(α,i)) is different from the set of configurations given that j played α. C^(α) is the union of these sets of configurations.

Now we can state the formal definition of basic action-graph games as follows.

Definition 3.2.4 (Basic action-graph game). A basic action-graph game (AGG-∅) is a tuple (N, A, G, u) where

• N is the set of agents;

• A = ∏i∈N Ai is the set of action profiles;

• G = (𝒜, E) is an action graph, where 𝒜 = ⋃i∈N Ai is the set of distinct actions;

• u = (uα)α∈𝒜 is a tuple of |𝒜| functions, where each uα : C^(α) → ℝ is the utility function for action α. Semantically, uα(c^(α)) is the utility of an agent who chose α, when the configuration over ν(α) is c^(α).

For notational convenience, we define u(α, c^(α)) ≡ uα(c^(α)) and ui(a) ≡ u(ai, 𝒞^(ai)(a)).
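Putting Definitions 3.2.3 and 3.2.4 together, evaluating ui(a) = u(ai, 𝒞^(ai)(a)) amounts to restricting the configuration to ν(ai) and performing one table lookup. A minimal sketch, with a hypothetical data layout of our own choosing:

```python
def utility(i, action_profile, neighbors, u):
    """Evaluate u_i(a) in a basic AGG: restrict the configuration to the
    neighborhood of i's chosen action, then look up that action's table."""
    a_i = action_profile[i]
    # configuration over nu(a_i), as a tuple of counts, one per neighbor
    c_alpha = tuple(sum(1 for a_j in action_profile if a_j == m)
                    for m in neighbors[a_i])
    return u[a_i][c_alpha]

# Two actions 'A', 'B'; only 'A' is a neighbor of 'A', both are neighbors of 'B'.
neighbors = {'A': ('A',), 'B': ('A', 'B')}
u = {'A': {(1,): 10, (2,): 4},               # indexed by count on 'A'
     'B': {(0, 2): 1, (1, 1): 7, (2, 0): 0}} # indexed by counts on ('A', 'B')
val = utility(0, ('A', 'B'), neighbors, u)   # agent 0 plays 'A'; c = (1,) -> 10
```

Because the table for action 'A' is indexed only by the count on 'A', agent 0's utility is unaffected by what the other agent does at 'B': this is the context-specific independence expressed by a missing edge.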
We also define A−i ≡ ∏j≠i Aj as the set of action profiles of agents other than i, and denote an element of A−i by a−i.

Example: Ice Cream Vendors

The following example helps to illustrate the elements of the AGG-∅ representation, and also exhibits context-specificity and anonymity in utility functions. This example would not be compact under the existing game representations discussed in the introduction. It was inspired by Hotelling [1929], and elaborates an example used in Leyton-Brown and Tennenholtz [2003].

Example 3.2.5 (Ice Cream Vendor game). Consider a setting in which n vendors sell ice cream or strawberries, and must choose one of four locations along a beach. There are three kinds of vendors: nI ice cream vendors, nS strawberry vendors, and nW vendors who can sell both ice cream and strawberries, but only on the west side. Ice cream (strawberry) vendors are negatively affected by the presence of other ice cream (strawberry) vendors in the same or neighboring locations, and are simultaneously positively affected by the presence of nearby strawberry (ice cream) vendors.

The AGG-∅ representation of this game is illustrated in Figure 3.1. As always, nodes represent actions and directed edges represent membership in a node's neighborhood. The dotted boxes represent the action sets for each group of players; for example, the ice cream vendors have action set AI.

Figure 3.1: AGG-∅ representation of the Ice Cream Vendor game.

Note that this game exhibits context-specific independence without any strict independence, and that the graph structure is independent of n.

Size of an AGG-∅ Representation

Intuitively, AGG-∅s capture two types of structure in games:

1. Shared actions capture the game's anonymity structure: agent i's utility depends only on her action ai and the configuration.
Thus, agent i cares about the number of players that play each action, but not about the identities of those players.

2. The (lack of) edges between nodes in the action graph expresses context-specific independencies in the utilities of the game: for all i ∈ N, if i chose action α ∈ 𝒜, then i's utility depends only on the configuration over the neighborhood of α. In other words, the configuration over actions not in ν(α) does not affect i's utility.

We have claimed informally that action graph games provide a way of representing games compactly. But what exactly is the size of an AGG-∅ representation, and how does it grow with the number of agents n? In this subsection we give a bound on the size of an AGG-∅, and show that asymptotically it is never worse than the size of the equivalent normal form.

From Definition 3.2.4 we observe that to completely specify an AGG-∅ we need to specify (1) the set of agents, (2) each agent's set of actions, (3) the action graph, and (4) the utility functions. The first three can easily be compactly represented:

1. The set of agents N = {1, . . . , n} can be specified by the integer n.

2. The set of actions 𝒜 can be specified by the integer |𝒜|. Each agent's action set Ai ⊆ 𝒜 can be specified in O(|𝒜|) space.

3. The action graph G = (𝒜, E) can be straightforwardly represented as neighbor lists: for each node α ∈ 𝒜 we specify its list of neighbors ν(α) ⊆ 𝒜. The space required is ∑α∈𝒜 |ν(α)|, which is bounded by |𝒜|ℐ, where ℐ = maxα |ν(α)|, i.e., the maximum in-degree of G.

We observe that whereas the first three components of an AGG-∅ (N, A, G, u) can always be represented in space polynomial in n and |Ai|, the size of the utility functions is worst-case exponential. So the size of the utility functions determines whether an AGG-∅ can be tractably represented.
Indeed, for the rest of the paper we will refer to the number of payoff values stored as the representation size of the AGG-∅. The following proposition gives an upper bound on the number of payoff values stored.

Proposition 3.2.6. Given an AGG-∅, the number of payoff values stored by its utility functions is at most |𝒜| (n−1+ℐ)! / ((n−1)! ℐ!). If ℐ is bounded by a constant as n grows, the number of payoff values is O(|𝒜| n^ℐ), i.e., polynomial with respect to n.

Proof. For each utility function uα : C^(α) → ℝ, we need to specify a utility value for each distinct configuration c^(α) ∈ C^(α). The set of configurations C^(α) can be derived from the action graph, and can be sorted in lexicographical order. Thus, we can just specify a list of |C^(α)| utility values that correspond to the (ordered) set of configurations.³ In general there is no closed-form expression for |C^(α)|, the number of distinct configurations over ν(α). Instead, we consider the operation of extending all agents' action sets via ∀i : Ai ↦ 𝒜. The number of configurations over ν(α) under the new action sets is an upper bound on |C^(α)|. This is the number of (ordered) combinatorial compositions of n−1 (since one player has already chosen α) into |ν(α)|+1 nonnegative integers, which is (n−1+|ν(α)| choose |ν(α)|) = (n−1+|ν(α)|)! / ((n−1)! |ν(α)|!). Then the total space required for the utilities is bounded from above by |𝒜| (n−1+ℐ)! / ((n−1)! ℐ!). If ℐ is bounded by a constant as n grows, this grows like O(|𝒜| n^ℐ).

For each AGG-∅, there exists a unique induced normal-form representation with the same set of players and |Ai| actions for each i; its utility function is a matrix that specifies each player i's payoff for each possible action profile a ∈ A. This implies a space complexity of n ∏i=1..n |Ai|.
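The counting argument in the proof of Proposition 3.2.6 can be checked numerically for small cases. The sketch below (our own illustrative code, with hypothetical function names) compares a brute-force enumeration of compositions against the closed form, and packages the resulting bound:

```python
from math import comb

def count_compositions(total, parts):
    """Ordered compositions of `total` into `parts` nonnegative integers,
    counted by direct recursive enumeration."""
    if parts == 1:
        return 1
    return sum(count_compositions(total - x, parts - 1)
               for x in range(total + 1))

def max_payoff_values(num_actions, n, max_indegree):
    """Upper bound of Proposition 3.2.6 on stored payoff values:
    |A| * (n-1+I)! / ((n-1)! I!)."""
    return num_actions * comb(n - 1 + max_indegree, max_indegree)

# Compositions of n-1 = 4 into |nu(alpha)|+1 = 3 bins match C(4+2, 2) = 15.
assert count_compositions(4, 3) == comb(6, 2) == 15
```

The stars-and-bars identity `comb(n - 1 + k, k)` is what makes the bound polynomial in n for fixed maximum in-degree.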
When |Ai| ≥ 2 for all i, the size of the induced normal-form representation grows exponentially with respect to n. On the other hand, we observe that the number of payoff values stored in an AGG-∅ representation is always less than or equal to the number of payoff values in the induced normal-form representation. Of course, the AGG-∅ representation has the extra overhead of representing the action graph, which is bounded by |𝒜|ℐ. But this overhead is dominated by the size of the induced normal form, n ∏j |Aj|. Thus, an AGG-∅'s asymptotic space complexity is never worse than that of its induced normal-form game.

It is also possible to describe a reverse transformation that encodes any arbitrary game in normal form as an AGG-∅. Specifically, a unique node ai must be created for each action available to each agent i. Thus ∀α ∈ 𝒜, c(α) ∈ {0, 1}, and ∀i, ∑α∈Ai c(α) must equal 1. The configuration simply indicates each agent's action choice, and expresses no anonymity or context-specific independence structure. This representation is no more or less compact than the normal form. More precisely, the number of distinct configurations over ν(ai) is the number of action profiles of the other players, which is ∏j≠i |Aj|. Since i has |Ai| actions, ∏j |Aj| payoff values are needed to represent i's payoffs. So in total n ∏j |Aj| payoff values are stored, exactly the number in the normal form.

³This is the most compact way of representing the utility functions, but it does not provide easy random access to the utilities. Therefore, when we want to do computation using AGGs, we may convert each utility function uα to a data structure that efficiently implements a mapping from sequences of integers to (floating-point) numbers (e.g., tries, hash tables or red-black trees), with space complexity O(ℐ |C^(α)|).
Figure 3.2: AGG-∅ representation of a 3-player, 3-action graphical game.

One might ask whether AGG-∅s can compactly represent known classes of structured games. Consider the graphical game representation as defined in Definition 2.1.2. Graphical games can be represented as AGG-∅s by replacing each node i in the graphical game with a distinct cluster of nodes Ai representing the action set of agent i. If the graphical game has an edge from i to j, edges must be created in the AGG-∅ so that ∀ai ∈ Ai, ∀aj ∈ Aj, ai ∈ ν(aj). The resulting AGG-∅s are as compact as the original graphical games. Figure 3.2 shows the AGG-∅ representation of a graphical game having three nodes and two edges (i.e., player 1 and player 3 do not directly affect each other's payoffs).

Another important class of structured games is symmetric games, as defined in Section 2.1.1. An arbitrary symmetric game can be encoded as an AGG-∅ without an increase in asymptotic size. Specifically, let Ai = 𝒜 for all i ∈ N. The resulting action graph is a clique, i.e., ν(α) = 𝒜 for all α ∈ 𝒜.

3.2.2 AGGs with Function Nodes

There are games with certain kinds of context-specific independence structure that AGG-∅s are not able to exploit (see, e.g., Example 3.2.7 below). In this section we extend the AGG-∅ representation by introducing function nodes, allowing us to exploit a much wider variety of utility structures. Of course, as always, compact representation is not interesting as an end in itself. In Section 3.4.2 we identify broad subclasses of AGG-FNs (rich enough, indeed, to encompass all AGG-FN examples presented in this chapter) that are amenable to efficient computation.

Examples: Coffee Shops and Parity

Example 3.2.7 (Coffee Shop game). Consider a game involving n players; each player plans to open a coffee shop in a downtown area, represented by an r×k grid.
Each player can choose to open a shop located within any of the B ≡ rk blocks, or decide not to enter the market. Conditioned on player i choosing some location α, her utility depends on the numbers of players who chose (i) the same block; (ii) any of the surrounding blocks; and (iii) any other location.

The normal-form representation of this game has size n|𝒜|^n = n(B+1)^n. Since there are no strict independencies in the utility function, the asymptotic size of the graphical game representation is the same. Let us now represent the game as an AGG-∅. We observe that if agent i chooses an action α corresponding to one of the B locations, then her payoff is affected by the configuration over all B locations. Hence, ν(α) must consist of the B action nodes corresponding to the B locations, and so the action graph has in-degree ℐ = B. Since the action sets completely overlap, the representation size is Θ(|𝒜| |C^(α)|) = Θ(B (n−1+B)! / ((n−1)! B!)). If we hold B constant, this becomes Θ(B n^B), which is exponentially more compact than the normal form and the graphical game representation. If we instead hold n constant, the size of the representation is Θ(B^n), which is only slightly better than the normal form and graphical game representations.

Intuitively, the AGG-∅ representation is able to exploit anonymity structure in this game. However, this game's payoff function also has context-specific structure that the AGG-∅ does not capture. Observe that uα depends only on three quantities: the number of players who chose the same block, the number of players who chose an adjacent block, and the number of players who chose another location.
In other words, uα can be written as a function g of only three integers:

  uα(c^(α)) = g(c(α), ∑α′∈𝒜′ c(α′), ∑α″∈𝒜″ c(α″)),

where 𝒜′ is the set of actions surrounding α and 𝒜″ is the set of actions corresponding to other locations. The AGG-∅ representation is not able to exploit this context-specific information, and so duplicates some utility values. There exist many similar examples in which the utility functions uα can be expressed as functions of a small number of intermediate parameters. Here we give one more.

Example 3.2.8 (Parity game). In a "parity game", each uα depends only on whether the number of agents at neighboring nodes is even or odd, as follows:

  uα = 1 if ∑α′∈ν(α) c(α′) mod 2 = 0, and uα = 0 otherwise.

Observe that in the Parity game uα can take just two distinct values; however, the AGG-∅ representation must specify a value for every configuration c^(α).

Definition of AGG-FNs

Structure such as that in Examples 3.2.7 and 3.2.8 can be exploited within the AGG framework by introducing function nodes into the action graph G; intuitively, we use them to describe intermediate parameters upon which players' utilities depend. Now G's vertices consist of both the set of action nodes 𝒜 and the set of function nodes 𝒫, i.e., G = (𝒜 ∪ 𝒫, E). We require that no function node p ∈ 𝒫 can be in any player's action set: 𝒜 ∩ 𝒫 = ∅. Thus, the total number of nodes in G is |𝒜| + |𝒫|. Each node in G can have action nodes and/or function nodes as neighbors. We associate a function fp : C^(p) → ℝ with each p ∈ 𝒫, where c^(p) ∈ C^(p) denotes a configuration over p's neighbors. The configurations c are extended to include the function nodes by the definition c(p) ≡ fp(c^(p)).
If p ∈ 𝒫 has no neighbors, fp is a constant function. To ensure that the AGG is meaningful, the graph G restricted to the nodes in 𝒫 is required to be a directed acyclic graph (DAG). This condition ensures that for all α and p, c^(α) and c^(p) are well defined. To ensure that every p ∈ 𝒫 is "useful", we also require that p has at least one outgoing edge. As before, for each action node α we define a utility function uα : C^(α) → ℝ. We call this extended representation an Action Graph Game with Function Nodes (AGG-FN), and define it formally as follows.

Definition 3.2.9 (AGG-FN). An Action Graph Game with Function Nodes (AGG-FN) is a tuple (N, A, 𝒫, G, f, u), where:

• N is the set of agents;

• A = ∏i∈N Ai is the set of action profiles;

• 𝒫 is a finite set of function nodes;

• G = (𝒜 ∪ 𝒫, E) is an action graph, where 𝒜 = ⋃i∈N Ai is the set of distinct actions. We require that the restriction of G to the nodes in 𝒫 is acyclic and that for every p ∈ 𝒫 there exists an m ∈ 𝒜 ∪ 𝒫 such that (p, m) ∈ E;

• f is a tuple (fp)p∈𝒫, where each fp : C^(p) → ℝ is an arbitrary mapping from configurations over the neighbors of p to real numbers;

• u is a tuple (uα)α∈𝒜, where each uα : C^(α) → ℝ is the utility function for action α.

Given an AGG-FN, we can construct an equivalent AGG-∅ with the same players N and actions 𝒜 and equivalent utility functions, but without any function nodes. We call this the induced AGG-∅ of the AGG-FN. There is an edge from α′ to α in the induced AGG-∅ either if there is an edge from α′ to α in the AGG-FN, or if there is a path from α′ to α through a chain consisting entirely of function nodes.
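As an illustration of the machinery just defined, the Parity game of Example 3.2.8 becomes trivial to encode once a function node computes the parity of its neighborhood. The sketch below is our own illustrative code with a hypothetical data layout:

```python
def parity_value(counts):
    """f^p for the Parity game: parity of the total count on p's neighbors."""
    return sum(counts) % 2

def parity_utility(action, profile, neighbors):
    """u^alpha in the AGG-FN encoding: alpha's sole neighbor is the function
    node p_alpha with nu(p_alpha) = nu(alpha), so each utility table stores
    only two values, indexed by c(p_alpha) in {0, 1}."""
    counts = [sum(1 for a in profile if a == m) for m in neighbors[action]]
    u_table = {0: 1, 1: 0}  # payoff 1 iff the neighborhood count is even
    return u_table[parity_value(counts)]

# Clique action graph on {'A', 'B'}: nu('A') = nu('B') = ('A', 'B').
nbrs = {'A': ('A', 'B'), 'B': ('A', 'B')}
result = parity_utility('A', ('A', 'A', 'B'), nbrs)  # 3 tokens -> odd -> 0
```

The table per action shrinks from one entry per configuration to two entries, which is precisely the saving that function nodes are introduced to capture.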
From the definition of AGG-FNs, the utility of playing action α is uniquely determined by the configuration c^(α), which is uniquely determined by the configuration over the actions that are neighbors of α in the induced AGG-∅. As a result, the utility tables of the induced AGG-∅ can be filled in unambiguously. We observe that the number of utility values stored in an AGG-FN is no greater than the number of utility values in the induced AGG-∅. On the other hand, AGG-FNs have to represent the functions fp for each p ∈ 𝒫. In the worst case, these functions can be represented as explicit mappings, similar to the utility functions uα. However, it is often possible to define these functions algebraically by combining elementary operations, as we do in most of the examples given in this chapter. In this case the functions' representations require a negligible amount of space.

Representation Size

What is the size of an AGG-FN (N, A, 𝒫, G, f, u)? The following proposition gives a sufficient condition for the representation size to be polynomial. Here we speak about a class of AGG-FNs because our statement is about the asymptotic behavior of the representation size. This is in contrast to Proposition 3.2.6, where we gave an exact bound on the size of an individual AGG-∅.

Proposition 3.2.10. A class of AGG-FNs has representation size bounded by a function polynomial in n, |𝒜| and |𝒫| if the following conditions hold:

1. for all function nodes p ∈ 𝒫, the size of p's range |R(fp)| is bounded by a function polynomial in n, |𝒜| and |𝒫|; and

2. maxm∈𝒜∪𝒫 |ν(m)| (the maximum in-degree in the action graph) is bounded by a constant.

Proof. Given an AGG-FN (N, A, 𝒫, G, f, u), it is straightforward to check that all components except u and f are polynomial in n, |𝒜| and |𝒫|. First, consider an action node α ∈ 𝒜. Recall that the size of the utility function uα is |C^(α)|.
Partition ν(α), the set of α's neighbors, into ν𝒜(α) = ν(α) ∩ 𝒜 and ν𝒫(α) = ν(α) ∩ 𝒫 (neighboring action nodes and function nodes, respectively). Since for each action α′ ∈ ν𝒜(α) we have c(α′) ∈ {0, . . . , n}, and for each p ∈ ν𝒫(α) we have c(p) ∈ R(fp), it follows that |C^(α)| ≤ (n+1)^|ν𝒜(α)| ∏p∈ν𝒫(α) |R(fp)|. This is polynomial because all action node in-degrees are bounded by a constant.

Now consider a function node p ∈ 𝒫. Without loss of generality, assume that its function fp is represented explicitly as a mapping. (Any other representation of fp can be transformed into this explicit representation.) The representation size of fp is then |C^(p)|. Using the same reasoning as above, we have |C^(p)| ≤ (n+1)^|ν𝒜(p)| ∏q∈ν𝒫(p) |R(fq)|, which is polynomial since all function node in-degrees are bounded by a constant.

When the functions fp do not have to be represented explicitly, we can drop the requirement on the in-degree of function nodes.

Corollary 3.2.11. A class of AGG-FNs has representation size bounded by a function polynomial in n, |𝒜| and |𝒫| if the following conditions hold:

1. for all function nodes p ∈ 𝒫, the function fp has a representation whose size is polynomial in n, |𝒜| and |𝒫|;

2. for each function node p ∈ 𝒫 that is a neighbor of some action node α, the size of p's range |R(fp)| is bounded by a function polynomial in n, |𝒜| and |𝒫|; and

3. maxα∈𝒜 |ν(α)| (the maximum in-degree among action nodes) is bounded by a constant.

A very useful type of function node is the simple aggregator.

Definition 3.2.12 (Simple aggregator). A function node p ∈ 𝒫 is a simple aggregator if each of its neighbors in ν(p) is an action node and fp is the summation function: fp(c^(p)) = ∑m∈ν(p) c(m).
Simple aggregator function nodes take as their value the total number of players who chose any of the node's neighbors. Since these functions can be specified in constant space, and since R(fp) = {0, . . . , n} for all p, Corollary 3.2.11 applies. That is, the representation sizes of AGG-FNs whose function nodes are all simple aggregators are polynomial whenever the in-degrees of action nodes are bounded by a constant. In fact, under certain assumptions we can prove an even tighter bound on the representation size, analogous to Proposition 3.2.6 for AGG-∅s. Intuitively, this works because both configurations on action nodes and configurations on simple aggregators count the numbers of players who behave in certain ways.

Proposition 3.2.13. Consider a class of AGG-FNs whose function nodes are all simple aggregators. For each m ∈ 𝒜 ∪ 𝒫, define β(m) = {m} if m ∈ 𝒜, and β(m) = ν(m) otherwise. Intuitively, β(m) is the set of action nodes whose counts are aggregated by node m. If for each α ∈ 𝒜 and for each m, m′ ∈ ν(α), β(m) ∩ β(m′) = ∅ unless m = m′ (i.e., no action node affects α in more than one way), then the AGG-FNs' representation sizes are bounded by |𝒜| (n−1+ℐ)! / ((n−1)! ℐ!), where ℐ = maxα∈𝒜 |ν(α)| is the maximum in-degree of action nodes.

Proof. Consider the utility function uα for an arbitrary action α. Each neighbor m ∈ ν(α) is either an action or a simple aggregator. Observe that a configuration c^(α) ∈ C^(α) is a tuple of integers specifying the number of players choosing each action in the set β(m), for each m ∈ ν(α). As in the proof of Proposition 3.2.6, we extend each player's set of actions to 𝒜, making the game symmetric. This weakly increases the number of configurations.
Since the sets β(m) are non-overlapping, the number of configurations possible in the extended action space is equal to the number of (ordered) combinatorial compositions of n−1 into |ν(α)|+1 nonnegative integers, which is (n−1+|ν(α)|)! / ((n−1)! |ν(α)|!). This includes one bin for each action or simple aggregator in ν(α), plus one bin for agents that take an action that is neither in ν(α) nor in the neighborhood of any simple aggregator in ν(α). Then the total space required for representing u is bounded by |𝒜| (n−1+ℐ)! / ((n−1)! ℐ!), where ℐ = maxα∈𝒜 |ν(α)|.

Figure 3.3: A 5×6 Coffee Shop game. Left: the AGG-∅ representation without function nodes (looking at only the neighborhood of α). Middle: we introduce two function nodes, p′ (bottom) and p″ (top). Right: α now has only 3 neighbors.

Consider the Coffee Shop game from Example 3.2.7. For each action node α corresponding to a location, we introduce two simple aggregator function nodes, p′α and p″α. Let ν(p′α) be the set of actions surrounding α, and ν(p″α) the set of actions corresponding to other locations. Then we set ν(α) = {α, p′α, p″α}, as shown in Figure 3.3. Now each c^(α) is a configuration over only three nodes. Since each fp is a simple aggregator, Corollary 3.2.11 applies and the size of this AGG-FN is polynomial in n and |𝒜|. In fact, since the game is symmetric and the β(·)'s as defined in Proposition 3.2.13 are non-overlapping, we can calculate the exact value of |C^(α)| as the number of compositions of n−1 into four nonnegative integers, (n+2)! / ((n−1)! 3!) = n(n+1)(n+2)/6 = O(n^3). We must therefore store Bn(n+1)(n+2)/6 = O(Bn^3) utility values.
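To make the savings concrete, the following sketch (our own code, using the formulas derived above; the function name is hypothetical) compares the three representation sizes for the Coffee Shop game, counting stored payoff values only:

```python
from math import comb

def coffee_shop_sizes(n, r, k):
    """Payoff values stored for the Coffee Shop game (Example 3.2.7) with
    B = r*k location actions (ignoring the lower-order 'stay out' action)."""
    B = r * k
    normal_form = n * (B + 1) ** n       # n|A|^n with |A| = B + 1
    agg_basic = B * comb(n - 1 + B, B)   # Theta(|A| |C^(alpha)|), in-degree B
    agg_fn = B * comb(n + 2, 3)          # B * n(n+1)(n+2)/6, in-degree 3
    return normal_form, agg_basic, agg_fn

nf, a0, afn = coffee_shop_sizes(n=5, r=5, k=6)   # the 5x6 game of Figure 3.3
# afn == 30 * 35 == 1050, orders of magnitude below both nf and a0
```

Note that `comb(n + 2, 3)` equals n(n+1)(n+2)/6, the composition count computed in the text.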
This is significantly more compact than the AGG-∅ representation, which has a representation size of O(B (n−1+B)! / ((n−1)! B!)).

We can represent the Parity game from Example 3.2.8 in a similar way. For each action α we create a function node pα, and let ν(pα) = ν(α). We then modify ν(α) so that it has only one member, pα. For each function node p we define fp as fp(c^(p)) = ∑α∈ν(p) c(α) mod 2. Since R(fp) = {0, 1}, Corollary 3.2.11 applies. In fact, each utility function just needs to store two values, and so the representation size is O(|𝒜|) plus the size of the action graph.

3.2.3 AGG-FNs with Additive Structure

So far we have assumed that the utility functions uα : C^(α) → ℝ are represented explicitly, i.e., by specifying the payoffs for all c^(α) ∈ C^(α). This is not the only way to represent a mapping; the utility functions could also be defined as analytical functions, decision trees, logic programs, circuits, or even arbitrary algorithms. These alternative representations might be more natural for humans to specify, and in many cases are more compact than the explicit representation. However, this extra compactness does not always allow us to reason more efficiently with the games. In this section, we look at utility functions with additive structure. These functions can be represented compactly and do allow more efficient computation.

Definition of AGG-FNs with Additive Structure

We say that a multivariate function has additive structure if it can be written as a (weighted) sum of functions of subsets of the variables. This form is more compact because we only need to represent the summands, which have lower dimensionality than the entire function. We extend the AGG-FN representation by allowing uα to be represented as a weighted sum of the configurations of the neighbors of α.⁴

Definition 3.2.14.
A utility function uα of an AGG-FN is additive if for all m ∈ ν(α) there exist λm ∈ ℝ such that

  uα(c^(α)) ≡ ∑m∈ν(α) λm c(m).  (3.2.1)

Such an additive utility function can be represented as the tuple (λm)m∈ν(α). This is a very versatile representation of additivity, because the neighbors of α can be function nodes. Thus additive utility functions can represent weighted sums of arbitrary functions of configurations over action nodes. We now formally define an AGG-FN representation where some of the utility functions are additive.

⁴Such a utility function could also be represented using standard function nodes representing summation. However, we treat the common case of additivity separately because it is amenable to special-purpose computational methods (intuitively, leveraging the linearity of expectation; see Section 3.4.3).

Definition 3.2.15. An AGG-FN with additive structure (AGG-FNA) is a tuple (N, A, 𝒫, G, f, 𝒜⁺, Λ, u) where N, A, 𝒫, G, f are as defined in Definition 3.2.9, and

• 𝒜⁺ ⊆ 𝒜 is the set of actions whose utility functions are additive;

• Λ = (λα⁺)α⁺∈𝒜⁺, where each λα⁺ = (λα⁺m)m∈ν(α⁺) is the tuple of coefficients representing the additive utility function uα⁺;

• u = (uα)α∈𝒜∖𝒜⁺, where each uα is as defined in Definition 3.2.9. These are the non-additive utility functions of the game, which are represented explicitly.

Representation Size

We only need |ν(α)| numbers to represent the coefficients of an additive utility function uα, whereas the explicit representation requires |C^(α)| numbers. Of course, we also need to take into account the sizes of the neighboring function nodes p ∈ ν(α) and their corresponding functions fp, which represent the summands of the additive functions.
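Evaluating an additive utility function (Equation 3.2.1) is a single weighted sum over the neighborhood. A minimal sketch, with hypothetical node names of our own:

```python
def additive_utility(lambdas, config):
    """u^alpha(c^(alpha)) = sum over m in nu(alpha) of lambda_m * c(m)
    (Equation 3.2.1); c(m) may itself be the value of a function node."""
    return sum(lam * config[m] for m, lam in lambdas.items())

# alpha's neighbors are two function nodes q1, q2 holding facility costs;
# weight -1 on each recovers the congestion-game utility of Equation (3.2.2).
lambdas = {'q1': -1.0, 'q2': -1.0}
config = {'q1': 3.0, 'q2': 1.5}   # c(q_j) = K_j(#(j, a))
value = additive_utility(lambdas, config)   # -4.5
```

Only the coefficient tuple is stored per action, which is the source of the size reduction discussed next.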
Each fp either has a simple description requiring negligible space, or is represented explicitly as a mapping. In the latter case its size can be analyzed in the same way as utility functions on action nodes. That is, when the neighbors of p are all actions then Proposition 3.2.6 applies; otherwise the discussion in Section 3.2.2 applies.

Representing Congestion Games as AGG-FNAs

An arbitrary congestion game can be encoded as an AGG-FNA with no loss of compactness, where all uα are represented as additive utility functions. Given a congestion game (N, M, (Ai)i∈N, (Kjk)j∈M,k≤n) as defined in Definition 2.1.1, we construct an AGG-FNA with the same number of players and the same number of actions for each player as follows.

• Create ∑i∈N |Ai| action nodes, corresponding to the actions in the congestion game. In other words, the action sets do not overlap.

• Create 2m function nodes, labeled (p1, . . . , pm, q1, . . . , qm). For each j ∈ M, there is an edge from pj to qj. For all j ∈ M and for all α ∈ 𝒜, if facility j is included in action α in the congestion game, then in the action graph there is an edge from the action node α to pj, and also an edge from qj to α.

• For each pj, define c(pj) ≡ ∑α∈ν(pj) c(α), i.e., pj is a simple aggregator. Since its neighbors are the actions that include facility j, c(pj) is the number of players that chose facility j, which is #(j, a).

• Assign each qj only one neighbor, namely pj, and define c(qj) ≡ fqj(c(pj)) ≡ Kj(c(pj)). In other words, c(qj) is exactly Kj(#(j, a)), the cost on facility j.

Figure 3.4: Left: a two-player congestion game with three facilities. The actions are shown as ovals containing their respective facilities. Right: the AGG-FNA representation of the same congestion game.
• For each action node α, represent the utility function uα as an additive function with weight −1 for each of its neighbors:

  uα(c^(α)) = ∑j∈ν(α) −c(j) = −∑j∈ν(α) Kj(#(j, a)).  (3.2.2)

Example 3.2.16 (Congestion game). Consider the AGG-FNA representation of a two-player congestion game (see Figure 3.4). The congestion game has three facilities labeled {1, 2, 3}. Player A has actions A1 = {1} and A2 = {1, 2}; Player B has actions B1 = {2, 3} and B2 = {3}.

Now let us consider the representation size of this AGG-FNA. The action graph has |𝒜| + 2m nodes and O(m|𝒜|) edges; the function nodes p1, . . . , pm are simple aggregators and each requires only constant space; each fqj requires n numbers to specify, so the total size of the AGG-FNA is Θ(mn + m|𝒜|) = Θ(mn + m ∑i∈N |Ai|). Thus this AGG-FNA representation has the same space complexity as the original congestion game representation.

One extension of congestion games is player-specific congestion games [Milchtaich, 1996, Monderer, 2007]. Instead of all players having the same costs Kjk, in these games each player has a different set of costs. This can easily be represented as an AGG-FNA by following the construction above, but using a different set of function nodes qi1, . . . , qim for each player i.

3.3 Further Examples

In this section we provide several more examples of structured games that can be compactly represented as AGGs.

3.3.1 A Job Market

Here we describe a class of example games that can be compactly represented as AGG-∅s. Unlike the Ice Cream Vendor game, the following example does not involve choosing among actions that correspond to geographical locations.

Figure 3.5: AGG-∅ representation of the Job Market game.

Example 3.3.1 (Job Market game).
Consider individuals competing in a job market. Each player chooses a field of study and a level of education to achieve. The utility of player i is the sum of two terms: (a) a constant cost depending only on the chosen field and education level, capturing the difficulty of studies and the cost of tuition and forgone wages; and (b) a variable reward, depending on (i) the number of players who chose the same field and education level as i, (ii) the number of players who chose a related field at the same education level, and (iii) the number of players who chose the same field at one level above or below i.

Figure 3.5 gives an action graph modeling one such job market scenario, in which there are three fields: Economics, Computer Science and Electrical Engineering. For each field there are four levels of postsecondary study: Diploma, Bachelor, Master and PhD. Economics and Computer Science are considered related fields, and so are Computer Science and Electrical Engineering. There is another action representing high school education, which does not require a choice of field. The maximum in-degree of the action graph is five, whereas a naive representation of the game as a symmetric game (see Section 3.2.1) would correspond to a complete action graph with in-degree 13. Thus this AGG-∅ representation is able to take advantage of anonymity as well as context-specific independence structure.

3.3.2 Representing Anonymous Games as AGG-FNs

One property of the AGG-∅ representation as defined in Section 3.2.1 is that the utility function u_α is shared by all players who have α in their action sets. What if we want to represent games with agent-specific utility functions, where utilities depend not only on α and c(α), but also on the identity of the player playing α?
As mentioned in Section 2.1.1, researchers have studied anonymous games, which deviate from symmetric games by allowing agent-specific utility functions [Daskalakis and Papadimitriou, 2007, Kalai, 2004, 2005]. To represent games of this type as AGGs, we cannot just let multiple players share action α, because that would force those players to have the same utility function u_α. It does work to give agents non-overlapping action sets, replicating each action once for each agent. However, the resulting AGG-∅ is not compact; it does not take advantage of the fact that each of the replicated actions affects other players' utilities in the same way. Using function nodes, it is possible to compactly represent this kind of structure. We again split α into separate action nodes α_i for each player i able to take the action. Now we also introduce a function node p with every α_i as a neighbor, and define f^p to be a simple aggregator. Then p gives the total number of agents who chose action α, expressing anonymity, and action nodes include p as a neighbor instead of each α_i. This allows agents to have different utility functions without sacrificing representational compactness.

[Figure 3.6: AGG-FN representation of a game with agent-specific utility functions.]

Example 3.3.2 (Anonymous game). Consider an anonymous game with two classes of players, each class sharing the same utility functions. The AGG-FN representation of the game is shown in Figure 3.6. Players from the first class have action set {A1, A2, A3}, and players from the second class have action set {B1, B2, B3}. Furthermore, the utility functions of the second class of players exhibit certain context-specific independence structure, which is expressed by the absence of some of the possible edges from function nodes to action nodes B1, B2, B3.
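As a concrete sketch of this construction (all names and numbers here are hypothetical, not from the text): two players share a replicated action α, a simple-aggregator function node p counts how many players chose a copy of α, and each player applies her own agent-specific utility function to c(p).

```python
# Sketch of the anonymous-game construction: each player i gets a private
# copy alpha_i of a shared action alpha; a simple-aggregator function node p
# counts how many players chose any copy, and each player's agent-specific
# utility reads that count. All names and payoffs are illustrative.

def aggregator_count(action_profile, copies_of_alpha):
    """Simple aggregator p: number of players whose action is a copy of alpha."""
    return sum(1 for a in action_profile if a in copies_of_alpha)

# Agent-specific utilities: both depend on alpha only through c(p),
# but the functions themselves differ per player.
utilities = {
    0: lambda c_p: 10 - 2 * c_p,   # player 0's utility for playing her copy
    1: lambda c_p: 5 - c_p,        # player 1's utility for playing his copy
}

copies = {"alpha_0", "alpha_1"}           # replicated action nodes
profile = ("alpha_0", "alpha_1")          # both players chose (a copy of) alpha
c_p = aggregator_count(profile, copies)   # c(p) = 2
payoffs = {i: u(c_p) for i, u in utilities.items()}
print(c_p, payoffs)
```

The point of the aggregator is that the replicated nodes α_i feed into a single count, so each agent's utility table is indexed by c(p) rather than by the full profile of copies.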
3.3.3 Representing Polymatrix Games as AGG-FNAs

A polymatrix game (defined in Section 2.1.1) can be compactly represented as an AGG-FNA. The encoding is as follows. The AGG-FNA has non-overlapping action sets. For each pair of players (i, j), we create two function nodes to represent i's and j's payoffs under the bimatrix game between them. Each of these function nodes has incoming edges from all of i's and j's actions. For each player i and each of his actions a_i, there are incoming edges from the n − 1 function nodes representing i's payoffs in his bimatrix games against each of the other players. The utility function u_{a_i} is an additive utility function with weights equal to 1. By arguments similar to those in Section 3.2.1, this AGG-FNA representation has the same space complexity as the total size of the bimatrix games.

[Figure 3.7: AGG-FNA representation of a 3-player polymatrix game. Function node UAB represents player A's payoffs in his bimatrix game against B, UBA represents player B's payoffs in his bimatrix game against A, and so on. To avoid clutter we do not show the edges from the action nodes to the function nodes in this graph. Such edges exist from A and B's actions to UAB and UBA, from A and C's actions to UAC and UCA, and from B and C's actions to UBC and UCB.]

Example 3.3.3 (Polymatrix game). Consider the AGG-FNA representation of a three-player polymatrix game, given in Figure 3.7. Each player's payoff is the sum of her payoffs in the 2×2 games played with each of the other players; she is only able to choose her action once. This additive utility structure can be captured by introducing a function node U_{ij} to represent each player i's utility in the bimatrix game played with player j.
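The additive payoff structure that the U_{ij} nodes encode can be sketched directly. The following is a hypothetical 3-player, 2-action instance (the payoff matrices are illustrative, not taken from the text):

```python
# Sketch of the additive structure an AGG-FNA encodes for a polymatrix game:
# player i's payoff is the sum, over opponents j, of the bimatrix payoff
# U[(i, j)][a_i][a_j]. Matrices below are illustrative.

def polymatrix_payoff(U, i, a):
    """Player i's total payoff under pure profile a = (a_1, ..., a_n)."""
    return sum(U[(i, j)][a[i]][a[j]] for j in range(len(a)) if j != i)

# U[(i, j)] is i's payoff matrix in the bimatrix game against j.
U = {
    (0, 1): [[1, 0], [0, 2]], (0, 2): [[3, 1], [0, 1]],
    (1, 0): [[2, 2], [0, 1]], (1, 2): [[0, 1], [1, 0]],
    (2, 0): [[1, 1], [2, 0]], (2, 1): [[0, 0], [1, 3]],
}
a = (0, 1, 1)
print([polymatrix_payoff(U, i, a) for i in range(3)])  # [1, 0, 5]
```

Each dictionary entry U[(i, j)] plays the role of the function node U_{ij}; the outer sum is the additive utility function with weights 1.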
3.3.4 Congestion Games with Action-Specific Rewards

So far the only use we have shown for AGG-FNAs is bringing existing game representations into the AGG framework. Of course, another key advantage of our approach is the ability to compactly represent games that would not have been compact under these existing game representations. We now give such an example.

Example 3.3.4 (Congestion game with action-specific rewards). Consider the following game with n players. As in a congestion game, there is a set of facilities M, each action involves choosing a subset of the facilities, and the cost for facility j depends only on the number of players that chose facility j. Now further assume that, in addition to the cost of using the facilities, each player i also derives some utility R_i depending only on her own action a_i, i.e., the set of facilities she chose. This utility is not necessarily additive across facilities; that is, in general if A, B ⊂ M and A ∩ B = ∅, then R_i(A ∪ B) ≠ R_i(A) + R_i(B). So i's total utility is

u_i(a) = R_i(a_i) − Σ_{j∈a_i} K_j(#(j, a)).  (3.3.1)

This game can model a situation in which the players use the facilities to complete a task, and the utility of the task depends on the facilities chosen. Another interpretation is given by Ben-Sasson et al. [2006], in their analysis of "congestion games with strategy costs," which have exactly this type of utility function; that work interpreted (the negative of) R_i(a_i) as the computational cost of choosing the pure strategy a_i in a congestion game.

Due to the extra R_i(a_i) term in the utility expression (3.3.1), this game cannot be directly represented as a congestion game or a player-specific congestion game,⁵ but it can be compactly represented as an AGG-FNA. We create Σ_i |A_i| action nodes, giving the agents non-overlapping action sets.
We have shown in Section 3.2.3 that we can use function nodes and additive utility functions to represent the congestion-game-like costs. Beyond this construction, we just need to create a function node r_i for each player i and define c(r_i) to be equal to R_i(a_i). The neighbors of r_i are i's entire action set: ν(r_i) = A_i. Since the action sets do not overlap, there are only |A_i| distinct configurations over A_i. In other words, |C(r_i)| = |A_i|, and we need only O(|A_i|) space to represent each R_i. The total size of the representation is O(mn + m Σ_{i∈N} |A_i|).

⁵ Interestingly, Ben-Sasson et al. [2006] showed that this game belongs to the set of potential games, which implies that there exists an equivalent congestion game. However, building such a congestion game from the potential function following Monderer and Shapley's [1996] construction yields an exponential number of facilities, meaning that this congestion game representation is exponentially larger than the AGG-FNA representation presented here.

3.4 Computing Expected Payoff with AGGs

Up to this point, we have concentrated on how AGGs may be used to compactly represent games of interest. But compact representation is only half the story, and indeed by itself is relatively easy to achieve. Our goal is to identify a compact representation that can be used directly (e.g., without conversion to its induced normal form) for the computation of game-theoretic quantities of interest. We now turn to this computational perspective, and show that we can indeed leverage AGGs' representational compactness in the computation of game-theoretic quantities. In this section we focus on the computational task of computing an agent's expected payoff under a mixed strategy profile.
As we discussed in Section 2.2, this task is important as an inner-loop problem in the computation of many game-theoretic quantities, including Govindan and Wilson's [2003, 2004] algorithms for finding Nash equilibria, the simplicial subdivision algorithm for finding Nash equilibria [van der Laan et al., 1987], and Papadimitriou and Roughgarden's [2008] algorithm for finding correlated equilibria. We discuss some of these applications in Section 3.5.

Our main result of this section is an algorithm that efficiently computes expected payoffs of AGGs by exploiting their context-specific independence, anonymity and additivity structure. In Section 3.4.1 we introduce our expected payoff algorithm for AGG-∅s, and show (in Theorem 3.4.1) that the algorithm runs in time polynomial in the size of the input AGG-∅. For the special case of symmetric strategies in symmetric AGG-∅s, we present a different algorithm in Section 3.4.1 which runs asymptotically faster than our general algorithm for AGG-∅s; in Section 3.4.1 we extend this approach to the broader class of k-symmetric AGG-∅s. Finally, in Sections 3.4.2 and 3.4.3 we extend our expected payoff algorithm to AGG-FNs and AGG-FNAs respectively, and identify (in Theorems 3.4.5 and 3.4.6) conditions under which these extended algorithms run in polynomial time.

3.4.1 Computing Expected Payoff for AGG-∅s

Following the notation of Section 2.2, we denote a mixed strategy of i by σ_i ∈ Σ_i, a mixed-strategy profile by σ ∈ Σ, and the probability that i plays action α as σ_i(α). Now we can write the expected utility to agent i for playing pure strategy a_i, given that all other agents play the mixed strategy profile σ_{-i}, as

V^i_{a_i}(σ_{-i}) ≡ Σ_{a_{-i}∈A_{-i}} u_i(a_i, a_{-i}) Pr(a_{-i} | σ_{-i}),  (3.4.1)

Pr(a_{-i} | σ_{-i}) ≡ ∏_{j≠i} σ_j(a_j).
(3.4.2)

Note that Equation (3.4.2) gives the probability of a_{-i} under the mixed strategy σ_{-i}. In the rest of this section we focus on the problem of computing V^i_{a_i}(σ_{-i}) given i, a_i and σ_{-i}. Having established the machinery to compute V^i_{a_i}(σ_{-i}), we can then compute the expected utility of player i under a mixed strategy profile σ as Σ_{a_i∈A_i} σ_i(a_i) V^i_{a_i}(σ_{-i}).

One might wonder why Equations (3.4.1) and (3.4.2) are not the end of the story. Notice that Equation (3.4.1) is a sum over the set A_{-i} of action profiles of players other than i. The number of terms is ∏_{j≠i} |A_j|, which grows exponentially in n. If we were to use the normal form representation, there really would be |A_{-i}| different outcomes to consider, each with potentially distinct payoff values. Thus, under the normal form, evaluating Equation (3.4.1) directly would be the best possible algorithm for computing V^i_{a_i}. Since AGGs are fully expressive, the same is true for games without any structure represented as AGGs. However, what about games that are exponentially more compact when represented as AGGs than when represented in the normal form? For these games, evaluating Equation (3.4.1) amounts to an exponential-time algorithm.

In this section we present an algorithm that, given any i, a_i and σ_{-i}, computes the expected payoff V^i_{a_i}(σ_{-i}) in time polynomial in the size of the AGG-∅ representation. In other words, our algorithm is efficient if the AGG-∅ is compact, and requires time exponential in n if it is not. In particular, recall from Proposition 3.2.6 that any AGG-∅ with maximum in-degree bounded by a constant has a representation size that is polynomial in n. As a result our algorithm is polynomial in n for such games.
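The naive computation via Equations (3.4.1) and (3.4.2) can be sketched as follows. The game, payoff function and strategies here are illustrative, not from the text; the point is that the enumeration over A_{-i} has ∏_{j≠i} |A_j| terms.

```python
import itertools

# Direct evaluation of Equations (3.4.1)-(3.4.2): enumerate all opponent
# action profiles a_{-i}; time grows as prod_j |A_j|, i.e. exponentially in n.
# The two-opponent example below is illustrative.

def expected_payoff_naive(u_i, a_i, opponent_strategies):
    """V^i_{a_i}(sigma_{-i}) by brute-force enumeration.

    u_i: maps (a_i, tuple of opponent actions) -> payoff.
    opponent_strategies: list of dicts mapping action -> probability.
    """
    total = 0.0
    supports = [list(s.keys()) for s in opponent_strategies]
    for a_minus_i in itertools.product(*supports):
        pr = 1.0
        for a_j, sigma_j in zip(a_minus_i, opponent_strategies):
            pr *= sigma_j[a_j]                 # Pr(a_{-i} | sigma_{-i})
        total += u_i(a_i, a_minus_i) * pr
    return total

# Example: i's payoff is the number of opponents matching her action.
u_i = lambda a_i, a_mi: sum(1 for a in a_mi if a == a_i)
sigmas = [{"L": 0.5, "R": 0.5}, {"L": 0.25, "R": 0.75}]
print(expected_payoff_naive(u_i, "L", sigmas))  # 0.5 + 0.25 = 0.75
```

This brute-force baseline is what the projection and dynamic-programming ideas of this section improve upon.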
Exploiting Context-Specific Independence: Projection

First, we consider how to take advantage of the context-specific independence structure of an AGG-∅: the fact that i's payoff when playing a_i only depends on configurations over the neighborhood of a_i. The key idea is that we can project the other players' strategies onto a smaller action space that is strategically equivalent from the point of view of an agent who chose action a_i. That is, we construct a graph from the point of view of a given agent, expressing the fact that actions which do not affect his chosen action are, to him, the "same action." This can be seen as inducing a context-specific graphical game. Formally, for every action α ∈ A define a reduced graph G^{(α)} by including only the nodes ν(α) and a new node denoted ∅. The only edges included in G^{(α)} are the directed edges from each of the nodes ν(α) to the node α. Player j's action a_j is projected to an action a^{(α)}_j in the reduced graph G^{(α)} by the mapping

a^{(α)}_j ≡ { a_j, if a_j ∈ ν(α); ∅, if a_j ∉ ν(α).  (3.4.3)

In other words, actions that are not in ν(α) (and therefore do not affect the payoffs of agents playing α) are projected onto a new action, ∅. The resulting projected action set A^{(α)}_j has cardinality at most min(|A_j|, |ν(α)| + 1). This is illustrated in Figure 3.8, using the Ice Cream Vendor game described in Example 3.2.5.

We define the set of mixed strategies on the projected action set A^{(α)}_j by Σ^{(α)}_j ≡ ϕ(A^{(α)}_j). A mixed strategy σ_j on the original action set A_j is projected to σ^{(α)}_j ∈ Σ^{(α)}_j by the mapping

σ^{(α)}_j(a^{(α)}_j) ≡ { σ_j(a_j), if a_j ∈ ν(α); Σ_{α′∈A_j\ν(α)} σ_j(α′), if a^{(α)}_j = ∅.
(3.4.4)

So given a_i and σ_{-i}, we can compute σ^{(a_i)}_{-i} in O(n|A|) time in the worst case. Now we can operate entirely on the projected space, and write the expected payoff as

V^i_{a_i}(σ_{-i}) = Σ_{a^{(a_i)}_{-i} ∈ A^{(a_i)}_{-i}} u(a_i, C^{(a_i)}(a_i, a_{-i})) Pr(a^{(a_i)}_{-i} | σ^{(a_i)}_{-i}),

Pr(a^{(a_i)}_{-i} | σ^{(a_i)}_{-i}) = ∏_{j≠i} σ^{(a_i)}_j(a^{(a_i)}_j).

The summation is over A^{(a_i)}_{-i}, which in the worst case has (|ν(a_i)| + 1)^{n−1} terms. So for AGG-∅s with strict or context-specific independence structure, computing V^i_{a_i}(σ_{-i}) in this way is exponentially faster than doing the summation in (3.4.1) directly. However, the time complexity of this approach is still exponential in n.

[Figure 3.8: Projection of the action graph. Left: action graph of the Ice Cream Vendor game. Right: projected action graph and action sets with respect to the action C1.]

Exploiting Anonymity: Summing over Configurations

Next, we want to take advantage of the anonymity structure of the AGG-∅. Recall from our discussion of representation size that the number of distinct configurations is usually smaller than the number of distinct pure action profiles. So ideally, we want to compute the expected payoff V^i_{a_i}(σ_{-i}) as a sum over the possible configurations, weighted by their probabilities:

V^i_{a_i}(σ_{-i}) = Σ_{c^{(a_i)} ∈ C^{(a_i,i)}} u_i(a_i, c^{(a_i)}) Pr(c^{(a_i)} | σ^{(a_i)}),  (3.4.5)

Pr(c^{(a_i)} | σ^{(a_i)}) = Σ_{a : C^{(a_i)}(a) = c^{(a_i)}} ∏^n_{j=1} σ_j(a_j),  (3.4.6)

where σ^{(a_i)} ≡ (a_i, σ^{(a_i)}_{-i}) and Pr(c^{(a_i)} | σ^{(a_i)}) is the probability of c^{(a_i)} given the mixed strategy profile σ^{(a_i)}. Recall that C^{(a_i,i)} is the set of configurations over ν(a_i) given that i played a_i.
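A direct evaluation of Equation (3.4.6) can be sketched as follows (illustrative names; a two-opponent example in which ν(a_i) contains a single action and "null" stands for the projected dummy action ∅). The enumeration over projected action profiles is what Algorithm 1 will avoid.

```python
import itertools

# Brute-force evaluation of Equation (3.4.6): sum, over all projected action
# profiles inducing configuration c, the product of strategy probabilities.
# This is exponential in n. Illustrative setup: nu(a_i) = ("A",).

def config_probability(config, projected_strategies):
    """Pr(c | sigma): config maps each action in nu(a_i) to a count."""
    total = 0.0
    supports = [list(s.keys()) for s in projected_strategies]
    for profile in itertools.product(*supports):
        induced = {a: profile.count(a) for a in config}  # counts for this profile
        if induced == config:
            pr = 1.0
            for a_j, sigma_j in zip(profile, projected_strategies):
                pr *= sigma_j[a_j]
            total += pr
    return total

sigmas = [{"A": 0.5, "null": 0.5}, {"A": 0.5, "null": 0.5}]
print(config_probability({"A": 1}, sigmas))  # 0.5: exactly one of two plays A
```

Even after projection, the number of profiles enumerated here is ∏_j |A^{(a_i)}_j|; the dynamic program of the next paragraphs reduces this to work proportional to the number of configurations.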
So Equation (3.4.5) is a summation of size |C^{(a_i,i)}|, the number of configurations given that i played a_i, which is polynomial in n if |ν(a_i)| is bounded by a constant. The difficult task is to compute Pr(c^{(a_i)} | σ^{(a_i)}) for all c^{(a_i)} ∈ C^{(a_i,i)}, i.e., the probability distribution over C^{(a_i,i)} induced by σ^{(a_i)}. We observe that the sum in Equation (3.4.6) is over the set of all action profiles corresponding to the configuration c^{(a_i)}. The size of this set is exponential in the number of players; therefore directly computing the probability distribution using Equation (3.4.6) would take time exponential in n.

Can we do better? We observe that the players' mixed strategies are independent, i.e., σ is a product probability distribution σ(a) = ∏_i σ_i(a_i). Also, each player affects the configuration c independently. This structure allows us to use dynamic programming (DP) to efficiently compute the probability distribution Pr(c^{(a_i)} | σ^{(a_i)}). The intuition behind our algorithm is to apply one agent's mixed strategy at a time, effectively adding one agent at a time to the action graph. Let σ^{(a_i)}_{1...k} denote the projected strategy profile of agents {1, ..., k}. Denote by C^{(a_i)}_k the set of configurations induced by the actions of agents {1, ..., k}, and write c^{(a_i)}_k ∈ C^{(a_i)}_k. Denote by P_k the probability distribution on C^{(a_i)}_k induced by σ^{(a_i)}_{1...k}, and by P_k[c] the probability of configuration c. At iteration k of the algorithm, we compute P_k from P_{k−1} and σ^{(a_i)}_k. After iteration n, the algorithm stops and returns P_n. The pseudocode of our DP algorithm is shown as Algorithm 1, and our full algorithm for computing V^i_{a_i}(σ_{-i}) is summarized in Algorithm 2. Each c^{(a_i)}_k is represented as a sequence of integers, so P_k is a mapping from sequences of integers to real numbers.
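A minimal sketch of this dynamic program, using a Python dict keyed by configuration tuples as the mapping P_k (names are illustrative; "null" stands for the projected dummy action ∅):

```python
# Sketch of Algorithm 1: build Pr(c | sigma) by applying one player's
# projected mixed strategy at a time. Configurations are tuples of counts,
# one entry per action in nu(a_i); a dict plays the role of P_k.

def config_distribution(projected_strategies, actions):
    """actions: ordered list of the actions in nu(a_i) (excluding 'null')."""
    index = {alpha: k for k, alpha in enumerate(actions)}
    dist = {tuple(0 for _ in actions): 1.0}       # P_0: empty configuration
    for sigma_k in projected_strategies:          # add player k
        new_dist = {}
        for config, p in dist.items():
            for action, prob in sigma_k.items():
                if prob <= 0.0:
                    continue                      # skip zero-probability actions
                c = list(config)
                if action != "null":              # 'null' leaves counts alone
                    c[index[action]] += 1
                key = tuple(c)
                new_dist[key] = new_dist.get(key, 0.0) + p * prob
        dist = new_dist
    return dist

# Two opponents, each playing action "A" w.p. 0.5 and 'null' w.p. 0.5;
# the count of players choosing A is then binomially distributed.
dist = config_distribution([{"A": 0.5, "null": 0.5}] * 2, ["A"])
print(dist)
```

The total work is proportional to the number of intermediate configurations rather than the number of action profiles, which is the source of the polynomial bound in Theorem 3.4.1.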
We need a data structure for manipulating such probability distributions over configurations (sequences of integers) which permits quick lookup, insertion and enumeration. An efficient data structure for this purpose is a trie [Fredkin, 1962]. Tries are commonly used in text processing to store strings of characters, e.g. as dictionaries for spell checkers. Here we use tries to store strings of integers rather than characters. Both lookup and insertion complexity is linear in |ν(a_i)|. To achieve efficient enumeration of all elements of a trie, we store the elements in a list, in the order of their insertion. We omit the proof of correctness of our algorithm, which is relatively straightforward.

Algorithm 1: Computing the induced probability distribution Pr(c^{(a_i)} | σ^{(a_i)}).
  Input: a_i, σ^{(a_i)}
  Output: P_n, which is the distribution Pr(c^{(a_i)} | σ^{(a_i)}) represented as a trie.
  c^{(a_i)}_0 = (0, ..., 0); P_0[c^{(a_i)}_0] = 1.0   // Initialization: C^{(a_i)}_0 = {c^{(a_i)}_0}
  for k = 1 to n do
    Initialize P_k to be an empty trie
    foreach c^{(a_i)}_{k−1} from P_{k−1} do
      foreach a^{(a_i)}_k ∈ A^{(a_i)}_k such that σ^{(a_i)}_k(a^{(a_i)}_k) > 0 do
        c^{(a_i)}_k = c^{(a_i)}_{k−1}
        if a^{(a_i)}_k ≠ ∅ then c^{(a_i)}_k(a^{(a_i)}_k) += 1   // Apply action a^{(a_i)}_k
        if P_k[c^{(a_i)}_k] does not exist yet then P_k[c^{(a_i)}_k] = 0.0
        P_k[c^{(a_i)}_k] += P_{k−1}[c^{(a_i)}_{k−1}] × σ^{(a_i)}_k(a^{(a_i)}_k)
  return P_n

Complexity

Let C^{(a_i,i)}(σ_{-i}) denote the set of configurations over ν(a_i) that have positive probability of occurring under the mixed strategy profile (a_i, σ_{-i}). In other words, this is the number of terms we need to add together when doing the weighted sum in Equation (3.4.5). When σ_{-i} has full support, C^{(a_i,i)}(σ_{-i}) = C^{(a_i,i)}.

Theorem 3.4.1.
Given an AGG-∅ representation of a game, i's expected payoff V^i_{a_i}(σ_{-i}) can be computed in Θ(n|A| + n|ν(a_i)|² |C^{(a_i,i)}(σ_{-i})|) time, which is polynomial in the size of the representation. If I, the in-degree of the action graph, is bounded by a constant, V^i_{a_i}(σ_{-i}) can be computed in time polynomial in n.

Proof. Since looking up an entry in a trie takes time linear in the size of the key, which is |ν(a_i)| in our case, the complexity of doing the weighted sum in Equation (3.4.5) is O(|ν(a_i)| |C^{(a_i,i)}(σ_{-i})|). Algorithm 1 requires n iterations; in iteration k, we look at all possible combinations of c^{(a_i)}_{k−1} and a^{(a_i)}_k, and in each case do a trie look-up which costs Θ(|ν(a_i)|). Since |A^{(a_i)}_k| ≤ |ν(a_i)| + 1, and |C^{(a_i)}_{k−1}| ≤ |C^{(a_i,i)}|, the complexity of Algorithm 1 is Θ(n|ν(a_i)|² |C^{(a_i,i)}(σ_{-i})|). This dominates the complexity of summing up Equation (3.4.5).

Algorithm 2: Computing expected utility V^i_{a_i}(σ_{-i}), given a_i and σ_{-i}.
  1. For each j ≠ i, compute the projected mixed strategy σ^{(a_i)}_j using Equation (3.4.4).
  2. Compute the probability distribution Pr(c^{(a_i)} | a_i, σ^{(a_i)}_{-i}) using Algorithm 1.
  3. Calculate the expected utility using the weighted sum in Equation (3.4.5).

Adding the cost of computing σ^{(a_i)}_{-i}, we get the overall complexity of
expected payoff computation, Θ(n|A| + n|ν(a_i)|² |C^{(a_i,i)}(σ_{-i})|).

Since |C^{(a_i,i)}(σ_{-i})| ≤ |C^{(a_i,i)}| ≤ |C^{(a_i)}|, and |C^{(a_i)}| is the number of payoff values stored in the payoff function u_{a_i}, this means that expected payoffs can be computed in polynomial time with respect to the size of the AGG-∅. Furthermore, our algorithm is able to exploit strategies with small supports, which lead to a small |C^{(a_i,i)}(σ_{-i})|. Since |C^{(a_i)}| is bounded by (n − 1 + |ν(a_i)|)! / ((n − 1)! |ν(a_i)|!), this implies that if the in-degree of the graph is bounded by a constant, then the complexity of computing expected payoffs is O(n|A| + n^{I+1}).

The proof of Theorem 3.4.1 shows that besides exploiting the compactness of the AGG-∅ representation, our algorithm is also able to exploit mixed strategy profiles with small support sizes, because the time complexity depends on |C^{(a_i,i)}(σ_{-i})|, which is small when support sizes are small. This is important in practice, since we will often need to carry out expected utility computations for strategy profiles with small supports. Porter et al. [2008] observed that quite often games have Nash equilibria with small support, and proposed algorithms that explicitly search for such equilibria. In other algorithms for computing Nash equilibria, such as Govindan-Wilson and simplicial subdivision, it is also quite often necessary to compute expected payoffs for mixed strategy profiles with small support.

Of course it is not necessary to apply the agents' mixed strategies in the order 1, ..., n. In fact, we can apply the strategies in any order. Although the number of configurations |C^{(a_i,i)}(σ_{-i})| remains the same, the ordering does affect the intermediate configurations C^{(a_i)}_k. We can use the following heuristic to try to minimize the number of intermediate configurations: sort the players in ascending order of the sizes of their projected action sets.
This reduces the amount of work we do in earlier iterations of Algorithm 1, but does not change its overall complexity.

The Case of Symmetric Strategies in Symmetric AGG-∅s

As described in Section 3.2.1, if a game is symmetric it can be represented as an AGG-∅ with A_i = A for all i ∈ N. Given a symmetric game, we are often interested in computing expected utilities under symmetric mixed strategy profiles, where a mixed strategy profile σ is symmetric if σ_i = σ_j ≡ σ_* for all i, j ∈ N. In Section 3.5.2 we will discuss algorithms that make use of expected utility computation under symmetric strategy profiles to compute a symmetric Nash equilibrium of symmetric games.

To compute the expected utility V^i_{a_i}(σ_*), we could use the algorithm we proposed for general AGG-∅s under arbitrary mixed strategies, which requires time polynomial in the size of the AGG-∅. But we can gain additional computational speedup by exploiting the symmetry in the game and the strategy profile.

As before, we want to use Equation (3.4.5) to compute the expected utility, so the crucial task is again computing the probability distribution over projected configurations, Pr(c^{(a_i)} | σ^{(a_i)}). Recall that σ^{(a_i)} ≡ (a_i, σ^{(a_i)}_{-i}). Define Pr(c^{(a_i)} | σ^{(a_i)}_*) to be the distribution induced by σ^{(a_i)}_{-i}, the partial mixed strategy profile of players other than i, each playing the symmetric strategy σ^{(a_i)}_*. Once we have the distribution Pr(c^{(a_i)} | σ^{(a_i)}_*), we can compute the distribution Pr(c^{(a_i)} | σ^{(a_i)}) straightforwardly by applying player i's strategy a_i. In the rest of this section we focus on computing Pr(c^{(a_i)} | σ^{(a_i)}_*). Define S(c^{(a_i)}) to be the set containing all action profiles a^{(a_i)} such that C(a^{(a_i)}) = c^{(a_i)}.
Since all agents have the same mixed strategies, each pure action profile in S(c^{(a_i)}) is equally likely, so for any a^{(a_i)} ∈ S(c^{(a_i)}),

Pr(c^{(a_i)} | σ^{(a_i)}_*) = |S(c^{(a_i)})| Pr(a^{(a_i)} | σ^{(a_i)}_*),  (3.4.7)

Pr(a^{(a_i)} | σ^{(a_i)}_*) = ∏_{α∈A^{(a_i)}} (σ^{(a_i)}_*(α))^{c^{(a_i)}(α)}.  (3.4.8)

The size of S(c^{(a_i)}) is given by the multinomial coefficient

|S(c^{(a_i)})| = (n−1)! / ∏_{α∈A^{(a_i)}} (c^{(a_i)}(α))!.  (3.4.9)

Better still, using a Gray code technique we can avoid reevaluating these equations for every c^{(a_i)} ∈ C^{(a_i)}. Denote by c^{(a_i)}_{(α→α′)} the configuration obtained from c^{(a_i)} by decrementing by one the number of agents taking action α ∈ A^{(a_i)} and incrementing by one the number of agents taking action α′ ∈ A^{(a_i)}. Then consider the graph H_{C^{(a_i)}} whose nodes are the elements of the set C^{(a_i)}, and whose directed edges indicate the effect of the operation (α → α′). This graph is a regular triangular lattice inscribed within a (|A^{(a_i)}| − 1)-dimensional simplex. Having computed Pr(c^{(a_i)} | σ^{(a_i)}_*) for one node of H_{C^{(a_i)}} corresponding to configuration c^{(a_i)}, we can compute the result for an adjacent node in O(1) time:

Pr(c^{(a_i)}_{(α→α′)} | σ^{(a_i)}_*) = [σ^{(a_i)}_*(α′) c^{(a_i)}(α)] / [σ^{(a_i)}_*(α) (c^{(a_i)}(α′) + 1)] · Pr(c^{(a_i)} | σ^{(a_i)}_*).  (3.4.10)

H_{C^{(a_i)}} always has a Hamiltonian path (attributed to an unpublished result of Knuth by Klingsberg [1982]), so having computed Pr(c^{(a_i)} | σ^{(a_i)}_*) for an initial c^{(a_i)} using Equation (3.4.8), the results for all other projected configurations (nodes in H_{C^{(a_i)}}) can be computed by using Equation (3.4.10) at each subsequent step on the path.
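The multinomial computation (Equations (3.4.7)-(3.4.9)) and the O(1) neighbor update (Equation (3.4.10)) can be sketched and cross-checked as follows (an illustrative three-opponent example; the strategy numbers are assumptions):

```python
from math import factorial

# Sketch of the symmetric-strategy computation: Pr(c | sigma_*) via the
# multinomial formula (Eqs. 3.4.7-3.4.9), and the O(1) neighbor update
# (Eq. 3.4.10) that moves one player from action alpha to alpha'.

def pr_config(config, sigma, n):
    """config: dict action -> count (counts sum to n); sigma: action -> prob."""
    coeff = factorial(n)
    for cnt in config.values():
        coeff //= factorial(cnt)               # multinomial coefficient (3.4.9)
    p = float(coeff)
    for action, cnt in config.items():
        p *= sigma[action] ** cnt              # profile probability (3.4.8)
    return p

def pr_neighbor(pr_c, config, sigma, src, dst):
    """Eq. (3.4.10): probability of config with one player moved src -> dst."""
    return pr_c * sigma[dst] * config[src] / (sigma[src] * (config[dst] + 1))

sigma = {"A": 0.5, "B": 0.3, "C": 0.2}
c = {"A": 2, "B": 1, "C": 0}                   # n - 1 = 3 opponents
p = pr_config(c, sigma, 3)
# Moving one player from A to C must agree with the direct formula:
p_direct = pr_config({"A": 1, "B": 1, "C": 1}, sigma, 3)
p_update = pr_neighbor(p, c, sigma, "A", "C")
print(p, p_direct, p_update)
```

The update costs a constant number of multiplications regardless of |A^{(a_i)}|, which is what makes the Gray-code traversal of all configurations cheap.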
Generating the Hamiltonian path corresponds to finding a combinatorial Gray code for compositions; an algorithm with constant amortized running time is given by Klingsberg [1982]. Intuitively, it is easy to see that a simple, "lawnmower" Hamiltonian path exists for any lower-dimensional projection of H_{C^{(a_i)}}, with the only state required to compute the next node in the path being a direction value for each dimension.

Our algorithm for computing the distribution Pr(c^{(a_i)} | σ^{(a_i)}_*) is summarized as Algorithm 3. For computing expected utility, we again use Algorithm 2, except with Algorithm 3 replacing Algorithm 1 as the subroutine for computing the distribution Pr(c^{(a_i)} | σ^{(a_i)}_*).

Algorithm 3: Computing the distribution Pr(c^{(a_i)} | σ^{(a_i)}_*) in a symmetric AGG-∅.
  1. Let c^{(a_i)} = c^{(a_i)}_0, where c^{(a_i)}_0 is the initial node of a Hamiltonian path of H_{C^{(a_i)}}.
  2. Compute Pr(c^{(a_i)} | σ^{(a_i)}_*) using Equations (3.4.7)-(3.4.9).
  3. While there are more configurations in C^{(a_i)}:
     (a) Get the next configuration c^{(a_i)}_{(α→α′)} on the Hamiltonian path, using Klingsberg's [1982] algorithm.
     (b) Compute Pr(c^{(a_i)}_{(α→α′)} | σ^{(a_i)}_*) using Equation (3.4.10).
     (c) Let c^{(a_i)} = c^{(a_i)}_{(α→α′)}.
  4. Output Pr(c^{(a_i)} | σ^{(a_i)}_*) for all c^{(a_i)} ∈ C^{(a_i)}.

Theorem 3.4.2.
Computation of the expected utility V^i_{a_i}(σ_*) under a symmetric strategy profile for symmetric action-graph games using Equations (3.4.5), (3.4.7), (3.4.8) and (3.4.10) takes time O(|A| + |ν(a_i)| |C^{(a_i)}(σ^{(a_i)})|).

Proof. Projection to σ^{(a_i)}_* takes O(|A|) time since the strategies are symmetric. Equation (3.4.5) has |C^{(a_i)}(σ^{(a_i)})| summands. The probability for the initial configuration requires O(n) time. Using Gray codes, the computation of each subsequent probability can be done in constant amortized time per configuration. Since each look-up of the utility function takes O(|ν(a_i)|) time, the total complexity of the algorithm is O(|A| + |ν(a_i)| |C^{(a_i)}(σ^{(a_i)})|).

Note that this is faster than our dynamic programming algorithm for general AGG-∅s under arbitrary strategies, whose complexity is Θ(n|A| + n|ν(a_i)|² |C^{(a_i)}(σ^{(a_i)})|) by Theorem 3.4.1. In the usual case where the second term dominates the first, the algorithm for symmetric strategies is faster by a factor of n|ν(a_i)|.

k-symmetric Games

We now move to a generalization of symmetry in games that we call k-symmetry.

Algorithm 4: Computing the probability distribution Pr(c^{(a_i)} | σ^{(a_i)}) in a k-symmetric AGG-∅ under a k-symmetric mixed strategy profile σ^{(a_i)}.
  1. Partition the players according to {N_1, ..., N_k}.
  2. For each l ∈ {1, ..., k}, compute Pr(c^{(a_i)} | σ^{(a_i)}_{N_l}), the probability distribution induced by σ^{(a_i)}_{N_l}, the partial strategy profile of players in N_l. Since σ^{(a_i)}_{N_l} is symmetric, this can be computed efficiently using Algorithm 3 as discussed in Section 3.4.1.
  3. Combine the k probability distributions together using Algorithm 1, resulting in the distribution Pr(c^{(a_i)} | σ^{(a_i)}).

Definition 3.4.3. An AGG-∅ is k-symmetric if there exists a partition {N1, . . .
, N_k} of N such that for all l ∈ {1, ..., k} and all i, j ∈ N_l, A_i = A_j.

Intuitively, k-symmetric AGG-∅s represent games with k classes of agents, where agents within each class are identical. Note that all games are trivially n-symmetric. The Ice Cream Vendor game of Example 3.2.5 is a nontrivial k-symmetric AGG-∅ with k = 3.

Given a k-symmetric AGG-∅ with partition {N_1, ..., N_k}, a mixed strategy profile σ is k-symmetric if for all l ∈ {1, ..., k} and all i, j ∈ N_l, σ_i = σ_j. We are often interested in computing expected utility under k-symmetric strategy profiles. For example, in Section 3.5.2 we will discuss algorithms that make use of such expected utility computations to find k-symmetric Nash equilibria in k-symmetric games. To compute expected utility under a k-symmetric mixed strategy profile, we can use a hybrid approach when computing the probability distribution over configurations, shown in Algorithm 4. Observe that this algorithm combines our specialized Algorithm 3 for handling symmetric games from Section 3.4.1 with the idea of running Algorithm 1 on the joint mixed strategies of subgroups of agents discussed at the end of Section 3.4.1.

3.4.2 Computing Expected Payoff with AGG-FNs

Algorithm 1 cannot be directly applied to AGG-FNs with arbitrary f^p. First of all, projection of strategies does not work directly, because a player j playing an action a_j ∉ ν(α) could still affect c(α) via function nodes. Furthermore, the general idea of using dynamic programming to build up the probability distribution by adding one player at a time does not work, because for an arbitrary function node p ∈ ν(α), each player would not be guaranteed to affect c(p) independently.
We could convert the AGG-FN to an AGG-∅ in order to apply our algorithm, but then we would not be able to translate the extra compactness of AGG-FNs over AGG-∅s into more efficient computation. In this section we identify two subclasses of AGG-FNs for which expected utility can be efficiently computed. In Section 3.4.2 we show that when all function nodes belong to a restricted class of contribution-independent function nodes, expected utility can be computed in polynomial time. In Section 3.4.2 we reinterpret the expected utility problem as a Bayesian network inference problem, which can be solved in polynomial time if the resulting Bayesian network has bounded treewidth.

Contribution-Independent Function Nodes

Definition 3.4.4. A function node p in an AGG-FN is contribution-independent (CI) if
• ν(p) ⊆ A, i.e., the neighbors of p are action nodes;
• there exists a commutative and associative operator ∗, and for each α ∈ ν(p) an integer w_α, such that given an action profile a = (a_1, . . . , a_n), c(p) = ∗_{i∈N: a_i∈ν(p)} w_{a_i};
• the running time of each ∗ operation is bounded by a polynomial in n, |A| and |P|. Furthermore, ∗ can be represented in space polynomial in n, |A| and |P|.

An AGG-FN is contribution-independent if all its function nodes are contribution-independent.

Note that it follows from this definition that c(p) can be written as a function of c^(p) by collecting terms: c(p) ≡ f^p(c^(p)) = ∗_{α∈ν(p)} (∗_{k=1}^{c(α)} w_α).

Simple aggregators can be represented as contribution-independent function nodes, with the + operator serving as ∗, and w_α = 1 for all α. The Coffee Shop game is thus an example of a contribution-independent AGG-FN. For the parity game in Example 3.2.8, ∗ is instead addition mod 2.
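To make the definition concrete, the following sketch (with illustrative names of our own, not taken from the thesis' software) evaluates a contribution-independent function node by folding an operator ∗ over the weights w_α of the players whose actions are neighbors of p. The simple-aggregator and parity examples above correspond to + and addition mod 2:

```python
from functools import reduce

def ci_config_value(op, identity, weights, action_profile, neighbors):
    """Value c(p) of a contribution-independent function node: fold `op` over
    the weights w_a of every player whose chosen action is a neighbor of p."""
    contribs = [weights[a] for a in action_profile if a in neighbors]
    return reduce(op, contribs, identity)

# Four players; the function node's neighbors are actions "A" and "B".
profile = ["A", "B", "A", "C"]

# Simple aggregator: * is +, w_a = 1, so c(p) counts players in nu(p).
count = ci_config_value(lambda x, y: x + y, 0, {"A": 1, "B": 1}, profile, {"A", "B"})

# Parity game: * is addition mod 2.
parity = ci_config_value(lambda x, y: (x + y) % 2, 0, {"A": 1, "B": 1}, profile, {"A", "B"})
```

With this profile, three players choose a neighbor of p, so the simple aggregator yields 3 and the parity node yields 1. The max operator of the auction example below fits the same interface.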
An example of a non-additive CI function node arises in a perfect-information model of an (advertising) auction in which actions correspond to bid amounts [Thompson and Leyton-Brown, 2009]. Here we want c(p) to represent the amount of the winning bid, and so we let w_α be the bid amount corresponding to action α, and ∗ be the max operator.

The advantage of contribution-independent AGG-FNs is that for all function nodes p, each player's strategy affects c(p) independently. This fact allows us to adapt our algorithm to efficiently compute the expected utility V^i_{a_i}(σ_{−i}). For simplicity we present the algorithm for the case where we have one operator ∗ for all p ∈ P, but our approach can be directly applied to games with different operators and w_α associated with different function nodes.

We define the contribution of action α to node m ∈ A ∪ P, denoted δ_α(m), as 1 if m = α; 0 if m ∈ A \ {α}; and ∗_{m′∈ν(m)} (∗_{k=1}^{δ_α(m′)} w_α) if m ∈ P. Then it is easy to verify that given an action profile a = (a_1, . . . , a_n), c(α) = ∑_{j=1}^n δ_{a_j}(α) for all α ∈ A, and c(p) = ∗_{j=1}^n δ_{a_j}(p) for all p ∈ P.

Given that player i played a_i, and for all α ∈ A, we define the projected contribution of action α under a_i, denoted δ^(a_i)_α, as the tuple (δ_α(m))_{m∈ν(a_i)}. Note that different actions α may have identical projected contributions under a_i. Player j's mixed strategy σ_j induces a probability distribution over j's projected contributions: Pr(δ^(a_i) | σ_j) = ∑_{a_j: δ^(a_i)_{a_j} = δ^(a_i)} σ_j(a_j). Now we can operate entirely using the probabilities on projected contributions instead of the mixed strategy probabilities. This is analogous to the projection of σ_j to σ^(a_i)_j in our algorithm for AGG-∅s.
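The induced distribution over projected contributions can be sketched as follows (a minimal illustration with hypothetical actions and contribution tuples, not the thesis' implementation): actions with identical projected contributions are collapsed, and their probabilities summed.

```python
from collections import defaultdict

def contribution_distribution(sigma_j, delta):
    """Pr(delta^{(a_i)} | sigma_j): sum sigma_j(a_j) over all actions a_j whose
    projected contribution delta[a_j] (a tuple indexed by nu(a_i)) coincides."""
    dist = defaultdict(float)
    for a_j, prob in sigma_j.items():
        dist[delta[a_j]] += prob
    return dict(dist)

# Hypothetical example: actions "x" and "y" both lie outside nu(a_i),
# so they project to the same (all-zero) contribution and get merged.
sigma_j = {"x": 0.2, "y": 0.3, "z": 0.5}
delta = {"x": (0, 0), "y": (0, 0), "z": (1, 0)}
dist = contribution_distribution(sigma_j, delta)
# dist[(0, 0)] is approximately 0.5, and dist[(1, 0)] is 0.5
```

The merging step is what makes the projected support size (rather than |A_j|) the relevant quantity in the running-time analysis.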
Algorithm 1 for computing the distribution Pr(c^(a_i) | σ) can be straightforwardly adapted to work with contribution-independent AGG-FNs. Whenever we apply player k's contribution δ^(a_i)_{a_k} to c^(a_i)_{k−1}, the resulting configuration c^(a_i)_k is computed componentwise as follows: c^(a_i)_k(m) = δ^(a_i)_{a_k}(m) + c^(a_i)_{k−1}(m) if m ∈ A, and c^(a_i)_k(m) = δ^(a_i)_{a_k}(m) ∗ c^(a_i)_{k−1}(m) if m ∈ P.

To analyze the complexity of computing expected utility, it is necessary to know the representation size of a contribution-independent AGG-FN. For each function node p we need to specify ∗ and (w_α)_{α∈ν(p)} instead of f^p directly. Let ‖∗‖ denote the representation size of ∗. Then the total size of a contribution-independent AGG-FN is O(∑_{α∈A} |C^(α)| + ‖∗‖). As discussed in Section 3.2.2, this size is not necessarily polynomial in n, |A| and |P|, although when the conditions in Corollary 3.2.11 are satisfied, the representation size is polynomial.

Theorem 3.4.5. Expected utility can be computed in time polynomial in the size of a contribution-independent AGG-FN. Furthermore, if the in-degrees of the action nodes are bounded by a constant and the sizes of the ranges |R(f^p)| for all p ∈ P are bounded by a polynomial in n, |A| and |P|, then expected utility can be computed in time polynomial in n, |A| and |P|.

Proof Sketch. Following a complexity analysis similar to that of Theorem 3.4.1, if an AGG-FN is contribution-independent, expected utility V^i_{a_i}(σ_{−i}) can be computed in O(n|A| · |C^(a_i)| · (T_∗ + |ν(a_i)|)) time, where T_∗ denotes the maximum running time of an ∗ operation. Since T_∗ is polynomial in n, |A| and |P| by Definition 3.4.4, the running time for computing expected utility is polynomial in the size of the AGG-FN representation. The second part of the theorem follows from a direct application of Corollary 3.2.11.
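The componentwise update can be sketched as a small dynamic program (a simplified stand-in for the adapted Algorithm 1, with a hypothetical two-node layout): each step folds one player's projected-contribution distribution into the running distribution over partial configurations, using + at action nodes and the CI operator at function nodes.

```python
from collections import defaultdict
import operator

def add_player(prev_dist, contrib_dist, node_ops):
    """One dynamic-programming step: combine player k's projected-contribution
    distribution with the distribution over partial configurations.
    node_ops[m] is + for action nodes and the CI operator * for function nodes."""
    new_dist = defaultdict(float)
    for config, p in prev_dist.items():
        for contrib, q in contrib_dist.items():
            new_config = tuple(op(c, d) for op, c, d in zip(node_ops, config, contrib))
            new_dist[new_config] += p * q
    return dict(new_dist)

# Two relevant nodes: an action node (combined with +) and a max function node.
ops = (operator.add, max)
dist = {(0, 0): 1.0}                                # empty configuration
for contrib_dist in [{(1, 3): 0.5, (0, 0): 0.5},    # player 1
                     {(1, 5): 1.0}]:                # player 2
    dist = add_player(dist, contrib_dist, ops)
# dist == {(2, 5): 0.5, (1, 5): 0.5}
```

Because each player's contribution enters through an associative, commutative operator, the order in which players are folded in does not matter, which is exactly the property the proof of Theorem 3.4.5 relies on.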
For AGG-FNs whose function nodes are all simple aggregators, each player's set of projected contributions has size at most |ν(a_i)| + 1, as opposed to |A| in the general case. This leads to a running time of O(n|A| + n|ν(a_i)|² · |C^(a_i)|), which is better than the complexity of the general case proved in Theorem 3.4.5. Applied to the Coffee Shop game, since |C^(α)| = O(n³) and all function nodes are simple aggregators, our algorithm takes O(n|A| + n⁴) time, which grows linearly in |A|.

Beyond Contribution Independence

What about the case where not all function nodes are contribution-independent—is there anything we can do besides converting the AGG-FN into its induced AGG-∅? It turns out that by reducing the problem of computing expected utility to a Bayesian network inference problem, we can still efficiently compute expected utilities for certain additional classes of AGG-FNs.

Bayesian networks compactly represent probability distributions exhibiting conditional independence structure (see, e.g., [Pearl, 1988, Russell and Norvig, 2003]). A Bayesian network is a DAG in which nodes represent random variables and edges represent direct probabilistic dependence. Each node X is associated with a conditional probability distribution (CPD) specifying the probability of each realization of random variable X conditional on the realizations of its parent random variables.

A key step in our approach for computing expected utility in AGG-FNs is computing the probability distribution over configurations Pr(c^(a_i) | σ^(a_i)). If we treat each node m's configuration c(m) as a random variable, then the distribution over configurations can be interpreted as the joint probability distribution over the set of random variables {c(m)}_{m∈ν(a_i)}.
Given an AGG-FN, a player i and an action a_i ∈ A_i, we can construct an induced Bayesian network B^i_{a_i}:

• The nodes of B^i_{a_i} consist of (i) one node for each element of ν(a_i); (ii) one node for each neighbor of a function node belonging to ν(a_i); and (iii) one node for each neighbor of a function node added in the previous step, and so on, until no more function nodes are added. Each of these nodes m represents the random variable c(m). We further introduce another kind of node: (iv) n nodes σ_1, . . . , σ_n, representing each player's mixed strategy. The domain of each random variable σ_i is A_i.
• The edges of B^i_{a_i} are constructed by keeping all edges that go into the function nodes included in B^i_{a_i}, ignoring edges that go into action nodes. Furthermore, for each player j, we create an edge from σ_j to each of j's actions a_j ∈ A_j.
• The conditional probability distribution (CPD) at each function node p is just the deterministic function f^p. The CPD at each action node α′ is a deterministic function that returns the number of its parents (observe that these are all mixed strategy nodes) that take the value α′. Mixed strategy nodes have no incoming edges; their (unconditional) probability distributions are the mixed strategies of the corresponding players, except for player i, whose node σ_i takes the deterministic value a_i.

It is straightforward to verify that B^i_{a_i} is a DAG, and that the joint distribution on the random variables {c(m)}_{m∈ν(a_i)} is exactly the distribution over configurations Pr(c^(a_i) | (a_i, σ^(a_i)_{−i})). This joint distribution can then be computed using a standard algorithm such as clique tree propagation or variable elimination. The running times of such algorithms are worst-case exponential; however, for Bayesian networks with bounded treewidth, their running times are polynomial.
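As a sanity check on what the induced Bayesian network encodes, the following brute-force sketch enumerates pure action profiles directly (exponential time, for illustration only; the inference algorithms mentioned above exist precisely to avoid this enumeration). All names are our own:

```python
from collections import defaultdict
from itertools import product

def config_distribution(strategies, relevant_nodes, function_nodes):
    """Brute-force version of the joint distribution over {c(m)} that the
    induced Bayesian network represents. `function_nodes` maps a function
    node to a callable taking the dict of action counts."""
    dist = defaultdict(float)
    players = list(strategies)
    for profile in product(*(strategies[j].keys() for j in players)):
        prob = 1.0
        for j, a in zip(players, profile):
            prob *= strategies[j][a]
        counts = {a: profile.count(a) for a in profile}
        values = []
        for m in relevant_nodes:
            if m in function_nodes:
                values.append(function_nodes[m](counts))  # deterministic CPD f^p
            else:
                values.append(counts.get(m, 0))           # action-node count
        dist[tuple(values)] += prob
    return dict(dist)

# Player 1 fixed on "A" (the deterministic node sigma_i); player 2 mixes.
strategies = {1: {"A": 1.0}, 2: {"A": 0.5, "B": 0.5}}
fn = {"p": lambda c: c.get("A", 0) + c.get("B", 0)}   # a simple aggregator
dist = config_distribution(strategies, ["A", "p"], fn)
# dist == {(2, 2): 0.5, (1, 2): 0.5}
```

A treewidth-aware method such as variable elimination would compute the same distribution without materializing every profile.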
Further speedups are possible at nodes in the induced Bayesian network that correspond to action nodes and contribution-independent function nodes. The deterministic CPDs at such nodes can be formulated using independent contributions from each player's strategy. This is an example of causal independence structure in Bayesian networks, studied by Heckerman and Breese [1996] and Zhang and Poole [1996], who proposed different methods for exploiting such structure to speed up Bayesian network inference. Such methods share the common underlying idea of decomposing the CPDs into independent contributions, which is intuitively similar to our approach in Algorithm 1.⁶

3.4.3 Computing Expected Payoff with AGG-FNAs

Due to the linearity of expectation, the expected utility of i playing an action a_i with an additive utility function with coefficients (λ_m)_{m∈ν(a_i)} is

V^i_{a_i}(σ_{−i}) = ∑_{m∈ν(a_i)} λ_m E[c(m) | a_i, σ_{−i}],   (3.4.11)

where E[c(m) | a_i, σ_{−i}] is the expected value of c(m) given the strategy profile (a_i, σ_{−i}). Thus we can compute these expected values for each m ∈ ν(a_i), then sum them up as in Equation (3.4.11) to get the expected utility. If m is an action node, then E[c(m) | a_i, σ_{−i}] is the expected number of players that chose m, which is ∑_{i∈N} σ_i(m). The more interesting case is when m is a function node. Recall that c(m) ≡ f^m(c^(m)), where c^(m) is the configuration over the neighbors of m. We can write the expected value of c(m) as

E[c(m) | a_i, σ_{−i}] = ∑_{c^(m)∈C^(m)} f^m(c^(m)) Pr(c^(m) | a_i, σ_{−i}).   (3.4.12)

This has the same form as Equation (3.4.5) for the expected utility V^i_{a_i}(σ_{−i}), except that we have f^m instead of u_α.

⁶This approach of reducing expected utility computation to Bayesian network inference is further developed in Chapters 5 and 6, for Temporal Action-Graph Games and Bayesian Action-Graph Games respectively.
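Equation (3.4.11) and its action-node case are simple enough to sketch directly (a toy illustration of the linearity argument with hypothetical node names, not the thesis' code):

```python
def expected_additive_utility(lams, expected_counts):
    """V^i_{a_i} = sum_m lambda_m * E[c(m)]  (Equation 3.4.11)."""
    return sum(lams[m] * expected_counts[m] for m in lams)

def expected_action_count(m, a_i, sigma_minus_i):
    """E[c(m)] when m is an action node: player i's deterministic choice plus
    each other player's probability of choosing m."""
    return (1.0 if a_i == m else 0.0) + sum(s.get(m, 0.0) for s in sigma_minus_i)

sigma_minus_i = [{"A": 0.5, "B": 0.5}, {"A": 1.0}]
e_a = expected_action_count("A", "A", sigma_minus_i)   # 1 + 0.5 + 1 = 2.5
v = expected_additive_utility({"A": 2.0}, {"A": e_a})  # 5.0
```

When m is a function node, the corresponding entry of `expected_counts` would instead come from Equation (3.4.12), which reuses the machinery of Sections 3.4.1 and 3.4.2.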
Thus our results for the computation of Equation (3.4.5) also apply here. That is, if the neighbors of m are action nodes and/or contribution-independent function nodes, then E[c(m) | a_i, σ_{−i}] can be computed in polynomial time.

Theorem 3.4.6. Suppose u_α is represented as an additive utility function in a given AGG-FNA. If each of the neighbors of α is either (i) an action node, or (ii) a function node whose neighbors are action nodes and/or contribution-independent function nodes, then the expected utility V^i_α(σ_{−i}) can be computed in time polynomial in the size of the representation. Furthermore, if the in-degrees of the neighbors of α are bounded by a constant, and the sizes of the ranges |R(f^p)| for all p ∈ P are bounded by a polynomial in n, |A| and |P|, then the expected utility can be computed in time polynomial in n, |A| and |P|.

It is straightforward to verify that our AGG-FNA representations of polymatrix games, congestion games, player-specific congestion games and the game in Example 3.3.4 all satisfy the conditions of Theorem 3.4.6.

3.5 Computing Sample Equilibria with AGGs

In this section we consider some theoretical and practical applications of our expected utility algorithm. In Section 3.5.1 we analyze the complexity of finding a sample ε-Nash equilibrium in an AGG and show that it is PPAD-complete. In Section 3.5.2 we extend our expected utility algorithm to the computation of payoff Jacobians, which is a key step in several algorithms for computing ε-Nash equilibria, including the Govindan-Wilson algorithm. In Section 3.5.3 we show that it can also speed up the simplicial subdivision algorithm, and in Section 3.5.4 we show that it can be used to find a correlated equilibrium in polynomial time.

3.5.1 Complexity of Finding a Nash Equilibrium

In this section we consider the complexity of finding a Nash equilibrium of an AGG.
As discussed in Section 2.2.1, since a Nash equilibrium for a game of more than two players may require irrational numbers in the probabilities, for practical computation it is necessary to consider approximations to Nash equilibria. Here we consider the frequently used notion of ε-Nash equilibrium as defined in Definition 2.2.3. Recall from Section 2.2 that for any game representation, its NASH problem is defined to be the problem of finding an ε-Nash equilibrium of a game encoded in that representation, for some ε given as part of the input. Also recall from Section 2.2.1 that the NASH problem for n-player normal-form games with n ≥ 2 is complete for the complexity class PPAD, which is contained in NP but not known to be in P.

Turning to compact representations, recall from Section 2.2.2, and in particular Theorem 2.2.4, that the complexity of computing expected utility plays a vital role in the complexity of finding an ε-Nash equilibrium. By leveraging Algorithm 1, we are able to apply Theorem 2.2.4 to AGGs.

Corollary 3.5.1. The complexity of NASH for AGG-∅s is PPAD-complete.

Remark. It may not be clear why this would be surprising or encouraging; indeed, the PPAD-hardness part of the claim is neither. However, the PPAD-membership part of the claim is a positive result. Specifically, it implies that the problem of finding a Nash equilibrium in an AGG-∅ can be reduced to the problem of finding a Nash equilibrium in a two-player normal-form game with size polynomial in the size of the AGG-∅. This is in contrast to the normal form representation of the original game, which can be exponentially larger than the AGG-∅. In other words, if we instead tried to solve for a Nash equilibrium using the normal form representation of the original game, we would face a PPAD-complete problem with an input exponentially larger than the AGG-∅ representation.

Proof sketch.
The first condition of Theorem 2.2.4—polynomial type—is satisfied by all AGG variants, since action sets are represented explicitly. We first show that the problem belongs to PPAD, by constructing a circuit that computes expected utility and satisfies the second condition of Theorem 2.2.4.⁷ Recall that our expected utility algorithm consists of Equation (3.4.4), then Algorithm 1, and finally Equation (3.4.5). Equations (3.4.4) and (3.4.5) can be straightforwardly translated into arithmetic circuits using addition and multiplication nodes. Algorithm 1 involves for loops that cannot be directly translated into an arithmetic circuit, but we observe that we can unroll the for loops and still end up with a polynomial number of operations. The resulting circuit resembles a lattice with n levels; at the k-th level there are |C^(a_i)_k| addition nodes. Each addition node corresponds to a configuration c^(a_i)_k ∈ C^(a_i)_k, and calculates P_k[c^(a_i)_k] as in iteration k of Algorithm 1. Also, there are |A^(a_i)_k| multiplication nodes for each c^(a_i)_k, in order to carry out the multiplications in iteration k of Algorithm 1.

To show PPAD-hardness, we observe that an arbitrary graphical game can be encoded as an AGG-∅ without loss of compactness (see Section 3.2.1). Thus the problem of finding a Nash equilibrium in a graphical game can be reduced to the problem of finding a Nash equilibrium in an AGG-∅. Since finding a Nash equilibrium in a graphical game is known to be PPAD-hard, finding a Nash equilibrium in an AGG-∅ is PPAD-hard.

For AGG-FNs that satisfy the conditions of Theorem 3.4.5, or AGG-FNAs that satisfy Theorem 3.4.6, similar arguments apply, and we can prove PPAD-completeness for those subclasses of games if we make the reasonable assumption that the operator ∗ used to define the CI function nodes can be implemented as an arithmetic circuit of polynomial length that satisfies the second condition of Theorem 2.2.4.
3.5.2 Computing a Nash Equilibrium: The Govindan-Wilson Algorithm

Now we move from the theoretical to the practical. The PPAD-hardness result of Corollary 3.5.1 implies that a polynomial-time algorithm for Nash equilibrium is unlikely to exist, and indeed known algorithms for identifying sample Nash equilibria have worst-case exponential running times. Nevertheless, we will show that our dynamic programming algorithm for expected utility can be used to achieve exponential speedups in such algorithms, as well as in an algorithm for computing a sample correlated equilibrium. Specifically, we use a black-box approach as discussed in Section 2.2.2.

First we consider Govindan and Wilson's [2003] global Newton method, a state-of-the-art method for finding mixed-strategy Nash equilibria in multi-player games. Recall from Sections 2.2.1 and 2.3 that a bottleneck of the algorithm is the computation of payoff Jacobians, and that the Gametracer package provides a black-box implementation of the global Newton method that allows one to directly plug in representation-specific subroutines for this task.

The payoff Jacobian is defined to be the Jacobian of the function V : Σ → R^{∑_i |A_i|}, whose (i, α_i)-th component is the expected utility V^i_{α_i}(σ_{−i}). The corresponding Jacobian at σ is a (∑_i |A_i|) × (∑_i |A_i|) matrix with entries

∂V^i_{a_i}(σ_{−i}) / ∂σ_{i′}(a_{i′}) ≡ ∇V^{i,i′}_{a_i,a_{i′}}(σ)   (3.5.1)
= ∑_{ā∈Ā} u(a_i, C(a_i, a_{i′}, ā)) Pr(ā | σ̄)   (3.5.2)

if i ≠ i′, and zero otherwise. Here an overbar is shorthand for the subscript −{i, i′}, where i ≠ i′ are two players; e.g., ā ≡ a_{−{i,i′}}.

⁷Observe that the second condition in Theorem 2.2.4 implies that the expected utility algorithm must take polynomial time; however, some polynomial algorithms (e.g., those that rely on division) do not satisfy this condition.
The rows of the matrix are indexed by i and a_i, while the columns are indexed by i′ and a_{i′}. Given an entry ∇V^{i,i′}_{a_i,a_{i′}}(σ), we call a_i its primary action node and a_{i′} its secondary action node. We note that efficient computation of the payoff Jacobian is important for more than simply Govindan and Wilson's global Newton method. For example, recall from Section 2.2.1 that the iterated polymatrix approximation (IPA) method [Govindan and Wilson, 2004] has the same computational problem at its core.

Computing the Payoff Jacobian

Now we consider how the payoff Jacobian may be computed. Equation (3.5.2) shows that the ∇V^{i,i′}_{a_i,a_{i′}}(σ) element of the Jacobian can be interpreted as the expected utility of agent i when she takes action a_i, agent i′ takes action a_{i′}, and all other agents use mixed strategies according to σ. So a straightforward—and quite effective—approach is to use our expected utility algorithm to compute each entry of the Jacobian.

However, the Jacobian matrix has certain extra structure that allows us to achieve further speedup. For example, observe that some entries of the Jacobian are identical. If two entries have the same primary action node α, then they are expected payoffs on the same utility function u_α, and so have the same values if their induced probability distributions over C^(α) are the same. We need to consider two cases:

1. The two entries come from the same row of the Jacobian, say player i's action a_i. There are two sub-cases to consider:
(a) The columns of the two entries belong to the same player j, but different actions a_j and a′_j. If a^(a_i)_j = a′^(a_i)_j, i.e., a_j and a′_j both project to the same projected action in a_i's projected action graph,⁸ then ∇V^{i,j}_{a_i,a_j} = ∇V^{i,j}
This implies that when a j,a\u2032j 6\u2208 \u03bd(ai), \u2207V i, j ai,a j = \u2207V i, jai,a\u2032j . (b) The columns of the entries correspond to actions of different players. We observe that for all j and a j such that \u03c3 (ai)(a(ai)j ) = 1, \u2207V i, jai,a j(\u03c3) = V iai(\u03c3\u2212i). As a special case, if A (ai) j = { \/0}, i.e., agent j does not affect i\u2019s payoff when i plays ai, then for all a j \u2208 A j, \u2207V i, jai,a j(\u03c3) =V iai(\u03c3\u2212i). 2. If ai and a j correspond to the same action node \u03b1 (but owned by agents i and j respectively), thus sharing the same payoff function u\u03b1 , then \u2207V i, jai,a j = \u2207V j,ia j ,ai . Furthermore, if there exist a\u2032i \u2208 Ai,a\u2032j \u2208 A j such that a\u2032i (\u03b1) = a\u2032j (\u03b1) (or \u03b4 (\u03b1) a\u2032i = \u03b4 (\u03b1) a\u2032j for contribution-independent AGG-FNs), then \u2207V i, j ai,a\u2032j = \u2207V j,i a j ,a\u2032i . A consequence of 1(a) is that any Jacobian of an AGG has at most \u2211i \u2211ai\u2208Ai(n\u2212 1)(\u03bd(ai)+1) distinct entries. For AGGs with bounded in-degree, this is O(n\u2211i |Ai|). For each set of identical entries, we only need to do the expected utility computa- tion once. Even when two entries in the Jacobian are not identical, we can exploit the similarity of the projected strategy profiles (and thus the similarity of the in- duced distributions) between entries, reusing intermediate results when computing the induced distributions of different entries. Since computing the induced proba- bility distributions is the bottleneck of our expected payoff algorithm, this provides significant speedup. 8For contribution-independent AGG-FNs, the condition becomes \u03b4 (ai)a j = \u03b4 (ai) a\u2032j , i.e., a j and a\u2032j have the same projected contribution under ai. 86 First we observe that if we fix the row (i,ai) and the column\u2019s player j, then \u03c3 is the same for all secondary actions a j \u2208 A j. 
We can compute the probability distribution Pr(c_{n−1} | a_i, σ^(a_i)), and then, for each a_j ∈ A_j, we just need to apply the action a_j to get the induced probability distribution for the entry ∇V^{i,j}_{a_i,a_j}. Now suppose we fix the row (i, a_i). For two column players j and j′, the corresponding strategy profiles σ_{−{i,j}} and σ_{−{i,j′}} are very similar; in fact, they are identical in n−3 of the n−2 components. For AGG-∅s, we can exploit this similarity by computing the distribution Pr(c_{n−1} | σ^(a_i)_{−i}), and then for each j ≠ i we "undo" j's mixed strategy to get the distribution induced by σ_{−{i,j}}, by treating the distribution Pr(c_{n−1} | σ^(a_i)_{−i}) and σ_j as coefficients of polynomials and computing their quotient using long division. (See Section 2.3.5 of [Jiang, 2006] for a more detailed discussion of interpreting distributions over configurations as polynomials.)

Finding equilibria of symmetric and k-symmetric games

Nash [1951] proved that all finite symmetric games have at least one symmetric Nash equilibrium. The Govindan-Wilson algorithm can be adapted to find symmetric Nash equilibria in symmetric AGG-∅s. The modified algorithm operates in the space of symmetric mixed strategy profiles Σ* = φ(A), and follows a path of symmetric equilibria of perturbed symmetric games to a symmetric equilibrium of the unperturbed game. A key step of the algorithm is the computation of the Jacobian of the function V : Σ* → R^{|A|}, whose α-th entry V_α(σ*) is the expected utility of one player choosing α while the others play mixed strategy σ*.
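The "undo" step described above (removing one player from an induced distribution by polynomial long division) can be sketched for the simplest case of a single node of interest, where player j's projected strategy puts probability p on that node; the thesis' Section 2.3.5 treatment handles full multivariate configuration distributions.

```python
def undo_player(dist, p):
    """Divide the generating polynomial sum_c dist[c] x^c by (1-p + p*x),
    removing one player who lands on the node of interest with probability p.
    Requires 0 < p; single-node sketch with illustrative names."""
    quotient = [0.0] * (len(dist) - 1)
    rem = list(dist)
    for c in range(len(quotient) - 1, -1, -1):   # divide from the highest degree down
        quotient[c] = rem[c + 1] / p
        rem[c + 1] -= quotient[c] * p            # subtract quotient[c] * (p*x + (1-p)) * x^c
        rem[c] -= quotient[c] * (1 - p)
    return quotient

# Three players, each on the node with probability 0.5: Binomial(3, 0.5).
dist3 = [0.125, 0.375, 0.375, 0.125]
dist2 = undo_player(dist3, 0.5)
# dist2 == [0.25, 0.5, 0.25], i.e., Binomial(2, 0.5)
```

Each "undo" costs a single linear pass over the distribution, which is what makes recomputing the n−1 per-column distributions cheap relative to rebuilding each one from scratch.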
This Jacobian at σ* is an |A| × |A| matrix whose entry at row α and column α′ is n−1 multiplied by the expected utility of a player choosing action α when another player is choosing action α′ and the rest of the players play mixed strategy σ*. Such an entry can be efficiently computed using the techniques for symmetric expected utility computation discussed in Section 3.4.1, which are faster than our expected utility algorithm for general AGGs. The techniques discussed in the current section can further be used to speed up the computation of Jacobians in the symmetric case. In particular, it is straightforward to check that the Jacobian has at most ∑_{α∈A} (|ν(α)|+1) = O(|E|) distinct entries, where E is the set of edges of the action graph.

A straightforward corollary of Nash's [1951] proof is that any k-symmetric AGG-∅ has at least one k-symmetric Nash equilibrium. For each equivalence class ℓ of the players, let Σ*_ℓ denote the set of symmetric strategy profiles for N_ℓ, and let A_ℓ denote the set of actions of a player in N_ℓ. Relying on similar arguments as above, we can adapt the Govindan-Wilson algorithm to find k-symmetric equilibria in k-symmetric AGG-∅s. The bottleneck is the computation of the Jacobian of the function V : ∏_ℓ Σ*_ℓ → R^{∑_ℓ |A_ℓ|}, whose (ℓ, α)-th entry is the expected utility of a player in N_ℓ playing action α while the others play according to the given k-symmetric strategy profile (σ*_1, . . . , σ*_k).
The entry at row (ℓ, α) and column (ℓ′, α′) of the Jacobian matrix is equal to (|N_{ℓ′}| − 1_{ℓ=ℓ′}) multiplied by the expected utility of a player in N_ℓ choosing action α, when another player in N_{ℓ′} is choosing action α′ and the others play according to the given k-symmetric strategy profile. Such expected utilities can be efficiently computed using the techniques discussed in Section 3.4.1.

3.5.3 Computing a Nash Equilibrium: The Simplicial Subdivision Algorithm

Another algorithm for computing a sample Nash equilibrium is van der Laan, Talman and van der Heyden's [1987] simplicial subdivision algorithm. Recall from Section 2.2.1 that one of its bottlenecks is the computation of the labels of a given subsimplex in a simplicial subdivision of Σ, which in turn depends on the computation of expected utilities under mixed strategy profiles. The GAMBIT package [McKelvey et al., 2006] provides an implementation of the simplicial subdivision algorithm for the normal form. We adapted this code into a black-box implementation that allows one to plug in representation-specific subroutines for expected utility computation. Combining this with an implementation of our AGG-based Algorithm 2 is then sufficient for an exponential speedup compared to the normal-form-based implementation of the simplicial subdivision algorithm. An advantage of the black-box implementation is that it is useful for other representations besides AGGs; e.g., in Chapter 6 we are able to use it for computing sample Bayes-Nash equilibria for Bayesian Action-Graph Games.

3.5.4 Computing a Correlated Equilibrium

In Section 2.2.7 we gave an overview of the literature on the computation of a sample correlated equilibrium.
In summary, Papadimitriou and Roughgarden [2008] proposed a polynomial-time algorithm for computing a sample correlated equilibrium, given a game representation with polynomial type and a polynomial-time subroutine for computing expected utility under mixed strategy profiles. Recently, Stein et al. [2010] showed that Papadimitriou and Roughgarden's algorithm can fail to find an exact correlated equilibrium, and presented a slight modification of the algorithm that efficiently computes an ε-correlated equilibrium. (An ε-correlated equilibrium is an approximation of the correlated equilibrium solution concept, where ε measures the extent to which the incentive constraints for correlated equilibrium are violated.) Incorporating this fix, we have the following.

Theorem 3.5.2 ([Papadimitriou and Roughgarden, 2008]). If a game representation has polynomial type, and has a polynomial algorithm for computing expected utility, then an ε-correlated equilibrium can be computed in time polynomial in log(1/ε) and the representation size.

In Chapter 7 we present a modified version of Papadimitriou and Roughgarden's algorithm that is able to compute an exact correlated equilibrium in polynomial time.

Theorem 3.5.3 (Restatement of Theorem 7.4.5; also [Jiang and Leyton-Brown, 2011]). If a game representation has polynomial type, and has a polynomial algorithm for computing expected utility, then a correlated equilibrium can be computed in time polynomial in the representation size.

The second condition in both theorems involves the computation of expected utility. As a direct corollary of Theorem 3.5.3 and Theorem 3.4.1, there exists a polynomial algorithm for computing an exact correlated equilibrium given an AGG-∅.

Corollary 3.5.4. Given a game represented as an AGG-∅, an exact correlated equilibrium can be computed in time polynomial in the size of the AGG-∅.
Similarly, for AGG-FNs and AGG-FNAs for which the expected utility problem can be solved in polynomial time (see Theorems 3.4.5 and 3.4.6), correlated equilibria can be computed in polynomial time.

3.6 Experiments

Although our theoretical results show that there are significant benefits to working with AGGs, they might leave the reader with two worries. First, the reader might be concerned that while AGGs offer asymptotic computational benefits, they might not be practically useful. Second, even if convinced about the usefulness of AGGs, the reader might want to know the size of problems that can be tackled by the computational tools we have developed so far. We address both of these worries in this section by reporting on the results of extensive computational experiments. Specifically, we compare the performance of the AGG representation and our AGG-based algorithms against normal-form-based solutions using the (highly optimized) GameTracer package [Blum et al., 2002]. As benchmarks, we used AGG and normal-form representations of instances of Coffee Shop games, Job Market games, and symmetric AGG-∅s on random graphs. We compared the representation sizes of the AGG and normal-form representations, and compared the performance resulting from using these representations to compute expected utility, to compute Nash equilibria using the Govindan-Wilson algorithm, and to compute Nash equilibria using the simplicial subdivision algorithm. Finally, we show how sample equilibria of these games can be visualized on action graphs.

3.6.1 Software Implementation and Experimental Setup

We implemented our algorithms in a freely available software package, in order to make it easy for other researchers to use AGGs to model problems of interest.
Our software is capable of:

• reading in a description of an AGG;
• computing expected utility and Jacobian given a mixed strategy profile;
• computing Nash equilibria by adapting GameTracer's [Blum et al., 2002] implementation of Govindan and Wilson's [2003] global Newton method; and
• computing Nash equilibria by adapting GAMBIT's [McKelvey et al., 2006] implementation of the simplicial subdivision algorithm [van der Laan et al., 1987].

We extended GAMUT [Nudelman et al., 2004], a suite of game instance generators, by implementing generators of instances of AGGs including Ice Cream Vendor games (Example 3.2.5), Coffee Shop games (Example 3.2.7), Job Market games (Example 3.3.1) and symmetric AGG-∅s on a random action graph with random payoffs. Finally, with Damien Bargiacchi, we also developed a graphical user interface for creating and editing AGGs. More details on these, as well as software implementations of other algorithms from this thesis, are given in Appendix A. All of our software is freely available at http://agg.cs.ubc.ca.

When using Coffee Shop games in our experiments, we set payoffs randomly in order to test on a wide set of utility functions. For the visualization of equilibria in Section 3.6.7 we set the Coffee Shop game utility functions to be

u_α(c(α), c(p′_α), c(p″_α)) = 20 − [c(α)]² − c(p′_α) − log(c(p″_α) + 1),

where p′_α is the function node representing the number of players choosing adjacent locations and p″_α is the function node representing the number of players choosing other locations. When using Job Market games in our experiments, we set the utility functions to be

u_α(c(α)) = R_α / c(α) + Σ_{α′ ∈ ν(α)−{α}} 0.1 c(α′) − K_α,

with R_α set to 2, 4, 6, 8, 10 and K_α set to 1, 2, 3, 4, 5 for the five levels from high school to PhD.
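As a concrete illustration, the Coffee Shop utility function above can be written directly in code (the function name is ours; the three arguments are the counts at the action node and its two function-node neighbors):

```python
import math

def coffee_shop_utility(c_self, c_adjacent, c_other):
    """Coffee Shop utility from Section 3.6.1: payoff falls quadratically
    in co-located players, linearly in adjacent-block players, and
    logarithmically in all other players."""
    return 20 - c_self**2 - c_adjacent - math.log(c_other + 1)

# A lone player in an otherwise empty market keeps the full base payoff
# minus her own crowding term: 20 - 1 - 0 - log(1) = 19.
print(coffee_shop_utility(1, 0, 0))  # 19.0
```

The three counts are exactly the configuration over the node's neighborhood, which is why a single table indexed by (c(α), c(p′_α), c(p″_α)) suffices to store the game.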
When using Ice Cream Vendor games for the visualization of equilibria in Section 3.6.7, we set the utilities so that for a player i choosing action α, each vendor choosing a location α′ ∈ ν(α) contributes w_f · w_l utility to i. w_f is −1 when α′ has the same food type as α, and 0.8 otherwise. w_l is 1 when α′ and α correspond to the same location, and 0.6 when they correspond to different (but neighboring) locations. In other words, there is a negative effect from players choosing the same food type, and a weaker positive effect from players choosing a different food type. Furthermore, effects from neighboring locations are weaker than effects from the same location.

All our experiments were performed using a computer cluster consisting of 55 machines with dual Intel Xeon 3.2GHz CPUs, 2MB cache and 2GB RAM, running Suse Linux 10.1.

3.6.2 Representation Size

First, we compared the representation sizes of AGG-FNs and their induced normal forms. For each game instance we counted the number of payoff values that needed to be stored.

We first looked at 5×5 block Coffee Shop games, varying the number of players. Figure 3.9 (left) has a log-scale plot of the number of payoff values in each representation versus the number of players. The normal form representation grew exponentially with respect to the number of players, and quickly became impractical. The size of the AGG representation grew polynomially with respect to n. As we can see from Figure 3.9 (right), even for a game instance with 80 players, the AGG-FN representation stored only about 2 million numbers. In contrast, the corresponding normal form representation would have had to store 1.2×10^115 numbers.

We then fixed the number of players at 4 and varied the number of actions; for ease of comparison we fixed the number of columns at 5 and only changed the number of rows.
Recall from Section 3.2.2 that the representation size of Coffee Shop games—expressed both as AGGs and in the normal form—depends only on the number of players and number of actions, but not on the shape of the region. (Recall that the number of actions is B+1, where B is the total number of blocks.) Figure 3.9 (left) shows a log-scale plot of the number of payoff values versus the number of actions, and Figure 3.9 (right) gives a plot for just the AGG-FN representation. The size of the AGG representation grew linearly with the number of rows, whereas the size of the normal form representation grew like a higher-order polynomial. For a Coffee Shop game with 4 players on an 80×5 grid, the AGG-FN representation stores only about 8000 numbers, whereas the normal form representation would have to store 1.0×10^11 numbers.

Figure 3.9: Representation sizes of coffee shop games. Top left: 5×5 grid with 3 to 16 players (log scale). Top right: AGG only, 5×5 grid with up to 80 players (log scale). Bottom left: 4-player r×5 grid, r varying from 3 to 15 (log scale). Bottom right: AGG only, up to 80 rows.

We also tested on Job Market games from Example 3.3.1, which have 13 actions. We varied the number of players from 3 to 24.
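The representation-size comparison above can be reproduced with back-of-envelope counting. The sketch below (helper names are ours) counts one payoff per player per pure profile for the normal form, and, for the Coffee Shop AGG-FN, one payoff per location node per configuration (c_self, c_adj, c_other) with c_self ≥ 1 and total at most n, which comes to C(n+2, 3) per node; this counting is an assumption on our part, but it reproduces the "about 2 million numbers" figure reported for 80 players on a 5×5 grid:

```python
from math import comb

def nf_size(n, num_actions):
    """Normal form: one payoff per player per pure action profile."""
    return n * num_actions ** n

def coffee_shop_agg_size(n, blocks):
    """Coffee Shop AGG-FN: each location node stores one payoff per
    configuration (c_self, c_adj, c_other) with c_self >= 1 and
    c_self + c_adj + c_other <= n, i.e. C(n+2, 3) configurations."""
    return blocks * comb(n + 2, 3)

# 80 players on a 5x5 grid (25 blocks, 26 actions counting "stay out"):
print(coffee_shop_agg_size(80, 25))  # 2214000, i.e. ~2 million
print(nf_size(80, 26))               # astronomically larger (~1e115)
```

The gap is exactly the exponential-versus-polynomial separation plotted in Figure 3.9.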
The results are similar, as shown in Figure 3.11 (left). This is consistent with our theoretical observation that the sizes of normal form representations grow exponentially in n while the sizes of AGG representations grow polynomially in n.

3.6.3 Expected Utility Computation

We tested the performance of our dynamic programming algorithm for computing expected utilities in AGG-FNs against GameTracer's normal-form-based algorithm for computing expected utilities. For each game instance, we generated 1000 random strategy profiles with full support, and measured the CPU (user) time spent computing V^n_{a_n}(σ_{−n}) under these strategy profiles. Then we divided this measurement by 1000 to obtain the average CPU time.

We first looked at Coffee Shop games of different sizes. We fixed the size of blocks at 5×5 and varied the number of players. Figure 3.10 shows plots of the results. For very small games the normal-form-based algorithm is faster due to its smaller bookkeeping overhead; as the number of players grows larger, our AGG-based algorithm's running time grows polynomially, while the normal-form-based algorithm scales exponentially. For more than five players, we were not able to store the normal form representation in memory. Meanwhile, our AGG-based algorithm scaled to much larger numbers of players, averaging about a second to compute an expected utility for an 80-player Coffee Shop game.

Next, we fixed the number of players at 4 and the number of columns at 5, and varied the number of rows. Our algorithm's running time grew roughly linearly with the number of rows, while the normal-form-based algorithm's grew like a higher-order polynomial. This was consistent with our theoretical observation that our algorithm takes O(n|A| + n⁴) time for this class of games while normal-form-based algorithms take O(|A|^(n−1)) time.

We also considered strategy profiles having partial support.
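Stepping back to the algorithm itself, the core of the dynamic-programming expected-utility computation can be illustrated in isolation. The sketch below is a deliberately simplified single-count version of the Section 3.4 idea (the full algorithm runs the same player-by-player DP over entire neighborhood configurations, not a single count): it builds the distribution over how many other players choose actions affecting a given node, adding one player at a time.

```python
def count_distribution(probs):
    """Player-by-player DP: probs[i] is player i's probability of choosing
    an action that affects the node of interest; returns the distribution
    over the resulting count (index k holds P[count == k])."""
    dist = [1.0]                      # zero players processed: count is 0
    for p in probs:
        new = [0.0] * (len(dist) + 1)
        for k, mass in enumerate(dist):
            new[k] += mass * (1 - p)  # this player stays away
            new[k + 1] += mass * p    # this player joins the count
        dist = new
    return dist

d = count_distribution([0.5, 0.5])
print(d)  # [0.25, 0.5, 0.25]
# Expected utility is then sum_k d[k] * u(k) for the node's utility table.
```

The polynomial running time comes from the fact that each DP step touches only the count distribution (size at most n+1), never the exponential set of pure profiles.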
While ensuring that each player's support included at least one action, we generated strategy profiles with each action included in the support with probability 0.4. GameTracer took about 60% of its full-support running times to compute expected utilities for the Coffee Shop game instances mentioned above, while our AGG-based algorithm required about 20% of its full-support running times.

We also tested on Job Market games, varying the numbers of players. The results are shown in Figure 3.11 (right). The normal-form-based implementation ran out of memory for more than 6 players, while the AGG-based implementation averaged about a quarter of a second to compute expected utility in a 24-player game.

3.6.4 Computing Payoff Jacobians

We ran similar experiments to investigate the computation of payoff Jacobians. As discussed in Section 3.5.2, the entries of a Jacobian can be formulated as expected payoffs, so a Jacobian can be computed by doing an expected payoff computation for each of its entries. In Section 3.5.2 we discussed methods that exploit the structure of the Jacobian to further speed up the computation. GameTracer's normal-form-based implementation also exploits the structure of the Jacobian by reusing partial results of expected payoff computations. When comparing our AGG-based

Figure 3.10: Running times for payoff computation in the Coffee Shop game. Top left: 5×5 grid with 3 to 16 players.
Top right: AGG only, 5×5 grid with up to 80 players. Bottom left: 4-player r×5 grid, r varying from 3 to 15. Bottom right: AGG only, up to 80 rows.

Figure 3.11: Job Market games, varying numbers of players. Left: comparing representation sizes. Right: running times for computing 1000 expected utilities.

Jacobian algorithm (as described in Section 3.5.2) to GameTracer's implementation, we observed results very similar to those for computing expected payoffs: our implementation scaled polynomially in n while GameTracer scaled exponentially in n. We instead focus on the question of how much speedup the methods in Section 3.5.2 provided, by comparing our algorithm from Section 3.5.2 against the algorithm that computes expected payoffs (using our AGG-based algorithm described in Section 3.4) for each of the Jacobian's entries. We tested on Coffee Shop games on a 5×5 grid with 3 to 10 players, as well as Coffee Shop games with 4 players, 5 columns and varying numbers of rows. For each instance of the game we randomly generated 100 strategy profiles with partial support. For each of these game instances, our algorithm as described in Section 3.5.2 was consistently about 50 times faster than computing expected payoffs for each of the Jacobian's entries. This confirms that the methods discussed in Section 3.5.2 provide significant speedup for computing payoff Jacobians.

3.6.5 Finding a Nash Equilibrium Using Govindan-Wilson

Now we show experimentally that the speedup we achieved for computing Jacobians using the AGG representation led to a speedup in the Govindan-Wilson algorithm.
We compared two versions of the Govindan-Wilson algorithm: one is the implementation in GameTracer, where the Jacobian computation is based on the normal-form representation; the other is identical to the GameTracer implementation, except that the Jacobians are computed using our algorithm for the AGG representation. Both techniques compute the Jacobians exactly. As a result, given an initial perturbation to the original game, these two implementations follow the same path and return exactly the same Nash equilibrium.

Again, we tested the two algorithms on Coffee Shop games of varying sizes: first we fixed the sizes of blocks at 4×4 and varied the number of players; then we fixed the number of players at 4 and the number of columns at 4 and varied the number of rows. For each game instance, we randomly generated 10 initial perturbation vectors, and for each initial perturbation we ran the two versions of the Govindan-Wilson algorithm. Although the algorithm can (sometimes) find more than one equilibrium, we stopped both versions of the algorithm after one equilibrium was found. Since the running time of the Govindan-Wilson algorithm is very sensitive to the initial perturbation, for each game instance the running times with different initial perturbations had large variance. To control for this, for each initial perturbation we looked at the ratio of running times between the normal-form implementation and the AGG implementation (i.e., a ratio greater than 1 means the AGG implementation ran more quickly than the normal-form implementation).

We present the results in Figure 3.12 (left). We see that as the size of the games grew (either in the number of players or in the number of actions), the speedup of the AGG implementation over the normal-form implementation increased. The normal-form implementation ran out of memory for game instances with more than 5 players, preventing us from reporting ratios above n = 5.
Thus, we ran the AGG-based implementation alone on game instances with larger numbers of players, giving the algorithm a one-day cutoff time. As shown by the log-scale boxplot of CPU times in Figure 3.12 (top right), for game instances with up to 12 players, the algorithm terminated within one day for most initial perturbations. A normal form representation of such a game would have needed to store 7.0×10^15 numbers. Figure 3.12 (bottom right) shows a boxplot of the CPU times for the AGG-based implementation, varying the number of actions while fixing the number of players at 4. For game instances with up to 49 actions (a 4×12 grid plus one action for not entering the market), the algorithm terminated within an hour.

We also tested on Job Market games with varying numbers of players. The results are shown in Figure 3.13. For the game instance with 6 players, the AGG-based implementation was about 100 times faster than the normal-form-based implementation. While the normal-form-based implementation ran out of memory for Job Market games with more than 6 players, the AGG-based implementation was able to solve games with 16 players in an average of 24 minutes.

3.6.6 Finding a Nash Equilibrium Using Simplicial Subdivision

As discussed in Section 3.5.3, we can speed up the normal-form-based simplicial subdivision algorithm by replacing the subroutine that computes expected utility with our AGG-based algorithm. We have done so for GAMBIT's implementation of simplicial subdivision. As with the Govindan-Wilson algorithm, from a given starting point both the original version of simplicial subdivision and our AGG version follow a deterministic path to exactly the same equilibrium. Thus, all performance differences are due to the choice of representation.
We compared the performance of AGG-based simplicial subdivision against normal-form-based simplicial subdivision on instances of Coffee Shop games as well as instances of randomly-generated symmetric AGG-∅s on small world graphs. We always started from the mixed strategy profile in which each player gives equal probability to each of her actions.

Figure 3.12: Govindan-Wilson algorithm; Coffee Shop game. Top row: 4×4 grid, varying number of players. Bottom row: 4-player r×4 grid, r varying from 3 to 12. For each row, the left figure shows the ratio of running times; the right figure shows a logscale plot of CPU times for the AGG-based implementation. The dashed horizontal line indicates the one-day cutoff time.

We first considered instances of Coffee Shop games with 4 rows, 4 columns and varying numbers of players. For each game size we generated 10 instances with random payoffs. Figure 3.14 (left) gives a boxplot of the ratio of running times between the two implementations. The AGG-based implementation was about 3 times faster for the 3-player instances and about 30 times faster for the 4-player instances. We also tested on Coffee Shop games with 3 players, 3 columns and numbers of rows varying from 4 to 7, again generating 10 instances with random payoffs at each size. Figure 3.14 (right) gives a boxplot of the ratio of running times. As expected, the AGG-based implementation was faster, and the gap in performance widened as games grew.
We then investigated symmetric AGG-∅s on randomly generated small world graphs with random payoffs.

Figure 3.13: Govindan-Wilson algorithm; Job Market games, varying numbers of players. Left: ratios of running times. Right: logscale plot of CPU times for the AGG-based implementation.

Figure 3.14: Ratios of running times of simplicial subdivision algorithms on Coffee Shop games. Left: 4×4 grid with 3 to 4 players. Right: 3-player r×3 grid, r varying from 4 to 7.

The small world graphs were generated using GAMUT's implementation with parameters K = 1 and p = 0.5. For each game size we generated 10 instances. We first fixed the number of action nodes at 5 and varied the number of players. Results are shown in Figure 3.15 (top row). While there was large variance in the absolute running times across different instances, the ratios of running times between the normal-form-based and AGG-based implementations showed a clear increasing trend as the number of players increased. The normal-form-based implementation ran out of memory for instances with more than 5 players. Meanwhile, we ran the AGG-based implementation on larger instances with a one-day cutoff time.
As shown by the boxplot, the AGG-based implementation solved most instances with up to 8 players within 24 hours.

Figure 3.15: Simplicial subdivision algorithm; symmetric AGG-∅s on small world graphs. Top row: 5 actions, varying number of players. Bottom row: 4 players, varying number of actions. The left figures show ratios of running times; the right figures show logscale plots of CPU times for the AGG-based implementation. The dashed horizontal line indicates the one-day cutoff time.

We then fixed the number of players at 4 and varied the number of action nodes from 4 to 16. Results are shown in Figure 3.15 (bottom row). Again, while the actual running times on different instances varied substantially, the ratios of running times showed a clear increasing trend as the number of actions increased. The AGG-based implementation was able to solve a 16-action instance in an average of about 3 minutes, while the normal-form-based implementation averaged about 2 hours.

3.6.7 Visualizing Equilibria on the Action Graph

Besides facilitating representation and computation, the action graph can also be used to visualize strategy profiles in a natural way. A strategy profile σ (e.g., a Nash equilibrium) can be visualized on the action graph by displaying the expected numbers of players that choose each of the actions.

Figure 3.16: Visualization of a Nash equilibrium of a 16-player Coffee Shop game on a 4×4 grid. The function nodes and the edges of the action graph are not shown. The action node at the bottom corresponds to not entering the market.
We call such a tuple the expected configuration under σ. This can be easily computed given σ: for each action node α, we sum the players' probabilities of playing α, i.e. E[c(α)] = Σ_{i∈N} σ_i(α), where σ_i(α) is 0 when α ∉ A_i. When the strategy profile consists of pure strategies, the result is simply the corresponding configuration. The expected configuration often has natural interpretations. For example, in Coffee Shop games and other scenarios where actions correspond to location choices, an expected configuration can be seen as a density map describing expected player locations.

We illustrate using a 16-player Coffee Shop game on a 4×4 grid. We ran the (AGG-based) Govindan-Wilson algorithm, finding a Nash equilibrium in 77 seconds. The expected configuration of this (pure strategy) equilibrium is visualized in Figure 3.16.

We also examined a Job Market game with 20 players. A normal form representation of this game would have needed to store 9.4×10^134 numbers. We ran the AGG-based Govindan-Wilson algorithm, finding a Nash equilibrium in 860 seconds.

Figure 3.17: Visualization of a Nash equilibrium of a Job Market game with 20 players. Left: expected configuration of the equilibrium. Right: two mixed equilibrium strategies.

The expected configuration of this equilibrium is visualized in Figure 3.17 (left). Note that the equilibrium expected configurations on some of the nodes are non-integer values, as a result of mixed strategies by some of the players. We also visualize two players' mixed equilibrium strategies in Figure 3.17 (right).

Finally, we examined an Ice Cream Vendor game (Example 3.2.5) with 4 locations, 6 ice cream vendors, 6 strawberry vendors, and 4 west-side vendors. The Govindan-Wilson algorithm found an equilibrium in 9 seconds. The expected configuration of this (pure strategy) equilibrium is visualized in Figure 3.18.
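The expected-configuration computation E[c(α)] = Σ_{i∈N} σ_i(α) described above amounts to a few lines of code (a sketch; the dictionary-based encoding of mixed strategies is ours, with actions absent from a player's action set simply omitted, i.e. contributing 0):

```python
def expected_configuration(sigma, actions):
    """E[c(alpha)] = sum over players of sigma_i(alpha); sigma is a list
    of per-player dicts mapping action -> probability (missing = 0)."""
    return {a: sum(s.get(a, 0.0) for s in sigma) for a in actions}

# Three players: one committed to A, one mixing 50/50, one committed to B.
sigma = [{"A": 1.0}, {"A": 0.5, "B": 0.5}, {"B": 1.0}]
print(expected_configuration(sigma, ["A", "B"]))  # {'A': 1.5, 'B': 1.5}
```

When every σ_i is pure, every sum is an integer and the result is exactly the configuration induced by the pure profile, matching the density-map interpretation used in Figures 3.16 through 3.18.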
Observe that the west side is relatively denser due to the west-side vendors. The locations at the east and west ends were chosen relatively more often than the middle locations, because the ends have fewer neighbors and thus experience less competition.

Figure 3.18: Visualization of a Nash equilibrium of an Ice Cream Vendor game.

3.7 Conclusions

We proposed action-graph games (AGGs), a fully expressive game representation that can compactly express utility functions with structure such as context-specific independence and anonymity. We also extended the basic AGG representation by introducing function nodes and additive utility functions, allowing us to compactly represent a wider range of structured utility functions. We showed that AGGs can efficiently represent games from many previously studied compact classes, including graphical games, symmetric games, anonymous games, and congestion games. We presented a polynomial-time algorithm for computing expected utilities in AGG-∅s and contribution-independent AGG-FNs. For symmetric and k-symmetric AGG-∅s, we gave more efficient, specialized algorithms for computing expected utilities under symmetric and k-symmetric strategy profiles respectively. We also showed how to use these algorithms to achieve exponential speedups of existing methods for computing a sample Nash equilibrium and a sample correlated equilibrium. We showed experimentally that using AGGs allows us to model and analyze dramatically larger games than can be addressed with the normal-form representation.

In several later chapters of this thesis we present our efforts to extend and generalize our AGG framework. In Chapter 4 we consider the problem of computing PSNE. In Chapter 6 we propose Bayesian action-graph games (BAGGs) for representing Bayesian games, and in Chapter 5 we propose temporal action-graph games (TAGGs) for representing imperfect-information dynamic games.
Chapter 4

Computing Pure-strategy Nash Equilibria in Action-Graph Games

4.1 Introduction

In this chapter, we analyze the problem of computing pure-strategy Nash equilibria (PSNE) in AGGs. Recall from Section 2.2.6 that PSNE do not always exist in a game. We focus on the problems of deciding whether a PSNE exists, and of finding a PSNE, and later extend our analysis to the problem of computing a PSNE with optimal social welfare. The existence problem for AGGs is known to be NP-complete, even for symmetric AGG-∅s with bounded in-degrees. Our goal in this chapter is to identify classes of AGGs for which this problem is tractable. We propose a dynamic programming approach and show that if the AGG-∅ is symmetric and the action graph has bounded treewidth, our algorithm determines the existence of pure equilibria in polynomial time. We then extend our approach beyond symmetric AGG-∅s.¹

¹ This chapter is based on joint work with Kevin Leyton-Brown. Our earlier publication [Jiang and Leyton-Brown, 2007a] was restricted to the case of symmetric AGG-∅s, and furthermore the proposed algorithm contained an error. In the current chapter we describe the corrected algorithm for symmetric AGGs, and furthermore extend the algorithm to certain classes of asymmetric AGGs.

We give a brief overview of our approach, and contrast it with some of the related literature mentioned in Section 2.2.6. Recall from Definition 2.2.5 that a PSNE is a pure-strategy profile satisfying certain incentive constraints. For symmetric AGGs, we can cast the problem in terms of configurations and constraints on configurations. Given the graphical structure of AGGs, a natural idea is to construct global solutions (i.e., configurations corresponding to PSNE) from partial solutions, which are configurations over a subset of action nodes satisfying certain local constraints on the corresponding subgraph of the action graph.
One difficulty when combining partial solutions from subgraphs is that of inconsistency. For the PSNE problem on graphical games, Gottlob et al. [2005] and Daskalakis and Papadimitriou [2006] showed that an effective technique for dealing with inconsistency is tree decomposition (and the related concept of hypertree decomposition). Roughly, a tree decomposition [Robertson and Seymour, 1986] of a graph consists of a family of overlapping subsets of vertices of the graph, and a tree structure with these subsets as nodes, satisfying certain properties such that algorithms for trees can be adapted to work on the tree decomposition, with running time exponential only in the tree decomposition's width (which measures the size of the largest subset). The treewidth of a graph is defined to be the width of the best tree decomposition for that graph. As a result, many NP-hard problems on graphs can be solved in polynomial time for graphs with bounded treewidth (see, e.g., the recent survey by Bodlaender [2007]). For graphical games on bounded-treewidth graphs, it is sufficient to combine partial solutions from the leaves to the root of the tree decomposition while maintaining consistency across adjacent subsets, resulting in a polynomial-time algorithm for PSNE [Daskalakis and Papadimitriou, 2006]. However, whereas in graphical games the incentive constraints can be defined locally at each neighborhood, for AGGs we face an additional difficulty, because an agent could profitably deviate from playing an action in one part of the action graph to another. That is, the incentive constraints for PSNE in an AGG cannot be entirely captured by local constraints on subgraphs of the action graph. A simplified version of this difficulty was successfully dealt with in Ieong et al. [2005]'s polynomial-time algorithm for finding PSNE in singleton congestion games, which correspond to symmetric AGGs with only self edges.
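The three defining properties of a tree decomposition invoked above (vertex coverage, edge coverage, and connectedness of the bags containing each vertex) can be checked directly. The following sketch is our own helper for small undirected graphs, where bags are sets of vertices and the tree over bags is given by index pairs; it also reports the decomposition's width:

```python
def is_tree_decomposition(graph_edges, vertices, bags, tree_edges):
    """Check the tree-decomposition properties from Robertson and Seymour;
    returns (ok, width), where width = max bag size - 1."""
    # 1. Every vertex appears in some bag.
    if set(vertices) - set().union(*bags):
        return False, None
    # 2. Every graph edge is contained in some bag.
    for u, v in graph_edges:
        if not any(u in b and v in b for b in bags):
            return False, None
    # 3. For each vertex, the bags containing it form a connected subtree.
    adj = {i: set() for i in range(len(bags))}
    for i, j in tree_edges:
        adj[i].add(j); adj[j].add(i)
    for v in vertices:
        holding = {i for i, b in enumerate(bags) if v in b}
        seen, stack = set(), [next(iter(holding))]
        while stack:
            i = stack.pop()
            if i in seen:
                continue
            seen.add(i)
            stack.extend(j for j in adj[i] if j in holding)
        if seen != holding:
            return False, None
    return True, max(len(b) - 1 for b in bags)

# A path a-b-c-d has treewidth 1: bags {a,b},{b,c},{c,d} in a path.
ok, width = is_tree_decomposition(
    [("a", "b"), ("b", "c"), ("c", "d")], ["a", "b", "c", "d"],
    [{"a", "b"}, {"b", "c"}, {"c", "d"}], [(0, 1), (1, 2)])
print(ok, width)  # True 1
```

Bounded-treewidth algorithms work leaf-to-root over exactly such a decomposition, with cost exponential only in the width.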
Their dynamic-programming algorithm is able to check against such deviations without having to store the exponential-sized set of partial solutions, by maintaining sufficient statistics (specifically, bounds on utilities) that summarize the partial solutions compactly. Recall from Chapter 3 that AGGs unify these existing representations; it turns out that our algorithm for AGGs also generalizes the existing algorithms for graphical games and singleton congestion games. Specifically, we define restricted games as AGGs played on subgraphs, equilibria of which satisfy the local incentive constraints; we then use tree-decomposition techniques to divide the action graph into subgraphs, allowing us to construct equilibria of the game from equilibria of restricted games while maintaining consistency; and we use sufficient statistics (corresponding to the concept of characteristics [e.g., Bodlaender, 2007]) to check against deviations across partial solutions. Compared to the case of singleton congestion games, the edges (i.e., utility dependencies) between action nodes in AGGs complicate the design of the sufficient statistic. Nevertheless we are able to overcome this technical challenge by further exploiting properties of tree decompositions.

4.2 Preliminaries

4.2.1 AGGs

We refer readers to Chapter 3 for definitions of AGG-∅s, symmetric AGG-∅s and k-symmetric AGG-∅s. Recall that I is the maximum in-degree of the action graph. For an AGG-∅ Γ = (N, A, G, u), let ||Γ|| denote the number of utility values the representation stores. Recall from Proposition 3.2.6 that this number is less than or equal to |A|·(n−1+I)!/((n−1)!·I!), with equality holding when the AGG-∅ is symmetric. Let U be the set of distinct utilities of the game Γ. Whereas in Chapter 3 we only needed to consider configurations restricted to the neighborhood of some action node, in this chapter we will need to talk about configurations over arbitrary sets of action nodes.
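The size bound from Proposition 3.2.6 restated above is easy to evaluate, since (n−1+I)!/((n−1)!·I!) is the binomial coefficient C(n−1+I, I), i.e. the number of ways the n−1 other players can be distributed over a size-I neighborhood plus an "elsewhere" slot (a sketch; the helper name is ours):

```python
from math import comb

def agg_size_bound(num_actions, n, max_indegree):
    """Upper bound on ||Gamma||: |A| * (n-1+I)! / ((n-1)! * I!)
    = |A| * C(n-1+I, I); tight for symmetric AGG-0s."""
    return num_actions * comb(n - 1 + max_indegree, max_indegree)

# 5 actions, 4 players, in-degree 2: 5 * C(5, 2) = 50 stored utilities.
print(agg_size_bound(5, 4, 2))  # 50
```

For bounded I this bound is polynomial in n, which is what makes the representation compact relative to the n·|A|^n entries of the normal form.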
For a configuration c and a set of actions X ⊂ A, let c[X] denote the restriction of c to X, i.e. c[X] = (c[α])_{α∈X}, where c[α] is the number of players choosing action α. Let C[X] denote the set of restricted configurations over X. Given an action graph G = (A, E) and a set of actions X ⊂ A, let G_X be the action graph restricted to the action nodes X. Formally, G_X ≡ (X, {(α, α′) ∈ E | α, α′ ∈ X}). For a set of actions X ⊂ A, define ν(X) ≡ {α ∈ A∖X | ∃x ∈ X such that (α, x) ∈ E}: the set of actions not in X that are neighbors of some action in X. Also define X̄ ≡ A∖X to be the complement of X. Then ν(X̄) ≡ {x ∈ X | ∃α ∈ A∖X such that (x, α) ∈ E}, the set of actions in X that are neighbors of some action not in X. Define τ(X) ≡ {x ∈ X | ∃α ∈ A∖X such that (x, α) ∈ E or (α, x) ∈ E}. Given a configuration c[X], let #c[X] ≡ Σ_{x∈X} c[x].

4.2.2 Complexity of Computing PSNE

Consider the problem of determining whether a PSNE exists in a given AGG-∅. Recall from Section 2.2.6 that the obvious algorithm of checking every possible action profile runs in time linear in the normal form representation of the game. However, since AGGs can be exponentially more compact than the normal form, the running time of this algorithm is worst-case exponential in the size of the AGG. Indeed, the PSNE problem becomes NP-complete when the input is an AGG-∅.

Proposition 4.2.1. The problem of determining whether a pure Nash equilibrium exists in an AGG-∅ is NP-complete.

Proof Sketch. It is straightforward to see that the problem is in NP, because given a pure strategy profile it takes polynomial time to verify whether that profile is a Nash equilibrium.
NP-hardness follows from the fact that any graphical game can be transformed (in polynomial time) to an equivalent AGG-∅ having the same space complexity, and the fact that the problem of determining the existence of a pure equilibrium in graphical games is NP-hard [Daskalakis and Papadimitriou, 2006, Gottlob et al., 2005].

Perhaps more interestingly, the problem remains hard even if we restrict the games to be symmetric, in which case we cannot leverage existing results about graphical games. The following theorem was proved independently by Vincent Conitzer (personal communication) and Daskalakis et al. [2009].

Theorem 4.2.2 (Conitzer [pers. comm., 2004], Daskalakis et al. [2009]). The problem of determining whether a pure Nash equilibrium exists in a symmetric AGG is NP-complete, even when the in-degree of the action graph is at most 3.

4.3 Computing PSNE in AGGs with Bounded Number of Action Nodes

Now we look at classes of AGGs in which |A|, the number of action nodes, is bounded by some constant. We show that in this case, the problem of finding pure equilibria can be solved in polynomial time. While this is a very restricted class of AGGs, we will use these results as building blocks for our dynamic-programming approach for solving more complex AGGs.

We first look at symmetric AGGs. We restate the following well-known property of symmetric games [e.g., Brandt et al., 2009] in the language of AGGs:

Lemma 4.3.1. Suppose Γ is a symmetric AGG. If a and a′ induce the same configuration, then a is a PSNE of Γ iff a′ is a PSNE of Γ.

This is because the configuration determines the utilities, and since in a symmetric AGG any player can choose any action in A, the configuration determines whether the incentive constraints for PSNE are satisfied. Note that this argument requires the symmetry property; in particular, the lemma no longer holds for asymmetric AGGs.
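The bookkeeping implied by Lemma 4.3.1 and the notation of Section 4.2.1 is just counting. As an illustrative sketch (the function and variable names below are our own, not from the text), the basic objects might be coded as:

```python
from collections import Counter

def configuration(profile):
    """Configuration induced by a pure profile: c[alpha] = number of players on alpha."""
    return Counter(profile)

def restrict(c, X):
    """c[X]: restriction of configuration c to the action set X."""
    return {alpha: c.get(alpha, 0) for alpha in X}

def num_players(c_X):
    """#c[X]: number of players counted by the restricted configuration c[X]."""
    return sum(c_X.values())

def nu(X, E):
    """nu(X): actions outside X that are in-neighbors of some action in X."""
    return {a for (a, x) in E if x in X and a not in X}

def tau(X, E, A):
    """tau(X): actions in X with an edge to or from some action outside X."""
    return {x for x in X if any((x, a) in E or (a, x) in E for a in A - X)}
```

Here an action graph is taken as a vertex set `A` and a set `E` of directed edge pairs; a real implementation would of course share these structures with the AGG's utility tables.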
Lemma 4.3.1 allows us to consider only the configurations instead of all the pure strategy profiles. We say a configuration c is a PSNE of Γ if its corresponding pure strategy profiles are PSNE. The following straightforward lemma (a specialization of known facts about symmetric games [e.g., Brandt et al., 2009]) gives the incentive constraints for PSNE in terms of configurations.

Lemma 4.3.2. A configuration c* is a PSNE of a symmetric game iff for all α, α′ ∈ A with c*[α] > 0,

    u^α(c*) ≥ u^{α′}(c*_{α→α′})    (4.3.1)

where c*_{α→α′} is the resulting configuration when one agent playing α in c* deviates to α′. Formally, for all x ∈ A,

    c*_{α→α′}[x] = c*[x] − 1 if x = α;  c*[x] + 1 if x = α′;  c*[x] otherwise.

Given a configuration c, we can thus check whether it is a pure equilibrium in polynomial time.

Theorem 4.3.3. The problem of determining whether a pure Nash equilibrium exists in a symmetric AGG with bounded |A| is in P.

Proof. A polynomial algorithm is to check all configurations. Since |A| is bounded, the number of configurations, ( n+|A|−1 choose |A|−1 ) = O(n^{|A|−1}), is polynomial.

This can easily be extended to k-symmetric AGGs.

Definition 4.3.4. Suppose Γ is a k-symmetric AGG in which the players are partitioned into equivalence classes {N_1, . . . , N_k} with the corresponding distinct action sets {A^1, . . . , A^k}. Then given a pure strategy profile a, its corresponding k-configuration is a tuple (c^ℓ)_{1≤ℓ≤k} where c^ℓ is the configuration over A^ℓ induced by the players in N_ℓ. In other words, for all α ∈ A^ℓ, c^ℓ[α] = |{i ∈ N_ℓ | a_i = α}|.
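The enumeration algorithm behind Theorem 4.3.3 can be sketched directly from Lemma 4.3.2. In this sketch we assume, for simplicity, that utilities are supplied as a function `u(alpha, c)` of the full configuration; an AGG would only consult c restricted to α's neighborhood:

```python
from itertools import combinations_with_replacement
from collections import Counter

def all_configurations(n, actions):
    """All ways to place n identical players on the given action set."""
    for combo in combinations_with_replacement(sorted(actions), n):
        yield Counter(combo)

def is_psne(c, actions, u):
    """Check the incentive constraints (4.3.1) of Lemma 4.3.2 on configuration c."""
    for a in actions:
        if c[a] == 0:          # constraint only binds for actions actually played
            continue
        for b in actions:
            if b == a:
                continue
            dev = Counter(c)
            dev[a] -= 1        # one player leaves a ...
            dev[b] += 1        # ... and joins b
            if u(b, dev) > u(a, c):   # profitable deviation found
                return False
    return True

def exists_psne(n, actions, u):
    return any(is_psne(c, actions, u) for c in all_configurations(n, actions))
```

For a toy congestion-style utility u(α, c) = −c[α], two players on two actions have the PSNE in which they split across the actions, and `exists_psne` finds it.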
Just as configurations capture all relevant information about pure strategy profiles in symmetric games, k-configurations capture all relevant information about pure strategy profiles in k-symmetric games. Thus we can determine the existence of a pure equilibrium by checking all k-configurations. When k is bounded by a constant, there are polynomially many k-configurations.

Lemma 4.3.5. The problem of determining whether a pure Nash equilibrium exists in a k-symmetric AGG with bounded |A| and bounded k is in P.

Proof. A polynomial algorithm is to check all k-configurations. Since |A| is bounded, for each ℓ ∈ {1, . . . , k} the number of distinct c^ℓ is ( |N_ℓ|+|A^ℓ|−1 choose |A^ℓ|−1 ) = O(|N_ℓ|^{|A^ℓ|−1}). Therefore the number of distinct k-configurations is O(n^{k(|A|−1)}), which is polynomial when k is bounded. For each k-configuration, checking whether it forms a Nash equilibrium takes polynomial time. Therefore the algorithm runs in polynomial time.

Now consider the full class of AGGs with bounded |A|. Interestingly, our problem remains easy to solve.

Theorem 4.3.6. The problem of determining whether a pure Nash equilibrium exists in an arbitrary AGG with bounded |A| is in P.

Proof. Any AGG Γ is k-symmetric by definition, where k is the number of distinct action sets. Since A_i ⊆ A for all i, the number of distinct nonempty action sets is at most 2^{|A|} − 1. This is bounded, since |A| is bounded by a constant. Thus Γ is k-symmetric with bounded k, and Lemma 4.3.5 applies.

4.4 Computing PSNE in Symmetric AGGs

We now consider classes of AGGs in which |A| is not bounded. We first focus on symmetric AGG-∅s. Since in this case all players have the same action set A, we can identify a symmetric AGG-∅ by the tuple ⟨n, G = (A, E), u⟩.
Whereas enumerating the configurations works well for AGGs with bounded |A|, this approach is less effective in the general case with unbounded |A|: in a symmetric AGG-∅, the number of configurations over A is ( n+|A|−1 choose |A|−1 ), which is superpolynomial in ||Γ|| when I is bounded.

Our approach is to use dynamic programming to construct PSNE of the game from PSNE of games restricted to parts of the action graph. This approach belongs to a large family of tree-decomposition-based dynamic-programming algorithms for problems on graphs. In particular, in this section we adapt the standard concepts of partial solutions and characteristics [e.g., Bodlaender, 1997] to the PSNE problem in AGGs.

4.4.1 Restricted Games and Partial Solutions

We first introduce the concept of a restricted game on R ⊂ A, which intuitively is the game played by a subset of players when we “restrict” them to the subgraph G_R, i.e., require them to choose their actions from R. Of course, the utility functions of this restricted game are not defined until we specify a configuration on ν(R).

Definition 4.4.1. Given a symmetric AGG-∅ Γ, a set of actions R ⊂ A, a configuration c[ν(R)] and n′ ≤ n, we define the restricted game Γ(n′, R, c[ν(R)]) to be a symmetric AGG with n′ players and with G_R as the action graph. Each action α ∈ R has the utility function u^α|_{c[ν(R)]}, which is the same as u^α as defined in Γ except that the configuration of nodes outside R is assigned by c[ν(R)]. Formally, Γ(n′, R, c[ν(R)]) = ⟨n′, G_R, (u^α|_{c[ν(R)]})_{α∈R}⟩.

Figure 4.1: The road game with m = 8 and the action graph of its AGG representation.

Figure 4.2: Restricted game on the rightmost 6 actions.

Example 4.4.2.
Suppose each of n agents is interested in opening a business, and can choose to locate in any block along either side of a road of length m. Multiple agents can choose the same block. Agent i's payoff depends on the number of agents who chose the same block as he did, as well as on the numbers of agents who chose each of the adjacent blocks. This game can be compactly represented as a symmetric AGG, whose action graph is illustrated in Figure 4.1.

To specify a restricted game on the rightmost 6 action nodes R = {T6, T7, T8, B6, B7, B8} of the road game of Figure 4.1, we need to specify the number of players on R as well as the configuration over ν(R) = {T5, B5}. This is illustrated in Figure 4.2, with R enclosed by the shaded rectangle and ν(R) in green.

Lemma 4.3.1 tells us that we only need to consider configurations instead of strategy profiles. Likewise, for a restricted game on the subgraph X ⊂ A, we only need to consider restricted configurations c[X]. The following lemma is straightforward.

Lemma 4.4.3. If c* is a pure equilibrium of Γ, then c*[X] is a pure equilibrium of the restricted game Γ(#c*[X], X, c*[ν(X)]).

We want to use equilibria of restricted games as building blocks to construct equilibria of the entire game. Of course, a restricted game on X ⊂ A is not well-defined until we specify c[ν(X)]. Thus we define a partial solution as a configuration on X ∪ ν(X) that describes a restricted game on X as well as a pure equilibrium of it.

Definition 4.4.4. A partial solution on X ⊆ A is a configuration c[X ∪ ν(X)] such that c[X] is a pure equilibrium of the restricted game Γ(#c[X], X, c[ν(X)]).

Figure 4.3: A partial solution on the rightmost 6 actions describes the configuration over these 8 actions.
For the restricted game in Figure 4.2, the corresponding partial solution on R = {T6, T7, T8, B6, B7, B8} is a configuration over R ∪ ν(R), illustrated in Figure 4.3 as green nodes.

We say a partial solution c[X ∪ ν(X)] can be extended if there exists a configuration c* such that c* is a PSNE of Γ and c*[X ∪ ν(X)] = c[X ∪ ν(X)].

4.4.2 Combining Partial Solutions

In order to combine partial solutions to form a partial solution on a larger subgraph, we need to make sure that the result is a valid restricted strategy profile. We say two partial solutions c′[X] and c″[Y] are consistent if there exists a configuration c of the AGG-∅ such that c[X] = c′[X] and c[Y] = c″[Y]. The following lemma shows that it is simple to check whether c[X] and c′[Y] are consistent.

Lemma 4.4.5. Given X, Y ⊆ A, c[X] is consistent with c′[Y] iff

1. for all α ∈ X ∩ Y, c[α] = c′[α], and

2. n′ ≤ n, where n′ ≡ #c[X] + #c′[Y∖X]; furthermore, if X ∪ Y = A then n′ = n.

We omit the straightforward proof. For two configurations c[X], c′[Y] that are consistent with each other, we define c[X] ∪ c′[Y] to be the (unique) configuration on X ∪ Y that is consistent with both c[X] and c′[Y].

However, if we simply combined two consistent partial solutions that describe equilibria of restricted games on two disjoint sets X, Y ⊂ A, the result would not necessarily induce an equilibrium of the restricted game on X ∪ Y. This is because an agent who was playing an action in X might profitably deviate by playing an action in Y, and vice versa. We could deal with this problem by keeping track of all pure equilibria of each restricted game, and determining case-by-case whether two equilibria can be combined (by checking whether agents could profitably deviate from one restricted game to the other).
But as we combine the restricted games to form larger restricted games and eventually the unrestricted game on the entire action graph G, the number of equilibria we would have to store could grow exponentially.

4.4.3 Dynamic Programming via Characteristics

Perhaps we do not need to keep track of all partial solutions. Imagine we had a function ch that summarized them, i.e., that mapped each partial solution to a characteristic from a finite set C that is smaller than the set of partial solutions. For this characteristic function to be useful, it needs to be equilibrium-preserving, defined as follows.

Definition 4.4.6. For X ⊂ A, a function ch() that maps partial solutions to their characteristics is equilibrium-preserving if for all pairs of partial solutions c[X] and c′[X] with ch(c[X]) = ch(c′[X]), we have (c[X] can be extended) ⇔ (c′[X] can be extended).

Thus an equilibrium-preserving characteristic function ch() induces a partition of the set of partial solutions into equivalence classes. All partial solutions with the same characteristic behave the same way, so we only need to consider the set of all distinct characteristics. For X ⊂ A, we define C_X ⊂ C to be the set of characteristics of partial solutions on X. Formally, C_X = {ch(c[X ∪ ν(X)]) | c[X ∪ ν(X)] is a partial solution on X}.

Given such a function ch, a dynamic-programming algorithm for determining the existence of PSNE of Γ has the following high-level structure:

1. Construct 𝒳 = {X_1, . . . , X_m} such that ⋃_{1≤j≤m} X_j = A.

2. For each X_i ∈ 𝒳, compute C_{X_i}, the set of characteristics of partial solutions on X_i.

3. While |𝒳| ≥ 2:

   (a) Take X, Y ∈ 𝒳. Remove them from 𝒳.

   (b) Compute C_{X∪Y} from C_X and C_Y.

   (c) Add X ∪ Y to 𝒳.

4. Now 𝒳 has only one member, A. Return TRUE iff C_A is not empty.
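The loop above, with the consistency test of Lemma 4.4.5 folded in, can be sketched generically. In this sketch `initial_characteristics` and `combine` are placeholders (our own names) standing in for the problem-specific steps 2 and 3(b):

```python
def consistent(cX, cY, n, whole=False):
    """Lemma 4.4.5: can restricted configurations cX (over X) and cY (over Y) coexist?"""
    # Condition 1: they must agree on the overlap of their action sets.
    if any(cX[a] != cY[a] for a in cX.keys() & cY.keys()):
        return False
    # Condition 2: the implied player count n' may not exceed n
    # (and must equal n when the union covers all actions).
    n_prime = sum(cX.values()) + sum(v for a, v in cY.items() if a not in cX)
    return n_prime == n if whole else n_prime <= n

def exists_psne_dp(pieces, initial_characteristics, combine):
    """High-level structure of the dynamic program (steps 1-4)."""
    worklist = [(frozenset(X), initial_characteristics(X)) for X in pieces]  # steps 1-2
    while len(worklist) >= 2:                                                # step 3
        (X, CX), (Y, CY) = worklist.pop(), worklist.pop()
        worklist.append((X | Y, combine(CX, CY)))                            # step 3(b)
    (_, CA), = worklist
    return len(CA) > 0                                                       # step 4
```

A real `combine` would iterate over pairs of characteristics, call `consistent` on their configuration components, and keep only combinations with no profitable cross-deviations, as developed below.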
Since a partial solution on A is by definition a pure equilibrium of Γ, there exists a pure equilibrium of Γ if and only if C_A is not empty. For this algorithm to run in polynomial time, the function ch() must satisfy the following properties:

Property 1: At all times during the algorithm, for all X ∈ 𝒳, the size of C_X is polynomial. This is nontrivial, since all restricted strategy profiles could potentially be partial solutions, and so C_X could potentially be the set of all possible characteristics for X.

Property 2: For each of the initial X_j, C_{X_j} can be computed in polynomial time.

Property 3: C_{X∪Y} can be computed from C_X and C_Y in polynomial time.

One algorithm having the above structure is Ieong et al. [2005]'s algorithm for computing PSNE in singleton congestion games (corresponding to symmetric AGG-∅s with only self-edges). Given such an AGG-∅, the algorithm starts by partitioning A into sets each containing one action, and combines them in an arbitrary order. Consider two restricted games Γ′ and Γ″ on two disjoint sets of action nodes X and Y respectively. Observe that in this case, to check consistency between two equilibria of Γ′ and Γ″ respectively, it is sufficient to check the numbers of players in Γ′ and Γ″. Given a restricted game Γ′ on X ⊂ A and an equilibrium c* of Γ′, define the worst current utility WCU(c*, Γ′) to be the utility of the worst-off player in Γ′, or ∞ if Γ′ has 0 players. Define the best entrance utility BEU(c*, Γ′) to be the best payoff a player currently playing an action outside of X can get by playing an action in X, assuming the current players in Γ′ play c*. If Γ′ already has all n players, BEU(c*, Γ′) = −∞.
Since all players in a symmetric game are identical, if any player can profitably deviate out of Γ′, then the worst-off player (with utility WCU(c*, Γ′)) can profitably deviate out of Γ′; similarly, if an agent can profitably deviate to any action in Γ′, then she can achieve utility BEU(c*, Γ′). Therefore, to check whether agents could profitably deviate from Γ′ currently in equilibrium c′ to Γ″ in equilibrium c″, we just need to check whether WCU(c′, Γ′) is less than BEU(c″, Γ″). Thus WCU(c′, Γ′) and BEU(c′, Γ′) can be used as sufficient statistics for checking the existence of profitable deviations out of and into the restricted game Γ′, and #c[X] for checking consistency. The resulting characteristics are equilibrium-preserving, and require less space than keeping track of the partial solutions on X, because WCU and BEU are utility values and thus there are at most ||Γ||² possible pairs.

We adapt Ieong et al. [2005]'s characteristic function to general symmetric AGGs. First of all, we now need c[ν(X)] in order to specify restricted games and partial solutions on X. As a result, to check consistency between a partial solution on X and partial solutions on other parts of the graph, we need to keep track of the number of players in X, the configuration over ν(X), and the configuration over ν(X̄). Furthermore, in general action graphs we may have sets X, Y ⊂ A such that ν(X) ∩ Y ≠ ∅. In such cases, deviating from an action in ν(X) ∩ Y to a restricted game Γ′ on X changes the configuration on ν(X), which in turn affects the utility functions of Γ′.
In other words, the best utility a player originally playing an action α ∉ X can get by deviating into Γ′ on X with current configuration c* is a quantity that depends on (1) whether α is in ν(X) and (2) if so, on α itself. As a result, simply using BEU(c*, Γ′) and WCU(c*, Γ′) is no longer sufficient for checking profitable deviations.

We thus need more sophisticated sufficient statistics for checking deviations in this case. One approach is to extend our definition of BEU(c*, Γ′) by making it vector-valued, specifying the best utilities when the deviating player is an outside player and when the player is playing each of the actions in ν(X). The length of the resulting vector is thus |ν(X)| + 1. Furthermore, we could extend WCU(c*, Γ′) by making it a vector consisting of the worst utility from X∖ν(X̄) and from each of the actions in ν(X̄). Although it is intuitive, this approach turns out to yield a polynomial-time algorithm only in the case of symmetric AGG-∅s with bounded treewidth and bounded in-degree. Instead, in this chapter we describe a different approach that yields a polynomial-time algorithm for bounded-treewidth symmetric AGG-∅s, thus eliminating the separate requirement on in-degree. First, we redefine BEU(c*, Γ′) in terms of deviations from players outside of X ∪ ν(X).

Definition 4.4.7. Given a restricted game Γ′ on X ⊂ A and an equilibrium c* of Γ′, the best entrance utility BEU(c*, Γ′) is the best payoff an outside player (a player currently playing an action outside of X ∪ ν(X)) can get by playing an action in X, assuming the current players in Γ′ play c*. If there are 0 outside players, BEU(c*, Γ′) = −∞.
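Definition 4.4.7 and the earlier definition of WCU translate directly into code. A sketch, where `u(alpha, c)` stands in for the restricted game's utility functions (an assumption of this sketch; a real implementation would look utilities up in the AGG's tables, with the outside configuration c[ν(X)] fixed):

```python
import math

def wcu(c_X, u):
    """Worst current utility: payoff of the worst-off player in the restricted
    game, or +infinity if the restricted game has 0 players."""
    played = [u(a, c_X) for a, k in c_X.items() if k > 0]
    return min(played) if played else math.inf

def beu(c_X, u, num_outside):
    """Best entrance utility: best payoff an outside player could get by joining
    some action in X, or -infinity if there are no outside players."""
    if num_outside == 0:
        return -math.inf
    best = -math.inf
    for a in c_X:
        after_entry = dict(c_X)
        after_entry[a] += 1  # the entering player joins action a
        best = max(best, u(a, after_entry))
    return best
```

A deviation from a restricted game with statistics (WCU′, ·) into one with statistics (·, BEU″) is profitable exactly when WCU′ < BEU″, which is the comparison the dynamic program performs.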
In order to check deviations into and out of X, we partition X into P and X∖P, and check the corresponding restricted games separately. We will specify P in Section 4.4.4; for now we only require that X ⊇ P ⊇ τ(X). Recall that τ(X) is the set of nodes in X with outgoing edges to and/or incoming edges from nodes outside X. Intuitively, P contains all nodes in X to which we cannot apply BEU and WCU. This implies ν(X∖P) ∩ X̄ = ∅ and ν(X̄) ∩ (X∖P) = ∅. Thus we can use WCU and BEU for restricted games on X∖P as sufficient statistics for checking deviations between X∖P and nodes outside X.

The remaining task is to check deviations between P and nodes outside X. We do this by explicitly keeping track of configurations on Q ⊇ P ∪ ν(P). We will exactly specify Q in Section 4.4.4. In other words, we keep track of the partial solutions on P. Note in particular that this provides enough information to specify the corresponding restricted games on P. Finally, since configurations over X∖P will not be referred to by partial solutions on any Y ⊂ A that is disjoint from X, in order to maintain consistency it is sufficient to keep track of the number of players playing in X and the configuration over P ∪ ν(X), which is a subset of Q. Taking these together, we have the following characteristic function.

Lemma 4.4.8. Given X ⊂ A, P ⊆ X such that P ⊇ τ(X), and Q ⊇ P ∪ ν(P), consider the characteristic function ch_{P,Q} that maps a partial solution c[X ∪ ν(X)] to

    ch_{P,Q}(c[X ∪ ν(X)]) = (c[Q], #c[X], WCU(c[X′], Γ′), BEU(c[X′], Γ′)),

where Γ′ = Γ(#c[X′], X′, c[ν(X′)]) and X′ = X∖P. Then ch_{P,Q} is equilibrium-preserving.

Proof.
Suppose we have two partial solutions c[X ∪ ν(X)] and c′[X ∪ ν(X)] such that ch_{P,Q}(c[X ∪ ν(X)]) = ch_{P,Q}(c′[X ∪ ν(X)]). Furthermore, suppose c[X ∪ ν(X)] can be extended, i.e., there exists a PSNE c* of the game such that c*[X ∪ ν(X)] = c[X ∪ ν(X)]. We need to show that c′[X ∪ ν(X)] can be extended. Since c*[X̄ ∪ ν(X̄)] and c[X ∪ ν(X)] are consistent, and since c[X ∪ ν(X)] and c′[X ∪ ν(X)] have the same characteristic (in particular, the same configuration on ν(X) ∪ ν(X̄) and the same number of players in X), c*[X̄ ∪ ν(X̄)] and c′[X ∪ ν(X)] are consistent. Consider the configuration c′* ≡ c*[X̄ ∪ ν(X̄)] ∪ c′[X ∪ ν(X)]. We claim that c′* is a PSNE of the game (which directly implies that c′[X ∪ ν(X)] can be extended). To show this, we observe that since c*[X̄ ∪ ν(X̄)] and c′[X ∪ ν(X)] are already partial solutions on X̄ and X respectively (and are consistent with each other), we only need to make sure there are no profitable deviations between them.

We partition X into P and X′ = X∖P. Since there were no profitable deviations between the partial solutions c[P ∪ ν(P)] and c*[X̄ ∪ ν(X̄)], and since c[P ∪ ν(P)] = c′[P ∪ ν(P)], there are no profitable deviations between the partial solutions c′[P ∪ ν(P)] and c*[X̄ ∪ ν(X̄)]. Suppose there is a profitable deviation from X′ under partial solution c′[X′ ∪ ν(X′)] to X̄ under partial solution c*[X̄ ∪ ν(X̄)]. Then there is a profitable deviation for the worst-off player in X′ under c′[X′ ∪ ν(X′)].
Since her utility is equal to that of the worst-off player in X′ under c[X′ ∪ ν(X′)], there must be a profitable deviation from the partial solution c[X′ ∪ ν(X′)] to c*[X̄ ∪ ν(X̄)], a contradiction. A similar argument shows that there is no profitable deviation from X̄ under c*[X̄ ∪ ν(X̄)] to X′ under c′[X′ ∪ ν(X′)].

We denote by C^{P,Q}_X the set of characteristics on X under the characteristic function ch_{P,Q}. For the restricted game in Example 4.4.2, we can use P = {T6, B6} and Q = P ∪ ν(P) = {T5, T6, T7, B5, B6, B7}. These are illustrated in Figure 4.4.

Figure 4.4: Characteristic function ch_{P,Q} for the rightmost 6 actions, with P = {T6, B6} and Q = {T5, T6, T7, B5, B6, B7}.

The following lemma shows how sets of characteristics from two subsets X′ and X″ of A (with characteristic functions ch_{P′,Q′} and ch_{P″,Q″} respectively) can be combined together. Here we require that X′ and X″ have a limited amount of overlap; specifically, we require that X′ ∩ X″ ⊆ P′ ∪ P″. Intuitively, the combination of subsets with such overlap is manageable because (1) we can calculate the total number of players in X′ ∪ X″ from the characteristics, because we know the configuration of (and thus the number of players in) X′ ∩ X″; and (2) since the configuration of X′ ∩ X″ is already “in equilibrium” with both sides, it is sufficient to check deviations from X″∖X′ to X′∖X″ and vice versa. We do this by partitioning
the former into X″∖P″ and P″∖X′, and the latter into X′∖P′ and P′∖X″, then checking the resulting set of deviations using information provided by the characteristics.

Lemma 4.4.9. Suppose that X, P, Q, X′, P′, Q′, X″, P″, Q″ are subsets of A such that τ(X) ⊆ P ⊆ X, τ(X′) ⊆ P′ ⊆ X′, τ(X″) ⊆ P″ ⊆ X″, Q ⊇ P ∪ ν(P), Q′ ⊇ P′ ∪ ν(P′), Q″ ⊇ P″ ∪ ν(P″), X′ ∩ X″ ⊆ P′ ∪ P″, and X′ ∪ X″ = X. For all c[Q] ∈ C[Q], integer B ≤ n, and U_c, U_e ∈ U, the tuple (c[Q], B, U_c, U_e) ∈ C^{P,Q}_X if and only if there exist c′[Q′], c″[Q″], B′, B″, U′_c, U″_c, U′_e, and U″_e such that

1. (c′[Q′], B′, U′_c, U′_e) ∈ C^{P′,Q′}_{X′},

2. (c″[Q″], B″, U″_c, U″_e) ∈ C^{P″,Q″}_{X″},

3. c′[Q′] is consistent with c″[Q″],

4. c[Q] = c‴[Q], where c‴ = c′[Q′] ∪ c″[Q″],

5. B = B′ + B″ − #c‴[X′ ∩ X″], and if X = A then B = n,

6. U′_c ≥ U″_e and U″_c ≥ U′_e,

7.
U′_c ≥ BEU(c″[P″∖X′], Γ″), WCU(c′[P′∖X″], Γ′) ≥ U″_e, U″_c ≥ BEU(c′[P′∖X″], Γ′), and WCU(c″[P″∖X′], Γ″) ≥ U′_e, where Γ′ = Γ(#c′[P′∖X″], P′∖X″, c′[ν(P′∖X″)]) and Γ″ = Γ(#c″[P″∖X′], P″∖X′, c″[ν(P″∖X′)]),

8. c[P′ ∪ P″] is an equilibrium of Γ(#c[P′ ∪ P″], P′ ∪ P″, c‴[ν(P′ ∪ P″)]),

9. U_c = min{U′_c, U″_c, WCU(c‴[Z], Γ_Z)} and U_e = max{U′_e, U″_e, BEU(c‴[Z], Γ_Z)}, where Z = (P′ ∪ P″)∖P and Γ_Z = Γ(#c‴[Z], Z, c‴[ν(Z)]).

Proof Sketch. ⇒ (“only if”) part: Suppose c[X ∪ ν(X)] is a partial solution on X with characteristic (c[Q], B, U_c, U_e). Then let c′[X′ ∪ ν(X′)] = c[X′ ∪ ν(X′)]. It is straightforward to see that c′[X′] is an equilibrium of the restricted game Γ(#c′[X′], X′, c[ν(X′)]). Therefore c′[X′ ∪ ν(X′)] is a partial solution on X′. Similarly, let c″[X″ ∪ ν(X″)] = c[X″ ∪ ν(X″)], and the same argument applies. Then it is straightforward to verify that the characteristics of c′[X′ ∪ ν(X′)] and c″[X″ ∪ ν(X″)] satisfy the above conditions.
⇐ (“if”) part: Suppose c′[X′ ∪ ν(X′)] and c″[X″ ∪ ν(X″)] are partial solutions with characteristics (c′[Q′], B′, U′_c, U′_e) and (c″[Q″], B″, U″_c, U″_e) respectively, and there exist c[Q], B, U_c, U_e such that conditions 3 to 9 are satisfied. Then conditions 3 and 5, together with Lemma 4.4.5, imply that c′[X′ ∪ ν(X′)] and c″[X″ ∪ ν(X″)] are consistent. Let c = c′[X′ ∪ ν(X′)] ∪ c″[X″ ∪ ν(X″)]. By an argument similar to the proof of Lemma 4.4.8, conditions 6 to 8 ensure that there are no profitable deviations between the partial solutions c′[X′ ∪ ν(X′)] and c″[X″ ∪ ν(X″)], and therefore c[X] is an equilibrium of the restricted game Γ(B, X, c[ν(X)]).

Let Y = X∖P. Then X′∖P′, X″∖P″ and Z partition Y. By the definition of worst current utility, WCU(c[Y], Γ(#c[Y], Y, c[ν(Y)])) is the minimum of {U′_c, U″_c, WCU(c‴[Z], Γ_Z)}, which are the worst current utilities on X′∖P′, X″∖P″ and Z respectively. Therefore WCU(c[Y], Γ(#c[Y], Y, c[ν(Y)])) = U_c. Similarly, BEU(c[X], Γ(B, X, c[ν(X)])) = U_e. Therefore c[X ∪ ν(X)] is a partial solution with characteristic (c[Q], B, U_c, U_e).
Lemma 4.4.9 implies that it takes polynomial time to check whether two characteristics (c′[Q′], B′, U′_c, U′_e) ∈ C^{P′,Q′}_{X′} and (c″[Q″], B″, U″_c, U″_e) ∈ C^{P″,Q″}_{X″} are consistent and whether there are no profitable deviations between them, and if so, to construct a characteristic in C^{P,Q}_X for their combined partial solutions. Thus if we iterate over all pairs of characteristics in C^{P′,Q′}_{X′} and C^{P″,Q″}_{X″} respectively, we can construct C^{P,Q}_X in time polynomial in the sizes of C^{P′,Q′}_{X′} and C^{P″,Q″}_{X″}.

Let us now consider the size of C^{P,Q}_X for an arbitrary X ⊆ A. Recall that WCU and BEU are utility values, so each has at most |U| ≤ ||Γ|| distinct values. Also #c[X] ∈ {0, . . . , n} by definition. So the number of distinct characteristics can be much smaller than the number of corresponding partial solutions c[X ∪ ν(X)] when |Q| ≪ |X ∪ ν(X)|. However, since Q ⊇ ν(X) and |ν(X)| is |X|·I in the worst case, the number of possible configurations over Q is superpolynomial in ||Γ|| in the worst case.

Figure 4.5: An action graph G.

Figure 4.6: The primal graph G′.

Figure 4.7: Tree decomposition of und(G), with R1 = {A,B}, R2 = {B,C}, R3 = {C,D}, R4 = {C,F}, R5 = {D,E}, R6 = {F,G}.

Figure 4.8: Tree decomposition of the primal graph G′, satisfying the conditions of Lemma 4.4.11, with X1 = {A,B,C}, X2 = {A,B,C,D,F}, X3 = {B,C,D,E,F}, X4 = {B,C,D,F,G}, X5 = {C,D,E}, X6 = {C,F,G}.
Since C^{P,Q}_X could potentially include every distinct tuple (c[Q], B, U_c, U_e), the size of C^{P,Q}_X is superpolynomial in the worst case. Indeed, Theorem 4.2.2 showed that we will not find a polynomial-time algorithm for general symmetric AGGs unless P = NP. Nevertheless, we next show that if the action graph G has bounded treewidth, we can combine the restricted games in a way such that the number of configurations |C[Q]| (and thus |C^{P,Q}_X|) remains polynomial in ||Γ|| as X grows.

4.4.4 Algorithm for Symmetric AGGs with Bounded Treewidth

We first introduce some notation. Given an action graph G = (A, E), define H(G) to be the hypergraph (A, 𝓔) with 𝓔 = {{α} ∪ ν(α) | α ∈ A}. In other words, for each action α ∈ A, there is a hyperedge containing α and its neighbors. Duplicate hyperedges are removed. Let G′ be the primal graph of the hypergraph H(G): G′ is an undirected graph on the same set of vertices, and there is an edge between two nodes if they appear together in some hyperedge of H(G). Formally, G′ = (A, {{u, v} | ∃h ∈ 𝓔 such that u, v ∈ h}). Thus for each α ∈ A, α and its neighbors in G form a clique in G′. In the Bayes net literature G′ is also known as the moral graph of G. For example, Figure 4.5 shows the action graph G of a symmetric AGG. Its hypergraph H(G) has the same set of vertices and the hyperedges {A,B}, {A,B,C}, {D,E}, {C,D,E}, {F,G}, {C,F,G}, and {B,C,D,E}. Figure 4.6 shows G's primal graph G′.

The concepts of tree decomposition and treewidth were introduced by Robertson and Seymour [1986].

Definition 4.4.10. A tree decomposition of an undirected graph G′ = (V, E) is a pair (𝒳, T) with T = (I, F) a tree (where I and F are the nodes and edges of the tree respectively), and 𝒳 = {X_i | i ∈ I} a family of subsets of V, one for each node of T, such that

1. ⋃_{i∈I} X_i = V,

2.
for all edges {v,w} \u2208 E there exists an i \u2208 I with v \u2208 Xi and w \u2208 Xi, and 3. for all i, j,k \u2208 I: if j is on the path from i to k in T , then Xi\u2229Xk \u2286 X j. The width of a tree decomposition is maxi\u2208I |Xi| \u2212 1. The treewidth tw(G\u2032) of a graph G\u2032 is the minimum width over all tree decompositions of G\u2032. Condition 3 of the definition can be equivalently stated as the following: for all v \u2208V , the set {i \u2208 I|v \u2208 Xi} induces a subtree of T . Let the treewidth tw(\u0393) of an AGG \u0393 be the treewidth of und(G), the undi- rected version of its action graph G (excluding self-edges). Figure 4.7 shows a tree decomposition ({Ri|i\u2208 I},T = (I,F)) of the undirected version of the action graph G in Figure 4.5. In this case und(G) is a tree. The width of the tree decomposi- tion is 1 since each tree node contains at most 2 vertices of und(G). This is a tree decomposition of minimum width, since any tree decomposition must have nodes containing e.g., both A and B since {A,B} is an edge in und(G). In fact, it is known in general that the treewidth of a connected tree is 1. A tree decomposition of und(G) provides a family of subsets (R1, . . . ,R6 in Figure 4.7) of vertices that cover A , and if the width of the decomposition is bounded by a constant that implies the sizes of Ri are bounded. We will be using Ri as the P\u2019s in Lemmas 4.4.8 and 4.4.9. However, we also need to control the size of Q \u2287 P\u222a \u03bd(P) in those lemmas in order to control the running time of the 121 resulting dynamic programming algorithm. It turns out that a tree decomposition of the primal graph can be constructed that yields the appropriate Q\u2019s of Lemmas 4.4.8 and 4.4.9. Given a tree graph T = (I,F) and J \u2282 I, let TJ be the subgraph of T restricted to J. Lemma 4.4.11. 
Given a symmetric AGG-∅ Γ with treewidth w, there exists a tree decomposition ({Xi | i ∈ I}, T = (I, F)) of the primal graph G′ of width at most (w+1)(𝓘+1)−1, where 𝓘 is the maximum in-degree of G, and {Ri | i ∈ I} such that

1. ⋃_{i∈I} Ri = A, and Ri ∪ ν(Ri) ⊆ Xi for all i ∈ I;
2. let J ⊂ I be such that TJ is a connected graph and connects to the rest of the tree via only one edge {j, j′} ∈ F with j ∈ J, and let YJ = ⋃_{i∈J} Ri; then τ(YJ) ⊆ Rj.

Proof. By assumption there exists a tree decomposition of und(G) of width w. Denote this decomposition ({Ri | i ∈ I}, T = (I, F)). Then ⋃_{i∈I} Ri = A. Let Xi = Ri ∪ ν(Ri) for all i ∈ I. Daskalakis and Papadimitriou [2006] proved that the resulting ({Xi | i ∈ I}, T) is a tree decomposition of the primal graph G′ having width at most (w+1)(𝓘+1)−1. By construction Ri ∪ ν(Ri) ⊆ Xi.

Given J, j and YJ as defined in the statement of the lemma, we claim that τ(YJ) ⊆ Rj. To see this, consider each α ∈ τ(YJ). Then by definition there must be an α′ ∉ YJ such that {α, α′} is an edge in und(G). We note that T_{I∖J} is also connected. Since YJ = ⋃_{i∈J} Ri and ⋃_{i∈I} Ri = A, we have A∖YJ ⊆ ⋃_{i∈I∖J} Ri = Y_{I∖J} and thus α′ ∈ Y_{I∖J}. Since {α, α′} is an edge in und(G), by condition 2 of Definition 4.4.10 there exists i′ ∈ I such that α, α′ ∈ R_{i′}. Furthermore such an i′ must be in I∖J since α′ ∉ YJ. Since α is contained in some Ri with i ∈ J, by condition 3 of Definition 4.4.10 α must be contained in all R_{i″} such that i″ is on the path from i to i′ in T. Since j is on this path, α ∈ Rj.
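The construction in the proof of Lemma 4.4.11 can be illustrated on the running example. In the sketch below (ours, not the thesis's), the neighborhoods ν(α) are inferred so as to be consistent with Figures 4.5 through 4.8, and the bags Xi = Ri ∪ ν(Ri) are checked against Figure 4.8.

```python
# Neighborhoods nu(alpha) consistent with Figures 4.5-4.8, and the width-1
# tree decomposition {R_i} of und(G) from Figure 4.7.
nu = {'A': {'B'}, 'B': {'A', 'C'}, 'C': {'B', 'D', 'F'},
      'D': {'C', 'E'}, 'E': {'D'}, 'F': {'C', 'G'}, 'G': {'F'}}
R = {1: {'A', 'B'}, 2: {'B', 'C'}, 3: {'C', 'D'},
     4: {'C', 'F'}, 5: {'D', 'E'}, 6: {'F', 'G'}}

# The bags X_i = R_i ∪ nu(R_i) from the proof of Lemma 4.4.11.
X = {i: Ri | set().union(*(nu[a] for a in Ri)) for i, Ri in R.items()}

# Condition 2 of Definition 4.4.10 for the primal graph G': each clique
# {alpha} ∪ nu(alpha) is covered by some bag.
for a in nu:
    assert any({a} | nu[a] <= Xi for Xi in X.values())

assert X[2] == {'A', 'B', 'C', 'D', 'F'}           # matches Figure 4.8
assert max(len(Xi) for Xi in X.values()) - 1 == 4  # width 4, as stated
```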
Since the undirected version of the action graph in Figure 4.5 has treewidth 1, Lemma 4.4.11 guarantees a tree decomposition of the primal graph with width at most 7 satisfying the above conditions. Figure 4.8 shows such a tree decomposition (with width 4) of the primal graph G′ from Figure 4.6. Each node i ∈ I of the tree is labeled with Xi.

Lemma 4.4.11 together with Lemma 4.4.8 imply that:

Corollary 4.4.12. Given any J, j and YJ satisfying condition 2 of Lemma 4.4.11, ch_{Rj,Xj} is an equilibrium-preserving characteristic function on YJ.

Also observe that for all i ∈ I, ch_{Ri,Xi} is trivially an equilibrium-preserving characteristic function on Ri.

Pick an arbitrary node r ∈ I to be the root of T. We say node j is a descendant of node i (equivalently, i is an ancestor of j) if i is on the path from r to j. Define Zi = {v ∈ Rj | j = i or j is a descendant of i}. Then Zr ≡ A. Intuitively, when we combine the restricted games associated with node i and its descendants in T, we get a restricted game on Zi. For each node i ∈ I with children q1, …, qm ∈ I, for each j ≤ m, define Z_{i,j} = Ri ∪ Z_{q1} ∪ … ∪ Z_{qj}. This implies that Z_{i,m} ≡ Zi. Then Corollary 4.4.12 implies that for any Z_{i,j}, ch_{Ri,Xi} is an equilibrium-preserving characteristic function. We write C_{Z_{i,j}} ≡ C^{Ri,Xi}_{Z_{i,j}}.

For our tree decomposition in Figure 4.8, if we let node 1 be the root r, then Z5 = R5, Z6 = R6, Z3 = R3 ∪ R5 = {C,D,E}, Z4 = R4 ∪ R6 = {C,F,G}, Z2 = R2 ∪ R3 ∪ R4 ∪ R5 ∪ R6 = {B,C,D,E,F,G}, and Z1 = A. Since node 2 has two children q1 = 3 and q2 = 4, we have Z_{2,1} = R2 ∪ Z3 = {B,C,D,E} and Z_{2,2} = Z_{2,1} ∪ Z4 = Z2 = {B,C,D,E,F,G}.

We adapt our dynamic programming algorithm from the previous section so that {Ri | i ∈ I} is the initial family of subsets that covers A, and the order in which the subsets are combined is guided by the tree decomposition, from the leaves to the root:

1. For each Ri, compute C_{Ri}. This can be done by enumerating all possible configurations c[Xi] and keeping those that induce a pure equilibrium of the restricted game on Ri.
2. Initialize the set Done ⊆ I to contain the leaves of the tree T.
3. While ∃ i ∈ I∖Done such that {i′ ∈ I | i′ is a child of i} ⊆ Done:
   (a) Let C_{Z_{i,0}} := C_{Ri}.
   (b) Let q1, …, qm be the children of i.
   (c) For j = 1 to m, compute C_{Z_{i,j}} from C_{Z_{i,j−1}} and C_{Z_{qj}} by applying Lemma 4.4.9.
   (d) Let C_{Zi} := C_{Z_{i,m}}.
   (e) Add i to Done.
4. Return TRUE iff C_{Zr} is nonempty.

For the tree decomposition in Figure 4.8 with node 1 being the root, our algorithm would start from the leaves 5 and 6; then compute C_{Z3} = C_{Z_{3,1}} by combining C_{R3} and C_{R5}; compute C_{Z4} = C_{Z_{4,1}} by combining C_{R4} and C_{R6}; compute C_{Z_{2,1}} = C_{{B,C,D,E}} by combining C_{R2} and C_{Z3}; then compute C_{Z2} = C_{Z_{2,2}} = C_{{B,C,D,E,F,G}} by combining C_{Z_{2,1}} and C_{Z4}; and finally compute C_{Z1} by combining C_{R1} and C_{Z2}.

Theorem 4.4.13. Deciding the existence of pure equilibrium in symmetric AGG-∅s with bounded treewidth is in P.

Proof. Suppose the treewidth of the AGG is bounded by a constant w. Then a tree decomposition of und(G) having width at most w can be constructed in time exponential only in w, i.e., in polynomial time (see e.g. [Bodlaender, 1996, Kloks, 1994]). Then we can apply Lemma 4.4.11 to construct in polynomial time the tree decomposition ({Xi | i ∈ I}, T = (I, F)) of the primal graph G′ and {Ri | i ∈ I}. It is straightforward to check that our algorithm above correctly computes all C_{Z_{i,j}}. Specifically, at step 3c, since Z_{i,j−1} and Z_{qj} correspond to disjoint subgraphs of T connected by the edge {i, qj} ∈ F, we have Z_{i,j−1} ∩ Z_{qj} ⊆ Ri. Therefore we can apply Lemma 4.4.9. Since Zr ≡ A, the algorithm correctly determines the existence of pure equilibrium in Γ. The running time of the algorithm is polynomial in the sizes of the C_{Zi}'s.
The size of each C_{Zi} is bounded by n·||Γ||²·|C[Xi]|. Since the tree decomposition has width at most (w+1)(𝓘+1)−1, where 𝓘 denotes the maximum in-degree of the action graph, we have |C[Xi]| ≤ (n+(w+1)(𝓘+1) choose (w+1)(𝓘+1)). The latter is the number of ordered combinatorial compositions of n into (w+1)(𝓘+1)+1 nonnegative integers. An equivalent way of counting this number is as follows:

1. Break n into w+1 nonnegative integers x1, …, x_{w+1} such that ∑_{i=1}^{w+1} xi = n.
2. Then break each of the first w integers into 𝓘+1 nonnegative parts in the same way, and the last one (x_{w+1}) into 𝓘+2 nonnegative parts.

There are (n+w choose w) different ways of carrying out step 1. Since each integer considered in step 2 is at most n, there are at most (n+𝓘+1 choose 𝓘+1) ways of breaking each integer. Therefore

(n+(w+1)(𝓘+1) choose (w+1)(𝓘+1)) ≤ (n+w choose w) · (n+𝓘+1 choose 𝓘+1)^{w+1}.

Since w is a constant, this is polynomial in ||Γ||. Hence our algorithm runs in polynomial time.

When the input is an AGG-∅ encoding of a singleton congestion game, i.e., a symmetric AGG-∅ with only self-edges, the resulting und(G) has treewidth 0, and by Theorem 4.4.13 the existence of PSNE can be determined in polynomial time. Of course, our result applies to a much larger class of games. Road games (Example 4.4.2) have treewidth 2 for all m; thus by Theorem 4.4.13 the existence of PSNE can be determined in polynomial time for these games as well.

Our approach can be straightforwardly extended to the computation of related solution concepts such as pure-strategy ε-Nash equilibrium and strict equilibrium. For example, for pure-strategy ε-Nash equilibrium, we define partial solutions such that they induce ε-Nash equilibria of the corresponding restricted games, and use a modified version of Lemma 4.4.9 in which the conditions that compare best entrance utilities and worst current utilities are relaxed by ε; e.g., U′_c ≥ U″_e is replaced by U′_c + ε ≥ U″_e.
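The composition-counting inequality in the proof of Theorem 4.4.13 can be sanity-checked numerically. The sketch below (ours; `w` and `I` stand for the treewidth and maximum in-degree, taken as small test values) verifies the bound with exact integer arithmetic.

```python
from math import comb

def bound_holds(n, w, I):
    """Check C(n+(w+1)(I+1), (w+1)(I+1)) <= C(n+w, w) * C(n+I+1, I+1)**(w+1),
    the two-step composition-counting bound from the proof of Theorem 4.4.13."""
    m = (w + 1) * (I + 1)
    lhs = comb(n + m, m)                                  # compositions of n into m+1 parts
    rhs = comb(n + w, w) * comb(n + I + 1, I + 1) ** (w + 1)  # two-step overcount
    return lhs <= rhs

# the bound holds on a grid of small parameter values
assert all(bound_holds(n, w, I)
           for n in range(1, 30) for w in range(4) for I in range(4))
```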
4.4.5 Finding PSNE

So far we have focused on the problem of deciding the existence of PSNE. Our dynamic programming approach can also be used to find these equilibria if they exist. We first consider the problem of constructing a single PSNE. After the bottom-up pass over the tree decomposition as discussed above, if C_{Zr} is not empty, we do a top-down pass as follows:

1. Initialize Done ⊆ I to be {r}.
2. Pick an arbitrary (c[Xr], B^r, U^r_c, U^r_e) ∈ C_{Zr}.
3. Set C_{Zr} := {(c[Xr], B^r, U^r_c, U^r_e)}.
4. While Done ≠ I:
   (a) Take i ∈ Done such that {i′ | i′ is a child of i} ∩ Done = ∅.
   (b) Let q1, …, qm be the children of i.
   (c) C_{Zi} ≡ C_{Z_{i,m}} will have a single element (c[Xi], B^i, U^i_c, U^i_e).
   (d) Let C_{Z_{i,0}} := C_{Ri} = {ch(c[Xi])}.
   (e) For j = m, m−1, …, 1:
       i. pick (c[X_{qj}], B^{qj}, U^{qj}_c, U^{qj}_e) ∈ C_{Z_{qj}} and (c[Xi], B^{i,j−1}, U^{i,j−1}_c, U^{i,j−1}_e) ∈ C_{Z_{i,j−1}} such that they combine to form the single element of C_{Z_{i,j}} while satisfying the conditions of Lemma 4.4.9;
       ii. set C_{Z_{qj}} := {(c[X_{qj}], B^{qj}, U^{qj}_c, U^{qj}_e)} and C_{Z_{i,j−1}} := {(c[Xi], B^{i,j−1}, U^{i,j−1}_c, U^{i,j−1}_e)};
       iii. add qj to Done.
5. Now each C_{Ri} contains a single element ch(c[Xi]). Output the configuration ⋃_{i∈I} c[Xi].

Since the bottom-up pass has established the correct C_{Z_{i,j}}, step 4(e)i can always be carried out. Therefore the algorithm is correct, and by the same argument as in the proof of Theorem 4.4.13 the algorithm runs in polynomial time. This proves:

Corollary 4.4.14. The problem of finding a PSNE is in P for symmetric AGG-∅s with bounded treewidth.

A similar top-down pass can be used to ensure that each C_{Z_{i,j}} contains exactly the characteristics of extendable partial solutions.
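The control flow of the bottom-up pass is a standard post-order traversal of the tree decomposition. A schematic sketch (ours, not the thesis's): `combine` stands in for the Lemma 4.4.9 merge, and the "characteristic sets" are opaque Python sets; the toy instance at the end uses integers and addition purely to exercise the traversal.

```python
def bottom_up(children, root, leaf_tables, combine):
    """Schematic bottom-up pass: `children[i]` lists the children of tree
    node i, `leaf_tables[i]` plays the role of C_{R_i}, and `combine(A, B)`
    stands in for the Lemma 4.4.9 merge.  Returns the table at the root;
    in the real algorithm, a PSNE exists iff that table is nonempty."""
    order = []
    def visit(i):                        # post-order: children before parents,
        for q in children.get(i, []):    # mirroring the Done-set loop above
            visit(q)
        order.append(i)
    visit(root)
    tables = {}
    for i in order:
        t = leaf_tables[i]                 # C_{Z_{i,0}} := C_{R_i}
        for q in children.get(i, []):
            t = combine(t, tables[q])      # C_{Z_{i,j}} from C_{Z_{i,j-1}}, C_{Z_{q_j}}
        tables[i] = t                      # C_{Z_i}
    return tables[root]

# toy stand-in: "characteristics" are integers and combining sums them
tree = {1: [2], 2: [3, 4], 3: [5], 4: [6]}
leaves = {i: {0, 1} for i in range(1, 7)}
result = bottom_up(tree, 1, leaves, lambda a, b: {x + y for x in a for y in b})
assert result == set(range(7))  # six 0/1 leaf values can sum to 0..6
```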
Although the number of pure equilibria of an AGG could be exponential in the representation size ||Γ||, the resulting set of C_{Z_{i,j}} along with the tree decomposition constitutes a succinct description of the set of PSNE of the game, analogous to Daskalakis and Papadimitriou [2006]'s construction of succinct descriptions of the set of PSNE of graphical games. Given a symmetric AGG-∅ with bounded treewidth, such a succinct description can be computed in polynomial time. The succinct description can be used, e.g., to enumerate the set of all PSNE in time polynomial in the sizes of the input and output, and to check whether there exists a PSNE with a specific configuration at certain action nodes.

4.4.6 Computing Optimal PSNE

Recall from Chapter 2 that the social welfare is the sum of the players' utilities. Given a configuration c in a symmetric AGG-∅ Γ, the social welfare can be written as

W_Γ(c) = ∑_{α∈A} c[α] · u_α(c[ν(α)]).

Our algorithm can be extended to compute a socially optimal PSNE if one exists. The characteristics now also store the social welfare of the restricted games. Specifically, we use the characteristic function

ch_opt(c[Z_{i,j} ∪ ν(Z_{i,j})]) = (ch_{Ri,Xi}(c[Z_{i,j} ∪ ν(Z_{i,j})]), W_{Γ′}(c[Z_{i,j}])),

where Γ′ = Γ(#c[Z_{i,j}], Z_{i,j}, c[ν(Z_{i,j})]) is the restricted game on Z_{i,j} induced by the partial solution. Let C^opt_{Z_{i,j}} be the corresponding set of characteristics. The way characteristics from two sets X′, X″ ⊆ A are combined is also slightly different from Lemma 4.4.9. Once we have checked consistency and profitable deviations as in Lemma 4.4.9, we now need to compute the social welfare of the resulting characteristic from the given characteristics of X′ and X″.
Simply adding the social welfare values would not be correct due to the possible overlap of X′ and X″; fortunately we know the configuration over X′ ∩ X″ and its neighbors (by the assumptions of Lemma 4.4.9), so we are able to calculate the social welfare of the overlap and subtract it from the sum.

Corollary 4.4.15. Suppose X = X′ ∪ X″, and X′, X″, P, P′, P″, Q, Q′, Q″ satisfy the prerequisites of Lemma 4.4.9. For all c[Q], B, U_c, U_e, W_∪, we have (c[Q], B, U_c, U_e, W_∪) ∈ C^opt_X if and only if there exist (c′[Q′], B′, U′_c, U′_e, W′) ∈ C^opt_{X′} and (c″[Q″], B″, U″_c, U″_e, W″) ∈ C^opt_{X″} satisfying the conditions of Lemma 4.4.9, and

W_∪ = W′ + W″ − W_{Γ∩}(c[X′ ∩ X″]),

where Γ∩ = Γ(#c[X′∩X″], X′∩X″, c[ν(X′∩X″)]).

Using this characteristic function together with the bottom-up pass above, we can compute the optimal social welfare achieved by a PSNE, if one exists. A top-down pass then constructs such a PSNE. One issue with this approach is that due to the additional social welfare term in a characteristic, the number of characteristics in each C^opt_{Z_{i,j}} can be greater than |C_{Z_{i,j}}|. Fortunately, it is straightforward to show that:

Lemma 4.4.16. Suppose partial solutions c[X ∪ ν(X)] and c′[X ∪ ν(X)] induce the same characteristic under ch_opt except that the former's social welfare is less than the latter's. Then the former can be extended to a PSNE if and only if the latter can be extended to a PSNE with greater social welfare.
This implies that whenever we have multiple characteristics in C^opt_{Z_{i,j}} that differ only in their social welfare values, we can safely prune away all but the one with the greatest social welfare. The resulting C^opt_{Z_{i,j}} has the same cardinality as C_{Z_{i,j}}, and therefore the algorithm runs in polynomial time.

Corollary 4.4.17. Computing a maximum-social-welfare PSNE in symmetric AGG-∅s with bounded treewidth is in P.

4.5 Beyond Symmetric AGGs

4.5.1 Algorithm for k-Symmetric AGG-∅s

Our results for symmetric AGG-∅s can be straightforwardly extended to k-symmetric AGG-∅s with bounded k. Consider a k-symmetric AGG-∅ Γ with player classes N1, …, Nk. As discussed in Section 4.3, it is sufficient to consider k-configurations. Define the restricted game Γ((n′_ℓ)_{1≤ℓ≤k}, X, (c_ℓ[ν(X)])_{1≤ℓ≤k}) to be the k-symmetric AGG-∅ played on G_X, in which each player class ℓ ∈ {1, …, k} has n′_ℓ ≤ |N_ℓ| − #c_ℓ[ν(X)] players, and the utility function for each α ∈ X is u_α|_{(c_ℓ[ν(X)])_{1≤ℓ≤k}}, i.e., the same as u_α of Γ except that the configuration of nodes outside X is given by the k-configuration (c_ℓ[ν(X)])_{1≤ℓ≤k}. We define a partial solution on X to be a k-configuration (c_ℓ[X ∪ ν(X)])_{1≤ℓ≤k} such that (c_ℓ[X])_{1≤ℓ≤k} is a PSNE of the restricted game Γ((#c_ℓ[X])_{1≤ℓ≤k}, X, (c_ℓ[ν(X)])_{1≤ℓ≤k}). Similarly, we extend the characteristic functions of Section 4.4 by replacing each component of the characteristic with its k-tuple version.

Definition 4.5.1.
Given a restricted game Γ′ on X ⊂ A and a PSNE (c*_ℓ)_{1≤ℓ≤k} of Γ′, player class ℓ's worst current utility WCU_ℓ((c*_ℓ)_{1≤ℓ≤k}, Γ′) is the utility of the worst-off player from class ℓ in Γ′, or ∞ if Γ′ has 0 players in class ℓ. Player class ℓ's best entrance utility BEU_ℓ((c*_ℓ)_{1≤ℓ≤k}, Γ′) is the best payoff an outside player (a player currently playing an action outside of X ∪ ν(X)) from class ℓ can get by playing an action in X ∩ A_ℓ, assuming the current players in Γ′ play (c*_ℓ)_{1≤ℓ≤k}. If there are 0 outside players from class ℓ or X ∩ A_ℓ = ∅, then BEU_ℓ((c*_ℓ)_{1≤ℓ≤k}, Γ′) = −∞.

Lemma 4.5.2. Given a k-symmetric AGG-∅ Γ, X ⊂ A, P ⊆ X such that P ⊇ τ(X), and Q ⊇ P ∪ ν(P), consider the characteristic function ch^k_{P,Q} that maps a partial solution (c_ℓ[X ∪ ν(X)])_{1≤ℓ≤k} to

(c_ℓ[Q], #c_ℓ[X], WCU_ℓ(c[X′], Γ′), BEU_ℓ(c[X′], Γ′))_{1≤ℓ≤k},

where Γ′ = Γ((#c_ℓ[X′])_{1≤ℓ≤k}, X′, (c_ℓ[ν(X′)])_{1≤ℓ≤k}) and X′ = X∖P. Then ch^k_{P,Q} is equilibrium-preserving.

Lemma 4.4.9 can be similarly extended to the k-symmetric case. Therefore we can use this characteristic function together with our bottom-up pass algorithm to determine the existence of PSNE in k-symmetric AGG-∅s, and use the top-down algorithm to find a PSNE if one exists. For k-symmetric AGG-∅s with bounded k and bounded treewidth, each of the k components of ch^k_{Ri,Xi}'s output can take at most poly(||Γ||) values, and as a result the number of characteristics is polynomial in ||Γ||.
We thus have the following generalization of Theorem 4.4.13, Corollary 4.4.14 and Corollary 4.4.17.

Corollary 4.5.3. For k-symmetric AGG-∅s with bounded k and bounded treewidth, the problems of determining the existence of PSNE, of constructing a PSNE, and of finding a maximum-social-welfare PSNE are all in P.

We observe that when k = 1, i.e., when the game is symmetric, ch^k_{P,Q} degenerates into the ch_{P,Q} we previously defined for the symmetric case, and this algorithm simplifies into our algorithm for symmetric AGG-∅s.

4.5.2 General AGG-∅s and the Augmented Action Graph

We now consider the case of general AGG-∅s. We note that such games can still be viewed as k-symmetric (with k at most n), but now k may grow with the input size. Our approach in Section 4.5.1 for k-symmetric AGG-∅s works well only when k is bounded by a constant, since the number of characteristics under ch^k_{P,Q} grows exponentially in k. Can this approach be extended to the case of general AGG-∅s?

We observe that in order to check deviations out of and into X∖P, we do not need to keep track of information about player classes whose action sets are either (1) fully contained in X∖P, or (2) disjoint from X∖P. In the former case, no player of that class can deviate outside X∖P; this is reflected in ch^k_{P,Q} as best entrance utilities of −∞ for that class in the restricted game on X∖P, but we also do not need to keep track of the worst current utilities for the class. Similarly, in the latter case, no player of that class can deviate into X∖P. To check deviations out of and into P, we only need to keep track of information on player classes whose action sets intersect Q. In other words, it is sufficient to define a characteristic function in terms of the player classes that are relevant to the current subset of nodes. Formally,

Lemma 4.5.4. Consider a k-symmetric AGG-∅ Γ with player classes 1, …, k corresponding to sets of players N1, …, Nk and action sets A_1, …, A_k. Given X ⊂ A, P ⊆ X such that P ⊇ τ(X), and Q ⊇ P ∪ ν(P), let L(X,P) = {ℓ | 1 ≤ ℓ ≤ k, A_ℓ ⊄ (X∖P), A_ℓ ∩ (X∖P) ≠ ∅}, and let K(Q) = {ℓ | 1 ≤ ℓ ≤ k, A_ℓ ∩ Q ≠ ∅}. Consider the characteristic function ch⁺_{P,Q} that maps a partial solution (c_ℓ[X ∪ ν(X)])_{1≤ℓ≤k} to

((c_ℓ[Q])_{ℓ∈K(Q)}, (#c_ℓ[X∖P])_{ℓ∈L(X,P)}, (WCU_ℓ(c[X′], Γ′))_{ℓ∈L(X,P)}, (BEU_ℓ(c[X′], Γ′))_{ℓ∈L(X,P)}),

where Γ′ = Γ((#c_ℓ[X′])_{1≤ℓ≤k}, X′, (c_ℓ[ν(X′)])_{1≤ℓ≤k}) and X′ = X∖P. Then ch⁺_{P,Q} is equilibrium-preserving.

Lemma 4.4.9 can be similarly extended. The number of characteristics under ch⁺_{P,Q} is exponential in |Q|, |K(Q)| and |L(X,P)|. Intuitively, as we combine these characteristics to form characteristics on larger subgraphs, |L(X,P)| will also grow, unless we "finish off" certain player classes, i.e., player classes ℓ such that A_ℓ becomes a subset of X∖P. Can we divide the action graph and combine the restricted games in a way that keeps |Q|, |K(Q)| and |L(X,P)| small? A natural idea is to turn to tree decompositions of G, as we did in Section 4.4.4. However, Daskalakis et al. [2009] proved that the problem of determining the existence of PSNE is NP-hard even for AGGs with treewidth 1 and constant in-degree. In other words, we cannot hope for a polynomial-time algorithm for general AGGs with constant treewidth, unless P = NP. On the other hand, there exist classes of asymmetric AGGs that are poly-time solvable, e.g., those corresponding to tree graphical games. This implies that looking at the action graph alone is insufficient for identifying such tractable classes of AGGs.
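The relevance sets of Lemma 4.5.4 are cheap to compute directly from the action sets. A direct transcription as a sketch (all identifiers ours):

```python
def relevant_classes(action_sets, X, P, Q):
    """Compute L(X,P) and K(Q) from Lemma 4.5.4: class l is in L(X,P) if its
    action set A_l intersects X\\P but is not contained in it, and in K(Q)
    if A_l intersects Q."""
    interior = X - P
    L = {l for l, A in action_sets.items()
         if A & interior and not A <= interior}
    K = {l for l, A in action_sets.items() if A & Q}
    return L, K

# class 1 is "finished off" inside X\P; class 2 straddles the boundary
action_sets = {1: {'a', 'b'}, 2: {'b', 'c'}}
L, K = relevant_classes(action_sets, X={'a', 'b', 'c'}, P={'c'}, Q={'c', 'd'})
assert L == {2}  # A_1 ⊆ X\P, so class 1 drops out of L(X,P)
assert K == {2}  # only A_2 meets Q
```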
We have seen that information about the action sets of the AGG is needed in order to define ch⁺_{P,Q}. Thus a natural idea is to define an object that incorporates information about the action sets as well as the action graph of the AGG.

Definition 4.5.5. Given an AGG-∅ Γ with player classes 1, …, k, define the augmented action graph² to be the directed graph

AG = (V⁺, E⁺) = (A ∪ {1, …, k}, E ∪ {(ℓ, α) | ({α} ∪ ν(α)) ∩ A_ℓ ≠ ∅}).

Let 𝓘⁺ be the maximum in-degree of AG.

In other words, we add to the action graph G new vertices {1, …, k} corresponding to the player classes, and an edge from each player class ℓ to action α if α or any of its neighbors are in the action set of class ℓ. Intuitively, the edges from player-class nodes to action nodes in the augmented action graph ensure that in the resulting tree decomposition, the set of tree nodes to which a player class is relevant forms a connected subgraph of the tree. This is formalized in the following result for augmented action graphs, which is analogous to Lemma 4.4.11 for action graphs.

Lemma 4.5.6. Given a k-symmetric AGG-∅ Γ whose augmented action graph AG has treewidth w, there exists a tree decomposition ({Xi ∪ Ki | Xi ⊆ A, Ki ⊆ {1, …, k}, i ∈ I}, T = (I, F)) of AG's primal graph AG′ of width at most (w+1)(𝓘⁺+1)−1, and {Ri ⊆ A | i ∈ I} such that

1. ⋃_{i∈I} Ri = A, and Ri ∪ ν(Ri) ⊆ Xi for all i ∈ I;
2. let J ⊂ I be such that TJ is a connected graph and connects to the rest of the tree via only one edge {j, j′} ∈ F with j ∈ J, and let YJ = ⋃_{i∈J} Ri; then τ(YJ) ⊆ Rj, K(Xj) ⊆ Kj, and L(YJ, Rj) ⊆ Lj.

Proof. The construction is very similar to that of Lemma 4.4.11: given a tree decomposition ({Ri ∪ Li | Ri ⊆ A, Li ⊆ {1, …, k}, i ∈ I}, T = (I, F)) of AG, we build a tree decomposition of AG′ by adding to each tree node i ∈ I the neighboring vertices of Ri (vertices in Li have no incoming neighbors). Lemma 4.5 of [Daskalakis and Papadimitriou, 2006] ensures that the result is a tree decomposition of AG′ with width at most (w+1)(𝓘⁺+1)−1. The resulting tree decomposition ({Xi ∪ Ki | Xi ⊆ A, Ki ⊆ {1, …, k}, i ∈ I}, T = (I, F)) will have Xi = Ri ∪ ν(Ri) as in the proof of Lemma 4.4.11, and Ki = Li ∪ {ℓ | 1 ≤ ℓ ≤ k, A_ℓ ∩ (Ri ∪ ν(Ri)) ≠ ∅}. This implies K(Xi) ⊆ Ki for all i ∈ I.

By the same argument as in the proof of Lemma 4.4.11, we have τ(YJ) ⊆ Rj. It remains to show that L(YJ, Rj) ⊆ Lj. Consider an arbitrary ℓ ∈ L(YJ, Rj). This implies that A_ℓ ∩ (YJ∖Rj) ≠ ∅ and A_ℓ ⊄ (YJ∖Rj). So there exists α ∈ (YJ∖Rj) such that (ℓ, α) ∈ E⁺, and there exists α′ ∉ YJ∖Rj such that (ℓ, α′) ∈ E⁺. Since the tree nodes that contain α must be in J∖{j}, and by condition 2 of Definition 4.4.10 the edge (ℓ, α) must be contained in some tree node, we must have ℓ ∈ Li for some i ∈ J∖{j}. Similarly, since α′ ∉ YJ∖Rj, we must have ℓ ∈ L_{i′} for some i′ ∉ J∖{j}. But then by condition 3 of Definition 4.4.10 we must have ℓ ∈ Lj, and therefore L(YJ, Rj) ⊆ Lj.

² We note that our definition of augmented action graph is different from the augmented graph of Daskalakis et al. [2009]. The computational problem that Daskalakis et al. [2009] were trying to solve (finding approximate mixed-strategy Nash equilibria) is different from the PSNE problem considered in this chapter.
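Definition 4.5.5 translates directly into code. A sketch (identifiers ours; we take ν(α) to be the in-neighborhood of α, as elsewhere in the chapter):

```python
def augmented_action_graph(nu, action_sets):
    """Build AG = (A ∪ classes, E ∪ {(l, a) | ({a} ∪ nu(a)) ∩ A_l != {}})
    per Definition 4.5.5.  `nu[a]` is the (in-)neighborhood of action a and
    `action_sets[l]` is the action set A_l of player class l."""
    actions = set(nu)
    edges = {(u, v) for v in nu for u in nu[v]}   # original action-graph edges
    class_edges = {(l, a) for l, A in action_sets.items()
                   for a in actions if ({a} | nu[a]) & A}
    return actions | set(action_sets), edges | class_edges

# toy game: two actions; a's utility depends on b; class 1 owns only b
nodes, E = augmented_action_graph({'a': {'b'}, 'b': set()}, {1: {'b'}})
assert (1, 'b') in E   # b ∈ A_1
assert (1, 'a') in E   # b ∈ nu(a), so class 1 is linked to a as well
```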
Lemma 4.5.6 implies that we can apply the bottom-up pass algorithm using the characteristic function ch⁺_{Ri,Xi} for Z_{i,j}, and correctly determine the existence of PSNE. If a PSNE exists, then a top-down pass constructs one. Let us consider the running time of this approach. If we assume that AG has bounded in-degree and bounded treewidth, this immediately implies that |Xi| and |Ki| are bounded for all i ∈ I, and the number of characteristics is polynomial in n and |A|. This in turn implies that our algorithm runs in polynomial time in this case.

Proposition 4.5.7. For AGG-∅s whose augmented action graphs have bounded in-degree and bounded treewidth, the problems of determining the existence of PSNE and of finding a PSNE are in P.

One question is whether it is possible to show that the running time is polynomial in the input size when the augmented action graph has bounded treewidth, i.e., without any requirement on the in-degree. This turns out to be more difficult than in the symmetric case. Specifically, in order to prove such a result without any requirement on the in-degree, we would need to compare the running time with (a lower bound on) the input size. Whereas for symmetric AGGs we have exact estimates of the input size, for general AGGs we only proved upper bounds in Chapter 3. The complexity of PSNE for AGG-∅s with bounded-treewidth augmented action graphs remains an open problem.

One interesting case is when the input is an AGG-∅ encoding of a bounded-treewidth graphical game. Recall that the PSNE problem for such games is known to be tractable [Daskalakis and Papadimitriou, 2006, Gottlob et al., 2005]. We show that our algorithm runs in polynomial time given AGG-∅ encodings of such games, thus providing another proof of this result.

Proposition 4.5.8. Determining the existence of PSNE in bounded-treewidth graphical games is in P.

Proof.
Recall from Chapter 3 that the size of the AGG-∅ encoding is proportional to that of the graphical game, which is Θ(∑_{ℓ∈N} |A_ℓ| ∏_{j∈ν_g(ℓ)} |A_j|), where ν_g(ℓ) is the set of neighboring players of ℓ in the graphical game. The AGG has k = n player classes, each containing a single player. We denote by ℓ the player class corresponding to player ℓ ∈ N. Suppose the underlying graph (N, E_g) of the graphical game has treewidth w and maximum in-degree 𝓘_g. Then the corresponding action graph G = (A, E) is as given in Chapter 3, and the corresponding augmented action graph is AG = (A ∪ N, E ∪ {(ℓ, α) | ℓ ∈ N, α ∈ A_ℓ}). Given a tree decomposition ({Li | i ∈ I}, T = (I, F)) of the graph (N, E_g) with width w, it is straightforward to show that ({Ri ∪ Li | i ∈ I}, T), where Ri = ⋃_{ℓ∈Li} A_ℓ for all i ∈ I, is a tree decomposition of the augmented action graph AG. The width of this decomposition is O(max_{ℓ∈N} |A_ℓ| · w).

Construct the tree decomposition ({Xi ∪ Ki | i ∈ I}, T) for the primal graph AG′ according to Lemma 4.5.6. It is straightforward to verify that Ki = Li ∪ ν_g(Li) and Xi = ⋃_{ℓ∈Ki} A_ℓ. Therefore |Ki| = O(w·𝓘_g), |Xi| = O(max_{ℓ∈Ki} |A_ℓ| · w·𝓘_g), and the width of the decomposition is O(max_{ℓ∈N} |A_ℓ| · w·𝓘_g).

Now consider the number of characteristics under ch⁺_{Ri,Xi}. Since for each ℓ ∈ N and J ⊆ I we either have A_ℓ ⊆ YJ or A_ℓ ∩ YJ = ∅, this implies that L(YJ, Rj) = ∅ for all j ∈ I and J ⊆ I. Thus the only nontrivial component of the characteristic is (c_ℓ[Xi])_{ℓ∈Ki}. Since each ℓ ∈ Ki corresponds to a single player, |C_ℓ[Xi]| = |A_ℓ|. Thus the number of possible (c_ℓ[Xi])_{ℓ∈Ki} is ∏_{ℓ∈Ki} |A_ℓ|, which is polynomial in the input size since |Ki| = O(w·𝓘_g).
Thus the number of characteristics is polynomial in ||Γ||, which implies that our algorithm runs in polynomial time.

We see from the above proof that in this case the characteristic degenerates into (c_ℓ[Xi])_{ℓ∈Ki}, which carries the same amount of information as the partial pure-strategy profile of the players in Ki. This is exactly the sufficient statistic used by Daskalakis and Papadimitriou [2006]'s algorithm for graphical games, and as a result our algorithm simplifies to the equivalent of Daskalakis and Papadimitriou [2006]'s algorithm when given an AGG-∅ encoding of a graphical game.

We also note that our algorithms for symmetric and k-symmetric AGG-∅s can be seen as special cases of our augmented-action-graph-based algorithm. In particular, consider a k-symmetric AGG-∅ with action graph G, and suppose und(G) has a tree decomposition ({Ri | i ∈ I}, T = (I, F)). Then our algorithm for k-symmetric AGG-∅s corresponds to applying the augmented-action-graph-based algorithm to the tree decomposition ({Ri ∪ {1, …, k} | i ∈ I}, T) for AG′, i.e., having all k player classes in each of the tree nodes of the decomposition.

4.6 Conclusions and Open Problems

In this chapter we analyzed the problem of computing PSNE in AGGs. We proposed a dynamic programming algorithm and showed that for symmetric AGG-∅s with bounded treewidth, our algorithm determines the existence of PSNE in polynomial time. We extended our approach to certain classes of asymmetric AGG-∅s, and showed that our algorithm generalizes existing dynamic-programming approaches for computing PSNE in graphical games and singleton congestion games.

One question is whether our approach has captured all the tractable classes of AGG-∅s for the PSNE problem. The answer is no. For example, consider an asymmetric AGG-∅ whose action graph has no inter-vertex edges and only self-edges.
This is the same as the singleton congestion games studied by Ieong et al. [2005], except that here the game is not symmetric. It is straightforward to see that this game corresponds to a congestion game, and thus a PSNE always exists. Furthermore, by an argument similar to that of Ieong et al. [2005], given such a game a PSNE can be found by iterated best-response dynamics in polynomial time. On the other hand, the augmented graph of such an AGG-∅ might have large treewidth.

This example can be generalized: if the action graph contains a set $X$ of such singleton nodes, and the action sets that intersect $X$ do not contain any node outside $X$, then the subgraph of the singleton nodes does not affect the existence of PSNE, i.e., a PSNE exists in the game if and only if a PSNE exists in the restricted game on the rest of the graph. We can generalize even further: consider a subgraph $G_X$ such that (as above) the action sets that intersect $X$ do not contain any node outside $X$, and $X$ has only incoming edges from the rest of $G$ and no outgoing edges (i.e., $\nu(X) = \emptyset$); then $G_X$ does not affect the existence of PSNE, and we can safely delete the subgraph and solve the rest of the graph. This process can be repeated. (This is analogous to, and indeed a generalization of, the case of graphical games with sinks discussed in [Jiang and Safari, 2010].) Note that for these examples, a greedy approach is used instead of (or in addition to) the dynamic programming approach used in this chapter.

For the problem of existence of PSNE in graphical games, Jiang and Safari [2010] were able to completely characterize the tractable classes of bounded-in-degree graphs. An open problem is to completely characterize the types of restrictions on the graphical structure of AGG-∅s that make the PSNE problem tractable, perhaps by leveraging some of the techniques developed in [Jiang and Safari, 2010]. Another future direction is to extend our approach to AGG-FNs.
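The iterated-best-response observation for these asymmetric singleton congestion games can be illustrated concretely. The following is a toy sketch under our own assumptions (player-specific action sets and a cost equal to the congestion on the chosen resource), not the thesis's pseudocode; because such games are potential games, each improving move decreases Rosenthal's potential function, so the dynamics below always terminates at a PSNE.

```python
from collections import Counter

def best_response_psne(action_sets, cost):
    """Iterated best response; cost(resource, load) is the cost of using
    `resource` when `load` players (including oneself) have chosen it."""
    profile = {p: min(acts) for p, acts in action_sets.items()}  # arbitrary start
    while True:
        loads = Counter(profile.values())
        changed = False
        for p, acts in action_sets.items():
            cur = profile[p]
            # Cost of deviating to r: its load grows by one if r differs from cur.
            dev = lambda r: cost(r, loads[r] + (0 if r == cur else 1))
            best = min(acts, key=dev)
            if dev(best) < dev(cur):          # strict improvement only
                loads[cur] -= 1
                loads[best] += 1
                profile[p] = best
                changed = True
        if not changed:
            return profile

# Asymmetric instance: player 2 cannot use resource "a"; cost = congestion.
acts = {0: {"a", "b"}, 1: {"a", "b"}, 2: {"b", "c"}}
eq = best_response_psne(acts, lambda r, load: load)
loads = Counter(eq.values())
# No player can strictly decrease its cost by switching to another resource:
assert all(loads[eq[p]] <= loads[r] + 1 for p in acts for r in acts[p])
```

Each pass lets every player deviate to a strictly cheaper resource; the strict-improvement test guarantees termination.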
Recall that the configuration on a function node is the value of a deterministic function of the configuration of its neighbors. Thus given a symmetric AGG-FN, its PSNE correspond to configurations over its action nodes and function nodes such that the configuration over each function node is equal to the appropriate value, and the configuration over action nodes satisfies the incentive and consistency constraints as before. Assuming the deterministic functions for the function nodes are explicitly represented, it is then relatively straightforward to extend our dynamic-programming approach to work on the action graphs of symmetric AGG-FNs. An interesting question is whether this can be extended to efficiently deal with compactly represented function nodes such as summation function nodes. Finally, as we have seen in this chapter, one faces additional technical challenges when going beyond the symmetric case. It would be interesting to see if our approaches discussed in Section 4.5 can be extended to AGG-FNs.

Chapter 5

Temporal Action-Graph Games: A New Representation for Dynamic Games

5.1 Introduction

In this chapter we¹ turn our focus to compact representations of dynamic games. As mentioned in Section 2.1.2, the most influential compact representation for imperfect-information dynamic games is multiagent influence diagrams, or MAIDs [Koller and Milch, 2003]. MAIDs are compact when players' utility functions exhibit independencies; such compactness can also be leveraged for computational benefit (see Section 2.2.4). Consider the following example of a dynamic game.

Example 5.1.1. Twenty cars are approaching a tollbooth with three lanes. The drivers must decide which lane to use. The cars arrive in four waves of five cars each. In each wave, the drivers must pick lanes simultaneously, and can see the number of cars before them in each lane.
A driver's utility decreases with the number of cars that chose the same lane either before him or at the same time.

¹This chapter is based on published joint work with Kevin Leyton-Brown and Avi Pfeffer [Jiang et al., 2009].

A straightforward MAID representation of the game of Example 5.1.1 contains very little structure; in particular, each player has a utility node whose parents are all the decision nodes of the drivers before her. As the number of players grows, the representation size of the utility functions grows exponentially. Computation using such a representation would be highly inefficient. However, the game really is highly structured: agents' payoffs exhibit context-specific independence (utility depends only on the number of cars in the chosen lane) and anonymity (utility depends on the numbers of other agents taking given actions, not on those agents' identities). The problem with a straightforward MAID representation of this game is that it does not capture either of these kinds of payoff structure.

As we have seen in Chapter 2, a wider variety of compact game representations exist for simultaneous-move games. In particular, several of these game representations (including congestion games and local effect games) can compactly represent anonymity and context-specific independence (CSI) structures. We saw in Chapter 3 that AGGs unify these past representations by compactly representing both anonymity and CSI while still retaining the ability to represent any game. Furthermore, structure in AGGs can be leveraged for computational benefit. However, AGGs are unable to represent the game presented in Example 5.1.1 because they cannot describe sequential moves or imperfect information. In this chapter we present a new representational framework called Temporal Action-Graph Games (TAGGs) that allows us to capture this kind of structure.
Like AGGs, TAGGs can represent anonymity and CSI, but unlike AGGs they can also represent games with dynamics, imperfect information and uncertainty. We first define the representation of TAGGs, and then show formally how they define a game using an induced Bayesian network (BN). We demonstrate that TAGGs can represent any MAID, but can also represent situations that are hard to capture naturally as MAIDs. If the TAGG representation of a game contains anonymity or CSI, the induced BN will have special structure that can be exploited by inference algorithms. We present an algorithm for computing expected utility of TAGGs that exploits this structure. Our algorithm first transforms the induced BN into another BN that represents the structure more explicitly, then computes expected utility using a specialized inference algorithm on the transformed BN. We show that it performs better than using a MAID in which the structure is not represented explicitly, and better than using a standard BN inference algorithm on the transformed BN.

5.2 Representation

5.2.1 Temporal Action-Graph Games

At a high level, Temporal Action-Graph Games (TAGGs) extend the AGG representation by introducing the concepts of time, uncertainty and imperfect information, while adapting the AGG concepts of action nodes and action-specific utility functions to the dynamic setting. We first give an informal description of these concepts.

Temporal structure. A TAGG describes a dynamic game played over a series of time steps $1,\ldots,T$ on a set of action nodes $\mathcal{A}$. At each time step a version of a static AGG is played by a subset of agents on $\mathcal{A}$, and the action counts on the action nodes are accumulated.

Chance variables. TAGGs model uncertainty via chance variables. Like random variables in a BN, a chance variable is associated with a set of parents and a conditional probability table (CPT). The parents may be action nodes or other chance variables.
Each chance variable is associated with an instantiation time; once instantiated, its value stays the same for the rest of the game. Chance variables can be thought of as a generalization of the (deterministic) function nodes in AGG-FNs.

Decisions. At each time step one or more agents move simultaneously, represented by agent-specific decisions. TAGGs model imperfect information by allowing each agent to condition his decision on observed values of a given subset of decisions, chance variables, and the previous time step's action counts.

Action nodes. Each decision is a choice of one from a number of available action nodes. As in AGGs, the same action may be available to more than one player. Action nodes provide a time-dependent tally: the action count for each action $A$ at each time step $\tau$ is the number of times $A$ has been chosen during the time period $1,\ldots,\tau$.

Utility functions. There is a utility function $U^\tau_A$ associated with each action $A$ at each time $\tau$, which specifies the utility a player receives at time $\tau$ for having chosen action $A$. Each $U^\tau_A$ has a set of parents, which must be action nodes or chance variables. The utility of playing action $A$ depends only on what happens over these parents. An agent who took action $A$ (once) may receive utility at multiple times (e.g., short-term cost and long-term benefit); this is captured by associating a set of payoff times with each decision. An agent's overall utility is defined as the sum of the utilities received at all time steps.

Play of a TAGG can be summarized as follows:

1. At time 0, action counts are initialized to zero; chance variables with instantiation time 0 are instantiated.

2. At each time $\tau \in \{1,\ldots,T\}$:

(a) all agents with decisions at $\tau$ observe the appropriate action counts, chance variables, and decisions, if any.

(b) all decisions at $\tau$ are made simultaneously.

(c) action counts at $\tau$ are tallied.
(d) chance variables at time $\tau$ are instantiated.

(e) for each action $A$, utility function $U^\tau_A$ is evaluated, with this amount of utility accruing to every agent who took action $A$ at a decision whose payoff times include $\tau$; the result is not revealed to any of the players.²

3. At the end of the game, each agent receives the sum of all utility allocations throughout the game.

Intuitively, the process can be seen as a sequence of simultaneous-move AGGs played over time. At each time step $\tau$, the players that have a decision at time $\tau$ participate in a simultaneous-move AGG on the set of action nodes, whose action counts are initialized to be the counts at $\tau - 1$. Each action $A$'s utility function is $U^\tau_A$, and $A$'s neighbors in the action graph correspond to the parents of $U^\tau_A$.

²If an agent plays action $A$ for two decisions that have the same payoff time $\tau$, then the agent receives twice the value of $U^\tau_A$.

We observe that decisions and chance variables in TAGGs are similar to decision nodes and chance nodes (respectively) in MAIDs, except that here their parents can be time-dependent action counts. Hence the need to specify the time steps at which decisions and chance nodes in a TAGG are instantiated; once instantiated, their values stay fixed. We also observe that the time-dependent nature of action counts in TAGGs is similar to how dynamic Bayesian networks (DBNs) [Dean and Kanazawa, 1989, Murphy, 2002], a probabilistic graphical model of temporal domains, model their time-dependent random variables. Just as a DBN can be unrolled into a BN, we will see later that a TAGG can be unrolled into a MAID.

Before formally defining TAGGs, we first define the concept of a configuration at time $\tau$ over a set of action nodes, decisions and chance variables, which is intuitively an instantiation at time $\tau$ of a corresponding set of variables.

Definition 5.2.1.
Given a set of action nodes $\mathcal{A}$, a set of decisions $\mathcal{D}$, a set of chance variables $\mathcal{X}$, and a set $B \subseteq \mathcal{A} \cup \mathcal{X} \cup \mathcal{D}$, a configuration at time $\tau$ over $B$, denoted $C^\tau_B$, is a $|B|$-tuple of values, one for each node in $B$. For each node $b \in B$, the corresponding element in $C^\tau_B$, denoted $C^\tau(b)$, must satisfy the following:

• if $b \in \mathcal{A}$, $C^\tau(b)$ is an integer in $\{0,\ldots,|\mathcal{D}|\}$ specifying the action count on $b$ at $\tau$, i.e., the number of times action $b$ has been chosen during the time period $1,\ldots,\tau$.

• if $b \in \mathcal{D}$, $C^\tau(b)$ is an action in $\mathcal{A}$, specifying the action chosen at $b$.

• if $b \in \mathcal{X}$, $C^\tau(b)$ is a value from the domain of the random variable, $Dom[b]$.

Let $\mathcal{C}^\tau_B$ be the set of all configurations at $\tau$ over $B$. We now offer formal definitions of chance variables, decisions, and utility functions.

Definition 5.2.2. A chance variable $X$ is defined by:

1. a domain $Dom[X]$, which is a nonempty finite set;

2. a set of parents $Pa[X]$, which consists of chance variables and/or actions;

3. an instantiation time $t(X)$, which specifies the time at which the action counts in $Pa[X]$ are instantiated;

4. a CPT $\Pr(X \mid Pa[X])$, which specifies the conditional probability distribution of $X$ given each configuration $C^{t(X)}_{Pa[X]}$.

We require that each chance variable's instantiation time be no earlier than its parent chance variables' instantiation times, i.e., if chance variable $X' \in Pa[X]$, then $t(X') \leq t(X)$.

Definition 5.2.3. A decision $D$ is defined by:

1. the player making the decision, $pl(D)$. A player may make multiple decisions; the set of decisions belonging to a player $\ell$ is denoted by $Decs[\ell]$.

2. its decision time $t(D) \in \{1,\ldots,T\}$. Each player has at most one decision at each time step.

3. its action set $Dom[D]$, a nonempty set of actions.

4. the set of payoff times $pt(D) \subseteq \{1,\ldots,T\}$.
We assume that $\tau \geq t(D)$ for all $\tau \in pt(D)$.

5. its observation set $O[D]$: a set of decisions, actions, and chance variables, whose configuration at time $t(D)-1$ (i.e., $C^{t(D)-1}_{O[D]}$) is observed by $pl(D)$ prior to making the decision. We require that if decision $D'$ is an observation of $D$, then $t(D') < t(D)$. Furthermore, if chance variable $X$ is an observation of $D$, then $t(X) < t(D)$.

Definition 5.2.4. Each action $A$ at each time $\tau$ is associated with one utility function $U^\tau_A$. Each $U^\tau_A$ is associated with a set of parents $Pa[U^\tau_A]$, which is a set of actions and chance variables. We require that if chance variable $X \in Pa[U^\tau_A]$, then $t(X) \leq \tau$. Each utility function $U^\tau_A$ is a mapping from the set of configurations $\mathcal{C}^\tau_{Pa[U^\tau_A]}$ to a real value.

We can now formally define TAGGs.

Definition 5.2.5. A Temporal Action-Graph Game (TAGG) is a tuple $(N, T, \mathcal{A}, \mathcal{X}, \mathcal{D}, \mathcal{U})$, where:

1. $N = \{1,\ldots,n\}$ is a set of players.

2. $T$ is the duration of the game.

3. $\mathcal{A}$ is a set of actions.

4. $\mathcal{X}$ is a set of chance variables. Let $G$ be the induced directed graph over $\mathcal{X}$. We require that $G$ be a directed acyclic graph (DAG).

5. $\mathcal{D}$ is the set of decisions. We require that each decision $D$'s action set $Dom[D] \subseteq \mathcal{A}$.

6. $\mathcal{U} = \{U^\tau_A : A \in \mathcal{A}, 1 \leq \tau \leq T\}$ is the set of utility functions.

First, let us see how to represent Example 5.1.1 as a TAGG. The set $N$ corresponds to the cars. The duration $T = 4$. We have one action node for each lane. For each time $\tau$, we have five decisions, each belonging to a car that arrives at time $\tau$. The action set for each decision is the entire set $\mathcal{A}$. The payoff time for each decision is the time the decision is made, i.e., $pt(D) = \{t(D)\}$. Each decision has all actions as observations. For each $A$ and $\tau$, the utility $U^\tau_A$ has $A$ as its only parent.
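The construction just described can be written out concretely. Below is a hypothetical encoding of this TAGG as plain Python data; the field names (`player`, `payoff_times`, and so on) and the penalty-equal-to-load utility are our own illustrative choices, not notation defined in the thesis.

```python
# Toy encoding of the tollbooth TAGG of Example 5.1.1 as the tuple
# (N, T, A, X, D, U). Field names are illustrative assumptions.
T, WAVE = 4, 5
N = list(range(T * WAVE))                       # twenty cars
A = ["lane1", "lane2", "lane3"]                 # one action node per lane
X = []                                          # no chance variables
D = [dict(player=WAVE * (tau - 1) + i,          # car i arriving in wave tau
          time=tau,                             # t(D) = tau
          actions=list(A),                      # Dom[D] = A: any lane
          payoff_times=[tau],                   # pt(D) = {t(D)}
          observations=list(A))                 # sees all lane counts at tau - 1
     for tau in range(1, T + 1) for i in range(WAVE)]
# U[(a, tau)] maps the count on lane a at time tau to a utility;
# Pa[U_a^tau] = {a}, and utility decreases with the lane's load.
U = {(a, tau): (lambda count: -count) for a in A for tau in range(1, T + 1)}

assert len(D) == 20 and all(d["payoff_times"] == [d["time"]] for d in D)
```

The per-lane utility tables have at most $n$ entries each, matching the size analysis that follows.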
The representation size of each utility function is at most $n$; the size of the entire TAGG is $O(|\mathcal{A}| T n)$.

The TAGG representation is useful beyond compactly representing MAIDs. The representation can also be used to specify information structures that would be difficult to represent in a MAID. For example, we can represent games in which agents' abilities to observe the decisions made by previous agents depend on what actions those agents took.

Example 5.2.6. There are $2T$ ice cream vendors, each of which must choose a location along a beach. For every day from 1 to $T$, two of the vendors simultaneously set up their ice cream stands. Each vendor lives in one of the locations. When a vendor chooses an action, it knows the locations of vendors who set up stands in previous days in the location where it lives or in one of the neighboring locations. The payoff to a vendor on a given day depends on how many vendors set up stands in the same location or in a neighboring location.

Example 5.2.6 can be represented as a TAGG, the key elements of which are as follows. There is an action $A$ for each location. Each player $j$ has one decision $D_j$, whose observations include the actions for the location $j$ lives in and the neighboring locations. The payoff time for each decision is $T$, and the utility function $U^T_A$ has $A$ and its neighboring locations as parents.

Let us consider the size of a TAGG. It follows from Definition 5.2.5 that the space bottlenecks of the representation are the CPTs $\Pr(X \mid Pa[X])$ and the utility functions $U^\tau_A$, which have polynomial sizes when the numbers of their parents are bounded by a constant.

Lemma 5.2.7. Given a TAGG $(N, T, \mathcal{A}, \mathcal{X}, \mathcal{D}, \mathcal{U})$, if $\max_{X\in\mathcal{X}} |Pa[X]|$ and $\max_{U\in\mathcal{U}} |Pa[U]|$ are bounded by a constant, then the size of the TAGG is bounded by a polynomial in $\max_{X\in\mathcal{X}} |Dom[X]|$, $|\mathcal{X}|$, $|\mathcal{D}|$, $|\mathcal{U}|$, and $T$.

5.2.2 Strategies

In Section 2.2.4 we introduced the standard concepts of pure, mixed and behavior strategies in dynamic games.
We now apply these concepts to the case of TAGGs. We start with pure strategies, where at each decision $D$, an action is chosen deterministically as a function of the observed information, i.e., the configuration $C^{t(D)-1}_{O[D]}$. A mixed strategy of a player $i$ is a probability distribution over pure strategies of $i$. Recall that since there can be an exponential number of pure strategies in a dynamic game, a mixed strategy is generally an exponential-sized object. We thus restrict our attention to behavior strategies, in which the action choices at different decisions are randomized independently.

Definition 5.2.8. A behavior strategy at decision $D$ is a function $\sigma_D : \mathcal{C}^{t(D)-1}_{O[D]} \to \varphi(Dom[D])$, where $\varphi(Dom[D])$ is the set of probability distributions over $Dom[D]$. A behavior strategy for player $i$, denoted $\sigma_i$, is a tuple consisting of a behavior strategy for each of her decisions. A behavior strategy profile $\sigma = (\sigma_1,\ldots,\sigma_n)$ consists of a behavior strategy $\sigma_i$ for all $i$.

An agent has perfect recall when she never forgets her action choices and observations at earlier decisions. The TAGG representation does not enforce perfect recall; TAGGs can represent perfect-recall games as well as non-perfect-recall games. A technical issue in representing perfect-recall games as TAGGs is the following: in order to preserve the perfect-recall property of the resulting TAGG, each decision $D$ of player $i$ should observe all of $i$'s earlier decisions and observations. However, recall that if an action $A$ is in the observation set of one of $i$'s earlier decisions at time $t' < t(D)$, it means that the action count at time $t'-1$ was observed. Directly including $A$ in $O[D]$ would instead imply that the action count of $A$ at time $t(D)-1$ is observed by $D$, in which case the information structure of the TAGG is different from that of the original game, and is thus not a faithful representation.
Instead, we model the situation by creating a deterministic chance variable $X^{t'-1}_A$ with instantiation time $t'-1$; its only parent is $A$, and its value is the action count of $A$ at time $t'-1$. We then include $X^{t'-1}_A$ in $O[D]$. It is straightforward to see that $X^{t'-1}_A$ carries information equivalent to observing the action count of $A$ at time $t'-1$, and the resulting TAGG provides a correct representation of the perfect-recall game.

5.2.3 Expected Utility

Now we use the language of Bayesian networks to formally define an agent's expected utility in a TAGG given a behavior strategy profile $\sigma$. Specifically, we define an induced BN that formally describes how the TAGG is played out. Given a behavior strategy profile, decisions, chance variables and utilities can naturally be understood as random variables. On the other hand, action counts are time-dependent. Thus, we have a separate action count variable for each action at each time step.

Definition 5.2.9. Let $A \in \mathcal{A}$ be an action and $\tau \in \{1,\ldots,T\}$ be a time point. $A^\tau$ denotes the action count variable representing the number of times $A$ was chosen from time 1 to time $\tau$. Let $A^0$ be the variable which is 0 with probability 1.

We would like to define expected utility for each player, which is the sum of the expected utilities of the player's decisions. On the other hand, the utility functions in TAGGs are action-specific. To bridge the gap, we create new decision-payoff variables in the induced BN that represent the utilities of decisions received at each of their payoff time points.

Definition 5.2.10.
Given a TAGG and a behavior strategy profile $\sigma$, the induced BN is defined over the following variables: for each decision $D \in \mathcal{D}$ there is a behavior strategy variable, which by abuse of notation we shall also denote by $D$; for each chance variable $X \in \mathcal{X}$ there is a variable which we shall also denote by $X$; there is a variable $A^\tau$ for each action $A \in \mathcal{A}$ and time step $\tau \in \{1,\ldots,T\}$; for each utility function $U^\tau_A$ for actions $A \in \mathcal{A}$ and time points $\tau \in \{1,\ldots,T\}$, there is a utility variable also denoted by $U^\tau_A$; and for each decision $D$ and each time $\tau \in pt(D)$, there is a decision-payoff variable $u^\tau_D$.

We define the actual parents of each variable $V$, denoted $APa[V]$, as follows. The actual parents of a behavior strategy variable $D$ are the variables corresponding to $O[D]$, with each action $A_k \in O[D]$ replaced by $A^{t(D)-1}_k$. The actual parents of an action count variable $A^\tau$ are all behavior strategy variables $D$ whose decision time $t(D) \leq \tau$ and $A \in Dom[D]$. The actual parents of a chance variable $X$ are the variables corresponding to $Pa[X]$, with each action $A_k \in Pa[X]$ replaced by $A^{t(X)}_k$. The actual parents of a utility variable $U^\tau_A$ are the variables corresponding to $Pa[U^\tau_A]$, with each action $A_k \in Pa[U^\tau_A]$ replaced by $A^\tau_k$. The actual parents of a decision-payoff variable $u^\tau_D$ are $D$ and the utility variables $U^\tau_{A_1},\ldots,U^\tau_{A_\ell}$, where $\{A_1,\ldots,A_\ell\} = Dom[D]$.

The CPDs of chance variables are the CPDs of the corresponding chance variables in the TAGG. The CPD of each behavior strategy variable $D$ is the behavior strategy $\sigma_D$. The CPD of each utility variable $U^\tau_A$ is a deterministic function defined by the corresponding utility function $U^\tau_A$. The CPD of each action count variable $A^\tau$ is a deterministic function that counts the number of decisions in $APa[A^\tau]$ that are assigned value $A$. The CPD of each decision-payoff variable $u^\tau_D$ is a multiplexer, i.e.,
a deterministic function that selects the value of its utility variable parent according to the choice of its decision parent. For example, if the value of $D$ is $A_k$, then the value of $u^\tau_D$ is the value of $U^\tau_{A_k}$.

Theorem 5.2.11. Given a TAGG, let $F$ be the directed graph over the variables of the induced BN in which there is an edge from $V_1$ to $V_2$ iff $V_1$ is an actual parent of $V_2$. Then $F$ is acyclic.

This follows from the definition of TAGGs and the way we set up the actual parents in Definition 5.2.10. By Theorem 5.2.11, the induced BN defines a joint probability distribution over its variables, which we denote by $P_\sigma$. Given $\sigma$, denote by $E_\sigma[V]$ the expected value of variable $V$ in the induced BN. We are now ready to define the expected utility to players under behavior strategy profiles.

Figure 5.1: Induced BN of the TAGG of Example 5.1.1, with 2 time steps, 3 lanes, and 3 players per time step. Squares represent behavior strategy variables, circles represent action count variables, diamonds represent utility variables, and shaded diamonds represent decision-payoff variables. To avoid cluttering the graph, we only show utility variables at time step 2 and a decision-payoff variable for one of the decisions.

Definition 5.2.12. The expected utility to player $\ell$ under behavior strategy profile $\sigma$ is $EU_\sigma(\ell) = \sum_{D\in Decs[\ell]} \sum_{\tau\in pt(D)} E_\sigma[u^\tau_D]$.

Figure 5.1 shows an induced BN of a TAGG based on Example 5.1.1 with six cars and three lanes. Note that although we use squares to represent behavior strategy variables, they are random variables and not actual decisions as in influence diagrams.

5.2.4 The Induced MAID of a TAGG

Given a TAGG we can construct a MAID that describes the same game. We use a construction similar to that of the induced Bayesian network, but with two differences.
First, instead of behavior strategy variables with CPDs assigned by $\sigma$, we have decision nodes in the MAID. Second, each decision-payoff variable $u^\tau_D$ becomes a utility node for player $pl(D)$ in the MAID. The resulting MAID describes the same game as the TAGG, because it offers agents the same strategies and their expected utilities are defined by the same BN. We call this the induced MAID of the TAGG.

5.2.5 Expressiveness

It is natural to ask about the expressiveness of TAGGs: what games can we represent? It turns out that TAGGs are able to compactly represent all MAIDs.

Lemma 5.2.13. Any MAID can be represented as a TAGG with the same space complexity.

Proof. Recall that a MAID consists of a set of decisions, a set of chance nodes and a set of utility nodes. Given a MAID, we construct a TAGG in the following way:

• For each decision $D'$ of the MAID and each value $d' \in Dom[D']$, create a unique action $A_{d'}$ in the TAGG.

• Decisions and chance nodes of the MAID can be directly copied over to the TAGG.

• Utility nodes in MAIDs are player-specific: each utility node is associated with some player. Utility nodes in TAGGs are action-specific. We can encode MAID utility nodes as TAGG utility nodes as follows: given a MAID utility node $U'$ associated with player $j$, create a dummy decision $D_{U'}$ belonging to player $j$, whose action set contains exactly one action $A_{U'}$. We then encode the utility function for $U'$ in the MAID as the utility associated with action $A_{U'}$ in the TAGG.

• One difference between MAIDs and TAGGs is that in MAIDs decisions can be parents of chance and utility nodes, whereas in TAGGs only chance variables and actions can be parents of chance and utility nodes. Nevertheless, MAID chance nodes and utility nodes can be encoded in TAGGs by replacing each decision parent $D'$ by the corresponding set of actions in $Dom[D']$.
• Decisions and chance nodes of MAIDs are not associated with time points. Nevertheless, since the MAID is a directed acyclic graph, we can assign decision times to decisions and instantiation times to chance variables that are consistent with the topological order of the MAID. The payoff times of each decision are assigned to be the singleton $\{T\}$, i.e., at the end of the game.

As a result, TAGGs can represent any extensive-form game representable as a MAID. These include all perfect-recall games, and the subclass of imperfect-recall games in which no information set spans multiple time steps.

Now consider the converse problem of reducing TAGGs to MAIDs. In this case, since the induced MAID of a TAGG is payoff equivalent to the TAGG, it trivially follows that any TAGG can be represented by a MAID. However, the induced MAID has a large in-degree, and can thus be exponentially larger than the TAGG. For example, in the games of Examples 5.1.1 and 5.2.6, the induced MAIDs have maximum in-degrees equal to the number of decisions, which implies that the sizes of the MAIDs grow exponentially with the number of decisions, whereas the sizes of the TAGGs for the same games grow linearly in the number of decisions. This is not surprising, since TAGGs can exploit more kinds of structure in the game (CSI, anonymity) than a straightforward MAID representation can. In Section 5.3.1 we show that the induced MAID can be transformed into a MAID that explicitly represents the underlying structure. The size of the transformed MAID is polynomial in the size of the TAGG.

The TAGG representation is also a true generalization of AGGs, since any AGG-∅ can be straightforwardly represented as a TAGG with $T = 1$. Function nodes in AGG-FNs and AGG-FNAs can be modeled as chance nodes with a deterministic CPT; thus AGG-FNs and AGG-FNAs can also be represented as TAGGs with $T = 1$.
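To make Definition 5.2.12 concrete before turning to algorithms: for a small game, expected utility can be computed by brute-force enumeration of the induced BN's pure outcomes. The sketch below (our own illustrative code, with one time step, no chance variables, two drivers and two lanes) does exactly this; its cost is exponential in the number of decisions, which motivates the structured approach of the next section.

```python
from itertools import product

lanes = ["L1", "L2"]
# Behavior strategies as distributions over lanes (one decision per driver):
sigma = {0: {"L1": 0.5, "L2": 0.5}, 1: {"L1": 1.0, "L2": 0.0}}
U = lambda lane, count: -count            # U_A^1: disutility equals lane load

def expected_utility(player):
    """Sum over all joint pure outcomes, weighted by their probability."""
    eu = 0.0
    for outcome in product(lanes, repeat=len(sigma)):
        prob = 1.0
        for d, a in enumerate(outcome):   # independent decision randomizations
            prob *= sigma[d][a]
        load = outcome.count(outcome[player])  # action count on the chosen lane
        eu += prob * U(outcome[player], load)
    return eu

# Driver 1 always picks L1; driver 0 mixes, sharing L1 half the time:
# EU(1) = 0.5 * (-2) + 0.5 * (-1) = -1.5.
assert abs(expected_utility(1) - (-1.5)) < 1e-9
```

The loop over `product(lanes, repeat=...)` visits $|\mathcal{A}|^{|\mathcal{D}|}$ outcomes, so this only scales to toy instances.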
5.3 Computing Expected Utility

In this section, we consider the task of computing the expected utility $EU_\sigma(j)$ of a player $j$ given a mixed strategy profile $\sigma$. As mentioned in Section 2.2.4, computation of EU is an essential step in many game-theoretic computations for dynamic games, such as finding a best response given the other players' strategy profile, checking whether a strategy profile is a Nash equilibrium, and heuristic algorithms such as fictitious play and iterated best response. In Section 5.4 we discuss extending the methods of this section to a subtask in the Govindan-Wilson algorithm for computing Nash equilibria.

One benefit of formally defining EU in terms of BNs is that the problem of computing EU can now be naturally cast as a BN inference problem. (In Chapter 3 we discussed such a reduction in the context of AGGs.) By Definition 5.2.12, $EU_\sigma(j)$ is the sum of a polynomial number of terms of the form $E_\sigma[u^\tau_D]$. We thus focus on computing one such $E_\sigma[u^\tau_D]$. This can be computed by applying a standard BN inference algorithm to the induced BN. In fact, BN inference is the standard approach for computing expected utility in MAIDs [Koller and Milch, 2003]. Thus the above approach for TAGGs is computationally equivalent to the standard approach for a natural MAID representation of the same game. In this section, we show that the induced BNs of TAGGs have special structure that can be exploited to speed up computation, and present an algorithm that exploits this structure.

5.3.1 Exploiting Causal Independence

The standard BN inference approach for computing EU does not take advantage of some kinds of TAGG structure. In particular, recall that in the induced network, each action count variable $A^\tau$'s parents are all previous decisions that have $A$ in their action sets, implying large in-degrees for action count variables.
Considering, for example, the clique-tree algorithm, this means large clique sizes, which is problematic because running time scales exponentially in the largest clique size of the clique tree. However, the CPDs of these action count variables are structured counting functions. Such structure is an instance of causal independence in BNs [Heckerman and Breese, 1996]. It also corresponds to the anonymity structure of static game representations like symmetric games and AGGs. We can exploit this structure to speed up computation of expected utility in TAGGs.

Our approach is a specialization of Heckerman and Breese's [1996] method for exploiting causal independence in BNs. At a high level, Heckerman and Breese's method transforms the original BN by creating new nodes that represent intermediate results, and re-wiring some of the arcs, resulting in an equivalent BN with small in-degree. They then apply conventional inference algorithms to the new BN. For example, given an action count variable $A^\tau_k$ with parents $\{D_1,\ldots,D_\ell\}$, create a node $M_i$ for each $i \in \{1,\ldots,\ell-1\}$, representing the count induced by $D_1,\ldots,D_i$. Then, instead of having $D_1,\ldots,D_\ell$ as parents of $A^\tau_k$, its parents become $D_\ell$ and $M_{\ell-1}$, and each $M_i$'s parents are $D_i$ and $M_{i-1}$. The resulting graph has in-degree at most 2 for $A^\tau_k$ and the $M_i$'s.

Figure 5.2: The transformed BN of the tollbooth game from Figure 5.1 with 3 lanes and 3 cars per time step.

In our induced BN, the action count variables $A^t_k$ at earlier time steps $t < \tau$ already represent some of these intermediate counts, so we do not need to duplicate them. Formally, we modify the original BN in the following way: for each action count variable $A^\tau_k$, first remove the edges from its current parents.
Instead, A^τ_k now has two parents: the action count variable A^{τ−1}_k and a new node M^τ_{A_k} representing the contribution of decisions at time τ to the count of A_k. If more than one decision at time τ has A_k in its action set, we create intermediate variables as in Heckerman and Breese's method. We call the resulting BN the transformed BN of the TAGG. Figure 5.2 shows the transformed BN of the tollbooth game whose induced BN was given in Figure 5.1. We can then use standard algorithms to compute the probabilities P(u^{t′}_D) on the transformed BN. For classes of BNs with bounded treewidth, these probabilities (and thus E[u^{t′}_D]) can be computed in polynomial time.

5.3.2 Exploiting Temporal Structure

In practice, standard inference approaches use heuristics to find an elimination ordering, which might not be optimal for our BNs. We present an algorithm based on the idea of eliminating variables in temporal order. For the rest of the section, we fix D and a time t′ ∈ pt(D) and consider the computation of E_σ[u^{t′}_D].

We first group the variables of the induced network by time step: variables at time τ include decisions at τ, action count variables A^τ, chance variables X with instantiation time τ, intermediate nodes between decisions and action counts at τ, and utility variables U^τ_A. As we are only concerned with E_σ[u^{t′}_D] for a t′ ∈ pt(D), we can safely discard the variables after time t′, as well as utility variables before t′. It is straightforward to verify that the actual parents of variables at time τ are either at τ or before τ. We say a network satisfies the Markov property if the actual parents of variables at time τ are either at τ or at τ−1. Parts of the induced BN (e.g.
the action count variables) already satisfy the Markov property, but in general the network does not satisfy the property. Exceptions include chance variable parents and decision parents from more than one time step ago. Given an induced BN, we can transform it into an equivalent network satisfying the Markov property. If a variable V_1 at t_1 is a parent of variable V_2 at t_2, with t_2 − t_1 > 1, then for each t_1 < τ < t_2 we create a dummy variable V^τ_1 belonging to time τ, so that the value of V_1 is copied forward to V^{t_2−1}_1. We then delete the edge from V_1 to V_2 and add an edge from V^{t_2−1}_1 to V_2.

The Markov property is computationally desirable because variables at time τ d-separate past variables from future variables. A straightforward approach to exploiting the Markov property is the following: as τ goes from 1 to t′, compute the joint distribution over variables at τ using the joint distribution over variables at τ−1. In fact, we can do better by adapting the interface algorithm [Darwiche, 2001] for dynamic Bayesian networks to our setting.³ Define the interface I^τ to be the set of variables at time τ that have children at time τ+1. I^τ d-separates past from future, where the past consists of all variables before τ together with the non-interface variables at τ, and the future consists of all variables after τ. In an induced BN, I^τ consists of: action count variables at time τ; chance variables X at time τ that have children in the future; decisions at τ that are observed by future decisions; the decision D, which is a parent of u^{t′}_D; and the dummy variables created by the transform.

We define the set of effective variables at time τ, denoted by V^τ, as the subset

³Whereas in DBNs the set of variables for each time step remains the same, in our setting this is no longer the case. It turns out that the interface algorithm can be adapted to work on our transformed BNs.
Also, the transformed BNs of TAGGs have more structure than DBNs, particularly within the same time step, which we exploit for further computational speedup.

of I^τ that are ancestors of u^{t′}_D. For time t′, we let V^{t′} = {u^{t′}_D}. Intuitively, at each time step τ we only need to keep track of the distribution P(V^τ), which acts as a sufficient statistic as we go forward in time. For each τ, we calculate P(V^τ) by conditioning on instantiations of V^{τ−1}. The interface algorithm for TAGGs can be summarized as follows:

1. Compute the distribution P(V^0).
2. For τ = 1 to t′:
   (a) for each instantiation v^{τ−1}_j of V^{τ−1}, compute the distribution over V^τ: P(V^τ | V^{τ−1} = v^{τ−1}_j);
   (b) P(V^τ) = ∑_v P(V^τ | V^{τ−1} = v) P(V^{τ−1} = v).
3. Since V^{t′} = {u^{t′}_D}, we now have P(u^{t′}_D).
4. Return the expected value E[u^{t′}_D].

We can further improve on this, in particular on the subtask of computing P(V^τ | V^{τ−1}). We observe that there is also a temporal order among the variables within each time step τ: first the decisions and intermediate variables, then action count variables, and finally chance variables. Partition V^τ into four subsets: action count variables A^τ, chance variables X^τ, behavior strategy variables D^τ, and dummy copy variables C^τ. Then P(V^τ | V^{τ−1}) can be factored into

P(C^τ | V^{τ−1}) P(D^τ, A^τ | V^{τ−1}) P(X^τ | A^τ, V^{τ−1}).

This allows us to first focus on the decisions and action count variables to compute P(D^τ, A^τ | V^{τ−1}), and then carry out inference on the chance variables. Calculating P(D^τ, A^τ | V^{τ−1}) involves eliminating all behavior strategy variables not in D^τ as well as the intermediate variables.
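The forward recursion in steps 1 and 2 above can be sketched generically, with each distribution over instantiations of the effective variables stored as a dictionary. The transition functions below stand in for the model-specific conditionals P(V^τ | V^{τ−1}) and are an assumption for illustration, not the thesis's implementation.

```python
def interface_forward(p_v0, transitions):
    """Forward pass of the interface algorithm (steps 1-2 above).

    p_v0: dict mapping instantiations of V^0 to their probabilities.
    transitions: list whose tau-th entry is a function that, given an
      instantiation v of the previous interface, returns the dict
      P(V^tau | V^{tau-1} = v).
    Returns P(V^{t'}).  Only the current joint over the (small) set of
    effective variables is ever kept, never the full joint.
    """
    dist = dict(p_v0)
    for trans in transitions:
        nxt = {}
        for v, mass in dist.items():        # condition on V^{tau-1} = v
            for v2, p in trans(v).items():  # weigh by P(V^tau | v)
                nxt[v2] = nxt.get(v2, 0.0) + mass * p
        dist = nxt
    return dist
```

The cost per step is the size of the table over V^{τ−1} times the cost of one conditional query, matching the bottleneck identified below.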
Note that conditioned on V^{τ−1}, all decisions at time τ are independent. This allows us to efficiently eliminate variables along the chains of intermediate variables. Let the decisions at time τ be {D^τ_1, ..., D^τ_ℓ}. Let M^τ be the set of intermediate variables corresponding to action count variables in A^τ. Let M^τ_k be the subset of M^τ that summarizes the contribution of D^τ_1, ..., D^τ_k. We eliminate variables in the order D^τ_1, D^τ_2, M^τ_2, D^τ_3, M^τ_3, ..., M^τ_ℓ, except for the decisions in D^τ. The tables in the variable elimination algorithm need to keep track of at most |D^τ| + |A^τ| variables. Thus the complexity of computing P(D^τ, A^τ | V^{τ−1}) for an instantiation of V^{τ−1} is exponential only in |D^τ| + |A^τ|.

Computing P(X^τ | A^τ, V^{τ−1}) for each instantiation of A^τ, V^{τ−1} involves eliminating the chance variables not in X^τ. Any standard inference algorithm can be applied here. The complexity is exponential in the treewidth of the induced BN restricted to the chance variables at time τ, which we denote by G^τ.

Putting everything together, the bottleneck of our algorithm is constructing the tables for the joint distributions on V^τ, as well as doing inference on G^τ.

Theorem 5.3.1. Given a TAGG and a behavior strategy profile σ, if for all τ both |V^τ| and the treewidth of G^τ are bounded by a constant, then for any player j the expected utility EU_σ[j] can be computed in time polynomial in the size of the TAGG representation and the size of σ.

Our algorithm is especially effective for induced networks that are close to having the Markov property, in which case we only add a small number of dummy copy variables to V^τ.
If only a constant number of dummy copy variables are added, the time complexity of computing expected utility grows linearly in the duration of the game. On the other hand, for induced networks far from having the Markov property, |V^τ| can grow linearly as τ increases, implying time complexity exponential in the duration.

5.3.3 Exploiting Context-Specific Independence

TAGGs have action-specific utility functions, which allows them to express context-specific payoff independence: which utility function is used depends on which action is chosen at the decision. This translates into context-specific independence structure in the induced BN, specifically in the CPD of u^τ_D: conditioned on the value of D, u^τ_D depends on only one of its utility variable parents. There are several ways of exploiting such structure computationally, including conditioning on the value of the decision D [Boutilier et al., 1996] and exploiting the context-specific independence in a variable elimination algorithm [Poole and Zhang, 2003]. One particularly simple approach that works for multiplexer utility nodes is to decompose the utility into a sum of utilities [Pfeffer, 2000]. For each utility node parent U^t_k of u^t_D, there is a utility function u^t_{D,k} that depends on U^t_k and D. If D = k, then u^t_{D,k} is equal to U^t_k; otherwise, u^t_{D,k} is 0. It is easy to see that u^t_D(U^t_1, ..., U^t_m, D) = ∑_{k=1}^m u^t_{D,k}(U^t_k, D). We can then modify our algorithm to compute each E[u^t_{D,k}] instead of E[u^t_D]. This results in a reduction in the set of effective variables V^τ_k, which are now the variables at τ that are ancestors of u^t_{D,k}. Furthermore, whenever V^τ_k = V^τ_{k′} for some k, k′, the distributions over them are identical and thus can be reused.

For static games represented as TAGGs with T = 1, our algorithm is equivalent to the polynomial-time expected utility algorithm for AGGs described in Chapter 3.
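Pfeffer's multiplexer decomposition described above is easy to check on concrete numbers. A small sketch with hypothetical helper names, verifying that the multiplexer utility equals the sum of its per-action summands for every instantiation, which is what licenses computing each E[u^t_{D,k}] separately by linearity of expectation:

```python
def multiplexer(utility_parents, d):
    """u_D: the utility of the action actually chosen at decision D."""
    return utility_parents[d]

def summand(u_k, d, k):
    """u_{D,k}: equals U_k when D = k, and 0 otherwise."""
    return u_k if d == k else 0.0

def decomposition_holds(utility_parents, d):
    """Check u_D(U_1, ..., U_m, D) == sum_k u_{D,k}(U_k, D)."""
    total = sum(summand(u_k, d, k)
                for k, u_k in enumerate(utility_parents))
    return total == multiplexer(utility_parents, d)
```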
Applying our algorithm to the tollbooth games of Example 5.1.1 and the ice cream games of Example 5.2.6, we observe that in both cases V^τ consists of a subset of the action count variables at τ plus the decision whose utility we are computing. Therefore the expected utilities of these games can be computed in polynomial time if |A| is bounded by a constant.

5.4 Computing Nash Equilibria

Since the induced MAID of a TAGG is payoff equivalent to the TAGG, algorithms for computing the Nash equilibria of MAIDs [Blum et al., 2006, Koller and Milch, 2003, Milch and Koller, 2008] can be applied directly to an induced MAID to find Nash equilibria of a TAGG. However, this approach does not exploit all TAGG structure. We can do better by constructing a transformed MAID, in a manner similar to the transformed BN, exploiting causal independence and CSI as in Sections 5.3.1 and 5.3.3. We can do better yet and exploit the temporal structure as described in Section 5.3.2, if we use a solution algorithm that requires computation of probabilities and expected utilities.

Govindan and Wilson [2002] presented an algorithm for computing equilibria in perfect-recall extensive-form games. Blum, Shelton and Koller [2006] adapted this algorithm to MAIDs. A key step in the algorithm is, for each pair of players i and j and each of i's utility nodes, computing the marginal distribution over i's decisions and their parents, j's decisions and their parents, and the utility node. Our algorithm in Section 5.3.2 can be straightforwardly adapted to compute this distribution. This approach is efficient if each player has only a small number of decisions, as in the games in Examples 5.1.1 and 5.2.6.
Figure 5.3: Running times for expected utility computation. Triangle data points represent Approach 1 (induced BN), diamonds represent Approach 2 (transformed BN), and squares represent Approach 3 (our proposed algorithm). Left: CPU time (seconds) versus T, the duration of the TAGG; middle: CPU time versus cars per time step; right: CPU time versus T.

However, we did not implement these algorithms for TAGGs, because of a lack of publicly-available implementations of these algorithms. In particular, whereas Gametracer [Blum et al., 2002] provided an implementation of Govindan and Wilson's [2003] global Newton method for normal-form games, it did not provide an implementation of Govindan and Wilson's [2002] algorithm for extensive-form games.

5.5 Experiments

We have implemented our algorithm for computing expected utility in TAGGs, and ran experiments on the efficiency and scalability of our algorithm. We compared three approaches for computing expected utility given a TAGG:

Approach 1: applying the standard clique tree algorithm (as implemented by the Bayes Net Toolbox [Murphy, 2007]) to the induced BN;
Approach 2: applying the same clique tree algorithm to the transformed BN;
Approach 3: our proposed algorithm from Section 5.3.

All approaches were implemented in MATLAB. All our experiments were performed on a computer cluster consisting of machines with dual Intel Xeon 3.2GHz CPUs, 2MB cache and 2GB RAM. We ran experiments on tollbooth game instances of varying sizes. For each game instance we measured the CPU times for computing expected utility of 100 random behavior strategy profiles.
Figure 5.3 (left) shows the results in log scale for tollbooth games with 3 lanes and 5 cars per time step, with the duration varying from 1 to 15. Approach 1 ran out of memory for games with more than 1 time step. Approach 2 was more scalable, but ran out of memory for games with more than 5 time steps. Approach 3 was the most scalable: on smaller instances it was faster than the other two approaches by an order of magnitude, and it did not run out of memory as we increased the size of the TAGGs to at least 20 time steps. For the tollbooth game with 14 time steps it took 1279 seconds, which is approximately the time Approach 2 took for the game instance with 5 time steps.

Figure 5.3 (middle) shows the results in log scale for tollbooth games with 3 time steps and 3 lanes, varying the number of cars per time step from 1 to 20. Approach 1 ran out of memory for games with more than 3 cars per time step; Approach 2 ran out of memory for games with more than 6 cars per time step; and again Approach 3 was the most scalable.

We also ran experiments on the ice cream games of Example 5.2.6. Figure 5.3 (right) shows the results in log scale for ice cream games with 4 locations, two vendors per time step, and durations varying from 1 to 15. The home locations for each vendor were generated randomly. Approaches 1 and 2 ran out of memory for games with more than 3 and 4 time steps, respectively. Approach 3 finished for games with 15 time steps in about the same time as Approach 2 took for games with 4 time steps.

5.6 Conclusions

TAGGs are a novel graphical representation of imperfect-information extensive-form games. They are an extension of simultaneous-move AGGs to the dynamic setting, and can be thought of as a sequence of AGGs played over T time steps, with action counts accumulating as time progresses. This process can be formally described by the induced BN.
For situations with anonymity or CSI structure, the TAGG representation can be exponentially more compact than a direct MAID representation. We presented an algorithm for computing expected utility for TAGGs that exploits its anonymity, CSI, and temporal structure. We showed both theoretically and empirically that our approach is significantly more efficient than the standard approach applied to a direct MAID representation of the same game.

Another interesting solution concept is extensive-form correlated equilibrium (EFCE) [von Stengel and Forges, 2008]. EFCE was defined for perfect-recall extensive-form games, but the concept can be applied to other representations of perfect-recall dynamic games. One interesting direction is to adapt Huang and von Stengel's [2008] polynomial-time algorithm for computing a sample EFCE to compact representations like MAIDs and TAGGs.

As mentioned in Section 2.2.4, dynamic games with perfect recall have nice properties, including the existence of Nash equilibria in behavior strategies. Furthermore, most existing algorithmic approaches for dynamic games assume perfect recall. However, strategies in perfect-recall games can be computationally expensive to represent and reason about. For example, in a perfect-recall TAGG, since each decision of a player has to condition on all previous decisions and observations of the player, the representation size of a behavior strategy grows exponentially in the number of previous decisions of that player. Representations like MAIDs and TAGGs can compactly express the utility functions, but this exponential blow-up of the strategy space is an inherent property of perfect recall. The blow-up already arises for two-player zero-sum games such as poker. Perfect recall is thus also problematic as a realistic model of rationality, since real-life agents do not have unlimited amounts of memory.
In light of this, an interesting direction is to explore imperfect-recall models, and solution concepts and algorithms for such models. In single-agent settings, there has been research on relaxing perfect recall using limited memory influence diagrams (LIMIDs) [Nilsson and Lauritzen, 2000]. However, in multi-agent imperfect-recall games, the existence of Nash equilibria in behavior strategies is not guaranteed. There has been some research on classes of imperfect-recall games in which such equilibria do exist. One approach is based on "forgetting" certain "payoff-irrelevant" information from certain classes of perfect-recall games, and showing that the resulting imperfect-recall game has a Nash equilibrium in behavior strategies that is also a Nash equilibrium of the original perfect-recall game. Such equilibria are called Markov Perfect Equilibria (MPE) [e.g., Fudenberg and Tirole, 1991]. Milch and Koller [2008] took such an approach for MAIDs, in which case forgetting information corresponds to deleting certain edges into decision nodes. However, even if Nash equilibria in behavior strategies exist in the resulting imperfect-recall game, there is currently no general-purpose algorithm for finding such equilibria.

For the zero-sum game of poker, Waugh et al. [2009] considered the approach of formulating imperfect-recall models in which players forget certain information. The reduction in strategy space allowed them to solve larger instances (corresponding to finer abstractions of the game of poker) than previously possible. They solved the resulting imperfect-recall game using counterfactual regret minimization, a heuristic algorithm that lacks theoretical guarantees but appeared empirically to converge to approximate equilibria.
Although, unlike the MPE case, the transformation is not lossless (i.e., a Nash equilibrium of the imperfect-recall game is no longer a Nash equilibrium of the original game), they showed empirically that agents using the resulting strategies performed well. There has also been research on solution concepts weaker than MPE that allow players to ignore more information, such as Mean Field Equilibrium [e.g., Adlakha et al., 2010, Iyer et al., 2011].

Another approach is to consider restricted settings that admit stronger theoretical and practical properties. For instance, in Chapter 6 we consider Bayesian games, which (recall from Section 2.1.3) can be formulated as dynamic games; however, they have specific structure that makes them computationally friendlier than arbitrary dynamic games. In particular, these games do not suffer from the exponential blow-up of the strategy space. We are able to leverage techniques from simultaneous-move games for representing and computing with Bayesian games.

Chapter 6

Bayesian Action-Graph Games

6.1 Introduction

In this chapter we¹ consider static games of incomplete information (or Bayesian games) [Harsanyi, 1967], in which (recall from Section 2.1.3) players are uncertain about the underlying game. Bayesian games have found many applications in economics, including most notably auction theory and mechanism design.

Our interest is in computing with Bayesian games, and particularly in identifying a sample Bayes-Nash equilibrium. We surveyed the relevant literature in Chapter 2, specifically Sections 2.1.3 and 2.2.3. To summarize, there are two key obstacles to performing such computations efficiently. The first is representational: recall that the straightforward tabular representation of Bayesian game utility functions (the Bayesian normal form) requires space exponential in the number of players.
The second obstacle is the lack of existing algorithms for identifying a sample Bayes-Nash equilibrium for arbitrary Bayesian games. Recall that a Bayesian game can be interpreted as an equivalent complete-information game via the "induced normal form" or "agent form" interpretations. Thus one approach is to interpret a Bayesian game as a complete-information game, enabling the use of existing Nash-equilibrium-finding algorithms. However, generating the normal form representations under both of these complete-information interpretations causes an exponential blowup in representation size, even when the Bayesian game has only two players.

¹This chapter is based on joint work with Kevin Leyton-Brown [2010].

In this chapter we propose Bayesian Action-Graph Games (BAGGs), a compact representation for Bayesian games. BAGGs can represent arbitrary Bayesian games, and furthermore can compactly express Bayesian games with commonly encountered types of structure. The type profile distribution is represented as a Bayesian network, which can exploit conditional independence structure among the types. BAGGs represent utility functions in a way similar to the AGG representation and, like AGGs, are able to exploit anonymity and action-specific utility independencies. Furthermore, BAGGs can compactly express Bayesian games exhibiting type-specific independence: each player's utility function can have different kinds of structure depending on her instantiated type. We provide an algorithm for computing expected utility in BAGGs, a key step in many algorithms for game-theoretic solution concepts. As in Chapter 5, our approach interprets expected utility computation as a probabilistic inference problem on an induced Bayesian network. In particular, our algorithm runs in polynomial time for the important case of independent type distributions. To compute Bayes-Nash equilibria for BAGGs, we consider the agent form interpretation of the BAGG.
Howson and Rosenthal [1974] showed that the agent form of an arbitrary two-player Bayesian game is a polymatrix game, which can be represented compactly (thus avoiding the aforementioned blowup) and solved using a variant of the Lemke-Howson algorithm. However, for n-player BAGGs the corresponding agent forms do not correspond to polymatrix games or any other known representation, and the Lemke-Howson algorithm cannot be applied. Nevertheless, we are able to generalize Howson and Rosenthal's approach to propose an algorithm for finding sample Bayes-Nash equilibria for arbitrary BAGGs. Specifically, we show that BAGGs can act as a general compact representation of the agent form; in particular, computational tasks on the agent form can be performed efficiently by leveraging our expected utility algorithm for BAGGs. We then apply the black-box approaches for Nash equilibria of complete-information games discussed in Sections 2.2.2 and 3.4, specifically the simplicial subdivision algorithm [van der Laan et al., 1987] and Govindan and Wilson's [2003] global Newton method. We show empirically that our approach outperforms the existing approaches of solving for Nash equilibria on the induced normal form or on the normal form representation of the agent form.

Bayesian games can be interpreted as dynamic games with an initial move by Nature; thus, also related is the literature on representations for dynamic games, including MAIDs and TAGGs. Compared to these representations for dynamic games, BAGGs focus explicitly on structure common to Bayesian games; in particular, only BAGGs can efficiently express type-specific utility structure. Also, by representing utility functions and type distributions as separate components, BAGGs can be more versatile. For example, one future direction made possible by this separation is to model Bayesian games without common type distributions.
Another future direction is to answer computational questions that do not depend on the type distribution, such as computing ex-post equilibria. Furthermore, we will see that BAGGs enjoy nicer computational properties than arbitrary dynamic games. For example, BAGGs can be solved by adapting Govindan and Wilson's global Newton method [2003] (see Section 2.2.1) for static games; this is generally more practical than their related Nash equilibrium algorithm [2002] that works directly on dynamic games: while both approaches avoid the exponential blowup of transforming to the induced normal form, the global Newton method for dynamic games has to solve an additional quadratic program at each step of the homotopy.

A limitation of BAGGs is that they require types to be discrete. There has been some research on heuristic methods for finding Bayes-Nash equilibria of Bayesian games with continuous types, including Reeves and Wellman's [2004] work on iterated best response for certain classes of auction games and Rabinovich et al.'s [2009] work on fictitious play. Developing general compact representations and efficient algorithms for Bayes-Nash equilibria for such games remain interesting open problems.

6.2 Preliminaries

The standard definition of a Bayesian game (N, {A_i}_{i∈N}, Θ, P, {u_i}_{i∈N}) is given in Definition 2.1.3. The standard concepts of pure strategy s_i, mixed strategy σ_i, expected utility for Bayesian games, and Bayes-Nash equilibrium are introduced in Section 2.2.3. Recall from Section 2.1.3 that the space bottlenecks of representing a Bayesian game are the type distribution and the utility functions. Representing them as tables, the Bayesian normal form requires n × ∏_{i=1}^n (|Θ_i| × |A_i|) + ∏_{i=1}^n |Θ_i| numbers to specify. We say a Bayesian game has independent type distributions if players' types are drawn independently, i.e.
the type-profile distribution P(θ) is a product distribution: P(θ) = ∏_i P(θ_i). In this case the distribution P can be represented compactly using ∑_i |Θ_i| numbers.

Given a permutation of players π : N → N and an action profile a = (a_1, ..., a_n), let a_π = (a_{π(1)}, ..., a_{π(n)}). Similarly let θ_π = (θ_{π(1)}, ..., θ_{π(n)}). We say the type distribution P is symmetric if |Θ_i| = |Θ_j| for all i, j ∈ N, and if for all permutations π : N → N, P(θ) = P(θ_π). We say a Bayesian game has symmetric utility functions if |A_i| = |A_j| and |Θ_i| = |Θ_j| for all i, j ∈ N, and if for all permutations π : N → N, we have u_i(a, θ) = u_{π(i)}(a_π, θ_π) for all i ∈ N. A Bayesian game is symmetric if its type distribution and utility functions are symmetric. The utility functions of such a game range over at most |Θ_i||A_i| (n−2+|Θ_i||A_i| choose |Θ_i||A_i|−1) unique utility values. A Bayesian game exhibits conditional utility independence if each player i's utility depends on the action profile a and her own type θ_i, but does not depend on the other players' types. Then the utility function of each player i ranges over at most |A||Θ_i| unique utility values.

6.2.1 Complete-information interpretations

Harsanyi [1967] showed that any Bayesian game can be interpreted as one of two complete-information games, the Nash equilibria of each of which correspond to Bayes-Nash equilibria of the Bayesian game. A Bayesian game can be converted to its induced normal form, a complete-information game with the same set of n players, in which each player's set of actions is her set of pure strategies in the Bayesian game. Each player's utility under an action profile is defined to be equal to that player's expected utility under the corresponding pure strategy profile in the Bayesian game.
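The space saving from the independent type distributions defined earlier in this section is concrete: only the per-player marginals are stored, and any profile probability is a product. A minimal sketch, where the dict encoding of marginals is an assumption for illustration:

```python
def type_profile_prob(marginals, theta):
    """P(theta) under independent type distributions.

    marginals[i] maps each type of player i to its probability, so the
    whole distribution needs sum_i |Theta_i| numbers instead of the
    prod_i |Theta_i| entries of an explicit joint table.
    """
    p = 1.0
    for i, t in enumerate(theta):
        p *= marginals[i][t]
    return p
```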
Alternatively, a Bayesian game can be transformed to its agent form, where each type of each player in the Bayesian game is turned into one player in a complete-information game. Formally, given a Bayesian game (N, {A_i}_{i∈N}, Θ, P, {u_i}_{i∈N}), we define its agent form as the complete-information game (Ñ, {Ã_{(j,θ_j)}}_{(j,θ_j)∈Ñ}, {ũ_{(j,θ_j)}}_{(j,θ_j)∈Ñ}), where Ñ consists of ∑_{j∈N} |Θ_j| players, one for every type of every player of the Bayesian game. We index the players by the tuple (j, θ_j) where j ∈ N and θ_j ∈ Θ_j. For each player (j, θ_j) ∈ Ñ of the agent form game, her action set Ã_{(j,θ_j)} is A_j, the action set of j in the Bayesian game. The set of action profiles is then Ã = ∏_{(j,θ_j)} Ã_{(j,θ_j)}. The utility function of player (j, θ_j) is ũ_{(j,θ_j)} : Ã → R. For all ã ∈ Ã, ũ_{(j,θ_j)}(ã) is equal to the expected utility of player j of the Bayesian game given type θ_j, under the pure strategy profile s^ã, where for all i and all θ_i, s^ã_i(θ_i) = ã_{(i,θ_i)}.

Observe that there is a one-to-one correspondence between action profiles of the agent form and pure strategy profiles of the Bayesian game. A similar correspondence exists for mixed strategy profiles: each mixed strategy profile σ of the Bayesian game corresponds to a mixed strategy profile σ̃ of the agent form, with σ̃_{(i,θ_i)}(a_i) = σ_i(a_i|θ_i) for all i, θ_i, a_i. It is straightforward to verify that ũ_{(i,θ_i)}(σ̃) = u_i(σ|θ_i) for all i, θ_i. This implies a correspondence between Bayes-Nash equilibria of a Bayesian game and Nash equilibria of its agent form.

Proposition 6.2.1. σ is a Bayes-Nash equilibrium of a Bayesian game if and only if σ̃ is a Nash equilibrium of its agent form.
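The mixed-strategy correspondence σ̃_{(i,θ_i)}(a_i) = σ_i(a_i|θ_i) above is a pure re-indexing, as the following sketch shows; the nested-dict encoding of strategies is an assumption for illustration, not part of the formal definition.

```python
def to_agent_form(sigma):
    """Map a Bayesian-game mixed strategy profile to its agent form.

    sigma[i][theta_i] is the distribution over player i's actions given
    type theta_i.  The agent-form player (i, theta_i) plays exactly that
    distribution: sigma_tilde[(i, theta_i)][a_i] = sigma_i(a_i | theta_i).
    """
    return {(i, theta): dict(dist)
            for i, by_type in sigma.items()
            for theta, dist in by_type.items()}
```

Under this mapping, checking the equilibrium condition of Proposition 6.2.1 on a given profile amounts to comparing best responses type by type.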
6.3 Bayesian Action-Graph Games

In this section we introduce Bayesian Action-Graph Games (BAGGs), a compact representation of Bayesian games. First consider representing the type distribution. Specifically, the type distribution P is specified by a Bayesian network (BN) containing at least n random variables corresponding to the n players' types θ_1, ..., θ_n. For example, when the types are independently distributed, P can be specified by the simple BN with n variables θ_1, ..., θ_n and no edges.

Now consider representing the utility functions. Our approach is to adapt concepts from the AGG representation (see Chapter 3) to the Bayesian game setting. At a high level, a BAGG is a Bayesian game on an action graph, a directed graph on a set of action nodes A. To play the game, each player i, given her type θ_i, simultaneously chooses an action node from her type-action set A_{i,θ_i} ⊆ A. Each action node thus corresponds to an action choice that is available to one or more of the players. Once the players have made their choices, an action count is tallied for each action node α ∈ A: the number of agents that chose α. A player's utility depends only on the action node she chose and the action counts on the neighbors of the chosen node. We observe that the main difference between the AGG and BAGG representations is that whereas in an AGG each player's set of available actions is specified by her action set, in a BAGG we have type-action sets, meaning each player's set of available actions can depend on her instantiated type.

We now turn to a formal description of BAGGs' utility function representation. Central to our model is the action graph.² An action graph G = (A, E) is a directed graph where A is the set of action nodes and E is a set of directed edges, with self edges allowed.
We say α′ is a neighbor of α if there is an edge from α′ to α, i.e., if (α′, α) ∈ E. Let the neighborhood of α, denoted ν(α), be the set of neighbors of α. For each player i and each instantiation of her type θ_i ∈ Θ_i, her type-action set A_{i,θ_i} ⊆ 𝒜 is the set of possible action choices of i given θ_i. These subsets are unrestricted: different type-action sets may (partially or completely) overlap. Define player i's total action set to be A_i^∪ = ⋃_{θ_i∈Θ_i} A_{i,θ_i}. We denote by A = ∏_i A_i^∪ the set of action profiles, and by a ∈ A an action profile. Observe that the action profile a provides sufficient information about the type profile to be able to determine the outcome of the game; there is no need to additionally encode the realized type profile. We note that for different types θ_i, θ′_i ∈ Θ_i, A_{i,θ_i} and A_{i,θ′_i} may have different sizes; i.e., i may have different numbers of available action choices depending on her realized type.

A configuration c is a vector of |𝒜| non-negative integers, specifying for each action node the number of players choosing that action. Let c(α) be the element of c corresponding to the action α. Let 𝒞 : A → C be the function that maps from an action profile a to the corresponding configuration c. Formally, if c = 𝒞(a) then c(α) = |{i ∈ N : a_i = α}| for all α ∈ 𝒜. Define C = {c : ∃a ∈ A such that c = 𝒞(a)}. In other words, C is the set of all possible configurations in the BAGG.
Observe that the concept of configurations in BAGGs is related to the concept of configurations in AGGs in the following way: C in a BAGG is isomorphic to the set of configurations in an AGG-∅ with the same action graph G = (𝒜, E) but with action sets corresponding to the total action sets of the BAGG, i.e., A_i ≡ A_i^∪.

²The definition of action graph coincides with the corresponding concept in AGGs. We repeat the definition here in order to give a complete description of BAGGs.

We can also define a configuration over a subset of nodes. In particular, we will be interested in configurations over a node's neighborhood. Given a configuration c ∈ C and a node α ∈ 𝒜, let the configuration over the neighborhood of α, denoted c^{(α)}, be the restriction of c to ν(α), i.e., c^{(α)} = (c(α′))_{α′∈ν(α)}. Similarly, let C^{(α)} denote the set of configurations over ν(α) in which at least one player plays α. Let 𝒞^{(α)} : A → C^{(α)} be the function that maps from an action profile to the corresponding configuration over ν(α).

Definition 6.3.1. A Bayesian action-graph game (BAGG) is a tuple (N, Θ, P, {A_{i,θ_i}}_{i∈N,θ_i∈Θ_i}, G, {u^α}_{α∈𝒜}) where N is the set of agents; Θ = ∏_i Θ_i is the set of type profiles; P is the type distribution, represented as a Bayesian network; A_{i,θ_i} ⊆ 𝒜 is the type-action set of i given θ_i; G = (𝒜, E) is the action graph; and for each α ∈ 𝒜, the utility function is u^α : C^{(α)} → ℝ.

As in the case of AGGs, shared actions in a BAGG capture the game's anonymity structure.
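The configuration machinery above is mechanical enough to sketch directly. In this illustration the action graph, node names, and profiles are our own examples (not from the text); the point is how 𝒞(a) tallies counts and how c^{(α)} restricts them to a neighborhood.

```python
# Illustrative action graph: directed edges (src, dst) mean src -> dst,
# with self edges allowed.
nodes = ["x", "y", "z"]
edges = [("x", "x"), ("y", "x"), ("z", "y")]

def neighborhood(alpha):
    """nu(alpha): nodes with an edge into alpha."""
    return [src for (src, dst) in edges if dst == alpha]

def configuration(action_profile):
    """C(a): map an action profile (one chosen node per player) to the
    vector of action counts, one per action node."""
    return {alpha: sum(1 for ai in action_profile if ai == alpha)
            for alpha in nodes}

def config_over_neighborhood(action_profile, alpha):
    """c^(alpha): the restriction of C(a) to nu(alpha). This is all the
    information a utility function u^alpha is allowed to depend on."""
    c = configuration(action_profile)
    return {m: c[m] for m in neighborhood(alpha)}
```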
Furthermore, the (lack of) edges between nodes in the action graph of a BAGG expresses action- and type-specific independencies in the utilities of the game: depending on player i's chosen action node (which also encodes information about her type), her utility depends on configurations over different sets of nodes.

Lemma 6.3.2. An arbitrary Bayesian game given in Bayesian normal form can be encoded as a BAGG storing the same number of utility values.

Proof. Given an arbitrary Bayesian game (N, {A_i}_{i∈N}, Θ, P, {u_i}_{i∈N}) represented in Bayesian normal form, we construct the BAGG (N, Θ, P, {A′_{i,θ_i}}_{i∈N,θ_i∈Θ_i}, G, {u^α}_{α∈𝒜}) as follows. The Bayesian normal form's tabular representation of the type distribution P can be straightforwardly represented as a BN, e.g. by creating a random variable representing θ as the only parent of the random variables θ_1, ..., θ_n. To represent the utility functions, we create an action graph G with ∑_i |Θ_i||A_i| action nodes; in other words, all type-action sets A′_{i,θ_i} are disjoint. Each action a_i ∈ A_i of the Bayesian normal form corresponds to |Θ_i| action nodes in the BAGG, one for each type instantiation θ_i. For each player i and each type θ_i ∈ Θ_i, each action node α ∈ A′_{i,θ_i} has incoming edges from all action nodes in the type-action sets A′_{j,θ_j} for all j ≠ i and θ_j ∈ Θ_j, i.e. all action nodes of the other players. For each action node α ∈ A′_{i,θ_i} corresponding to a_i ∈ A_i, the utility function u^α is defined as follows: given a configuration c^{(α)} we can infer the action profile a′_{−i} ∈ A′_{−i} of the BAGG, which in turn tells us the corresponding a_{−i} and θ_{−i} of the Bayesian normal form, which gives us the utility u_i(a, θ).
The number of utility values stored in this BAGG is the same as in the Bayesian normal form.

Bayesian games with symmetric utility functions exhibit anonymity structure, which can be expressed in BAGGs by sharing action nodes. Specifically, we label each Θ_i as {1, ..., T}, so that each t ∈ {1, ..., T} corresponds to a class of equivalent types. Then for each t ∈ {1, ..., T}, we have A_{i,t} = A_{j,t} for all i, j ∈ N, i.e. type-action sets for equivalent types are identical. Figure 6.1 shows the action graph for a symmetric Bayesian game with two types and two actions per type.

Figure 6.1: Action graph for a symmetric Bayesian game with n players, 2 types, 2 actions per type.

6.3.1 BAGGs with Function Nodes

In this section we extend the basic BAGG representation by introducing function nodes to the action graph, as we did for AGG-FNs in Chapter 3. Function nodes allow us to exploit a much wider variety of utility structures in BAGGs. In this extended representation,³ the action graph G's vertices consist of both the set of action nodes 𝒜 and the set of function nodes 𝒫. We require that no function node p ∈ 𝒫 be in any player's action set. Each function node p ∈ 𝒫 is associated with a function f^p : C^{(p)} → ℝ. We extend c by defining c(p) to be the result of applying f^p to the configuration over p's neighbors, f^p(c^{(p)}). Intuitively, c(p) can be used to describe intermediate parameters that players' utilities depend on. To ensure that the BAGG is meaningful, the graph restricted to nodes in 𝒫 is required to be a directed acyclic graph.

³The definitions of function nodes and contribution-independent function nodes coincide with the corresponding concepts in AGGs. We repeat them here for completeness.
As before, for each action node α we define a utility function u^α : C^{(α)} → ℝ.

Of particular computational interest is the subclass of contribution-independent function nodes. A function node p in a BAGG is contribution-independent if ν(p) ⊆ 𝒜, there exists a commutative and associative operator ∗, and for each α ∈ ν(p) an integer w^α, such that given an action profile a = (a_1, ..., a_n), c(p) = ∗_{i∈N: a_i∈ν(p)} w^{a_i}. A BAGG is contribution-independent if all its function nodes are contribution-independent. Intuitively, if function node p is contribution-independent, each player's strategy affects c(p) independently.

A very useful kind of contribution-independent function node is the simple aggregator function node, which sets ∗ to the summation operator + and the weights to 1. Such a function node p simply counts the number of players that chose any action in ν(p).

Let us consider the size of a BAGG representation. The representation size of the Bayesian network for P is exponential only in the in-degree of the BN. The utility functions store ∑_α |C^{(α)}| values. Recall that C, and thus C^{(α)}, correspond to configurations in a related AGG. We can thus apply the same analysis as for the representation size of AGGs in Chapter 3. As in Chapter 3, estimates of this size generally depend on which types of function nodes are included. We state only the following (relatively straightforward) result, since in this chapter we are mostly concerned with BAGGs with simple aggregator function nodes.

Theorem 6.3.3. Consider BAGGs whose only function nodes, if any, are simple aggregator function nodes. If the in-degrees of the action nodes as well as the in-degrees of the Bayesian networks for P are bounded by a constant, then the sizes of the BAGGs are bounded by a polynomial in n, |𝒜|, |𝒫|, ∑_i |Θ_i| and the sizes of the domains of the variables in the BN.
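The contribution-independence condition can be illustrated with a short sketch. The operator, weights, and node names below are our own examples, not the thesis's; the sketch shows c(p) computed by folding each contributing player's weight through an associative, commutative operator, with the simple aggregator as the special case ∗ = +, weights 1.

```python
from functools import reduce

def function_node_value(action_profile, nu_p, weights, op, identity):
    """c(p) for a contribution-independent function node p: combine the
    weight of each player's chosen action (if it lies in nu(p)) with an
    associative, commutative operator, starting from the identity."""
    contributions = [weights[ai] for ai in action_profile if ai in nu_p]
    return reduce(op, contributions, identity)

def simple_aggregator(action_profile, nu_p):
    """The special case op = +, all weights 1: c(p) is the number of
    players who chose any action in nu(p)."""
    return function_node_value(action_profile, nu_p,
                               {a: 1 for a in nu_p}, lambda x, y: x + y, 0)
```

Because the operator is associative and commutative, each player's contribution can be folded in separately, which is exactly the property the expected utility algorithm later exploits.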
The proof is by a direct application of Corollary 3.2.11. This theorem shows a nice property of simple aggregator function nodes: representation size does not grow exponentially in the in-degrees of these function nodes. The next example (an extension of Example 3.2.7) illustrates the usefulness of simple aggregator function nodes, including for expressing conditional utility independence.

Example 6.3.4 (Coffee Shop game). Consider a symmetric Bayesian game involving n players; each player plans to open a new coffee shop in a downtown area, but has to decide on the location. The downtown area is represented by an r × k grid. Each player can choose to open a shop located within any of the B ≡ rk blocks or decide not to enter the market. Each player has one of T types, representing her private information about her cost of opening a coffee shop. Players' types are independently distributed. Conditioned on player i choosing some location, her utility depends on: (a) her own type; (b) the number of players that chose the same block; (c) the number of players that chose any of the surrounding blocks; and (d) the number of players that chose any other location.

The Bayesian normal form representation of this game has size n[T(B+1)]^n. The game can be expressed as a BAGG as follows. Since the game is symmetric, we label the types as {1, ..., T}. 𝒜 contains one action node O corresponding to not entering and TB other action nodes, with each location corresponding to a set of T action nodes, each representing the choice of that location by a player with a different type. For each t ∈ {1, ..., T}, the type-action sets satisfy A_{i,t} = A_{j,t} for all i, j ∈ N, and each consists of the action O and the B actions corresponding to locations for type t.
For each location (x,y) we create three function nodes: p_xy representing the number of players choosing this location, p′_xy representing the number of players choosing any surrounding block, and p″_xy representing the number of players choosing any other block. Each of these function nodes is a simple aggregator function node, whose neighbors are the action nodes corresponding to the appropriate locations (for all types). Each action node for location (x,y) has three neighbors: p_xy, p′_xy, and p″_xy. Figure 6.2 shows the action graph for the game with T = 2 on a 1 × k grid. Since the BAGG action graph has maximum in-degree 3, by Theorem 6.3.3 the representation size is polynomial in n, B and T.

6.4 Computing a Bayes-Nash Equilibrium

In this section we consider the problem of finding a sample Bayes-Nash equilibrium given a BAGG. Our overall approach is to interpret the Bayesian game as a complete-information game, and then to apply existing algorithms for finding Nash equilibria of complete-information games. We consider two state-of-the-art Nash equilibrium algorithms, van der Laan et al.'s simplicial subdivision [1987] and Govindan and Wilson's global Newton method [2003].

Figure 6.2: BAGG representation for a Coffee Shop game with 2 types per player on a 1 × k grid.

Recall from Section 6.2.1 that a Bayesian game can be transformed into its induced normal form or its agent form. In the induced normal form, each player i has |A_i|^{|Θ_i|} actions (corresponding to her pure strategies of the Bayesian game). Solving such a game would be infeasible for large |Θ_i|; just representing a Nash equilibrium requires space exponential in |Θ_i|. A more promising approach is to consider the agent form.
Note that we can straightforwardly adapt the agent-form transformation described in Section 6.2.1 to the setting of BAGGs: now the action set of player (i,θ_i) of the agent form corresponds to the type-action set A_{i,θ_i} of the BAGG. The resulting complete-information game has ∑_{i∈N} |Θ_i| players and |A_{i,θ_i}| actions for each player (i,θ_i); a Nash equilibrium can be represented using just ∑_i ∑_{θ_i} |A_{i,θ_i}| numbers. However, the normal form representation of the agent form has size ∑_{j∈N} |Θ_j| ∏_{i,θ_i} |A_{i,θ_i}|, which grows exponentially in n and |Θ_i|. Applying the Nash equilibrium algorithms to this normal form would be infeasible for large games. Fortunately, we do not have to explicitly represent the agent form as a normal form game. Instead, we treat a BAGG as a compact representation of its agent form, and carry out any required computation on the agent form by operating directly on the BAGG. Recall from Section 2.2.1 that a key computational task required by both Nash equilibrium algorithms in their inner loops is the computation of expected utility in the agent form. Recall from Section 6.2.1 that for all (i,θ_i) the expected utility ũ_{i,θ_i}(σ̃) of the agent form is equal to the expected utility u_i(σ|θ_i) of the Bayesian game. Thus in the remainder of this section we focus on the problem of computing expected utility in BAGGs.

6.4.1 Computing Expected Utility in BAGGs

Recall from Section 2.2.3 that σ_{θ_i→a_i} is the mixed strategy profile that is identical to σ except that i plays a_i given θ_i. The main quantity we are interested in is u_i(σ_{θ_i→a_i}|θ_i), player i's expected utility given θ_i under the strategy profile σ_{θ_i→a_i}.
Note that the expected utility u_i(σ|θ_i) can then be computed as the sum u_i(σ|θ_i) = ∑_{a_i} u_i(σ_{θ_i→a_i}|θ_i) σ_i(a_i|θ_i).

One approach is to directly apply Equation (2.2.2), which has |Θ_{−i}| × |A| terms in the summation. For games represented in Bayesian normal form, this algorithm runs in time polynomial in the representation size. Since BAGGs can be exponentially more compact than their equivalent Bayesian normal form representations, this algorithm runs in exponential time for BAGGs.

In this section we present a more efficient algorithm that exploits BAGG structure. We first formulate the expected utility problem as a Bayesian network inference problem. Given a BAGG and a mixed strategy profile σ_{θ_i→a_i}, we construct the induced Bayesian network (IBN) as follows. We start with the BN representing the type distribution P, which includes (at least) the random variables θ_1, ..., θ_n. The conditional probability distributions (CPDs) of this network are unchanged. We add the following random variables: one strategy variable D_j for each player j; one action count variable for each action node α ∈ 𝒜, representing its action count, denoted c(α); one function variable for each function node p ∈ 𝒫, representing its configuration value, denoted c(p); and one utility variable U^α for each action node α. We then add the following edges: an edge from θ_j to D_j for each player j; for each player j and each α ∈ A_j^∪, an edge from D_j to c(α); for each function variable c(p), all incoming edges corresponding to those in the action graph G; and for each α ∈ 𝒜, for each action or function node m ∈ ν(α) in G, an edge from c(m) to U^α in the IBN. The CPDs of the newly added random variables are defined as follows.
Each strategy variable D_j has domain A_j^∪, and given its parent θ_j, its CPD chooses an action from A_j^∪ according to the mixed strategy σ_{θ_i→a_i}. In other words, if j ≠ i then Pr(D_j = a_j|θ_j) is equal to σ_j(a_j|θ_j) for all a_j ∈ A_{j,θ_j} and 0 for all a_j ∈ A_j^∪ \ A_{j,θ_j}; and if j = i we have Pr(D_i = a_i|θ_i) = 1. For each action node α, the parents of its action-count variable c(α) are the strategy variables that have α in their domains. The CPD is a deterministic function that returns the number of its parents that take value α; i.e., it calculates the action count of α. For each function variable c(p), its CPD is the deterministic function f^p. The CPD for each utility variable U^α is a deterministic function specified by u^α.

Remark 6.4.1. Observe that our construction of the IBN here is similar to the construction of the induced BN from a TAGG in Chapter 5. One difference is that in a BAGG, type affects utility indirectly through type-action sets, resulting in a different construction of the CPDs at the strategy variables D_j from the TAGG case. Also, each strategy variable in a BAGG has in-degree 1, whereas in a perfect-recall TAGG the in-degree of a decision of player i grows linearly in the number of i's previous decisions.

It is straightforward to verify that the IBN is a directed acyclic graph (DAG) and thus represents a valid joint distribution. Furthermore, the expected utility u_i(σ_{θ_i→a_i}|θ_i) is exactly the expected value of the variable U^{a_i} conditioned on the instantiated type θ_i.

Lemma 6.4.2. For all i ∈ N, all θ_i ∈ Θ_i and all a_i ∈ A_{i,θ_i}, we have u_i(σ_{θ_i→a_i}|θ_i) = E[U^{a_i}|θ_i].

Standard BN inference methods could be used to compute E[U^{a_i}|θ_i]. However, such standard algorithms do not take advantage of structure that is inherent in BAGGs.
In particular, recall that in the induced network, each action count variable c(α)'s parents are all strategy variables that have α in their domains, implying large in-degrees for action count variables. As in the TAGG case, the CPDs of action count variables exhibit causal independence, and we can apply a version of Heckerman and Breese's method [Heckerman and Breese, 1996] to transform the IBN into an equivalent BN with small in-degree. Given an action count variable c(α) with parents (say) {D_1, ..., D_n}, for each i ∈ {1, ..., n−1} we create a node M_{α,i}, representing the count induced by D_1, ..., D_i. Then, instead of having D_1, ..., D_n as parents of c(α), its parents become D_n and M_{α,n−1}, and each M_{α,i}'s parents are D_i and M_{α,i−1}. The resulting graph has in-degree at most 2 for c(α) and the M_{α,i}'s. The CPDs of function variables corresponding to contribution-independent function nodes also exhibit causal independence, and thus we can use a similar transformation to reduce their in-degrees to 2. We call the resulting Bayesian network the transformed Bayesian network (TBN) of the BAGG. As in Chapter 5, it is straightforward to verify that the representation size of the TBN is polynomial in the size of the BAGG.

We can then use standard inference algorithms to compute E[U^α|θ_i] on the TBN. For classes of BNs with bounded treewidths, this can be done in polynomial time. Since the graph structure (and thus the treewidth) of the TBN does not depend on the strategy profile (but, rather, only on the BAGG itself), we have the following result.

Theorem 6.4.3. For BAGGs whose TBNs have bounded treewidths, expected utility can be computed in time polynomial in n, |𝒜|, |𝒫| and ∑_i |Θ_i|.

Bayesian games with independent type distributions are an important class of games and have many applications, such as independent-private-value auctions.
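The chain decomposition just described can be sketched as follows. This is an illustration of the idea, with our own variable names, in the style of Heckerman and Breese's pairwise decomposition: each intermediate node M_{α,i} depends on only two predecessors (D_i and M_{α,i−1}), so the chain reproduces the full action count while keeping every in-degree at most 2.

```python
def chained_action_count(alpha, strategy_values):
    """Compute c(alpha) through the chain M_{alpha,1}, ..., M_{alpha,n-1}.

    strategy_values is a list of the realized values of D_1, ..., D_n.
    Each M_{alpha,i} is a deterministic function of only D_i and
    M_{alpha,i-1} (in-degree 2), and c(alpha) depends only on D_n and
    M_{alpha,n-1}; the final value equals the direct count.
    """
    # M_{alpha,1} depends on D_1 alone.
    m = 1 if strategy_values[0] == alpha else 0
    intermediates = [m]
    for d in strategy_values[1:]:
        m = m + (1 if d == alpha else 0)   # M_{alpha,i} <- (D_i, M_{alpha,i-1})
        intermediates.append(m)
    return intermediates[-1], intermediates

def direct_action_count(alpha, strategy_values):
    """The original CPD: c(alpha) with all n strategy variables as parents."""
    return sum(1 for d in strategy_values if d == alpha)
```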
When contribution-independent BAGGs have independent type distributions, expected utility can be efficiently computed.

Theorem 6.4.4. For contribution-independent BAGGs with independent type distributions, expected utility can be computed in time polynomial in the size of the BAGG.

Note that this result is stronger than that of Theorem 6.4.3, which only guarantees efficient computation when TBNs have constant treewidth.

Proof. We reduce the problem of computing the expected utility u_i(σ_{θ_i→a_i}|θ_i) for BAGGs with independent type distributions to the problem of computing expected utility for AGGs. Given a BAGG, we consider the AGG Γ specified by (N, {A_i^∪}_{i∈N}, G, {u^α}_{α∈𝒜}), i.e., an AGG with the same set of players, the same action graph and the same utility functions, but with action sets corresponding to the total action sets of the BAGG. The representation size of the AGG Γ is proportional to the size of the BAGG. Furthermore, since the BAGG is contribution-independent, all function nodes in the AGG Γ are contribution-independent.

Given i, θ_i and σ_{θ_i→a_i}, for each player j ≠ i we can calculate Pr(D_j) by summing out θ_j: Pr(D_j = a_j) = ∑_{θ_j} σ_j(a_j|θ_j) Pr(θ_j). Observe that this distribution over the strategy variable D_j can be interpreted as a (complete-information) mixed strategy σ′_j of the AGG Γ's player j. Similarly for player i, the distribution Pr(D_i|θ_i) can be interpreted as a mixed strategy σ′_i for Γ's player i. Furthermore, these distributions are independent, so they induce the same distribution over configurations of the BAGG as the distribution over configurations of the AGG Γ induced by the mixed-strategy profile σ′ = (σ′_1, ..., σ′_n).
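The marginalization step in the proof is simple enough to sketch. The numbers and names below are illustrative assumptions, not from the text; the sketch shows how an opponent's behavioral strategy σ_j(·|θ_j) and her (independent) type distribution combine into a single complete-information mixed strategy over her total action set.

```python
def marginal_strategy(p_theta, sigma_j):
    """Pr(D_j = a_j) = sum over theta_j of P(theta_j) * sigma_j(a_j | theta_j).

    p_theta: independent type distribution of player j.
    sigma_j: dict mapping each type to a distribution over that type's actions.
    Returns a distribution over j's total action set (union over types).
    """
    total_actions = {a for dist in sigma_j.values() for a in dist}
    return {a: sum(p_theta[t] * sigma_j[t].get(a, 0.0) for t in sigma_j)
            for a in total_actions}

# Illustrative opponent: type "L" always plays "a"; type "H" mixes.
p_theta_j = {"L": 0.25, "H": 0.75}
sigma_j = {"L": {"a": 1.0},
           "H": {"a": 0.4, "b": 0.6}}
sigma_prime_j = marginal_strategy(p_theta_j, sigma_j)
```

The resulting σ′_j is then handed to the complete-information AGG Γ as an ordinary mixed strategy, which is the whole content of the reduction.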
Therefore the expected utility u_i(σ_{θ_i→a_i}|θ_i) for the BAGG is equal to the expected utility of i in the AGG Γ under the mixed strategy profile σ′. Expected utility for contribution-independent AGGs can be computed in polynomial time by running the algorithm described in Section 3.4.2.

An alternative approach for proving Theorem 6.4.4 is to work on the TBN of the BAGG, which can be shown to have treewidth at most |ν(a_i)|. Although |ν(a_i)| is not necessarily a constant, meaning that Theorem 6.4.3 cannot be directly applied, it can be shown that a variable elimination algorithm needs to store at most |C^{(a_i)}| numbers in each of its tables, which is polynomial in the size of the BAGG. These two proof approaches can be thought of as two interpretations of the same expected utility algorithm.

6.5 Experiments

We have implemented our approach for computing a Bayes-Nash equilibrium given a BAGG by applying Nash equilibrium algorithms to the agent form of the BAGG. We adapted two algorithms, GAMBIT's [McKelvey et al., 2006] implementation of simplicial subdivision and GameTracer's [Blum et al., 2002] implementation of Govindan and Wilson's global Newton method, by replacing calls to expected utility computations of the complete-information game with corresponding expected utility computations of the BAGG. Recall from Section 3.5 that we have adapted GAMBIT's implementation of simplicial subdivision to a black-box implementation, and that GameTracer's implementation is already black-box; thus further adaptation of the algorithms to the BAGG case was relatively straightforward to implement once we had the expected utility subroutine.

We ran experiments that tested the performance of our approach (denoted BAGG-AF) against two approaches that compute a Bayes-Nash equilibrium for arbitrary Bayesian games. The first (denoted INF) computes a Nash equilibrium on the induced normal form; the second (denoted NF-AF) computes a Nash equilibrium on the normal form representation of the agent form. Both were implemented using the original, normal-form-based implementations of simplicial subdivision and the global Newton method. We thus studied six concrete algorithms, two for each game representation.

We tested these algorithms on instances of the Coffee Shop Bayesian game described in Example 6.3.4. We created games of different sizes by varying the number of players, the number of types per player and the number of locations. For each size we generated 10 game instances with random integer payoffs, and measured the running (CPU) times. Each run was cut off after 10 hours if it had not yet finished. All our experiments were performed using a computer cluster consisting of 55 machines with dual Intel Xeon 3.2GHz CPUs, 2MB cache and 2GB RAM, running Suse Linux 11.1.

We first tested the three approaches based on the Govindan-Wilson (GW) algorithm. Figure 6.3 shows running time results for Coffee Shop games with n players, 2 types per player on a 2 × 3 grid, with n varying from 3 to 7. Figure 6.4 shows running time results for Coffee Shop games with 3 players, 2 types per player on a 2 × x grid, with x varying from 3 to 10.

Figure 6.3: GW, varying players.
Figure 6.4: GW, varying locations.
Figure 6.5: GW, varying types.
Figure 6.6: Simplicial subdivision.
Figure 6.5 shows results for Coffee Shop games with 3 players, T types per player on a 1 × 3 grid, with T varying from 2 to 8. The data points represent the median running time over 10 game instances, with the error bars indicating the maximum and minimum running times. All results show that our BAGG-based approach (BAGG-AF) significantly outperformed the two normal-form-based approaches (INF and NF-AF). Furthermore, as we increased the dimensions of the games the normal-form-based approaches quickly ran out of memory (hence the missing data points), whereas BAGG-AF did not.

We also ran experiments on BAGG-AF and NF-AF using the simplicial subdivision algorithm. Figure 6.6 shows running time results for Coffee Shop games with n players, 2 types per player on a 1 × 3 grid, with n varying from 3 to 7. Again, BAGG-AF significantly outperformed NF-AF, and NF-AF ran out of memory for game instances with more than 4 players.

Chapter 7

Polynomial-time Computation of Exact Correlated Equilibrium in Compact Games

7.1 Introduction

So far we have focused on the AGG representation and its extensions. For the remaining two technical chapters (this chapter and Chapter 8) we switch our attention to algorithms that work for a wide class of compact representations including AGGs. Specifically, we consider problems regarding correlated equilibrium (CE) [Aumann, 1974, 1987]. In this chapter we consider the problem of computing a sample correlated equilibrium. In Section 2.2.7 we gave an overview of the literature on this problem; in order to motivate our results in this chapter we first take a more in-depth look at some of the relevant papers.
The "Ellipsoid Against Hope" algorithm [Papadimitriou, 2005, Papadimitriou and Roughgarden, 2008] is a polynomial-time method for identifying (a polynomial-size representation of) a CE, given a game representation satisfying two properties: polynomial type, and the polynomial expectation property, which requires access to a polynomial-time algorithm that computes the expected utility of any player under any mixed-strategy profile. Recall that most existing compact game representations discussed in Section 2.1.1 (including graphical games, symmetric games, congestion games, polymatrix games and action-graph games) satisfy these properties. At a high level, the Ellipsoid Against Hope algorithm works by solving an infeasible dual LP (D) using the ellipsoid method (exploiting the existence of a separation oracle), and arguing that the LP (D′) formed by the generated cutting planes must also be infeasible. Solving the dual of this latter LP (which has polynomial size) yields a CE, which is represented as a mixture of the product distributions generated by the separation oracle. The Ellipsoid Against Hope algorithm is an instance of the black-box approach: it calls the expected utility subroutine as part of its separation oracle computation, but does not access the internal details of the representation.

7.1.1 Recent Uncertainty About the Complexity of Exact CE

In a recent paper, Stein, Parrilo and Ozdaglar [2010] raised two interrelated concerns about the Ellipsoid Against Hope algorithm. First, they identified a symmetric 3-player, 2-action game with rational¹ utilities on which the algorithm can fail to compute an exact CE. Indeed, they showed that the same problem arises on this game for a whole class of related algorithms.
Specifically, if an algorithm (a) outputs a rational solution, (b) outputs a convex combination of product distributions, and (c) outputs a convex combination of symmetric product distributions when the game is symmetric, then that algorithm fails to find an exact CE on their game, because the only CE of their game that satisfies properties (b) and (c) has irrational probabilities. This implies that any algorithm for exact rational CE must violate (b) or (c).

Second, Stein, Parrilo and Ozdaglar also showed that the original analysis by Papadimitriou and Roughgarden [2008] incorrectly handles certain numerical precision issues, which we now briefly describe. Recall that a run of the ellipsoid method requires as inputs an initial bounding ball with radius R and a volume bound v such that the algorithm stops when the ellipsoid's volume is smaller than v. To correctly certify the (in)feasibility of an LP using the ellipsoid method, R and v need to be set to appropriate values, which depend on the maximum encoding size of a constraint in the LP. However (as pointed out by Papadimitriou and Roughgarden [2008]), each cut returned by the separation oracle is a convex combination of the constraints of the original dual LP (D) and thus may require more bits to represent than any of the constraints in (D); as a result, the infeasibility of the LP (D′) formed by these cuts is not guaranteed. Papadimitriou and Roughgarden [2008] proposed a method to overcome this difficulty, but Stein et al. showed that this method is insufficient for finding an exact CE. For the related problem of finding an approximate correlated equilibrium (ε-CE), Stein et al. gave a slightly modified version of the Ellipsoid Against Hope algorithm that runs in time polynomial in log(1/ε) and the game representation size.

¹Throughout this chapter, by "rational" we mean rational numbers (ratios of integers) rather than rationality of players.
2 For problems that can have necessarily irrational solutions, it is typical to consider such approximations as ef- ficient; however, the computation of a sample CE is not such a problem, as there always exists a rational CE in a game with rational utilities, since CE are defined by linear constraints. It remains an open problem to determine whether the Ellipsoid Against Hope algorithm can be modified to compute an exact, rational correlated equilibrium.3 7.1.2 Our Results In this chapter, we use an alternate approach\u2014completely sidestepping the issues just discussed\u2014to derive a polynomial-time algorithm for computing an exact (and rational) correlated equilibrium given a game representation that has polynomial type and satisfies the polynomial expectation property. Specifically, our approach is based on the observation that if we use a separation oracle (for the same dual LP formulation proposed by Papadimitriou and Roughgarden [2008]) that gen- erates cuts corresponding to pure-strategy profiles (instead of Papadimitriou and Roughgarden\u2019s separation oracle that generates nontrivial product distributions), then these cuts are actual constraints in the dual LP, as opposed to convex combi- nations of constraints. As a result we no longer encounter the numerical accuracy issues that prevented the previous approaches from finding exact correlated equi- libria. Both the resulting algorithm and its analysis are also considerably simpler 2An \u03b5-CE is defined to be a distribution that violates the CE incentive constraints by at most \u03b5 . 3In a recent addendum to their original paper, Papadimitriou and Roughgarden [2010] acknowl- edged the flaw in the original algorithm. We note also that Stein et al. subsequently withdrew their paper from arXiv. It is our belief that their results are nevertheless correct; we discuss them here because they help to motivate our alternate approach. 
than the original: standard techniques from the theory of the ellipsoid method are sufficient to show that our algorithm computes an exact CE using a polynomial number of oracle queries. The key issue is the identification of pure-strategy-profile cuts. It is relatively straightforward to show that such cuts always exist: since the product distribution generated by the Ellipsoid Against Hope algorithm ensures the nonnegativity of a certain expected value, a simple application of the probabilistic method shows that there must exist a pure-strategy profile that also ensures the nonnegativity of that expected value. The key is to go beyond this nonconstructive proof of existence to also compute pure-strategy-profile cuts in polynomial time. We show how to do this by applying the method of conditional probabilities [Erdős and Selfridge, 1973, Raghavan, 1988, Spencer, 1994], an approach for derandomizing probabilistic proofs of existence. At a high level, our new separation oracle begins with the product distribution generated by Papadimitriou and Roughgarden's separation oracle, then sequentially fixes a pure strategy for each player in a way that guarantees that the corresponding conditional expectation given the choices so far remains nonnegative. Since our separation oracle goes through the players sequentially, the cuts generated can be asymmetric even for symmetric games. Indeed, we can confirm (see Section 7.4.2) that it makes such asymmetric cuts on Stein, Parrilo and Ozdaglar's symmetric game—thus violating their condition (c)—because our algorithm always identifies a rational CE. As with the Ellipsoid Against Hope algorithm and Stein et al.'s modified algorithm, our algorithm is also a black-box algorithm that calls the expected utility subroutine.
Another effect of our use of pure-strategy-profile cuts is that the correlated equilibria generated by our algorithm are guaranteed to have polynomial-sized supports; i.e., they are mixtures over a polynomial number of pure strategy profiles. Correlated equilibria with polynomial-sized supports are known to exist in every game (e.g., [Germano and Lugosi, 2007]); intuitively, this is because CE are defined by a polynomial number of linear constraints, so a basic feasible solution of the linear feasibility program has a polynomial number of nonzero entries. Such small-support correlated equilibria are more natural solutions than the mixtures of product distributions produced by the Ellipsoid Against Hope algorithm: because of their simpler form they require fewer bits to represent and fewer random bits to sample from; furthermore, verifying whether a given polynomial-support distribution is a CE only requires evaluating the utilities of a polynomial number of pure strategy profiles, whereas verifying whether a mixture of product distributions is a CE requires evaluating expected utilities under product distributions, which is generally more expensive. No tractable algorithm had previously been proposed for identifying such a CE; ours is thus the first algorithm that computes, in polynomial time, a CE with polynomial support given a compactly represented game. In fact, we show that any CE computed by our algorithm corresponds to a basic feasible solution of the linear feasibility program that defines CE, and is thus an extreme point of the set of CE of the game.
Since Papadimitriou and Roughgarden [2008] proposed the Ellipsoid Against Hope algorithm for computing a CE, researchers have proposed algorithms for related problems that use a similar approach (which we call the Ellipsoid Against Hope approach): first solving an infeasible LP using the ellipsoid method with some separation oracle, then arguing that the LP formed by the cutting planes is also infeasible, and finally solving the dual of the latter polynomial-sized LP. For example, Hart and Mansour [2010] considered the setting where each player initially knows only her own utility function, and proposed a communication procedure that finds a CE with polynomial communication complexity using a straightforward adaptation of the Ellipsoid Against Hope algorithm. Huang and Von Stengel [2008] proposed a polynomial-time algorithm for computing an extensive-form correlated equilibrium (EFCE) [von Stengel and Forges, 2008], a solution concept for extensive-form games, by applying the Ellipsoid Against Hope approach to the LP formulation of EFCE. For both algorithms, the separation oracle outputs a mixture of the original constraints, and hence the flaws of the Ellipsoid Against Hope algorithm pointed out by Stein et al. [2010] also apply. We show that our techniques can be adapted to these two algorithms, yielding in both cases exact solutions with polynomial-sized supports. In particular, we replace the original separation oracles with "purified" versions that output cutting planes corresponding to the original constraints, which ensures that the resulting algorithms avoid the numerical issues. The rest of the chapter is organized as follows. We start with basic definitions and notation in Section 7.2. In Section 7.3 we summarize Papadimitriou and Roughgarden's Ellipsoid Against Hope algorithm. In Section 7.4 we describe our algorithm and prove its correctness.
In Sections 7.5 and 7.6 we describe our fixes to Hart and Mansour's [2010] and Huang and Von Stengel's [2008] algorithms respectively, and Section 7.7 concludes. This chapter is based on published joint work with Kevin Leyton-Brown [2011]. New material that does not appear in [Jiang and Leyton-Brown, 2011] includes Sections 7.5 and 7.6.

7.2 Preliminaries

In this chapter and Chapter 8 we largely follow the notation of Papadimitriou [2005] and Papadimitriou and Roughgarden [2008], which has become standard in the literature on CE computation. This notation differs slightly from the one we used in the previous (AGG-specific) chapters. Consider a simultaneous-move game with n players. Denote a player by p, and player p's set of pure strategies (i.e., actions) by S_p. Let m = max_p |S_p|. Denote a pure strategy profile by s = (s_1, . . . , s_n) ∈ S, with s_p being player p's pure strategy. Denote by S_{−p} the set of partial pure strategy profiles of the players other than p. Player p's utility under pure strategy profile s is u^p_s. We assume that utilities are nonnegative integers (but the results in this chapter can be straightforwardly adapted to rational utilities). Denote the largest utility of the game by u. A correlated distribution is a probability distribution over pure strategy profiles, represented by a vector x ∈ R^M, where M = ∏_p |S_p|. Then x_s is the probability of pure strategy profile s under the distribution x. A correlated distribution x is a product distribution when it can be achieved by each player p randomizing independently over her actions according to some distribution x^p, i.e., x_s = ∏_p x^p_{s_p}. Such a product distribution is also known as a mixed-strategy profile, with each player p playing the mixed strategy x^p.
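To make the notation concrete, the following minimal sketch (Python, with exact rational arithmetic via the standard `fractions` module; the two-player marginals below are a made-up example, not taken from the text) computes a profile probability x_s = ∏_p x^p_{s_p} under a product distribution:

```python
from fractions import Fraction

def profile_prob(marginals, s):
    """Probability of pure-strategy profile s = (s_1, ..., s_n) under the
    product distribution in which player p randomizes according to marginals[p]."""
    prob = Fraction(1)
    for p, action in enumerate(s):
        prob *= marginals[p][action]
    return prob

# Two players with two actions each (hypothetical mixed strategies x^1, x^2).
marginals = [
    {"H": Fraction(1, 2), "T": Fraction(1, 2)},
    {"H": Fraction(1, 3), "T": Fraction(2, 3)},
]
assert profile_prob(marginals, ("H", "T")) == Fraction(1, 3)
# The profile probabilities sum to 1, as required of a distribution.
total = sum(profile_prob(marginals, (a, b)) for a in "HT" for b in "HT")
assert total == 1
```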
Throughout the paper we assume that a game is given in a representation satisfying two properties, following Papadimitriou and Roughgarden [2008]:

• polynomial type: recall from Section 2.1.1 that this means the number of players and the number of actions for each player are bounded by polynomials in the size of the representation.

• the polynomial expectation property: we have access to an algorithm that computes the expected utility of any player p under any product distribution x, i.e., ∑_{s∈S} u^p_s x_s, in time polynomial in the size of the representation.

Definition 7.2.1. A correlated distribution x is a correlated equilibrium (CE) if it satisfies the following incentive constraints: for each player p and each pair of her actions i, j ∈ S_p,

∑_{s∈S_{−p}} [u^p_{is} − u^p_{js}] x_{is} ≥ 0,    (7.2.1)

where the subscript "is" (respectively "js") denotes the pure strategy profile in which player p plays i (respectively j) and the other players play according to the partial profile s ∈ S_{−p}.

We write these incentive constraints in matrix form as Ux ≥ 0. Thus U is an N×M matrix, where N = ∑_p |S_p|^2. The rows of U, corresponding to the left-hand sides of the constraints (7.2.1), are indexed by (p, i, j), where p is a player and i, j ∈ S_p are a pair of p's actions. Denote by U_s the column of U corresponding to pure strategy profile s. These incentive constraints, together with the constraints

x ≥ 0,   ∑_{s∈S} x_s = 1,    (7.2.2)

which ensure that x is a probability distribution, form a linear feasibility program that defines the set of CE. The largest value in U is at most u. We define the support of a correlated equilibrium x as the set of pure strategy profiles assigned positive probability by x.
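As an illustration of Definition 7.2.1, the following sketch (Python with exact `fractions` arithmetic; the game of Chicken and its well-known small-support CE are standard textbook examples, not taken from this chapter) builds the left-hand sides of the incentive constraints (7.2.1) directly and checks whether a given correlated distribution is a CE:

```python
from fractions import Fraction
from itertools import product

def is_correlated_equilibrium(actions, utility, x):
    """Check constraints (7.2.1): for every player p and every pair i, j of
    p's actions, the sum over partial profiles in S_{-p} of
    (u^p_{is} - u^p_{js}) * x_{is} must be nonnegative."""
    n = len(actions)
    for p in range(n):
        others = [actions[q] for q in range(n) if q != p]
        for i in actions[p]:
            for j in actions[p]:
                lhs = Fraction(0)
                for partial in product(*others):
                    s_i = partial[:p] + (i,) + partial[p:]
                    s_j = partial[:p] + (j,) + partial[p:]
                    lhs += (utility[p][s_i] - utility[p][s_j]) * x.get(s_i, Fraction(0))
                if lhs < 0:
                    return False
    return True

# Chicken: actions D ("dare") and C ("chicken"); symmetric payoffs.
acts = [("D", "C"), ("D", "C")]
u_row = {("D", "D"): 0, ("D", "C"): 7, ("C", "D"): 2, ("C", "C"): 6}
u_col = {s: u_row[(s[1], s[0])] for s in u_row}
third = Fraction(1, 3)
ce = {("D", "C"): third, ("C", "D"): third, ("C", "C"): third}  # a known CE
assert is_correlated_equilibrium(acts, [u_row, u_col], ce)
uniform = {s: Fraction(1, 4) for s in u_row}                    # not a CE
assert not is_correlated_equilibrium(acts, [u_row, u_col], uniform)
```

Note that the check evaluates utilities only at pure profiles, which is exactly why the small-support CE produced later in the chapter are cheap to verify.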
Germano and Lugosi [2007] showed that for any n-player game, there always exists a correlated equilibrium with support size at most 1 + ∑_p |S_p|(|S_p| − 1) = N + 1 − ∑_p |S_p|. Intuitively, such correlated equilibria are basic feasible solutions of the linear feasibility program for CE, i.e., vertices of the polyhedron defining the feasible region. Furthermore, these basic feasible solutions involve only rational numbers for games with rational payoffs (see, e.g., Lemma 6.2.4 of [Grötschel et al., 1988]).

7.3 The Ellipsoid Against Hope Algorithm

In this section, we summarize Papadimitriou and Roughgarden's [2008] Ellipsoid Against Hope algorithm for finding a sample CE, which can be seen as an efficiently constructive version of earlier proofs [Hart and Schmeidler, 1989, Myerson, 1997, Nau and McCardle, 1990] of the existence of CE. We concentrate on the main algorithm and only briefly point out the numerical issues discussed at length by both Papadimitriou and Roughgarden [2008] and Stein et al. [2010], as our analysis will ultimately sidestep these issues. Papadimitriou and Roughgarden's approach considers the linear program

max ∑_{s∈S} x_s    (P)
Ux ≥ 0,  x ≥ 0,

which is obtained from the linear feasibility program for CE by replacing the constraint ∑_{s∈S} x_s = 1 from (7.2.2) with the maximization objective. (P) either has x = 0 as its optimal solution or is unbounded; in the latter case, taking a feasible solution and scaling it to be a distribution yields a correlated equilibrium. Thus one way to prove the existence of CE is to show the infeasibility of the dual problem

U^T y ≤ −1,  y ≥ 0.    (D)

The Ellipsoid Against Hope algorithm uses the following lemma, versions of which were also used by Nau and McCardle [1990] and Myerson [1997].

Lemma 7.3.1 ([Papadimitriou and Roughgarden, 2008]). For every dual vector y ≥ 0, there exists a product distribution x such that xU^T y = 0.
Furthermore, there exists an algorithm that, given any y ≥ 0, computes the corresponding x (represented by x^1, . . . , x^n) in time polynomial in n and m. We will not discuss the details of this algorithm; we will only need the facts that the resulting x is a product distribution and can be computed in polynomial time. Note also that the resulting x is symmetric if y is symmetric. Lemma 7.3.1 implies that the dual problem (D) is infeasible (and therefore a CE must exist): xU^T y is a convex combination of the left-hand sides of the rows of the dual, and for any feasible y the result would have to be less than or equal to −1, contradicting xU^T y = 0. The Ellipsoid Against Hope algorithm runs the ellipsoid algorithm on the dual (D), with the algorithm from Lemma 7.3.1 as separation oracle, which we call the Product Separation Oracle. At each step of the ellipsoid algorithm, the separation oracle is given a dual vector y^(i). The oracle then generates the corresponding product distribution x^(i) and indicates to the ellipsoid algorithm that (x^(i)U^T)y ≤ −1 is violated by y^(i). The ellipsoid algorithm stops after a polynomial number of steps and determines that the program is infeasible. Let X be the matrix whose rows are the generated product distributions x^(1), . . . , x^(L). Consider the linear program

[XU^T]y ≤ −1,  y ≥ 0,    (D′)

and observe that the rows of [XU^T]y ≤ −1 are the cuts generated by the ellipsoid method. If we apply the same ellipsoid method to (D′) and use a separation oracle that returns the cut x^(i)U^T y ≤ −1 given query y^(i), the ellipsoid algorithm goes through the same sequence of queries y^(i) and cutting planes x^(i)U^T y ≤ −1 and returns infeasible. Presuming that numerical problems do not arise (a caveat we discuss below), we find that (D′) is infeasible. This implies that its dual

[UX^T]α ≥ 0,  α ≥ 0

is unbounded, has polynomial size, and thus can be solved for a nonzero feasible α.
We can thus scale α to obtain a probability distribution. We then observe that X^T α satisfies the incentive constraints (7.2.1) and the probability distribution constraints (7.2.2), and is therefore a correlated equilibrium. The distribution X^T α is the mixture of the product distributions x^(1), . . . , x^(L) with weights α, and thus can be represented in polynomial space and efficiently sampled from.

The caveat mentioned above is the following: since each row of (D′)'s constraint matrix XU^T may require more bits to represent than any row of the constraint matrix U^T of (D), running the ellipsoid algorithm on (D′) with the original bounding ball and volume lower bound for (D) would not be sound, and as a result (D′) is not guaranteed to be infeasible. Indeed, Stein et al. [2010] showed that when running the algorithm on their symmetric game example, (D′) remains feasible, and thus the output of the algorithm is not an exact CE. Furthermore, since the only CE of that game that is a mixture of symmetric product distributions is irrational, there is no way to resolve this issue without breaking at least one of the symmetry and product-distribution properties of the Ellipsoid Against Hope algorithm. For more on these issues and possible ways to address them, see Papadimitriou and Roughgarden [2008, 2010] and Stein et al. [2010].

One issue remains. Although the matrix XU^T is polynomial sized, computing it using matrix multiplication would involve an exponential number of operations. On the other hand, entries of XU^T are differences between expected utilities that arise under product distributions. Since we have assumed that the game representation admits a polynomial-time algorithm for computing such expected utilities, XU^T can be computed in polynomial time.

Lemma 7.3.2 ([Papadimitriou and Roughgarden, 2008]).
There exists an algorithm that, given a game representation with polynomial type and satisfying the polynomial expectation property, and given an arbitrary product distribution x, computes xU^T in polynomial time. As a result, XU^T can be computed in polynomial time.

7.4 Our Algorithm

In this section we present our modification of the Ellipsoid Against Hope algorithm, and prove that it computes an exact CE. There are two key differences between our approach and the original algorithm for computing approximate CE.

1. Our modified separation oracle produces pure-strategy-profile cuts.

2. The algorithm is simplified, no longer requiring a special mechanism to deal with numerical issues (because pure-strategy-profile cuts can be represented directly as rows of (D)'s constraint matrix).

7.4.1 The Purified Separation Oracle

We start with a "purified" version of Lemma 7.3.1.

Lemma 7.4.1. Given any dual vector y ≥ 0, there exists a pure strategy profile s such that (U_s)^T y ≥ 0.

Proof. Recall that Lemma 7.3.1 states that, given a dual vector y ≥ 0, a product distribution x can be computed in polynomial time such that xU^T y = 0. Since x[U^T y] is a convex combination of the entries of the vector U^T y, there must exist some nonnegative entry of U^T y. In other words, there exists a pure strategy profile s such that (U_s)^T y ≥ xU^T y = 0.

The proof of Lemma 7.4.1 is a straightforward application of the probabilistic method: since xU^T y is the expected value of (U_s)^T y under distribution x, which we denote E_{s∼x}[(U_s)^T y], the nonnegativity of this expectation implies the existence of some s such that (U_s)^T y ≥ 0. Like many other probabilistic proofs, this proof is not efficiently constructive; note that there are an exponential number of possible pure strategy profiles.
It turns out that for game representations with polynomial type and satisfying the polynomial expectation property, an appropriate s can indeed be identified in polynomial time. Our approach can be seen as derandomizing the probabilistic proof using the method of conditional probabilities [Erdős and Selfridge, 1973, Raghavan, 1988, Spencer, 1994]. At a high level, for each player p our algorithm picks a pure strategy s_p such that the conditional expectation of (U_s)^T y given the choices so far remains nonnegative. This requires us to compute the conditional expectations, which can be done efficiently using the expected utility subroutine guaranteed by the polynomial expectation property.

Lemma 7.4.2. There exists a polynomial-time algorithm that, given

• an instance of a game in a representation satisfying polynomial type and the polynomial expectation property,

• a polynomial-time subroutine for computing expected utility under any product distribution (as guaranteed by the polynomial expectation property), and

• a dual vector y ≥ 0,

finds a pure strategy profile s ∈ S such that (U_s)^T y ≥ 0.

Proof. Given a product distribution x, let x_(p→s_p) be the product distribution in which player p plays s_p and all other players play according to x. Since x is a product distribution, x_(p→s_p)U^T y is the conditional expectation of (U_s)^T y given that p plays s_p, and furthermore we have, for any p,

xU^T y = ∑_{s_p} [x_(p→s_p) U^T y] x^p_{s_p}.    (7.4.1)

Since x^p is a distribution, the right-hand side of (7.4.1) is a convex combination, and thus there must exist an action s_p ∈ S_p such that x_(p→s_p)U^T y ≥ xU^T y ≥ 0. Since x_(p→s_p) is again a product distribution, this process can be repeated for each player to yield a pure strategy profile s such that (U_s)^T y ≥ xU^T y ≥ 0. This is formalized in Algorithm 5.

Algorithm 5 Computes a pure strategy profile s such that (U_s)^T y ≥ 0.

1. Given y ≥ 0, identify a product distribution x satisfying xU^T y = 0, using the algorithm described in Lemma 7.3.1.

2. Sequentially, for each player p ∈ {1, . . . , n}:

(a) iterate through actions s_p ∈ S_p, computing x_(p→s_p)U^T using the algorithm described in Lemma 7.3.2, until we find an action s*_p ∈ S_p such that [x_(p→s*_p)U^T]y ≥ 0;

(b) set x to be x_(p→s*_p).

3. The resulting x corresponds to a pure strategy profile s. Output s.

We now consider the running time of Algorithm 5. We observe that x remains a product distribution throughout the algorithm and can thus be represented by its marginals x^1, . . . , x^n, requiring only polynomial space. Due to the polynomial expectation property, the algorithm described in Lemma 7.3.2 is polynomial, which implies that in Step 2a, for each s_p ∈ S_p, x_(p→s_p)U^T can be computed in polynomial time. Since Step 2a requires at most |S_p| such computations, and since polynomial type implies that n and |S_p| are polynomial in the input size, the algorithm runs in polynomial time.

A straightforward corollary is the following:

Corollary 7.4.3. Algorithm 5 can be used as a separation oracle for the dual LP (D) in the Ellipsoid Against Hope algorithm: for each query point y, the oracle computes the corresponding pure-strategy profile s according to Algorithm 5 and returns the half space (U_s)^T y ≤ −1. We call this the Purified Separation Oracle.

This separation oracle has the following properties:

• Each returned half space is one of the constraints of (D).

• Since Algorithm 5 iterates through the players sequentially, the generated pure-strategy profiles can be asymmetric even for symmetric games and symmetric y.

• Since a pure-strategy profile is a special case of a product distribution, the resulting pure-strategy profile s also satisfies Lemma 7.3.1, with x being the unit vector corresponding to s.
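The derandomization at the heart of Algorithm 5 can be sketched in a few lines. In this illustrative Python version (all names are our own; the exact expected-value oracle of Lemma 7.3.2 is replaced by brute-force enumeration, which is exponential in general but makes the conditional-expectation step of the method of conditional probabilities explicit):

```python
from fractions import Fraction
from itertools import product

def expectation(marginals, g):
    """E_{s~x}[g(s)] by brute-force enumeration (a stand-in for the
    polynomial-time expected-value oracle; exponential in general)."""
    total = Fraction(0)
    for s in product(*[list(m) for m in marginals]):
        prob = Fraction(1)
        for p, a in enumerate(s):
            prob *= marginals[p][a]
        total += prob * g(s)
    return total

def purify(marginals, g):
    """Method of conditional probabilities: starting from a product
    distribution x with E_{s~x}[g(s)] >= 0, fix one player's action at a
    time while keeping the conditional expectation nonnegative, ending at
    a pure profile s with g(s) >= 0."""
    assert expectation(marginals, g) >= 0
    for p in range(len(marginals)):
        for a in marginals[p]:
            # Conditional expectation E[g | player p plays a].
            trial = marginals[:p] + [{a: Fraction(1)}] + marginals[p + 1:]
            if expectation(trial, g) >= 0:
                marginals = trial
                break
    return tuple(next(iter(m)) for m in marginals)

# Toy instance: two players with uniform marginals, and a function g that is
# negative on some profiles but has nonnegative expectation.
half = Fraction(1, 2)
x = [{0: half, 1: half}, {0: half, 1: half}]
g = lambda s: s[0] + s[1] - 1          # values -1, 0, 0, 1; expectation 0
s = purify(x, g)
assert g(s) >= 0
```

Because the current expectation is a convex combination of the conditional expectations, some action with nonnegative conditional expectation always exists, so the inner loop's `break` is always reached; this mirrors the argument from (7.4.1).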
7.4.2 The Simplified Ellipsoid Against Hope Algorithm

We now modify the Ellipsoid Against Hope algorithm by replacing the Product Separation Oracle with our Purified Separation Oracle. The rows of X in (D′) become unit vectors corresponding to the pure-strategy profiles generated by the oracle. Thus, we can write (D′) as

(U′)^T y ≤ −1,  y ≥ 0,    (D′′)

where the matrix U′ ≡ UX^T consists of the columns U_{s^(i)} that correspond to the pure-strategy profiles s^(i) generated by the separation oracle. Note that each constraint of (D′′) is also one of the constraints of (D), and as a result neither the maximum value of the coefficients nor the right-hand sides of (D′′) are greater than in (D). Therefore, a starting ball and volume lower bound that are valid for a run of the ellipsoid method on (D) are also valid for (D′′). We thus avoid the precision issues faced by the Ellipsoid Against Hope algorithm, and it is sufficient to use standard values for the initial radius and volume lower bound, and standard perturbation methods for dealing with non-full-dimensional solutions. The resulting CE is a mixture over a polynomial number of pure strategy profiles. We can make a further conceptual simplification of the algorithm: instead of using X as in the Ellipsoid Against Hope algorithm, we can directly treat the generated pure-strategy profiles as columns of U, and use U′ in place of UX^T. We now formally state and prove our result. Note that although we only briefly discussed the way numerical issues are addressed in the original Ellipsoid Against Hope algorithm in Section 7.3, we do go into detail about how our algorithm ensures its own numerical accuracy.
Nevertheless, that task is comparatively easy, as it is sufficient for us to apply standard techniques from the theory of the ellipsoid method.

Algorithm 6 Computes an exact rational CE given a game representation satisfying polynomial type and the polynomial expectation property.

1. Apply the ellipsoid method to (D), using the Purified Separation Oracle, a starting ball with radius R = u^{5N^3} centered at 0, and stopping when the volume of the ellipsoid is below v = α_N u^{−7N^5}, where α_N is the volume of the N-dimensional unit ball.

2. Form the matrix U′ whose columns are the U_{s^(1)}, . . . , U_{s^(L)} generated by the separation oracle during the run of the ellipsoid method.

3. Compute a basic feasible solution x′ of the linear feasibility program

U′x′ ≥ 0,  x′ ≥ 0,  1^T x′ = 1,    (P*)

by applying the ellipsoid method to the explicitly represented (P*) and recovering a basis using, e.g., Algorithm 4.2 of Dantzig and Thapa [2003].

4. Output x′ and s^(1), . . . , s^(L), interpreted as a distribution over the pure-strategy profiles s^(1), . . . , s^(L) with probabilities x′.

Our analysis makes use of the following lemma from Grötschel et al. [1988].

Lemma 7.4.4 (Lemma 6.2.6, [Grötschel et al., 1988]). Let P = {y ∈ R^N | Ay ≤ b} be a full-dimensional polyhedron defined by the system of inequalities, with the encoding length of each inequality at most φ. Then P contains a ball with radius 2^{−7N^3 φ}. Moreover, this ball is contained in the ball with radius 2^{5N^2 φ} centered at 0.

We note that the only restriction on P is full dimensionality; we do not need to assume that P is bounded, or that A has full row rank.

Theorem 7.4.5. Given a game representation with polynomial type and satisfying the polynomial expectation property, Algorithm 6 computes an exact and rational CE with support size at most 1 + ∑_p |S_p|(|S_p| − 1) in polynomial time.

Proof.
We begin by proving the correctness of the algorithm. First, we show that the ellipsoid method in Step 1 is a valid run for (D), which certifies that the feasible set of (D) is either empty or not full dimensional. Suppose the contrary, i.e., that the feasible set of (D) is nonempty and full dimensional. Since the encoding length of each constraint of (D) is at most N log_2 u, by Lemma 7.4.4 the feasible set must contain a ball with radius u^{−7N^4}, and thus volume α_N u^{−7N^5}; furthermore, this ball must be contained in the ball with radius u^{5N^3} centered at 0, which is the initial ball of our ellipsoid method in Step 1. Since at the end of Step 1 the ellipsoid method certifies that the intersection of the initial ball and the feasible set has volume less than v = α_N u^{−7N^5}, we reach a contradiction; therefore either the LP (D) is infeasible or its feasible set is not full dimensional. Since the largest magnitude of the coefficients in (D′′) is also at most u, Step 1 is also a valid run for (D′′), and therefore either (D′′) is infeasible or the feasible set of (D′′) is not full dimensional.

Of course, a non-full-dimensional feasible set is not sufficient for our purpose; we now perturb (D′′) to get an infeasible LP. Fix ρ > 1. Perturbing the constraints (U′)^T y ≤ −1 of (D′′) by multiplying the right-hand side by ρ, we get the LP:

min 0    (7.4.2)
(U′)^T y ≤ −ρ1
y ≥ 0.

We claim that (7.4.2) is infeasible. Suppose otherwise: then there exists a y ∈ R^N such that y ≥ 0 and (U′)^T y ≤ −ρ1. Let y′ ∈ R^N be any vector such that 0 ≤ y′_j − y_j ≤ (ρ−1)/(Nu) for all j. Then y′ ≥ 0, and each component s of (U′)^T y′ satisfies

(U′_s)^T y′ ≤ (U′_s)^T y + ((ρ−1)/(Nu)) ∑_j |U′_{js}| ≤ −ρ + (ρ−1) ≤ −1.
Thus, any such y′ is feasible for (D′′). However, the set of all such vectors y′ is a full-dimensional cube. This contradicts the fact that (D′′) is either infeasible or not full dimensional, and therefore (7.4.2) is infeasible. (Since the ellipsoid method relies on shrinking the volume of the candidate set, it is not able to distinguish between non-full-dimensional feasible sets and infeasibility. We overcome this by perturbing the LP after the ellipsoid method has been applied; an alternate method perturbs the LP in advance to ensure that the feasible set is either empty or full dimensional.)

The infeasibility of (7.4.2) means that its dual

max ρ1^T x′    (7.4.3)
U′x′ ≥ 0
x′ ≥ 0

is unbounded (since it is feasible, e.g., at x′ = 0). Then a nonzero feasible vector x′ is (after normalization) a distribution over the pure strategy profiles corresponding to the columns of U′. Treating it as a sparse representation of a correlated distribution x, it satisfies the feasibility program for CE and is therefore an exact CE. This CE is exact, but its support size could be greater than 1 + ∑_p |S_p|(|S_p| − 1) (although, as we argue below, it is still polynomial). To get a CE with the required support size, we note that since (7.4.3) is unbounded, a feasible solution of the bounded linear feasibility program (P*) is a CE. Note that (P*) has the same set of constraints as the feasibility program for CE defined by (7.2.1) and (7.2.2), and that for each player p and action i ∈ S_p, the incentive constraint (p, i, i) corresponds to deviating from action i to itself and is therefore redundant. Thus the number of bounding constraints of (P*) is at most 1 + ∑_p |S_p|(|S_p| − 1), and therefore a basic feasible solution x′ of (P*) will have the required support size. Since the coefficients and right-hand sides of (P*) are rational, then (by, e.g., Lemma 6.2.4 of Grötschel et al.
[1988]) its basic feasible solution x′ is also rational and can be represented using at most 4N^3 u bits.

We now consider the running time of the algorithm. Since Step 1 is a standard run of the ellipsoid method, it terminates in a polynomial number of iterations. For example, if we use the ellipsoid algorithm presented in Theorem 3.2.1 of Grötschel et al. [1988], then by Lemma 3.2.10 of Grötschel et al. [1988] the ratio between the volumes of successive ellipsoids is vol(E_{k+1})/vol(E_k) ≤ e^{−1/(5N)}. With the volume of the initial ellipsoid at most α_N R^N, and stopping when the volume falls below v, the number of iterations L is at most

5N[ln(α_N R^N) − ln v] = 5N[5N^4 ln u + 7N^5 ln u] = O(N^6 ln u),

which is polynomial in the input size since N ≡ ∑_p |S_p|^2 is polynomial. Since each call to the separation oracle takes polynomial time by Lemma 7.4.2, Step 1 takes polynomial time. L being polynomial also ensures that (P*) has polynomial size, and thus a basic feasible solution can be found in polynomial time. We note that the estimates on R and v (and thus L) can be improved, but our main goal here is to prove that the running time of our algorithm is polynomial.

The reader may wonder how our algorithm deals with Stein et al. [2010]'s counterexample, a symmetric game in which the only CE that is a convex combination of symmetric product distributions has irrational probabilities. Since we have proved that our algorithm computes a rational CE as a convex combination of product distributions, it must violate the symmetry property. Indeed, as we discussed in Section 7.4.1, our Purified Separation Oracle can return asymmetric cuts for symmetric games and symmetric queries, and thus for this game it must return at least one asymmetric cut.
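The perturbation step in the proof can be checked mechanically on a toy instance. The sketch below (Python with exact `fractions` arithmetic; the matrix U′ and vector y are made up for illustration) verifies that whenever y satisfies (U′)^T y ≤ −ρ1, every y′ within (ρ−1)/(Nu) of y componentwise satisfies (U′)^T y′ ≤ −1; since each constraint is linear in y′, it suffices to check the corners of the cube of admissible y′ vectors:

```python
from fractions import Fraction
from itertools import product

# Hypothetical instance: N = 2 dual variables, utilities bounded by u = 3.
N, u, rho = 2, 3, Fraction(2)
U_cols = [(-3, -2), (-1, -3)]          # columns U'_s, entries bounded by u
y = (Fraction(1), Fraction(1))

# y is feasible for the rho-perturbed system (U')^T y <= -rho * 1.
assert all(sum(c * yj for c, yj in zip(col, y)) <= -rho for col in U_cols)

step = (rho - 1) / (N * u)             # maximum per-coordinate perturbation
# Linearity: each constraint attains its maximum over the cube at a corner,
# so checking all corners certifies feasibility of every admissible y'.
for corner in product([Fraction(0), step], repeat=N):
    y_prime = tuple(yj + d for yj, d in zip(y, corner))
    assert all(sum(c * yj for c, yj in zip(col, y_prime)) <= -1 for col in U_cols)
```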
7.5 Uncoupled Dynamics with Polynomial Communication Complexity

Hart and Mansour [2010] considered the setting where each player initially knows only her own utility function, and analyzed the communication complexity required for such uncoupled dynamics to reach various equilibrium concepts. They used a straightforward adaptation of Papadimitriou and Roughgarden's Ellipsoid Against Hope algorithm to show that a CE can be reached using polynomial communication. The recent discovery by Stein et al. [2010] of flaws in the Ellipsoid Against Hope algorithm implies that Hart and Mansour's procedure as proposed would not reach an exact CE. We show that our modified version of the Ellipsoid Against Hope algorithm can be straightforwardly adapted into a polynomial-communication procedure for exact CE.

Formally, in Hart and Mansour's setting, each player p initially knows only her utility function u^p. No assumption is made on how the game is represented, and the cost of computation is of no concern; instead, we focus on the amount of communication required to reach a CE. Hart and Mansour's approach used the following property of the Product Separation Oracle (Lemma 7.3.1): given y ≥ 0, the corresponding product distribution x depends only on y and not on the utilities of the game. Although generating the cutting plane requires computing xU^T, which does depend on the utilities, each entry (p, i, j) of the vector xU^T depends only on the utilities of player p. We now describe Hart and Mansour's procedure. A center runs the Ellipsoid Against Hope algorithm; when the Product Separation Oracle generates a product distribution x, the center sends it to all players, and asks each player p to compute her segment of the vector xU^T, i.e., the entries (p, i, j) for all i, j ∈ S_p, and to send it back to the center.
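The exchange just described is easy to simulate. In the sketch below (Python; the `Player` class and the Chicken-game payoffs are illustrative inventions, not part of Hart and Mansour's paper), each player sees only her own utilities, yet the center can still assemble the full vector xU^T from the players' replies:

```python
from fractions import Fraction
from itertools import product

class Player:
    """Knows only her own utility function; computes her segment of x U^T."""
    def __init__(self, p, actions, utility):
        self.p, self.actions, self.utility = p, actions, utility

    def segment(self, marginals, all_actions):
        n = len(all_actions)
        others = [all_actions[q] for q in range(n) if q != self.p]
        seg = {}
        for i in self.actions:
            for j in self.actions:
                val = Fraction(0)
                for partial in product(*others):
                    s_i = partial[:self.p] + (i,) + partial[self.p:]
                    s_j = partial[:self.p] + (j,) + partial[self.p:]
                    # x_{is} under the product distribution sent by the center
                    prob = Fraction(1)
                    for q, a in enumerate(s_i):
                        prob *= marginals[q][a]
                    val += prob * (self.utility[s_i] - self.utility[s_j])
                seg[(i, j)] = val
        return seg

# Hypothetical 2-player game (Chicken); the center broadcasts x and
# assembles x U^T from the segments sent back by the players.
acts = [("D", "C"), ("D", "C")]
u_row = {("D", "D"): 0, ("D", "C"): 7, ("C", "D"): 2, ("C", "C"): 6}
u_col = {s: u_row[(s[1], s[0])] for s in u_row}
players = [Player(0, acts[0], u_row), Player(1, acts[1], u_col)]
x = [{"D": Fraction(1, 2), "C": Fraction(1, 2)},
     {"D": Fraction(1, 2), "C": Fraction(1, 2)}]
xUT = {}
for pl in players:
    for (i, j), v in pl.segment(x, acts).items():
        xUT[(pl.p, i, j)] = v
# Diagonal entries (i == j) are identically zero, and no entry required any
# player to see another player's utilities.
assert all(xUT[(p, i, i)] == 0 for p in range(2) for i in acts[p])
assert xUT[(0, "D", "C")] == Fraction(-1, 4)
```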
This exactly simulates the Ellipsoid Against Hope algorithm, and its communication costs are those of sending the product distributions to the players and of each player sending back her part of xU^T.

This procedure can be modified to use the Purified Separation Oracle instead. At Step 2a of the Purified Separation Oracle (Algorithm 5), for each s_p ∈ S_p the center sends x_(p→s_p) to all players and asks each to compute her segment of x_(p→s_p)U^T. After assembling the vector x_(p→s_p)U^T from the segments, the center checks whether [x_(p→s_p)U^T]y ≥ 0. We call the resulting modified version of Algorithm 5 the Uncoupled Purified Separation Oracle. It is straightforward to see that this exactly simulates the Purified Separation Oracle. The communication costs are those of the center sending the product distributions and the players sending back segments of x_(p→s_p)U^T. At most ∑_p |S_p| rounds of such exchange are required for each call to the Purified Separation Oracle; therefore the total amount of communication is polynomially bounded.

Corollary 7.5.1. Modify Hart and Mansour's procedure by replacing its separation oracle with the Uncoupled Purified Separation Oracle. The resulting communication procedure reaches an exact CE, while both the number of bits of communication required and the size of the support are polynomial in n and ∑_p |S_p|.

7.6 Computing Extensive-form Correlated Equilibria

Recently, von Stengel and Forges [2008] proposed extensive-form correlated equilibrium (EFCE), a solution concept for extensive-form games that is closely related to correlated equilibrium. Here we focus on the computational problem of finding an EFCE and refer interested readers to von Stengel and Forges [2008] for details on EFCE as a solution concept. Huang and Von Stengel [2008] described a polynomial-time algorithm for computing sample extensive-form correlated equilibria.
Their algorithm follows a structure very similar to that of Papadimitriou and Roughgarden's Ellipsoid Against Hope algorithm, and the problems pointed out by Stein et al. [2010] carry over. As a result, the algorithm can fail to find an exact EFCE. We extend our fix for Papadimitriou and Roughgarden's Ellipsoid Against Hope algorithm to Huang and Von Stengel's algorithm, allowing it to compute an exact EFCE with polynomial-sized support.

We first give a high-level description of Huang and Von Stengel's algorithm, following Huang [2011]. (We assume that readers are familiar with the standard concepts of extensive-form games, information sets, perfect recall, and behavior strategies.) The input of the problem is an n-player extensive-form game with perfect recall. Each nonterminal node of the game tree is a decision node for either one of the players or Chance. H denotes the set of information sets, C_h denotes the set of moves available at h ∈ H, and T denotes the set of terminal nodes. Due to the tree structure of the extensive form, for each node there exists a unique path from the root of the tree to that node. Let s be a pure-strategy profile; s(h) denotes its move at information set h ∈ H. Let z be a distribution over the set of pure-strategy profiles. The size of z is exponential. Huang and Von Stengel [2008] showed that z is an EFCE if it satisfies a polynomial number of linear constraints, which can be written as Az + Bv ≥ 0, where v is an auxiliary vector of polynomial size. They considered the exponential-sized primal LP

max ∑_s z_s    (7.6.1)
s.t. Az + Bv ≥ 0
z ≥ 0,

and its dual

A^T y ≤ −1    (7.6.2)
B^T y = 0
y ≥ 0,

which has a polynomial number of variables and an exponential number of constraints. The following is a key lemma:

Lemma 7.6.1 ([Huang and Von Stengel, 2008]). For all y ≥ 0 such that B^T y = 0, there exists a product distribution z such that z^T A^T y = 0.
Unlike in the simultaneous-move game case, z being a product distribution (mixed-strategy profile) does not imply that it can be concisely represented, as the number of pure strategies for each player can be exponential. Fortunately the z constructed by Lemma 7.6.1 corresponds to a behavior strategy profile, which specifies a distribution (denoted z^h) over moves for each information set h. Formally, given z^h for all h ∈ H, the resulting distribution over pure-strategy profiles is given by

∀s, z_s = ∑_{t∈T: t agrees with s} p(t) x_t,

where we say t agrees with pure-strategy profile s if all the moves by the players on the path from the root to t are given by s, p(t) is the product of probabilities of moves by Chance along the path from the root to t, and x_t = ∏_{h precedes t} z^h_{s(h)} is the product of probabilities of moves by the players along the path from the root to t. Here by "h precedes t" we mean that h is an information set on the path from the root to t. Note that perfect recall ensures that an information set h appears at most once along the path from the root to t. Such a behavior strategy profile requires only a polynomial number of values to specify. Given y, the corresponding z can be computed in polynomial time.

By the same argument as for the Ellipsoid Against Hope algorithm, Lemma 7.6.1 implies the infeasibility of (7.6.2), and can be used as a separation oracle for an ellipsoid method on (7.6.2). In order to generate the cutting plane [zA^T]y ≤ −1, the oracle needs to compute zA^T, whose inner dimensions are exponential. It turns out that zA^T can be formulated as expected utility computations, which can be carried out in polynomial time. Huang and Von Stengel's algorithm thus proceeds similarly to the Ellipsoid Against Hope algorithm to produce a feasible solution to (7.6.1), which can be scaled to be an EFCE.
By the same argument as our fix of the Ellipsoid Against Hope algorithm, in order to overcome the problems pointed out by Stein et al. [2010] it is sufficient to construct a Purified Separation Oracle that, given a y ≥ 0 such that B^T y = 0, computes a pure-strategy profile s such that (As)^T y ≥ 0. We construct such an oracle using a similar application of the method of conditional probabilities. For a behavior strategy profile z, an information set h, and a move d ∈ C_h, define z^{(h→d)} to be the behavior strategy profile that is identical to z except at information set h, where the corresponding player deterministically chooses d instead. Our Purified Separation Oracle starts with the behavior strategy profile constructed by Lemma 7.6.1, and uses the same algorithm as Algorithm 5, except that instead of going through players in Step 2a, we go through information sets sequentially, and for each information set h we iterate through the profiles z^{(h→d)} until we find a d∗ such that [z^{(h→d∗)}A^T]y ≥ 0. To show that our algorithm is correct, we use the following lemma:

Lemma 7.6.2. Given a behavior strategy profile z, for each information set h,

z = ∑_{d∈C_h} z^{(h→d)} z^h_d,

where z^h_d is the probability of choosing d at h prescribed by z.

Proof. Recall that z_s = ∑_{t∈T: t agrees with s} p(t) x_t, where x_t = ∏_{h precedes t} z^h_{s(h)}. Since the moves along the path to t are uniquely determined by t, x_t is fully specified by the behavior strategies and does not depend on s. We can write this in matrix form as z = Fx, with x ∈ ℝ^{|T|}. Let x^{(h→d)} ∈ ℝ^{|T|} be the vector induced by behavior strategy profile z^{(h→d)}. We then have z^{(h→d)} = Fx^{(h→d)}. Furthermore, we observe that for all h,

x = ∑_{d∈C_h} x^{(h→d)} z^h_d.

(It is straightforward to verify the above by first considering the terminal nodes t for which h precedes t, and then the other terminal nodes.)
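This observation can also be checked numerically on a toy tree (hypothetical information sets h1 and h2 with made-up probabilities; a sketch for illustration, not part of the formal argument):

```python
import math

# Tiny extensive-form tree (hypothetical): player 1 moves {a, b} at info set h1;
# after a, player 2 moves {c, d} at info set h2.  Terminals: t_ac, t_ad, t_b.
z = {"h1": {"a": 0.6, "b": 0.4}, "h2": {"c": 0.3, "d": 0.7}}

# Player moves on the path from the root to each terminal node.
paths = {"t_ac": [("h1", "a"), ("h2", "c")],
         "t_ad": [("h1", "a"), ("h2", "d")],
         "t_b":  [("h1", "b")]}

def x_vec(beh):
    """x_t = product of the behavior probabilities of the moves on the path to t."""
    return {t: math.prod(beh[h][m] for h, m in path) for t, path in paths.items()}

def switch(beh, h, d):
    """z^(h->d): identical to beh except that d is played deterministically at h."""
    out = {g: dict(moves) for g, moves in beh.items()}
    out[h] = {m: (1.0 if m == d else 0.0) for m in beh[h]}
    return out

x = x_vec(z)
# Check x = sum_{d in C_h} x^(h->d) * z^h_d, entrywise over terminal nodes.
h = "h2"
combo = {t: sum(x_vec(switch(z, h, d))[t] * z[h][d] for d in z[h]) for t in paths}
print(x, combo)
```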
We thus have

z = Fx = F ∑_{d∈C_h} x^{(h→d)} z^h_d = ∑_{d∈C_h} z^{(h→d)} z^h_d,

which is the required equality.

The correctness and the polynomial running time of our Purified Separation Oracle algorithm then follow by the same argument as in the proof of Lemma 7.4.2. After modifying Huang and Von Stengel's algorithm by replacing their separation oracle with our Purified Separation Oracle, the resulting algorithm computes in polynomial time an exact EFCE that is a mixture of a polynomial number of pure-strategy profiles.

Corollary 7.6.3. Given a game in extensive form, an exact EFCE with polynomial-sized support can be computed in polynomial time.

7.7 Conclusion

We have proposed a polynomial-time algorithm, a variant of Papadimitriou and Roughgarden's Ellipsoid Against Hope approach, for computing an exact CE given a game representation with polynomial type that satisfies the polynomial expectation property. A key component of our approach is a derandomization of Papadimitriou and Roughgarden's separation oracle using the method of conditional probabilities, yielding a polynomial-time separation oracle that outputs cuts corresponding to pure-strategy profiles. Our approach is thus spared from dealing with the numerical precision issues that were a major focus of previous approaches, and the algorithm is considerably simplified as a result. Furthermore, the correlated equilibria returned by our algorithm have polynomial-sized supports. We expect these properties of our algorithm to be independently interesting, beyond its usefulness in resolving the recent uncertainty about the computational complexity of identifying exact CE.
For example, we show that our techniques can be adapted to two existing algorithms that are based on the Ellipsoid Against Hope approach, Hart and Mansour's [2010] CE procedure with polynomial communication complexity and Huang and Von Stengel's [2008] polynomial-time algorithm for extensive-form correlated equilibria, yielding in both cases exact solutions with polynomial-sized supports.

Our algorithm has additional practical benefits: the resulting cutting planes are deeper cuts than those produced by the original oracle, reducing the number of iterations required to reach convergence, albeit at the cost of more work per iteration. It is also possible to return cuts corresponding to pure-strategy profiles with (e.g.) good social welfare, yielding a heuristic method for generating correlated equilibria with good social welfare. However, recall from Section 2.2.7 that finding a CE with optimal social welfare is NP-hard for many game representations [Papadimitriou and Roughgarden, 2008]. In Chapter 8 we analyze the optimal CE problem using a somewhat different approach.

Chapter 8
A General Framework for Computing Optimal Correlated Equilibria in Compact Games

8.1 Introduction

In this chapter we continue to focus on correlated equilibrium (CE). We have seen from the previous chapter and its related literature [Jiang and Leyton-Brown, 2011, Papadimitriou and Roughgarden, 2008] that finding a sample CE is tractable, even for compactly represented games. However, since in general there can be an infinite number of CE even in a generic game, finding an arbitrary one is of limited value. Instead, here we focus on the problem of computing a correlated equilibrium that optimizes some objective. In particular we consider two kinds of objectives: (1) a linear function of players' expected utilities.
For example, computing the best (or worst) social welfare corresponds to maximizing (or minimizing) the sum of players' utilities. (2) Max-min welfare: maximizing the utility of the worst-off player (more generally, maximizing the minimum of a set of linear functions of players' expected utilities). We are also interested in computing optimal coarse correlated equilibria (CCE) [Hannan, 1957]. (This chapter is based on joint work with Kevin Leyton-Brown; a shorter version was published in the Proceedings of the Seventh Workshop on Internet and Network Economics (WINE), 2011.) Recall from Section 2.2.7 that the empirical distribution of any no-external-regret learning dynamic converges to the set of CCE, while the empirical distribution of no-internal-regret learning dynamics converges to the set of CE. Thus, optimal CE / CCE provide useful bounds on the social welfare of the empirical distributions of these dynamics. Optimal CE / CCE can also be used as bounds on optimal NE, since CE and CCE are both relaxations of NE. Hence they are also useful for computing (bounds on) the price of anarchy and the price of stability of a game. The problems of computing optimal CE / CCE can be formulated as linear programs with sizes polynomial in the size of the normal form. However, as in the rest of the thesis, we are interested in the case where the input is a compactly represented game.

We are particularly interested in the relationship between the optimal CE / CCE problems and the problem of computing the optimal-social-welfare outcome (i.e., strategy profile) of the game, which is exactly the optimal-social-welfare CE problem without the incentive constraints.
This is an instance of a line of questions that has received much interest from the algorithmic game theory community: "How does adding incentive constraints to an optimization problem affect its complexity?" In the mechanism design setting, this is perhaps one of the central questions of algorithmic mechanism design [Nisan and Ronen, 2001]. Of course, a more constrained problem can in general be computationally easier than the relaxed version of the problem. Nevertheless, results from the complexity of Nash equilibria and from algorithmic mechanism design suggest that adding incentive constraints to a problem is unlikely to decrease its computational difficulty. That is, when the optimal social welfare problem is hard, we tend to expect that the optimal CE problem will be hard as well. We are interested in the other direction: when the optimal social welfare problem can be computed efficiently for a class of games, can the same structure be exploited to efficiently compute the optimal CE?

As mentioned in Section 2.2.7, Papadimitriou and Roughgarden [2008] considered the optimal linear-objective CE problem and proved that the problem is NP-hard for many representations, while tractable for a couple of representations. We now take a more in-depth look at this paper. In particular, the representations shown to be NP-hard include graphical games, polymatrix games, and congestion games. These hardness results, although nontrivial, are not surprising: the optimal social welfare problem is already NP-hard for these representations. On the tractability side, Papadimitriou and Roughgarden [2008] focused on so-called "reduced form" representations, meaning representations for which there exist player-specific partitions of the strategy profile space into payoff-equivalent outcomes.
They showed that if a particular separation problem is polynomial-time solvable, then the optimal CE problem is polynomial-time solvable as well. Finally, they showed that this separation problem is polynomial-time solvable for bounded-treewidth graphical games, symmetric games, and anonymous games.

Perhaps most surprising and interesting is the form of Papadimitriou and Roughgarden's sufficient condition for tractability: their separation problem for an instance of a reduced-form-based representation is essentially equivalent to solving the optimal social welfare problem for an instance of that representation with the same reduced form but possibly different payoffs. In other words, if we have a polynomial-time algorithm for the optimal social welfare problem for a reduced-form-based representation, we can turn that into a polynomial-time algorithm for the optimal-social-welfare CE problem. However, Papadimitriou and Roughgarden's sufficient condition for tractability only applies to reduced-form-based representations. Their definition of reduced forms is unable to handle representations that exploit linearity of utility, and in which the structure of player p's utility function may depend on the action she chose. As a result, many representations do not fall into this characterization, such as polymatrix games, congestion games, and action-graph games. Although the optimal CE problems for these representations are NP-hard in general, we are interested in identifying tractable subclasses of games, and a sufficient condition that applies to all representations would be helpful.

In this chapter, we propose a different algorithmic approach for the optimal CE problem that applies to all compact representations.
By applying the ellipsoid method to the dual of the LP for optimal CE, we show that the polynomial-time solvability of what we call the deviation-adjusted social welfare problem is a sufficient condition for the tractability of the optimal CE problem. We also give a sufficient condition for tractability of the optimal CCE problem: the polynomial-time solvability of the coarse deviation-adjusted social welfare problem, which we show reduces to the deviation-adjusted social welfare problem. Our algorithms are instances of the black-box approach, with the required subroutines being the computation of the deviation-adjusted social welfare problem and of the coarse deviation-adjusted social welfare problem, respectively. We show that for reduced-form-based representations, the deviation-adjusted social welfare problem can be reduced to the separation problem of Papadimitriou and Roughgarden [2008]. Thus the class of reduced forms for which our problem is polynomial-time solvable contains the class for which the separation problem is polynomial-time solvable. More generally, we show that if a representation can be characterized by "linear reduced forms", i.e., player-specific linear functions over partitions, then for that representation the deviation-adjusted social welfare problem can be reduced to the optimal social welfare problem. As an example, we show that for graphical polymatrix games on trees, optimal CE can be computed in polynomial time. Such games are not captured by the reduced-form framework. The key feature of these representations upon which our argument relies is that the partitions for player p (which characterize the structure of the utility function for p) do not depend on the action chosen by p.
On the other hand, representations like action-graph games and congestion games have action-specific structure, and as a result the deviation-adjusted and coarse deviation-adjusted social welfare problems for these representations are structured differently from the corresponding optimal social welfare problems. Nevertheless, we are able to give a polynomial-time algorithm for the optimal CCE problem on singleton congestion games [Ieong et al., 2005], a subclass of congestion games. We use a symmetrization argument to reduce the optimal CCE problem to the coarse deviation-adjusted social welfare problem with player-symmetric deviations, which can be solved using a dynamic-programming algorithm. This is an example where the optimal CCE problem is tractable while the complexity of the optimal CE problem is not yet known. (In a recent paper, Kamisetty et al. [2011] independently proposed an algorithm for optimal CE in graphical polymatrix games on trees. They used a different approach that is specific to graphical games and graphical polymatrix games, and it is not obvious whether their approach can be extended to other classes of games.)

8.2 Problem Formulation

We follow the notation of Chapter 7. Furthermore, let N = {1, . . . , n} be the set of players. Let w be the vector of social welfare values for the pure profiles, that is, w = ∑_{p∈N} u^p, with w_s denoting the social welfare of pure profile s. Throughout the chapter we assume that the game is given in a representation with polynomial type. Unlike in Chapter 7, here we do not assume the existence of a polynomial-time algorithm for expected utility.

8.2.1 Correlated Equilibrium

Correlated equilibrium (CE) is defined in Definition 7.2.1.
The problem of computing a maximum-social-welfare CE can be formulated as the LP

max w^T x    (P)
s.t. Ux ≥ 0
x ≥ 0
∑_{s∈S} x_s = 1.

Another objective of interest is the max-min welfare CE problem: computing a CE that maximizes the utility of the worst-off player:

max r    (8.2.1)
s.t. ∑_s x_s u^p_s ≥ r  ∀p    (8.2.2)
Ux ≥ 0
x ≥ 0
∑_{s∈S} x_s = 1.

Another solution concept of interest is coarse correlated equilibrium (CCE). Whereas CE requires that each player have no profitable deviation even if she takes into account the signal she receives from the intermediary, CCE only requires that each player have no profitable unconditional deviation.

Definition 8.2.1. A correlated distribution x is a coarse correlated equilibrium (CCE) if it satisfies the following incentive constraints: for each player p and each of her actions j ∈ S_p,

∑_{(i,s_{−p})∈S} [u^p_{is_{−p}} − u^p_{js_{−p}}] x_{is_{−p}} ≥ 0.    (8.2.3)

We write these incentive constraints in matrix form as Cx ≥ 0. Thus C is a (∑_p |S_p|) × M matrix. By definition, a CE is also a CCE. The problem of computing a maximum-social-welfare CCE can be formulated as the LP

max w^T x    (CP)
s.t. Cx ≥ 0
x ≥ 0
∑_{s∈S} x_s = 1.

8.3 The Deviation-Adjusted Social Welfare Problem

Consider the dual of (P):

min t    (D)
s.t. U^T y + w ≤ t1
y ≥ 0.

We label the (p, i, j)-th element of y ∈ ℝ^N (corresponding to row (p, i, j) of U) as y^p_{i,j}. This is an LP with a polynomial number of variables and an exponential number of constraints. Given a separation oracle, we can solve it in polynomial time using the ellipsoid method. A separation oracle needs to determine whether a given (y, t) is feasible, and if not, output a hyperplane that separates (y, t) from the feasible set.
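To make the constraint matrix U of LP (P) concrete, the sketch below builds its rows for a toy game (assuming rows are indexed by (p, i, j) with i ≠ j; all helper names are made up) and checks that the uniform distribution in matching pennies, being a Nash equilibrium and hence a CE, satisfies Ux ≥ 0:

```python
import itertools

# Matching pennies: the uniform product distribution is a Nash (hence
# correlated) equilibrium, so every row of Ux should be nonnegative
# (here, in fact, exactly zero by symmetry).
S = [0, 1]                       # heads/tails for each player
def u(p, s):
    """Player p's utility at profile s: player 0 wins on a match."""
    match_ = (s[0] == s[1])
    return (1 if match_ else -1) if p == 0 else (-1 if match_ else 1)

profiles = list(itertools.product(S, S))
x = {s: 0.25 for s in profiles}  # the uniform (product) distribution

def row_dot_x(p, i, j):
    """Row (p, i, j) of U applied to x: the expected gain from following the
    recommendation i rather than deviating to j, over profiles with s_p = i."""
    total = 0.0
    for s in profiles:
        if s[p] != i:
            continue
        s_dev = list(s); s_dev[p] = j
        total += (u(p, s) - u(p, tuple(s_dev))) * x[s]
    return total

Ux = {(p, i, j): row_dot_x(p, i, j)
      for p in (0, 1) for i in S for j in S if i != j}
print(Ux)
```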
We focus on a restricted form of separation oracle, which outputs a violated constraint for infeasible points. (This is a restriction because in general there exist separating hyperplanes other than the violated constraints. For example, as we saw in Chapter 7, Papadimitriou and Roughgarden [2008]'s algorithm for computing a sample CE uses a separation oracle that outputs a convex combination of the constraints as a separating hyperplane.) Such a separation oracle needs to solve the following problem:

Problem 8.3.1. Given (y, t) with y ≥ 0, determine if there exists an s such that (Us)^T y + w_s > t; if so, output such an s.

The left-hand-side expression (Us)^T y + w_s is the social welfare at s plus the term (Us)^T y. Observe that the (p, i, j)-th entry of Us is u^p_s − u^p_{js_{−p}} if s_p = i, and is zero otherwise. Thus

(Us)^T y = ∑_p ∑_{j∈S_p} y^p_{s_p,j} (u^p_s − u^p_{js_{−p}}).

We now re-express (Us)^T y + w_s in terms of deviation-adjusted utilities and deviation-adjusted social welfare.

Definition 8.3.2. Given a game and a vector y ∈ ℝ^N such that y ≥ 0, the deviation-adjusted utility for player p under pure profile s is

û^p_s(y) = u^p_s + ∑_{j∈S_p} y^p_{s_p,j} (u^p_s − u^p_{js_{−p}}).

The deviation-adjusted social welfare is ŵ_s(y) = ∑_p û^p_s(y).

By construction, the deviation-adjusted social welfare satisfies

ŵ_s(y) = ∑_p u^p_s + ∑_p ∑_{j∈S_p} y^p_{s_p,j} (u^p_s − u^p_{js_{−p}}) = (Us)^T y + w_s.

Therefore, Problem 8.3.1 is equivalent to the following deviation-adjusted social welfare problem.

Definition 8.3.3. For a game representation, the deviation-adjusted social welfare problem is the following: given an instance of the representation and a rational vector (y, t) ∈ ℚ^{N+1} such that y ≥ 0, determine if there exists an s such that the deviation-adjusted social welfare ŵ_s(y) > t; if so, output such an s.

Proposition 8.3.4. If the deviation-adjusted social welfare problem can be solved in polynomial time for a game representation, then so can the problem of computing the maximum-social-welfare CE.

Proof. Recall that an algorithm for Problem 8.3.1 can be used as a separation oracle for (D). We can therefore apply the ellipsoid method using the given algorithm for the deviation-adjusted social welfare problem as a separation oracle. This solves (D) in polynomial time. By LP duality, the optimal objective of (D) is the social welfare of the optimal CE. The cutting planes generated during the ellipsoid method can then be used to compute such a CE with polynomial-sized support.

We observe that our approach has certain similarities to the Ellipsoid Against Hope algorithm and its variants discussed in Chapter 7: both approaches are black-box approaches based on LP duality formulations of the respective problems, and both make use of the ellipsoid method to overcome the exponential size of the LPs. On the other hand, due to the different LP formulations of the sample CE problem and the optimal CE problem, the two approaches require different separation oracles, which leads to different requirements on the subroutines provided by the representation.

Let us consider interpretations of the dual variables y and of the deviation-adjusted social welfare of a game. The dual (D) can be rewritten as min_{y≥0} max_s ŵ_s(y). By weak duality, for a given y ≥ 0 the maximum deviation-adjusted social welfare max_s ŵ_s(y) is an upper bound on the maximum social welfare over CE. So the task of the dual (D) is to find y such that the resulting maximum deviation-adjusted social welfare gives the tightest bound. At the optimum, y corresponds to the concept of "shadow prices" from optimization theory; that is, y^p_{i,j} equals the rate of change in the social welfare objective when the constraint (p, i, j) is relaxed infinitesimally.
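The identity ŵ_s(y) = (Us)^T y + w_s can be sanity-checked numerically. The sketch below uses a random toy two-player game, with dual variables indexed by (p, i, j) for i ≠ j (an assumption about the index set; all names are made up), and computes both sides for every pure profile:

```python
import itertools, random

random.seed(0)
# Random toy 2-player game (hypothetical payoffs).
S = [0, 1]
profiles = list(itertools.product(S, S))
u = {(p, s): random.uniform(-1, 1) for p in (0, 1) for s in profiles}
y = {(p, i, j): random.uniform(0, 1)
     for p in (0, 1) for i in S for j in S if i != j}   # dual variables, y >= 0

def dev(s, p, j):
    s2 = list(s); s2[p] = j
    return tuple(s2)

def w_hat(s):
    """Deviation-adjusted social welfare, per Definition 8.3.2.
    (The j = s_p term is zero and is skipped.)"""
    return sum(u[(p, s)]
               + sum(y[(p, s[p], j)] * (u[(p, s)] - u[(p, dev(s, p, j))])
                     for j in S if j != s[p])
               for p in (0, 1))

def Us_y_plus_ws(s):
    """(Us)^T y + w_s, computed directly from the rows of U: the (p, i, j)
    entry of Us is nonzero only when s_p = i."""
    Us_y = sum(y[(p, i, j)] * (u[(p, s)] - u[(p, dev(s, p, j))])
               for p in (0, 1) for i in S for j in S
               if i != j and s[p] == i)
    return Us_y + sum(u[(p, s)] for p in (0, 1))

vals = [(w_hat(s), Us_y_plus_ws(s)) for s in profiles]
print(vals)
```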
Compared to the maximum-social-welfare CE problem, the maximum deviation-adjusted social welfare problem replaces the incentive constraints with a set of additional penalties or rewards. Specifically, we can interpret y as a set of nonnegative prices, one for each incentive constraint (p, i, j) of (P). At strategy profile s, for each incentive constraint (p, i, j) we impose a penalty equal to y^p_{i,j} times the amount by which constraint (p, i, j) is violated at s. Note that the penalty can be negative, and is zero if s_p ≠ i. Then ŵ_s(y) is equal to the social welfare of the modified game. (An equivalent perspective is to view y as Lagrange multipliers, and the optimal deviation-adjusted social welfare problem as the Lagrangian relaxation of (P) given the multipliers y.)

Practical computation. We have seen from Chapters 2, 3 and 7 that the problem of computing the expected utility given a mixed-strategy profile has been established as an important subproblem for both the sample NASH problem and the sample CE problem, both in theory and in practice. Our results in this chapter suggest that the deviation-adjusted social welfare problem is of similar importance to the optimal CE problem. This connection is more than theoretical: our algorithmic approach can be turned into a practical method for computing optimal CE. In particular, although it makes use of the ellipsoid method, we can easily substitute a more practical method, such as simplex with column generation. In contrast, Papadimitriou and Roughgarden [2008]'s algorithmic approach for reduced forms makes two nested applications of the ellipsoid method, and is less likely to be practical. Furthermore, even for representations without a polynomial-time algorithm for the deviation-adjusted social welfare problem, a promising direction would be to formulate the deviation-adjusted social welfare problem as an integer program or constraint program and solve it using, e.g., CPLEX.
8.3.1 The Weighted Deviation-Adjusted Social Welfare Problem

For the max-min welfare CE problem, we can form the dual of (8.2.1):

min t    (8.3.1)
s.t. U^T y + ∑_p v_p u^p ≤ t1    (8.3.2)
y ≥ 0, v ≥ 0
∑_p v_p = 1.

This is again an LP with a polynomial number of variables and an exponential number of constraints; specifically, block (8.3.2) is exponential. We observe that (8.3.2) is similar to the corresponding block in (D), except that the weighted sum ∑_p v_p u^p replaces the social welfare w. Thus, in order to express the left-hand side of (8.3.2) we need notions slightly different from those given in Definition 8.3.2, which we call weighted deviation-adjusted utility and weighted deviation-adjusted social welfare.

Definition 8.3.5. Given a game, a vector y ∈ ℝ^N such that y ≥ 0, and a vector v ∈ ℝ^n such that v ≥ 0 and ∑_p v_p = 1, the weighted deviation-adjusted utility for player p under pure profile s is

û^p_s(y, v) = v_p u^p_s + ∑_{j∈S_p} y^p_{s_p,j} (u^p_s − u^p_{js_{−p}}).

The weighted deviation-adjusted social welfare is ŵ_s(y, v) = ∑_p û^p_s(y, v).

By an analysis similar to that given above, the following problem serves as a separation oracle for LP (8.3.1).

Definition 8.3.6. For a game representation, the weighted deviation-adjusted social welfare problem is the following: given an instance of the representation and a rational vector (y, v, t) ∈ ℚ^{N+n+1} such that y ≥ 0, v ≥ 0 and ∑_p v_p = 1, determine if there exists an s such that the weighted deviation-adjusted social welfare ŵ_s(y, v) > t; if so, output such an s.

Proposition 8.3.7. If the weighted deviation-adjusted social welfare problem can be solved in polynomial time for a game representation, then the problem of computing the max-min welfare CE can be solved in polynomial time for that representation.
It is straightforward to see that the deviation-adjusted social welfare problem reduces to the weighted deviation-adjusted social welfare problem. In all representations that we consider in this chapter, the weighted and unweighted versions have the same structure and thus the same complexity.

8.3.2 The Coarse Deviation-Adjusted Social Welfare Problem

For the optimal-social-welfare CCE problem, we can form the dual of (CP):

min t    (8.3.3)
s.t. C^T y + w ≤ t1
y ≥ 0.

Definition 8.3.8. We label the (p, j)-th element of y as y^p_j. Given a game and a vector y ∈ ℝ^{∑_p |S_p|} such that y ≥ 0, the coarse deviation-adjusted utility for player p under pure profile s is

ũ^p_s(y) = u^p_s + ∑_{j∈S_p} y^p_j (u^p_s − u^p_{js_{−p}}).

The coarse deviation-adjusted social welfare is w̃_s(y) = ∑_p ũ^p_s(y).

Proposition 8.3.9. If the coarse deviation-adjusted social welfare problem can be solved in polynomial time for a game representation, then the problem of computing the maximum-social-welfare CCE can be solved in polynomial time for that representation.

The coarse deviation-adjusted social welfare problem reduces to the deviation-adjusted social welfare problem. To see this, given an input vector y for the coarse deviation-adjusted social welfare problem, we can construct an input vector y′ ∈ ℚ^N for the deviation-adjusted social welfare problem with y′^p_{i,j} = y^p_j for all p ∈ N and i, j ∈ S_p.

8.4 The Deviation-Adjusted Social Welfare Problem for Specific Representations

In this section we study the deviation-adjusted social welfare problem and its variants on specific representations. Depending on the representation, the deviation-adjusted social welfare problem is not always solvable in polynomial time. Indeed, Papadimitriou and Roughgarden [2008] showed that for many representations the problem of optimal CE is NP-hard. Nevertheless, for such representations we can often identify tractable subclasses of games.
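The reduction from the coarse to the non-coarse problem can be checked numerically. The sketch below (toy game, hypothetical y; all names are made up) expands a coarse vector y, indexed by (p, j), into y′, indexed by (p, i, j), and verifies that the coarse deviation-adjusted utilities under y coincide with the deviation-adjusted utilities under y′:

```python
import itertools, random

random.seed(1)
S = [0, 1, 2]
profiles = list(itertools.product(S, S))
u = {(p, s): random.uniform(-1, 1) for p in (0, 1) for s in profiles}

# Coarse dual vector y, indexed (p, j); expand it to y' with
# y'_{p,i,j} = y_{p,j} for every i, as in the reduction above.
y_coarse = {(p, j): random.uniform(0, 1) for p in (0, 1) for j in S}
y_full = {(p, i, j): y_coarse[(p, j)]
          for p in (0, 1) for i in S for j in S}

def dev(s, p, j):
    s2 = list(s); s2[p] = j
    return tuple(s2)

def u_tilde(p, s):
    """Coarse deviation-adjusted utility (Definition 8.3.8)."""
    return u[(p, s)] + sum(y_coarse[(p, j)] * (u[(p, s)] - u[(p, dev(s, p, j))])
                           for j in S)

def u_hat(p, s):
    """Deviation-adjusted utility (Definition 8.3.2) under the expanded y'."""
    return u[(p, s)] + sum(y_full[(p, s[p], j)] * (u[(p, s)] - u[(p, dev(s, p, j))])
                           for j in S)

pairs = [(u_tilde(p, s), u_hat(p, s)) for p in (0, 1) for s in profiles]
print(pairs)
```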
We will argue that the deviation-adjusted social welfare problem is a more useful formulation for identifying tractable classes of games than the separation problem formulation of Papadimitriou and Roughgarden [2008], as the latter only applies to reduced-form-based representations.

8.4.1 Reduced Forms

Papadimitriou and Roughgarden [2008] gave the following reduced-form characterization of representations.

Definition 8.4.1 ([Papadimitriou and Roughgarden, 2008]). Consider a game G = (N, {S_p}_{p∈N}, {u^p}_{p∈N}). For p = 1, . . . , n, let P_p = {C^1_p, . . . , C^{r_p}_p} be a partition of S_{−p} into r_p classes. The set P = {P_1, . . . , P_n} of partitions is a reduced form of G if u^p_s = u^p_{s′} whenever (1) s_p = s′_p and (2) both s_{−p} and s′_{−p} belong to the same class in P_p. The size of a reduced form is the number of classes in the partitions plus the bits required to specify a payoff value for each tuple (p, k, ℓ) where 1 ≤ p ≤ n, 1 ≤ k ≤ r_p and ℓ ∈ S_p.

Intuitively, the reduced form imposes the condition that p's utility for choosing an action s_p depends only on which class in the partition P_p the profile of the others' actions belongs to. Papadimitriou and Roughgarden [2008] showed that several compact representations, such as graphical games and anonymous games, have natural reduced forms whose sizes are (roughly) equal to the sizes of the representations. We say such a compact representation has a concise reduced form. Intuitively, such a reduced form describes the structure of the game's utility functions.

Example 8.4.2. Recall from Section 2.1.1 that a graphical game [Kearns et al., 2001] is associated with a graph (N, E), such that player p's utility depends only on her action and the actions of her neighbors in the graph. The sizes of the utility functions are exponential only in the degrees of the graph.
Such a game has a natural reduced form in which the classes in P_p are identified with the pure profiles of p's neighbors, i.e., s_{−p} and s′_{−p} belong to the same class if and only if they agree on the actions of p's neighbors. The size of the reduced form is exactly the number of utility values required to specify the graphical game's utility functions.

Let S_p(k, ℓ) denote the set of pure-strategy profiles s such that s_p = ℓ and s_{−p} is in the k-th class C^k_p of P_p, and let u^p_{(k,ℓ)} denote the utility of p for that set of strategy profiles. Papadimitriou and Roughgarden [2008] defined the following separation problem for a reduced form.

Definition 8.4.3 ([Papadimitriou and Roughgarden, 2008]). Let P be a reduced form for game G. The Separation Problem for P is the following: given rational numbers γ_p(k, ℓ) for all p ∈ {1, . . . , n}, k ∈ {1, . . . , r_p}, and ℓ ∈ S_p, is there a pure-strategy profile s such that ∑_{p,k,ℓ: s∈S_p(k,ℓ)} γ_p(k, ℓ) < 0? If so, find such an s.

Since s ∈ S_p(k, ℓ) implies s_p = ℓ, the left-hand side of the above expression is equivalent to ∑_p ∑_{k: s∈S_p(k,s_p)} γ_p(k, s_p). Furthermore, since s_{−p} belongs to exactly one class in P_p, the expression is a sum of exactly n summands, one for each player.

Papadimitriou and Roughgarden [2008] proved that if the separation problem can be solved in polynomial time, then a CE that maximizes a given linear objective in the players' utilities can be computed in time polynomial in the size of the reduced form. How does Papadimitriou and Roughgarden [2008]'s sufficient condition relate to ours, provided that the game has a concise reduced form? We show that the class of reduced forms for which our weighted deviation-adjusted social welfare problem is polynomial-time solvable contains the class for which the separation problem is polynomial-time solvable.
Proposition 8.4.4. Let $\mathcal{P}$ be a reduced form for game $G$. Suppose the separation problem can be solved in polynomial time. Then the weighted deviation-adjusted social welfare problem can be solved in time polynomial in the size of the reduced form.

Proof. First we observe that if a game $G$ has a reduced form $\mathcal{P}$, then its deviation-adjusted utilities (and weighted deviation-adjusted utilities) also satisfy the partition structure specified by $\mathcal{P}$; i.e., given $y$ and $v$, the weighted deviation-adjusted utility $\hat{u}^p_s(y,v)$ depends only on the player's action $s_p$ and the class in $\mathcal{P}_p$ that $s_{-p}$ belongs to. To see why, suppose $s_{-p} \in C^k_p$. Then
$$\hat{u}^p_{\ell s_{-p}}(y,v) = v_p u^p_{\ell s_{-p}} + \sum_{j \in S_p} y^p_{\ell,j}\left(u^p_{\ell s_{-p}} - u^p_{j s_{-p}}\right) = v_p u^p_{(k,\ell)} + \sum_{j \in S_p} y^p_{\ell,j}\left(u^p_{(k,\ell)} - u^p_{(k,j)}\right),$$
which depends only on $\ell$ and $k$. This proves the following, which will be useful later.

Lemma 8.4.5. Let $\mathcal{P}$ be a reduced form for game $G$.
1. For all $y \in \mathbb{R}^N$, $v \in \mathbb{R}^n$, for all players $p$, $s_p \in S_p$, and for all $s_{-p}, s'_{-p} \in S_{-p}$, if $s_{-p}$ and $s'_{-p}$ are in the same class in $\mathcal{P}_p$ then the weighted deviation-adjusted utilities $\hat{u}^p_{s_p, s_{-p}}(y,v) = \hat{u}^p_{s_p, s'_{-p}}(y,v)$.
2. Write the weighted deviation-adjusted utility for player $p$, given her pure strategy $\ell \in S_p$ and class $C^k_p$, as $\hat{u}^p_{(k,\ell)}(y,v)$ (well defined by the above). We have
$$\hat{u}^p_{(k,\ell)}(y,v) \equiv v_p u^p_{(k,\ell)} + \sum_{j \in S_p} y^p_{\ell,j}\left(u^p_{(k,\ell)} - u^p_{(k,j)}\right).$$
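The identity in Lemma 8.4.5 can be sanity-checked numerically on a toy reduced form (our own sketch; the function `u_hat` below, and the single-player view it takes, are illustrative assumptions, not thesis code): computing the weighted deviation-adjusted utility directly from the raw utilities gives the same value for any two opponent profiles in the same class.

```python
def u_hat(u, y, v_p, own_actions, ell, s_minus):
    """Weighted deviation-adjusted utility of one fixed player (illustration):
    v_p * u(ell, s_minus) + sum_j y[(ell, j)] * (u(ell, s_minus) - u(j, s_minus)).
    """
    base = u(ell, s_minus)
    return v_p * base + sum(
        y[(ell, j)] * (base - u(j, s_minus)) for j in own_actions
    )
```

If `u(ell, s_minus)` depends on `s_minus` only through its class, then so does `u_hat`, exactly as the lemma states.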
Given an instance of the weighted deviation-adjusted social welfare problem for a game with reduced form $\mathcal{P}$, with rational vectors $y \in \mathbb{R}^N$, $v \in \mathbb{R}^n$ and $t \in \mathbb{R}$, we construct an instance of the separation problem by letting $\gamma^p(k,\ell) = t/n - \hat{u}^p_{(k,\ell)}(y,v)$, where $\hat{u}^p_{(k,\ell)}(y,v)$ is as defined in Lemma 8.4.5 and can be efficiently computed given the reduced form. Recall that the separation problem asks for a pure profile $s$ such that $\sum_{p,k,\ell:\, s\in\mathcal{S}_p(k,\ell)} \gamma^p(k,\ell) < 0$, the left-hand side of which is a sum of $n$ terms. By construction, for all $s$, $\sum_{p,k,\ell:\, s\in\mathcal{S}_p(k,\ell)} \gamma^p(k,\ell) < 0$ if and only if $\sum_p \sum_{k:\, s\in\mathcal{S}_p(k,s_p)} \left(t/n - \hat{u}^p_{(k,s_p)}(y,v)\right) < 0$, and since the left-hand side is a sum of $n$ terms, this holds if and only if $\hat{w}_s(y,v) > t$. Therefore the weighted deviation-adjusted social welfare problem instance has a solution $s$ if and only if the corresponding separation problem instance has a solution $s$, and a polynomial-time algorithm for the separation problem can be used to solve the weighted deviation-adjusted social welfare problem in polynomial time.

We now compare the weighted deviation-adjusted social welfare problem with the optimal social welfare problem for these representations. We observe from Lemma 8.4.5 that the weighted deviation-adjusted social welfare problem can be formulated as an instance of the optimal social welfare problem on another game with the same reduced form but different payoffs. Can we claim that the existence of a polynomial-time algorithm for the optimal social welfare problem for a representation implies the existence of a polynomial-time algorithm for the weighted deviation-adjusted social welfare problem (and thus the optimal CE problem)?
This is not necessarily the case, because the representation might impose certain structure on the utility functions that is not captured by the reduced forms, and the polynomial-time algorithm for the optimal social welfare problem could depend on the existence of such structure. The weighted deviation-adjusted social welfare problem might no longer exhibit such structure and thus might not be solvable using the given algorithm.

Nevertheless, if we consider a game representation that is "completely characterized" by its reduced forms, the weighted deviation-adjusted social welfare problem is equivalent to the decision version of the optimal social welfare outcome problem for that representation. To make this more precise, we say a game representation is a reduced-form-based representation if there exists a mapping from instances of the representation to reduced forms such that it maps each instance to a concise reduced form of that instance, and if we take such a reduced form and change its payoff values arbitrarily, the resulting reduced form is a concise reduced form of another instance of the representation.

Corollary 8.4.6. For a reduced-form-based representation, if there exists a polynomial-time algorithm for the optimal social welfare problem, then the optimal social welfare CE problem and the max-min welfare CE problem can be solved in polynomial time.

Of course, this can be derived using the separation problem for reduced forms, without the deviation-adjusted social welfare formulation. On the other hand, the deviation-adjusted social welfare formulation can be applied to representations without concise reduced forms. In fact, we will use it to show below that the connection between the optimal social welfare problem and the optimal CE problem applies to a wider class of representations than just reduced-form-based representations.
8.4.2 Linear Reduced Forms

One class of representations without concise reduced forms consists of those that represent utility functions as sums of other functions, such as polymatrix games and the hypergraphical games of Papadimitriou and Roughgarden [2008]. In this section we characterize these representations using linear reduced forms, showing that linear-reduced-form-based representations satisfy a property similar to Corollary 8.4.6. Roughly speaking, a linear reduced form has multiple partitions for each agent, rather than just one; an agent's overall utility is a sum over utility functions defined on each of that agent's partitions.

Definition 8.4.7. Consider a game $G = (\mathcal{N}, \{S_p\}_{p\in\mathcal{N}}, \{u^p\}_{p\in\mathcal{N}})$. For $p = 1,\ldots,n$, let $\mathcal{P}_p = \{\mathcal{P}_{p,1},\ldots,\mathcal{P}_{p,t_p}\}$, where $\mathcal{P}_{p,q} = \{C^1_{p,q},\ldots,C^{r_{pq}}_{p,q}\}$ is a partition of $S_{-p}$ into $r_{pq}$ classes. The set $\mathcal{P} = \{\mathcal{P}_1,\ldots,\mathcal{P}_n\}$ is a linear reduced form of $G$ if for each $p$ there exist $u^{p,1},\ldots,u^{p,t_p} \in \mathbb{R}^M$ such that for all $s$, $u^p_s = \sum_q u^{p,q}_s$, and for each $q \le t_p$, $u^{p,q}_s = u^{p,q}_{s'}$ whenever (1) $s_p = s'_p$ and (2) both $s_{-p}$ and $s'_{-p}$ belong to the same class in $\mathcal{P}_{p,q}$. The size of a linear reduced form is the number of classes in the partitions plus the bits required to specify a number for each tuple $(p,q,k,\ell)$ where $1 \le p \le n$, $1 \le q \le t_p$, $1 \le k \le r_{pq}$ and $\ell \in S_p$. We write $u^{p,q}_{(k,\ell)}$ for the value corresponding to tuple $(p,q,k,\ell)$, and for $k = (k_1,\ldots,k_{t_p})$ we write $u^p_{(k,\ell)} \equiv \sum_q u^{p,q}_{(k_q,\ell)}$.

Example 8.4.8 (polymatrix games). Recall from Section 2.1.1 that in a polymatrix game, each player's utility is the sum of utilities resulting from her bilateral interactions with each of the $n-1$ other players: $u^p_s = \sum_{p' \ne p} e^T_{s_p} A^{pp'} e_{s_{p'}}$, where $A^{pp'} \in \mathbb{R}^{|S_p| \times |S_{p'}|}$ and $e_{s_p} \in \mathbb{R}^{|S_p|}$ is the unit vector corresponding to $s_p$.
The utility functions of such a representation require only $\sum_{p,p'\in\mathcal{N}} |S_p| \times |S_{p'}|$ values to specify. Polymatrix games do not have a concise reduced-form encoding, but can easily be written as linear-reduced-form games. Essentially, we create one partition for every bimatrix game that an agent plays; each class in that partition corresponds to one action of the other agent who participates in that bimatrix game, and contains all the strategy profiles in which that agent plays that action. Formally, given a polymatrix game, we construct its linear reduced form with $\mathcal{P}_p = \{\mathcal{P}_{p,q}\}_{q\in\mathcal{N}\setminus\{p\}}$ and $\mathcal{P}_{p,q} = \{C^\ell_{p,q}\}_{\ell\in S_q}$ with $C^\ell_{p,q} = \{s_{-p} \mid s_q = \ell\}$.

Most of the results in Section 8.4.1 translate straightforwardly to linear reduced forms.

Lemma 8.4.9. Let $\mathcal{P}$ be a linear reduced form for game $G$. Then for all $y \in \mathbb{R}^N$, $v \in \mathbb{R}^n$, and for all players $p$, there exist $\hat{u}^{p,1}(y,v),\ldots,\hat{u}^{p,t_p}(y,v) \in \mathbb{R}^M$ such that the weighted deviation-adjusted utilities satisfy $\hat{u}^p(y,v) = \sum_q \hat{u}^{p,q}(y,v)$, and for all $q \le t_p$, $s_p \in S_p$ and $s_{-p}, s'_{-p} \in S_{-p}$, if $s_{-p}$ and $s'_{-p}$ are in the same class in $\mathcal{P}_{p,q}$, then $\hat{u}^{p,q}_{s_p,s_{-p}}(y,v) = \hat{u}^{p,q}_{s_p,s'_{-p}}(y,v)$. Write the weighted deviation-adjusted utility for player $p$, her pure strategy $\ell \in S_p$ and classes $C^{k_1}_{p,1},\ldots,C^{k_{t_p}}_{p,t_p}$ as $\hat{u}^p_{(k,\ell)}(y,v)$, where $k = (k_1,\ldots,k_{t_p})$. Furthermore, we have
$$\hat{u}^p_{(k,\ell)}(y,v) \equiv v_p u^p_{(k,\ell)} + \sum_{j\in S_p} y^p_{\ell,j}\left(u^p_{(k,\ell)} - u^p_{(k,j)}\right).$$

Corollary 8.4.10. For a linear-reduced-form-based representation, if there exists a polynomial-time algorithm for the optimal social welfare problem, then the optimal social welfare CE problem and the max-min welfare CE problem can be solved in polynomial time.
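As a concrete illustration of the bilateral-sum utility structure of Example 8.4.8, the following sketch (our own code; the dictionary layout for the matrices is an assumption for illustration) evaluates a player's polymatrix utility at a pure profile.

```python
import numpy as np

def polymatrix_utility(A, s, p):
    """Utility of player p at pure profile s in a polymatrix game:
    the sum of p's bilateral payoffs e_{s_p}^T A^{pq} e_{s_q} over all q != p.
    A[(p, q)] is the |S_p| x |S_q| matrix A^{pq}; s is a tuple of actions."""
    return sum(A[(p, q)][s[p], s[q]] for q in range(len(s)) if q != p)
```

For three players with two actions each, player 0's utility at profile $(0,1,1)$ is $A^{01}_{0,1} + A^{02}_{0,1}$.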
Graphical Polymatrix Games

A polymatrix game may have graphical-game-like structure: player $p$'s utility may depend only on a subset of the other players' actions. In terms of utility functions, this corresponds to $A^{pp'} = 0$ for certain pairs of players $p, p'$. As with graphical games, we can construct the (undirected) graph $G = (\mathcal{N}, E)$ with an edge $\{p,p'\} \in E$ whenever $A^{pp'} \ne 0$ or $A^{p'p} \ne 0$. We call such a game a graphical polymatrix game. It can also be understood as a graphical game in which each player $p$'s utility is the sum of bilateral interactions with her neighbors.

A tree polymatrix game is a graphical polymatrix game whose corresponding graph is a tree. Consider the optimal CE problem on tree polymatrix games. Since such a game is also a tree graphical game, Papadimitriou and Roughgarden [2008]'s optimal CE algorithm for tree graphical games can be applied. However, this algorithm does not run in polynomial time, because the representation size of a tree polymatrix game can be exponentially smaller than that of the corresponding graphical game (which grows exponentially in the degree of the graph). However, we can give a different, polynomial-time algorithm for this problem.

Theorem 8.4.11. An optimal CE of a tree polymatrix game can be computed in polynomial time.

Proof. It is sufficient to give an algorithm for the deviation-adjusted social welfare problem. By an argument similar to that given in Example 8.4.8, tree polymatrix games have a natural linear reduced form, and it is straightforward to verify that tree polymatrix games are a linear-reduced-form-based representation. By Corollary 8.4.10 it is thus sufficient to construct an algorithm for the optimal social welfare problem.

Let $N_p$ be the set of players in the subtree rooted at $p$, and suppose $p$'s parent in the tree is $q$. Let the social welfare contribution of $N_p$ be the social welfare of the players in $N_p$ minus $e^T_{s_p} A^{pq} e_{s_q}$.
Let the social welfare contribution of the root player be the social welfare of $\mathcal{N}$. Then the social welfare contribution of $N_p$ depends solely on the pure strategy profile restricted to $N_p$.

The following dynamic programming algorithm solves the optimal social welfare problem in polynomial time. We go from the leaves to the root of the tree. Each child $q$ of $p$ passes to its parent the message $\{w_{N_q,s_q}\}_{s_q \in S_q}$, where $w_{N_q,s_q}$ is the optimal social welfare contribution of $N_q$ provided that $q$ plays $s_q$. Given the messages from all of $p$'s children $q_1,\ldots,q_k$, we can compute the message of $p$ as follows: for each $s_p \in S_p$,
$$w_{N_p,s_p} = \max_{s_{q_1},\ldots,s_{q_k}} \sum_{j=1}^{k} \left[ w_{N_{q_j},s_{q_j}} + e^T_{s_p} A^{p q_j} e_{s_{q_j}} + e^T_{s_{q_j}} A^{q_j p} e_{s_p} \right] = \sum_{j=1}^{k} \max_{s_{q_j}} \left[ w_{N_{q_j},s_{q_j}} + e^T_{s_p} A^{p q_j} e_{s_{q_j}} + e^T_{s_{q_j}} A^{q_j p} e_{s_p} \right].$$
The second equality is due to the fact that the $j$-th summand depends only on $s_{q_j}$. It is straightforward to verify that the optimal social welfare is $\max_{s_r} w_{N_r,s_r}$, where $r$ is the root player, and that the algorithm runs in polynomial time. The corresponding optimal pure strategy profile can be constructed by going from the root back to the leaves.

This algorithm can be straightforwardly extended to yield a polynomial-time algorithm for optimal CE in graphical polymatrix games with constant treewidth, for hypergraphical games [Papadimitriou and Roughgarden, 2008] on acyclic hypergraphs, and more generally for hypergraphical games whose hypergraphs have constant hypertree-width.

8.4.3 Representations with Action-Specific Structure

The above results for reduced forms and linear reduced forms crucially depend on the fact that the partitions (i.e., the structure of the utility functions) depend on $p$ but not on the action chosen by player $p$. There are representations whose utility functions have action-dependent structure, including congestion games [Rosenthal, 1973], local effect games [Leyton-Brown and Tennenholtz, 2003], and action-graph games [Jiang et al., 2011].
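As an aside before continuing, the leaves-to-root message passing in the proof of Theorem 8.4.11 can be sketched concretely (our own illustrative code, not thesis code). To simplify the bookkeeping, we fold the two directed payoffs on each parent-child edge into a single combined matrix $B^{pq} = A^{pq} + (A^{qp})^T$, so each edge is counted exactly once, in the child's message.

```python
import numpy as np

def tree_polymatrix_optimal_welfare(n, sizes, edges, A, root=0):
    """Leaves-to-root DP for the optimal social welfare problem (a sketch).

    sizes[p]  -- |S_p|
    edges     -- list of tree edges (p, q)
    A[(p, q)] -- numpy matrix A^{pq}; a profile's welfare counts both
                 A^{pq} and A^{qp} for every edge {p, q}
    Returns max over pure profiles s of sum_p u^p_s.
    """
    adj = {p: [] for p in range(n)}
    B = {}
    for p, q in edges:
        adj[p].append(q)
        adj[q].append(p)
        B[(p, q)] = A[(p, q)] + A[(q, p)].T  # both directed payoffs, once
        B[(q, p)] = B[(p, q)].T

    def message(q, parent):
        # w[s_q] = best welfare contribution of the subtree rooted at q,
        # given that q plays s_q (the edge to the parent is handled above)
        w = np.zeros(sizes[q])
        for c in adj[q]:
            if c == parent:
                continue
            wc = message(c, q)
            # for each s_q, the best choice for the child's subtree
            w += np.max(B[(q, c)] + wc[np.newaxis, :], axis=1)
        return w

    return float(np.max(message(root, None)))
```

Each child's contribution decomposes independently given the parent's action, which is exactly what makes the inner `max` per child (rather than a joint maximization) correct.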
For representations with such action-dependent structure, we can define a variant of the reduced form that has action-dependent partitions. For example:

Definition 8.4.12. Consider a game $G = (\mathcal{N}, \{S_p\}_{p\in\mathcal{N}}, \{u^p\}_{p\in\mathcal{N}})$. For $p = 1,\ldots,n$ and $\ell \in S_p$, let $\mathcal{P}_{p,\ell} = \{\mathcal{P}_{p,\ell,1},\ldots,\mathcal{P}_{p,\ell,t_{p\ell}}\}$, where $\mathcal{P}_{p,\ell,q} = \{C^1_{p,\ell,q},\ldots,C^{r_{p\ell q}}_{p,\ell,q}\}$ is a partition of $S_{-p}$ into $r_{p\ell q}$ classes. The set $\mathcal{P} = \{\mathcal{P}_{p,\ell}\}_{p\in\mathcal{N},\,\ell\in S_p}$ is an action-specific linear reduced form of $G$ if for each $p, \ell$ there exist $u^{p,\ell,1},\ldots,u^{p,\ell,t_{p\ell}} \in \mathbb{R}^M$ such that for each $p \in \mathcal{N}$, $\ell \in S_p$, and $q \le t_{p\ell}$,
1. for all $s_{-p} \in S_{-p}$, $u^p_{\ell s_{-p}} = \sum_q u^{p,\ell,q}_{\ell s_{-p}}$;
2. $u^{p,\ell,q}_{\ell s_{-p}} = u^{p,\ell,q}_{\ell s'_{-p}}$ whenever both $s_{-p}$ and $s'_{-p}$ belong to the same class in $\mathcal{P}_{p,\ell,q}$.
The size of such a reduced form is the number of classes in the partitions plus the bits required to specify a number for each tuple $(p,q,k,\ell)$ where $1 \le p \le n$, $1 \le q \le t_{p\ell}$, $1 \le k \le r_{p\ell q}$ and $\ell \in S_p$.

However, unlike with both the reduced form and the linear reduced form, the weighted deviation-adjusted utilities no longer satisfy the same partition structure as the utilities. Intuitively, the weighted deviation-adjusted utility at $s$ has contributions from the utilities of the strategy profiles in which player $p$ deviates to different actions. Whereas for linear reduced forms these deviated strategy profiles correspond to the same class as $s$ in the partition, we now have to consider a different partition for each action to which $p$ deviates. As a result, the weighted deviation-adjusted social welfare problem has a more complex form than the optimal social welfare problem.

Singleton Congestion Games

As mentioned in Chapters 2 and 4, Ieong et al.
[2005] studied a class of games called singleton congestion games and showed that an optimal PSNE can be computed in polynomial time. Such a game can be formulated as an instance of a congestion game in which each action contains a single resource, or as an instance of a symmetric AGG in which the only edges are self edges. Formally, a singleton congestion game is specified by $(\mathcal{N}, \mathcal{A}, \{f^\alpha\}_{\alpha\in\mathcal{A}})$, where $\mathcal{N} = \{1,\ldots,n\}$ is the set of players, $\mathcal{A}$ the set of actions, and for each action $\alpha \in \mathcal{A}$, $f^\alpha: [n] \to \mathbb{R}$. The game is symmetric: each player's set of actions $S_p \equiv \mathcal{A}$. Each strategy profile $s$ induces an action count $c(\alpha) = |\{p \mid s_p = \alpha\}|$ on each $\alpha$: the number of players playing action $\alpha$. The utility of a player who chose $\alpha$ is then $f^\alpha(c(\alpha))$. The representation requires $O(|\mathcal{A}|\, n)$ numbers to specify.

We now show that the optimal social welfare CCE problem can be solved in polynomial time for singleton congestion games. Before attacking the problem, we first note that the optimal social welfare problem can be solved in polynomial time by a relatively straightforward dynamic-programming algorithm, which is a simplified version of Ieong et al. [2005]'s algorithm for optimal PSNE in singleton congestion games. First observe that the social welfare of a strategy profile can be written in terms of the action counts:
$$w_s = \sum_\alpha c(\alpha) f^\alpha(c(\alpha)).$$
The optimal social welfare problem is equivalent to finding a vector of action counts that sums to $n$ and maximizes the above expression. The social welfare can be further decomposed into contributions from each action $\alpha$. The dynamic-programming algorithm starts with a single action and adds one action at a time until all actions have been added.
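The dynamic program just described can be sketched as follows (our own illustrative code; `f[a]` plays the role of $f^\alpha$, defined for counts $1,\ldots,n$):

```python
def singleton_congestion_opt_welfare(n, f):
    """DP over action counts for the optimal social welfare problem.

    f[a](c) -- per-player payoff on action a when c >= 1 players choose it.
    best[k] -- best welfare contribution when exactly k of the n players
               are assigned to the actions processed so far.
    """
    NEG = float("-inf")
    best = [0.0] + [NEG] * n  # before any action is added, only k = 0 is feasible
    for fa in f:
        new = [NEG] * (n + 1)
        for k in range(n + 1):
            if best[k] == NEG:
                continue
            for c in range(n - k + 1):  # c players assigned to this action
                contrib = c * fa(c) if c > 0 else 0.0
                if best[k] + contrib > new[k + c]:
                    new[k + c] = best[k] + contrib
        best = new
    return best[n]
```

The table has $O(n)$ entries per action and each update considers $O(n)$ counts, so the running time is $O(|\mathcal{A}|\, n^2)$.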
At each iteration, it maintains a set of tuples $\{(n', w_{n'})\}_{1 \le n' \le n}$, specifying that the best social welfare contribution from the current set of actions is $w_{n'}$ when exactly $n'$ players choose actions in the current set.

Consider the optimal social welfare CCE problem. Can we leverage the algorithm for the optimal social welfare problem to solve the coarse deviation-adjusted social welfare problem? Our task here is slightly more complicated: in general the coarse deviation-adjusted social welfare problem no longer has the same symmetric structure, due to the fact that $y$ can be asymmetric. However, when $y$ is player-symmetric (that is, $y^p_j = y^{p'}_j$ for all pairs of players $(p,p')$), we recover symmetric structure.

Lemma 8.4.13. Given a singleton congestion game and player-symmetric input $y$, the coarse deviation-adjusted social welfare problem can be solved in polynomial time.

Proof. The coarse deviation-adjusted social welfare can be written as
$$\tilde{w}_s(y) = \sum_p u^p_s \Big(1+\sum_{j\ne s_p} y^p_j\Big) - \sum_p \sum_{j\ne s_p} y^p_j\, u^p_{j s_{-p}} = \sum_{\alpha\in\mathcal{A}} \Big[c(\alpha)\, f^\alpha(c(\alpha))\Big(1+\sum_{j\ne\alpha} y_j\Big) - (n-c(\alpha))\, f^\alpha(c(\alpha)+1)\, y_\alpha\Big],$$
where we write $y_j$ for the common value of the $y^p_j$. The contribution from each action $\alpha$ depends only on $c(\alpha)$. Therefore, using a dynamic-programming algorithm similar to the one above, we can solve the coarse deviation-adjusted social welfare problem in polynomial time.

Therefore, if we can guarantee that during a run of the ellipsoid method for (8.3.3) all input queries $y$ to the separation oracle are player-symmetric, then we can apply Lemma 8.4.13 to solve the problem in polynomial time. We observe that for any symmetric game, there must exist a symmetric CE that optimizes the social welfare: given an optimal CE, we can create a mixture of permuted versions of this CE, which must itself be a CE by convexity, and must achieve the same social welfare by symmetry.
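The count-based rewriting in the proof of Lemma 8.4.13 can be sanity-checked numerically (our own code, not the thesis's; it uses the fact, implicit in the proof, that when $p$ deviates to $j \ne s_p$ the count of action $j$ increases by one). The player-by-player sum and the count-based sum agree on every profile.

```python
def coarse_dev_adj_welfare_players(n, actions, f, y, s):
    """Player-by-player form: sum_p u^p_s (1 + sum_{j != s_p} y_j)
       - sum_p sum_{j != s_p} y_j u^p_{j, s_{-p}}."""
    c = {a: sum(1 for sp in s if sp == a) for a in actions}
    total = 0.0
    for p in range(n):
        up = f[s[p]](c[s[p]])
        total += up * (1 + sum(y[j] for j in actions if j != s[p]))
        # if p deviates to j != s_p, action j's count goes up by one
        total -= sum(y[j] * f[j](c[j] + 1) for j in actions if j != s[p])
    return total

def coarse_dev_adj_welfare_counts(n, actions, f, y, s):
    """Count-based form from the proof of Lemma 8.4.13."""
    c = {a: sum(1 for sp in s if sp == a) for a in actions}
    return sum(
        c[a] * f[a](c[a]) * (1 + sum(y[j] for j in actions if j != a))
        - (n - c[a]) * f[a](c[a] + 1) * y[a]
        for a in actions
    )
```

It is the count-based form that the dynamic program over action counts can optimize.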
However, this symmetry argument in itself does not guarantee that the $y$ we obtain by the method above will be symmetric. Instead, we observe that if we solve (8.3.3) using the ellipsoid method with a player-symmetric initial ball, and use a separation oracle that returns player-symmetric cutting planes, then all query points $y$ will be player-symmetric. We are able to construct such a separation oracle using a symmetrization argument.

Theorem 8.4.14. Given a singleton congestion game, an optimal social welfare CCE can be computed in polynomial time.

Proof. As argued above, it is sufficient to construct a separation oracle for (8.3.3) that returns player-symmetric cutting planes. The cutting plane corresponding to a pure strategy profile solution $s$ of the coarse deviation-adjusted social welfare problem is not player-symmetric in general; but we can symmetrize it by constructing a mixture of permutations of $s$. Since by symmetry each permuted version of $s$ corresponds to a violated constraint, the resulting cutting plane is still correct, and it is symmetric. Enumerating all permutations over players would take exponential time, but it turns out that for our purposes it is sufficient to use a small set of permutations. Formally, let $\pi_i$ be the permutation over the set of players $\mathcal{N}$ that maps each $p$ to $p + i \bmod n$; the set $\{\pi_i\}_{0 \le i \le n-1}$ forms the cyclic group. Suppose $s$ is a solution of the coarse deviation-adjusted social welfare problem with symmetric input $y$. The corresponding cut (violated constraint) is $(C_s)^T y + w_s \le t$. Recall that the $(p,j)$-th entry of $C_s$ is $C^{p,j}_s = u^p_s - u^p_{j s_{-p}}$. For a permutation $\pi$ over $\mathcal{N}$, write $s^\pi$ for the permuted profile induced by $\pi$, i.e., $s^\pi = (s_{\pi(1)},\ldots,s_{\pi(n)})$. Then $s^\pi$ is also a solution of the coarse deviation-adjusted social welfare problem.
Form the following convex combination of $n$ of the constraints of (8.3.3):
$$\frac{1}{n} \sum_{i=0}^{n-1} \left[ (C_{s^{\pi_i}})^T y + w_{s^{\pi_i}} \right] \le t.$$
The left-hand side can be simplified to $w_s + (\bar{C}_s)^T y$, where $\bar{C}_s = \frac{1}{n}\sum_{i=0}^{n-1} C_{s^{\pi_i}}$. We claim that this cutting plane is player-symmetric, meaning $\bar{C}^{p,j}_s = \bar{C}^{p',j}_s$ for all pairs of players $p, p'$ and all $j \in \mathcal{A}$. This is because
$$\bar{C}^{p,j}_s = \frac{1}{n}\sum_{i=0}^{n-1} C^{p,j}_{s^{\pi_i}} = \frac{1}{n}\sum_{i=0}^{n-1}\left(u^p_{s^{\pi_i}} - u^p_{j s^{\pi_i}_{-p}}\right) = \frac{1}{n}\left[\sum_{\alpha\ne j} c(\alpha) f^\alpha(c(\alpha)) - (n - c(j)) f^j(c(j)+1)\right] = \bar{C}^{p',j}_s.$$
This concludes the proof.

Our approach for singleton congestion games crucially depends on the fact that the coarse deviation profile $y^p_j$ does not care which action it is deviating from. This allowed us (in the proof of Lemma 8.4.13) to decompose the coarse deviation-adjusted social welfare into terms that each depend on the action count of only one action. The same approach cannot be directly applied to solve the optimal CE problem, because then the deviation profile would give a different $y^p_{ij}$ for each action $i$ that $p$ deviates from, and the resulting expression for deviation-adjusted social welfare would involve summands that depend on the action counts of pairs of actions.

An interesting future direction is to explore whether our approach for singleton congestion games can be generalized to other classes of symmetric games, such as symmetric AGGs with bounded treewidth.

8.5 Conclusion and Open Problems

We have proposed an algorithmic approach for solving the optimal correlated equilibrium problem in succinctly represented games, substantially extending a previous approach due to Papadimitriou and Roughgarden [2008]. In particular, we showed that the optimal CE problem is tractable when the deviation-adjusted social welfare problem can be solved in polynomial time.
We generalized the reduced forms of Papadimitriou and Roughgarden [2008] to show that if a representation can be characterized by "linear reduced forms", i.e., player-specific linear functions over partitions, then for that representation the deviation-adjusted social welfare problem can be reduced to the optimal social welfare problem. Leveraging this result, we showed that the optimal CE problem is tractable in graphical polymatrix games on tree graphs. We also considered the problem of computing the optimal coarse correlated equilibrium, and derived a similar sufficient condition. We used this condition to prove that the optimal CCE problem is tractable for singleton congestion games.

Our work points the way to a variety of open problems, which we briefly summarize here.

Price of Anarchy. Our results imply that for compactly represented games with polynomial-time algorithms for the optimal social welfare problem and the weighted deviation-adjusted social welfare problem, the Price of Anarchy (POA) for correlated equilibria (i.e., the ratio of social welfare under the best outcome to that under the worst correlated equilibrium) can be computed in polynomial time; similarly for the Price of Total Anarchy (i.e., the ratio of social welfare under the best outcome to that under the worst coarse correlated equilibrium). There is an extensive literature on proving bounds on the POA for various solution concepts and various classes of games. One line of research that is particularly relevant to our work is the "smoothness bounds" method pioneered by Roughgarden [2009]. In particular, that work showed that if a certain smoothness relation can be shown to hold for a class of games, then it can be used to prove an upper bound on the POA for these games that holds for many solution concepts, including pure and mixed NE, CE and CCE.
More recently, Nadav and Roughgarden [2010] gave a primal-dual LP formulation for proving POA bounds and showed that finding the best smoothness coefficients corresponds to the dual of the LP for the POA for average coarse correlated equilibria (ACCE), a solution concept weaker than CCE. The primal-dual LP formulation of Nadav and Roughgarden [2010] and our LPs (P) and (D) are equivalent up to scaling; however, whereas Nadav and Roughgarden [2010] focused on the task of proving POA upper bounds for classes of games, here we focus on computing the optimal CE / CCE and the POA for individual games. One interesting direction is to use our algorithms together with a game instance generator to automatically find game instances with large POA, thus improving the lower bounds on POA for given classes of games.

Complexity separations. We have shown that for singleton congestion games, the optimal social welfare problem and the optimal CCE problem are tractable, while the complexity of the optimal CE problem is unknown. An open problem is to prove a separation between the complexities of these problems, for singleton congestion games or for another class. A related problem is the optimal PSNE problem, which can be thought of as the optimal CE problem plus integer constraints on $x$. We do not know the exact relationship between the optimal PSNE problem and the other problems. For example, the optimal PSNE problem is known to be tractable for singleton congestion games [Ieong et al., 2005], while we do not know how to solve the optimal CE problem; on the other hand, for tree polymatrix games we showed that the optimal CE problem is solvable in polynomial time, while the complexity of the optimal PSNE problem is unknown.

Necessary condition for tractability. Another open question is the following: is tractability of the deviation-adjusted social welfare problem a necessary condition for tractability of the optimal CE problem? We know (e.g., from Grötschel et al.
[1988]) that the separation oracle problem for the dual LP (D) is equivalent to the problem of optimizing an arbitrary linear objective over the feasible set of (D). However, this in itself is not enough to prove equivalence of the deviation-adjusted social welfare problem and the optimal CE problem. First of all, the separation oracle problem is more general: it allows cutting planes other than the constraints corresponding to pure strategy profiles. Furthermore, (D) has a particular objective, whereas optimizing an arbitrary linear objective means allowing the objective to depend on $y$ as well as $t$. If we take the dual of such an LP with (e.g.) objective $r^T y + t$ for some vector $r \in \mathbb{R}^N$, we get a generalized version of the optimal CE problem, with constraints $Ux \ge r$ instead of $Ux \ge 0$.

Relaxations and approximations. Another interesting direction worth exploring is relaxations of the incentive constraints of these problems, either as hard bounds or as soft constraints that add penalties to the objective, as well as the problem of approximating the optimal CE. For these problems we can define corresponding variants of the deviation-adjusted social welfare problem as sufficient conditions, but it remains to be seen whether one can prove concrete results, e.g., for approximating the optimal CE for specific representations for which the exact optimal CE problem is hard.

Communication complexity of uncoupled dynamics. Hart and Mansour [2010] considered a setting in which each player is informed only about her own utility function, and analyzed the communication complexity of so-called uncoupled dynamics for reaching various kinds of equilibria. They used a straightforward adaptation of Papadimitriou and Roughgarden [2008]'s algorithm for a sample CE to show that a CE can be reached using a polynomial amount of communication. We can consider the question of reaching an optimal CE by uncoupled dynamics.
Our approach can be straightforwardly adapted to this setting, reducing the problem to finding a communication protocol for the uncoupled version of the deviation-adjusted social welfare problem, in which each player knows only her own utility function.

Proposition 8.5.1. If there is a polynomial communication protocol for the uncoupled deviation-adjusted social welfare problem, then there is a polynomial communication protocol for the optimal CE problem.

At a high level, the protocol has a center running the ellipsoid method on (D), using the communication protocol for the uncoupled deviation-adjusted social welfare problem as a separation oracle. An open problem is whether there exist more "natural" types of dynamics that converge to an optimal CE. For example, there is an extensive literature on no-internal-regret learning dynamics that converge to the set of approximate CE in a polynomial number of steps. Can such dynamics be modified to yield an optimal CE?

Bibliography

S. Adlakha, R. Johari, and G. Y. Weintraub. Equilibria of dynamic games with many players: Existence, approximation, and market structure. CoRR, abs/1011.5537, 2010.

B. Adsul, J. Garg, R. Mehta, and M. A. Sohoni. Rank-1 bimatrix games: A homeomorphism and a polynomial time algorithm. In STOC: Proceedings of the Annual ACM Symposium on Theory of Computing, pages 195-204, 2011.

C. Alvarez, J. Gabarro, and M. Serna. Pure Nash equilibria in games with a large number of actions. In Mathematical Foundations of Computer Science, 2005.

R. Aumann. Subjectivity and correlation in randomized strategies. Journal of Mathematical Economics, 1(1):67-96, 1974.

R. Aumann. Correlated equilibrium as an expression of Bayesian rationality. Econometrica: Journal of the Econometric Society, pages 1-18, 1987.

D. Avis, G. Rosenberg, R. Savani, and B. von Stengel. Enumeration of Nash equilibria for two-player games. Economic Theory, 42:9-37, 2010.
E. Ben-Sasson, A. Kalai, and E. Kalai. An approach to bounded rationality. In NIPS: Proceedings of the Neural Information Processing Systems Conference, pages 145-152, 2006.

N. Bhat and K. Leyton-Brown. Computing Nash equilibria of action-graph games. In UAI: Proceedings of the Conference on Uncertainty in Artificial Intelligence, pages 35-42, 2004.

B. Blum, C. Shelton, and D. Koller. Gametracer. http://dags.stanford.edu/Games/gametracer.html, 2002.

B. Blum, C. Shelton, and D. Koller. A continuation method for Nash equilibria in structured games. JAIR: Journal of Artificial Intelligence Research, 25:457-502, 2006.

H. Bodlaender. Treewidth: Algorithmic techniques and results. In Mathematical Foundations of Computer Science, pages 19-36. Springer Berlin / Heidelberg, 1997.

H. L. Bodlaender. A linear-time algorithm for finding tree-decompositions of small treewidth. 25(6):1305-1317, 1996.

H. L. Bodlaender. Treewidth: Structure and algorithms. In Proceedings of the 14th International Conference on Structural Information and Communication Complexity, SIROCCO'07, pages 11-25, Berlin, Heidelberg, 2007. Springer-Verlag.

C. Boutilier, N. Friedman, M. Goldszmidt, and D. Koller. Context-specific independence in Bayesian networks. In UAI, pages 115-123, 1996.

F. Brandt, F. Fischer, and M. Holzer. Symmetries and the complexity of pure Nash equilibrium. Journal of Computer and System Sciences, 75(3):163-177, 2009.

G. Brown. Iterative solutions of games by fictitious play. In T. Koopmans, editor, Activity Analysis of Production and Allocation. Wiley, New York, 1951.

X. Chen and X. Deng. Settling the complexity of 2-player Nash-equilibrium.
In FOCS: Proceedings of the Annual IEEE Symposium on Foundations of Computer Science, pages 261–272, 2006.

V. Conitzer and T. Sandholm. Complexity of (iterated) dominance. In EC: Proceedings of the ACM Conference on Electronic Commerce, 2005.

V. Conitzer and T. Sandholm. Computing the optimal strategy to commit to. In EC: Proceedings of the ACM Conference on Electronic Commerce, 2006.

V. Conitzer and T. Sandholm. New complexity results about Nash equilibria. Games and Economic Behavior, 63(2):621–641, 2008. ISSN 0899-8256. Second World Congress of the Game Theory Society.

G. Dantzig and M. Thapa. Linear Programming 2: Theory and Extensions. Springer, 2003.

A. Darwiche. Constant-space reasoning in dynamic Bayesian networks. International Journal of Approximate Reasoning, 26(3):161–178, 2001.

C. Daskalakis and C. Papadimitriou. The complexity of games on highly regular graphs. In Proceedings of the 13th Annual European Symposium on Algorithms, 2005.

C. Daskalakis and C. Papadimitriou. Computing pure Nash equilibria via Markov random fields. In EC: Proceedings of the ACM Conference on Electronic Commerce, pages 91–99, 2006.

C. Daskalakis and C. Papadimitriou. Computing equilibria in anonymous games. In FOCS: Proceedings of the Annual IEEE Symposium on Foundations of Computer Science, pages 83–93, 2007.

C. Daskalakis and C. Papadimitriou. Discretized multinomial distributions and Nash equilibria in anonymous games. In FOCS: Proceedings of the Annual IEEE Symposium on Foundations of Computer Science, 2008.

C. Daskalakis and C. Papadimitriou. On oblivious PTAS's for Nash equilibrium. In STOC: Proceedings of the Annual ACM Symposium on Theory of Computing, pages 75–84. ACM New York, NY, USA, 2009.

C. Daskalakis, A. Fabrikant, and C. Papadimitriou. The game world is flat: The complexity of Nash equilibria in succinct games. In ICALP: Proceedings of the International Colloquium on Automata, Languages and Programming, pages 513–524, 2006a.

C.
Daskalakis, P. W. Goldberg, and C. H. Papadimitriou. The complexity of computing a Nash equilibrium. In STOC: Proceedings of the Annual ACM Symposium on Theory of Computing, pages 71–78, 2006b.

C. Daskalakis, G. Schoenebeck, G. Valiant, and P. Valiant. On the complexity of Nash equilibria of Action-Graph Games. In SODA: Proceedings of the ACM-SIAM Symposium on Discrete Algorithms, pages 710–719, 2009.

T. Dean and K. Kanazawa. A model for reasoning about persistence and causation. Computational Intelligence, 5:142–150, 1989.

E. Elkind, L. Goldberg, and P. Goldberg. Nash equilibria in graphical games on trees revisited. EC: Proceedings of the ACM Conference on Electronic Commerce, pages 100–109, 2006.

P. Erdős and J. L. Selfridge. On a combinatorial game. Journal of Combinatorial Theory, Series A, 14(3):298–301, 1973. ISSN 0097-3165.

K. Etessami and M. Yannakakis. On the Complexity of Nash Equilibria and Other Fixed Points (Extended Abstract). In FOCS: Proceedings of the Annual IEEE Symposium on Foundations of Computer Science, pages 113–123, 2007.

A. Fabrikant, C. Papadimitriou, and K. Talwar. The complexity of pure Nash equilibria. In STOC: Proceedings of the Annual ACM Symposium on Theory of Computing, pages 604–612. ACM New York, NY, USA, 2004.

E. Fredkin. Trie memory. Communications of the ACM, 3:490–499, 1962.

D. Fudenberg and J. Tirole. Game Theory. MIT Press, 1991.

D. Gale, H. Kuhn, and A. Tucker. On symmetric games. Contributions to the Theory of Games, pages 81–87, 1950.

F. Germano and G. Lugosi. Existence of sparsely supported correlated equilibria. Economic Theory, 32(3):575–578, 2007.

M. Goemans, V. Mirrokni, and A. Vetta. Sink equilibria and convergence. In FOCS: Proceedings of the Annual IEEE Symposium on Foundations of Computer Science, pages 142–154, Washington, DC, USA, 2005. IEEE Computer Society. ISBN 0-7695-2468-0. URL http://dx.doi.org/10.1109/SFCS.2005.68.

P. W.
Goldberg and C. H. Papadimitriou. Reducibility among equilibrium problems. In STOC: Proceedings of the Annual ACM Symposium on Theory of Computing, pages 61–70, 2006.

G. Gottlob, G. Greco, and F. Scarcello. Pure Nash equilibria: Hard and easy games. Journal of Artificial Intelligence Research, 24:357–406, 2005.

G. Gottlob, G. Greco, and T. Mancini. Complexity of pure equilibria in Bayesian games. In IJCAI: Proceedings of the International Joint Conference on Artificial Intelligence, pages 1294–1299, 2007.

S. Govindan and R. Wilson. Structure theorems for game trees. Proceedings of the National Academy of Sciences, 99(13):9077–9080, 2002.

S. Govindan and R. Wilson. A global Newton method to compute Nash equilibria. Journal of Economic Theory, 110:65–86, 2003.

S. Govindan and R. Wilson. Computing Nash equilibria by iterated polymatrix approximation. Journal of Economic Dynamics and Control, 28:1229–1241, 2004.

M. Grötschel, L. Lovász, and A. Schrijver. Geometric Algorithms and Combinatorial Optimization. Springer-Verlag, New York, NY, 1988.

J. Hannan. Approximation to Bayes risk in repeated plays. In M. Dresher, A. Tucker, and P. Wolfe, editors, Contributions to the Theory of Games, volume 3, pages 97–139. Princeton University Press, 1957.

J. Harsanyi. Games with incomplete information played by "Bayesian" players, I–III. Part I: The basic model. Management Science, 14(3):159–182, 1967.

S. Hart and Y. Mansour. How long to equilibrium? The communication complexity of uncoupled equilibrium procedures. Games and Economic Behavior, 69(1):107–126, 2010. ISSN 0899-8256.

S. Hart and A. Mas-Colell. A simple adaptive procedure leading to correlated equilibrium. Econometrica, 68(5), 2000.

S. Hart and D. Schmeidler. Existence of correlated equilibria. Mathematics of Operations Research, 14(1):18–25, 1989.

D. Heckerman and J. S. Breese.
Causal independence for probability assessment and inference using Bayesian networks. IEEE Transactions on Systems, Man and Cybernetics, 26(6):826–831, 1996.

P. Herings and R. Peeters. A globally convergent algorithm to compute all Nash equilibria for n-person games. Annals of Operations Research, 137(1):349–368, 2005.

P. Herings and R. Peeters. Homotopy methods to compute equilibria in game theory. Economic Theory, pages 1–38, 2009.

H. Hotelling. Stability in competition. Economic Journal, 39:41–57, 1929.

J. Howson Jr. Equilibria of polymatrix games. Management Science, pages 312–318, 1972.

J. Howson Jr and R. Rosenthal. Bayesian equilibria of finite two-person games with incomplete information. Management Science, pages 313–315, 1974.

W. Huang. Equilibrium Computation for Extensive Games. PhD thesis, London School of Economics and Political Science, 2011.

W. Huang and B. von Stengel. Computing an extensive-form correlated equilibrium in polynomial time. In WINE: Proceedings of the Workshop on Internet and Network Economics, pages 506–513, 2008.

S. Ieong, R. McGrew, E. Nudelman, Y. Shoham, and Q. Sun. Fast and compact: A simple class of congestion games. In AAAI: Proceedings of the AAAI Conference on Artificial Intelligence, pages 489–494, 2005.

K. Iyer, R. Johari, and M. Sundararajan. Mean field equilibria of dynamic auctions with learning. In EC: Proceedings of the ACM Conference on Electronic Commerce, pages 339–340, New York, NY, USA, 2011. ACM. ISBN 978-1-4503-0261-6. URL http://doi.acm.org/10.1145/1993574.1993631.

A. Jiang and K. Leyton-Brown. Polynomial computation of exact correlated equilibrium in compact games. In EC: Proceedings of the ACM Conference on Electronic Commerce, 2011. http://arxiv.org/abs/1011.0253.

A. X. Jiang. Computational problems in multiagent systems. Master's thesis, University of British Columbia, 2006.

A. X. Jiang and K. Leyton-Brown.
A polynomial-time algorithm for Action-Graph Games. In AAAI: Proceedings of the AAAI Conference on Artificial Intelligence, pages 679–684, 2006.

A. X. Jiang and K. Leyton-Brown. Computing pure Nash equilibria in symmetric Action-Graph Games. In AAAI: Proceedings of the AAAI Conference on Artificial Intelligence, pages 79–85, 2007a.

A. X. Jiang and K. Leyton-Brown. A tutorial on the proof of the existence of Nash equilibria. Technical Report TR-2007-25, University of British Columbia, Department of Computer Science, November 2007b.

A. X. Jiang and K. Leyton-Brown. Bayesian action-graph games. In NIPS: Proceedings of the Neural Information Processing Systems Conference, 2010.

A. X. Jiang and M. Safari. Pure Nash equilibria: Complete characterization of hard and easy graphical games. In AAMAS: Proceedings of the International Joint Conference on Autonomous Agents and Multiagent Systems, 2010.

A. X. Jiang, A. Pfeffer, and K. Leyton-Brown. Temporal Action-Graph Games: A new representation for dynamic games. In UAI: Proceedings of the Conference on Uncertainty in Artificial Intelligence, 2009.

A. X. Jiang, K. Leyton-Brown, and N. Bhat. Action-graph games. Games and Economic Behavior, 71(1):141–173, January 2011.

S. Kakade, M. Kearns, J. Langford, and L. Ortiz. Correlated equilibria in graphical games. In EC: Proceedings of the ACM Conference on Electronic Commerce, pages 42–47, New York, NY, USA, 2003. ACM. ISBN 1-58113-679-X. URL http://doi.acm.org/10.1145/779928.779934.

E. Kalai. Large robust games. Econometrica, 72(6):1631–1665, 2004.

E. Kalai. Partially-specified large games. In WINE: Proceedings of the Workshop on Internet and Network Economics, pages 3–13, 2005.

H. Kamisetty, E. P. Xing, and C. J. Langmead. Approximating correlated equilibria using relaxations on the marginal polytope. In ICML, 2011.

R. Kannan and T. Theobald. Games of fixed rank: A hierarchy of bimatrix games. Economic Theory, pages 1–17, 2009.

M.
Kearns, M. Littman, and S. Singh. Graphical models for game theory. In UAI: Proceedings of the Conference on Uncertainty in Artificial Intelligence, pages 253–260, 2001.

P. Klingsberg. A Gray code for compositions. Journal of Algorithms, 3:41–44, 1982.

T. Kloks. Treewidth: Computations and Approximations. Springer-Verlag, Berlin, 1994.

D. Koller and B. Milch. Multi-agent influence diagrams for representing and solving games. In IJCAI: Proceedings of the International Joint Conference on Artificial Intelligence, 2001.

D. Koller and B. Milch. Multi-agent influence diagrams for representing and solving games. Games and Economic Behavior, 45(1):181–221, 2003.

D. Koller, N. Megiddo, and B. von Stengel. Efficient computation of equilibria for extensive two-person games. Games and Economic Behavior, 14(2):247–259, 1996.

H. Kuhn. Extensive games and the problem of information. In H. Kuhn and A. Tucker, editors, Contributions to the Theory of Games, volume II, pages 193–216, 1953.

C. Lemke and J. Howson. Equilibrium points of bimatrix games. Journal of the Society for Industrial and Applied Mathematics, 12:413–423, 1964.

C. E. Lemke. Bimatrix equilibrium points and mathematical programming. Management Science, 11(7):681–689, May 1965.

K. Leyton-Brown and M. Tennenholtz. Local-effect games. In IJCAI: Proceedings of the International Joint Conference on Artificial Intelligence, pages 772–780, 2003.

R. Lipton, E. Markakis, and A. Mehta. Playing large games using simple strategies. In EC: Proceedings of the ACM Conference on Electronic Commerce, pages 36–41. ACM New York, NY, USA, 2003.

M. Benisch, G. B. Davis, and T. Sandholm. Algorithms for closed under rational behavior (CURB) sets. Journal of Artificial Intelligence Research, 38:513–534, 2010.

O. Mangasarian. Equilibrium points in bimatrix games. Journal of the Society for Industrial and Applied Mathematics, 12(4):778–780, 1964.

R. McKelvey and A.
McLennan. Computation of equilibria in finite games. Handbook of Computational Economics, 1:87–142, 1996.

R. D. McKelvey, A. M. McLennan, and T. L. Turocy. Gambit: Software tools for game theory, 2006. http://econweb.tamu.edu/gambit.

B. Milch and D. Koller. Ignorable information in multi-agent scenarios. Technical Report MIT-CSAIL-TR-2008-029, MIT, 2008.

I. Milchtaich. Congestion games with player-specific payoff functions. Games and Economic Behavior, 13:111–124, 1996.

D. Monderer. Multipotential games. In IJCAI: Proceedings of the International Joint Conference on Artificial Intelligence, pages 1422–1427, 2007.

D. Monderer and L. Shapley. Potential games. Games and Economic Behavior, 14:124–143, 1996.

K. Murphy. Dynamic Bayesian Networks: Representation, Inference and Learning. PhD thesis, UC Berkeley, Computer Science Division, 2002.

K. Murphy. Bayes Net Toolbox for Matlab. http://bnt.sourceforge.net, 2007.

R. Myerson. Dual reduction and elementary games. Games and Economic Behavior, 21(1-2):183–202, 1997.

U. Nadav and T. Roughgarden. The limits of smoothness: A primal-dual framework for Price of Anarchy bounds. In WINE: Proceedings of the Workshop on Internet and Network Economics, 2010.

J. F. Nash. Non-cooperative games. The Annals of Mathematics, 54(2):286–295, 1951.

R. Nau and K. McCardle. Coherent behavior in noncooperative games. Journal of Economic Theory, 50(2):424–444, 1990.

D. Nilsson and S. Lauritzen. Evaluating influence diagrams using LIMIDs. In UAI, pages 436–445, 2000.

N. Nisan and A. Ronen. Algorithmic mechanism design. Games and Economic Behavior, 35:166–196, 2001.

N. Nisan, T. Roughgarden, E. Tardos, and V. Vazirani, editors. Algorithmic Game Theory. Cambridge University Press, Cambridge, UK, 2007.

E. Nudelman, J. Wortman, Y. Shoham, and K. Leyton-Brown. Run the GAMUT: A comprehensive approach to evaluating game-theoretic algorithms.
In AAMAS: Proceedings of the International Joint Conference on Autonomous Agents and Multiagent Systems, pages 880–887, 2004.

F. A. Oliehoek, M. T. J. Spaan, J. Dibangoye, and C. Amato. Heuristic search for identical payoff Bayesian games. In AAMAS: Proceedings of the International Joint Conference on Autonomous Agents and Multiagent Systems, pages 1115–1122, May 2010.

L. Ortiz and M. Kearns. Nash propagation for loopy graphical games. In NIPS: Proceedings of the Neural Information Processing Systems Conference, pages 817–824, 2003.

C. Papadimitriou. Computing correlated equilibria in multiplayer games. In STOC: Proceedings of the Annual ACM Symposium on Theory of Computing, pages 49–56, 2005.

C. Papadimitriou and T. Roughgarden. Computing correlated equilibria in multi-player games. Journal of the ACM, 55(3):14, July 2008.

C. Papadimitriou and T. Roughgarden. Comment on "Computing correlated equilibria in multi-player games", 2010. http://theory.stanford.edu/~tim/papers/comment.pdf, accessed Jan. 10, 2011.

C. H. Papadimitriou. On the complexity of the parity argument and other inefficient proofs of existence. Journal of Computer and System Sciences, 48(3):498–532, 1994. ISSN 0022-0000.

C. H. Papadimitriou and T. Roughgarden. Computing equilibria in multi-player games. In SODA: Proceedings of the ACM-SIAM Symposium on Discrete Algorithms, pages 82–91, 2005.

C. Papadimitriou. The complexity of finding Nash equilibria. In N. Nisan, T. Roughgarden, E. Tardos, and V. Vazirani, editors, Algorithmic Game Theory. Cambridge University Press, Cambridge, UK, 2007.

P. Paruchuri, J. P. Pearce, J. Marecki, M. Tambe, F. Ordonez, and S. Kraus. Playing games with security: An efficient exact algorithm for Bayesian Stackelberg games. In AAMAS: Proceedings of the International Joint Conference on Autonomous Agents and Multiagent Systems, 2008.

J. Pearl. Probabilistic Reasoning in Intelligent Systems.
Morgan Kaufmann, San Francisco, CA, 1988.

A. Pfeffer. Probabilistic reasoning for complex systems. PhD thesis, Computer Science Department, Stanford University, 2000.

D. Poole and N. Zhang. Exploiting contextual independence in probabilistic inference. Journal of Artificial Intelligence Research, 18:263–313, 2003.

R. Porter, E. Nudelman, and Y. Shoham. Simple search methods for finding a Nash equilibrium. Games and Economic Behavior, 63(2):642–662, 2008.

Z. Rabinovich, E. Gerding, M. Polukarov, and N. R. Jennings. Generalised fictitious play for a continuum of anonymous players. In IJCAI: Proceedings of the International Joint Conference on Artificial Intelligence, pages 245–250, 2009.

P. Raghavan. Probabilistic construction of deterministic algorithms: Approximating packing integer programs. Journal of Computer and System Sciences, 37(2):130–143, 1988. ISSN 0022-0000.

D. M. Reeves and M. P. Wellman. Computing best-response strategies in infinite games of incomplete information. In UAI, pages 470–478, 2004. ISBN 0-9749039-0-6.

N. Robertson and P. Seymour. Algorithmic aspects of tree-width. Journal of Algorithms, 7:309–322, 1986.

R. Rosenthal. A class of games possessing pure-strategy Nash equilibria. International Journal of Game Theory, 2:65–67, 1973.

T. Roughgarden. Intrinsic robustness of the Price of Anarchy. In STOC: Proceedings of the Annual ACM Symposium on Theory of Computing, 2009.

S. Russell and P. Norvig. Artificial Intelligence: A Modern Approach, 2nd edition. Prentice Hall, Englewood Cliffs, NJ, 2003.

C. T. Ryan, A. X. Jiang, and K. Leyton-Brown. Computing pure strategy Nash equilibria in compact symmetric games. In EC: Proceedings of the ACM Conference on Electronic Commerce, pages 63–72, 2010.

T. Sandholm, A. Gilpin, and V. Conitzer. Mixed-integer programming methods for finding Nash equilibria. In AAAI: Proceedings of the AAAI Conference on Artificial Intelligence, pages 495–501, 2005.

R. Savani and B.
von Stengel. Exponentially many steps for finding a Nash equilibrium in a bimatrix game. In FOCS: Proceedings of the Annual IEEE Symposium on Foundations of Computer Science, pages 258–267, 2004.

H. Scarf. The approximation of fixed points of a continuous mapping. SIAM Journal on Applied Mathematics, 15:1328–1343, 1967.

G. Schoenebeck and S. Vadhan. The computational complexity of Nash equilibria in concisely represented games. In EC: Proceedings of the ACM Conference on Electronic Commerce, pages 270–279, 2006.

Y. Shoham and K. Leyton-Brown. Multiagent Systems: Algorithmic, Game-Theoretic, and Logical Foundations. Cambridge University Press, New York, 2009.

S. Singh, V. Soni, and M. Wellman. Computing approximate Bayes-Nash equilibria in tree-games of incomplete information. In EC: Proceedings of the ACM Conference on Electronic Commerce, pages 81–90. ACM, 2004.

J. Spencer. Ten Lectures on the Probabilistic Method. CBMS-NSF Regional Conference Series in Applied Mathematics. Society for Industrial and Applied Mathematics, 1994. ISBN 9780898713251.

N. D. Stein, P. A. Parrilo, and A. Ozdaglar. Exchangeable equilibria contradict exactness of the Papadimitriou-Roughgarden algorithm, October 2010. http://arxiv.org/abs/1010.2871v1.

D. Thompson, S. Leong, and K. Leyton-Brown. Computing Nash equilibria of action-graph games via support enumeration. Working paper, 2011.

D. R. Thompson and K. Leyton-Brown. Computational analysis of perfect-information position auctions. In EC: Proceedings of the ACM Conference on Electronic Commerce, 2009.

G. van der Laan, A. Talman, and L. van der Heyden. Simplicial variable dimension algorithms for solving the nonlinear complementarity problem on a product of unit simplices using a general labelling. Mathematics of Operations Research, 12(3):377–397, 1987.

D. Vickrey and D. Koller. Multi-agent algorithms for solving graphical games.
In AAAI: Proceedings of the AAAI Conference on Artificial Intelligence, pages 345–351, 2002.

J. von Neumann and O. Morgenstern. Theory of Games and Economic Behavior. Princeton University Press, 1944.

B. von Stengel. Computing equilibria for two-person games. In Handbook of Game Theory with Economic Applications, volume 3, pages 1723–1759. Elsevier, 2002.

B. von Stengel and F. Forges. Extensive-form correlated equilibrium: Definition and computational complexity. Mathematics of Operations Research, 33(4):1002–1022, 2008. URL http://mor.journal.informs.org/cgi/content/abstract/33/4/1002.

K. Waugh, M. Zinkevich, M. Johanson, M. Kan, D. Schnizlein, and M. Bowling. A practical use of imperfect recall. In AAAI: Proceedings of the AAAI Conference on Artificial Intelligence, 2009.

E. Yanovskaya. Equilibrium points in polymatrix games (in Russian). Litovskii Matematicheskii Sbornik, 8:381–384, 1968.

N. Zhang and D. Poole. Exploiting causal independence in Bayesian network inference. JAIR: Journal of Artificial Intelligence Research, 5:301–328, 1996.

Appendix A

Software

In this chapter I describe software packages implemented as part of my thesis research. Overall they can be characterized as tools for computational analysis of games using the AGG and BAGG representations. The source code for these packages is available for download at the AGG Project website (http://agg.cs.ubc.ca).

In Section A.1 I introduce the file formats used by all of these packages for describing AGG and BAGG game instances. Section A.2 describes command-line programs for finding sample (Bayes) Nash equilibria in AGGs and BAGGs. Section A.3 describes a graphical user interface for creating, editing and visualizing AGGs, and Section A.4 describes extensions of GAMUT that generate AGG instances. Finally, in Section A.5 I discuss software projects that are currently under development.
A.1 File Formats

These software packages can read and write a description of a game as a text file. There are two formats, one for AGGs and one for BAGGs. All packages work with the AGG format; additionally, the solvers in Section A.2 also work with the BAGG format.

A.1.1 The AGG File Format

Each representation of an AGG consists of 8 sections, separated by whitespace. Lines starting with "#" are treated as comments and are allowed between sections.

1. The number of players, n.

2. The number of action nodes, |A|.

3. The number of function nodes, |P|.

4. The size of each player's action set. This is a row of n integers: |A1|, |A2|, ..., |An|.

5. Each player's action set. We have n rows; row i has |Ai| integers in ascending order, which are indices of action nodes. Action nodes are indexed from 0 to |A|−1.

6. The action graph. We have |A|+|P| nodes, indexed from 0 to |A|+|P|−1. The function nodes are indexed after the action nodes. The graph is represented as |A|+|P| neighbor lists, one list per row. Rows 0 to |A|−1 are for action nodes; rows |A| to |A|+|P|−1 are for function nodes. In each row, the first number |ν| specifies the number of neighbors of the node. Then follow |ν| numbers, corresponding to the indices of the neighbors. We require that each function node has at least one neighbor, and that the neighbors of function nodes are action nodes. The action graph restricted to the function nodes has to be a directed acyclic graph (DAG).

7. Signatures of functions. This is |P| rows, each specifying the mapping fp that maps a configuration of function node p's neighbors to an integer, p's "action count". Each function is specified by its "signature", consisting of an integer type, possibly followed by further parameters. Several types of mapping are implemented:

• Types 0 to 3 require no further input:

Type 0: Sum.
The action count of a function node p is the sum of the action counts of p's neighbors.

Type 1: Existence: a boolean for whether the sum of the counts of the neighbors is positive.

Type 2: The index of the neighbor with the highest index that has a non-zero count, or |A|+|P| if none applies.

Type 3: The index of the neighbor with the lowest index that has a non-zero count, or |A|+|P| if none applies.

• Types 10 to 13 are extended versions of types 0 to 3, each requiring further parameters: an integer default value and a list of weights, |A| integers enclosed in square brackets. Each action node is thus associated with an integer weight.

Type 10: Extended sum. Each instance of an action in p's neighborhood being chosen contributes the weight of that action to the sum. These are added to the default value.

Type 11: Extended existence: a boolean for whether the extended sum is positive. The input default value and weights are required to be nonnegative.

Type 12: The weight of the neighbor with the highest index that has a non-zero count, or the default value if none applies.

Type 13: The weight of the neighbor with the lowest index that has a non-zero count, or the default value if none applies.

The following is an example of the signatures for an AGG with three action nodes and two function nodes (the first function node has type 2; the second has type 10 with default value 0 and weights [2 3 4]):

2
10 0 [2 3 4]

8. The payoff function for each action node, giving |A| sub-blocks of numbers. The payoff function for action α is a mapping from configurations to real numbers. Configurations are represented as tuples of integers; the size of the tuple is the size of the neighborhood of α. Each configuration specifies the action counts for the neighbors of α, in the same order as the neighbor list of α. The first number of each sub-block specifies the type of the payoff function.
There are multiple ways of representing payoff functions; we (or other people) can extend the file format by defining new types of payoff functions. We define two basic types:

Type 0: The complete representation. The set of possible configurations can be derived from the action graph, and this set can be sorted in lexicographical order, so we do not need to give the configurations explicitly: we just give one row of real numbers, corresponding to the payoffs for the ordered set of configurations. If action α is in multiple players' action sets (say players i and j), then it is possible that the set of possible configurations given ai = α is different from the set of possible configurations given aj = α. In such cases, we need to specify payoffs for the union of the sets of configurations (sorted in lexicographical order).

Type 1: The mapping representation, in which we specify the configurations and the corresponding payoffs. For the payoff function of action α, first give |C(α)|, the number of elements in the mapping. Then follow |C(α)| rows. In each row, first specify the configuration, which is a tuple of integers enclosed by a pair of brackets "[" and "]", then the payoff. For example, the following specifies a payoff function of type 1, with two configurations:

1 2
[1 0] 2.5
[1 1] -1.2

A.1.2 The BAGG File Format

Each representation of a BAGG consists of the following sections, separated by whitespace. Lines starting with "#" are treated as comments and are allowed between sections.

1. The number of players, n.

2. The number of action nodes, |A|.

3. The number of function nodes, |P|.

4. A row of n integers, specifying the number of types |Θi| for each player i.

5. The type distribution for each player i. The distributions are assumed to be independent.
Each player i's type distribution is represented as a row of |Θi| real numbers, one for each type θi ∈ Θi, specifying Pr(θi), the probability of i having type θi. The following example block gives the type distributions for a BAGG with two players and two types for each player:

0.5 0.5
0.2 0.8

6. The size of the type-action set for each type of each player.

7. The type-action set for each type of each player. Each type-action set is represented as a row of integers in ascending order, which are indices of action nodes. Action nodes are indexed from 0 to |A|−1.

8. The action graph: same as Block 6 in the AGG format.

9. Types of functions: same as Block 7 in the AGG format.

10. The utility function for each action node: same as Block 8 in the AGG format.

A.2 Solvers for Finding Nash Equilibria

The AGGSolver package is a collection of solvers that compute (Bayes) Nash equilibria given a game represented in (B)AGG format. The package is written in C++, and makes use of the GameTracer package (which implements Govindan & Wilson's GNM and IPA algorithms) and GAMBIT's implementation of the simplicial subdivision algorithm. Our black-box algorithmic approach is described in Chapter 3 for AGGs and Chapter 6 for BAGGs.

The following solvers are included:

gnm_agg takes an AGG and computes one or more Nash equilibria using Govindan & Wilson's Global Newton Method (GNM).

gnm_bagg takes a BAGG and computes one or more Bayes-Nash equilibria using the GNM algorithm.

gnm_ksym_agg takes an AGG or a symmetric BAGG and computes k-symmetric Nash equilibria using a modified GNM algorithm.

gnm_tracing_agg takes an AGG/BAGG and a file containing initial mixed strategy profiles; for each initial mixed strategy profile σ, it runs a version of the GNM algorithm that simulates the linear tracing procedure starting from σ. Good approximate equilibria can be used as "warm starts".
ipa_agg / ipa_bagg takes an AGG/BAGG and computes an approximate Nash equilibrium using Govindan & Wilson's Iterated Polymatrix Approximation algorithm.

simpdiv takes an AGG/BAGG and computes one or more Nash/Bayes-Nash equilibria using the simplicial subdivision algorithm as implemented in GAMBIT.

The source code is available for download at http://agg.cs.ubc.ca. Detailed instructions on installation and usage of these solvers can be found in the README file included in the package, which is also available at http://agg.cs.ubc.ca/AGGSolver README.txt.

A.3 AGG Graphical User Interface

Together with Damien Bargiacchi, we developed the AGGUI package, a graphical user interface that allows users to create and edit AGGs, read in existing AGGs, and visualize strategy profiles (e.g., Nash equilibria) as a density map on the action graph (see, e.g., Figures 3.17 and 3.18 in Chapter 3). It is written in Java and runs on any platform that supports Java. It is available for download at http://agg.cs.ubc.ca/aggui.jar.

A.4 AGG Generators in GAMUT

GAMUT [Nudelman et al., 2004] is a suite of generators of game instances. We have extended GAMUT with generators of AGG instances in the AGG format. We have implemented generators for three classes of AGGs:

RandomSymmetricAGG generates symmetric AGGs on random action graphs with random utilities.

CoffeeShopGame generates instances of the Coffee Shop Game described in Chapter 3.

IceCreamGame generates instances of the Ice Cream Game described in Chapter 3.

The extended GAMUT package and documentation on these AGG generators are available for download at http://agg.cs.ubc.ca.

A.5 Software Projects Under Development

GAMBIT [McKelvey et al., 2006] is a collection of software tools for game-theoretic analysis that includes implementations of many of the existing algorithms for the normal form and the extensive form.
We are working with Professor Theodore Turocy, the main author and maintainer of GAMBIT, to incorporate the AGG and BAGG representations into GAMBIT.
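To make the AGG file format of Section A.1.1 concrete, here is a minimal Python sketch that reads the first five sections of an AGG file (the number of players, the action and function node counts, the action set sizes, and the action sets). This is an illustration only, not part of the AGGSolver package (which is written in C++); the helper names `tokenize` and `read_agg_header` are hypothetical.

```python
def tokenize(text):
    """Yield whitespace-separated tokens, skipping '#' comment lines.

    The format is purely whitespace-separated; for simplicity this
    sketch skips comment lines anywhere, not only between sections.
    """
    for line in text.splitlines():
        stripped = line.strip()
        if stripped.startswith("#"):
            continue
        for tok in stripped.split():
            yield tok


def read_agg_header(text):
    """Parse sections 1-5 of an AGG file as described in Section A.1.1."""
    toks = tokenize(text)
    n = int(next(toks))             # 1. number of players
    num_actions = int(next(toks))   # 2. number of action nodes |A|
    num_funcs = int(next(toks))     # 3. number of function nodes |P|
    # 4. one row of n integers: the size of each player's action set
    sizes = [int(next(toks)) for _ in range(n)]
    # 5. n rows of action-node indices, one row per player
    action_sets = [[int(next(toks)) for _ in range(s)] for s in sizes]
    return {"players": n, "actions": num_actions,
            "functions": num_funcs, "action_sets": action_sets}


# Hypothetical toy instance: 2 players, 3 action nodes, 1 function node.
example = """\
# toy AGG header
2
3
1
2 2
0 1
1 2
"""
header = read_agg_header(example)
```

The remaining sections (the action graph, function signatures, and payoff sub-blocks) can be read with the same token-at-a-time pattern, since each block's length is determined by the counts read before it.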