The Positronic Economist
A Computational System for Analyzing Economic Mechanisms

by

David R. M. Thompson

B.Sc., University of Guelph, 2004
M.Sc., University of British Columbia, 2007

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF
Doctor of Philosophy
in
THE FACULTY OF GRADUATE AND POSTDOCTORAL STUDIES
(Computer Science)

The University of British Columbia
(Vancouver)

May 2015

© David R. M. Thompson, 2015

Abstract

A mechanism is a formal protocol for collective decision making among self-interested agents. Mechanisms model many important social processes from auctions to elections. They are also widely studied in computer science: the participants in real-world mechanisms are often autonomous software systems (e.g., algorithmic bidding and trading agents), and algorithmic problems (e.g., job scheduling) give rise to mechanisms when users have competing interests.

Strategic behavior (or "gaming") poses a major obstacle to understanding mechanisms. Although real-world mechanisms are often fairly simple functions (consider, e.g., plurality voting), a mechanism's outcome depends on both this function and on agents' strategic choices. Game theory provides a principled means for analyzing such choices. Unfortunately, game-theoretic analysis is a difficult process requiring either human effort or very large amounts of computation.

My thesis is that mechanism analysis can be done computationally, due to recent advances in compact representations of games. Compact representation is possible when a game's description has a suitable independence structure. Exploiting this structure can exponentially decrease the space required to represent games, and exponentially decrease the time required to analyze them.

The contributions of my thesis revolve around showing that the relevant kinds of structure (specifically, the structure exploited by action-graph games) are present in important mechanisms of interest. Specifically, I studied two major classes of mechanisms, position auctions (as used for internet advertising) and voting rules. In both cases, I was able to provide exponential improvements in the space and time complexity of analysis, and to use those improvements to address important open questions in the literature. I also introduced a new algorithm for analyzing action-graph games, with faster empirical performance and additional benefits over the previous state of the art. Finally, I created the Positronic Economist system, which consists of a Python-based descriptive language for mechanism games, with automatic discovery of computationally useful structure.

Preface

This thesis contains four previously published sections, each of which involved collaboration with other researchers.

Chapter 3 was co-authored with Samantha Leung and Kevin Leyton-Brown and was published at the Workshop on Internet and Network Economics [100]. I designed the algorithm with some input from Samantha, who did the majority of the software implementation. I did the theoretical and experimental evaluation of the algorithm and wrote the majority of the paper. Kevin provided guidance throughout and wrote the remainder of the paper.

Chapter 4 was co-authored with Kevin Leyton-Brown and was published in the ACM Conference on Electronic Commerce [98]. Kevin and I did the high-level design of the representation and experiments. I did the low-level design, implementation, experiments and analysis. Kevin and I wrote the paper.

Chapter 5 was co-authored with Kevin Leyton-Brown and was published in the ACM Conference on Electronic Commerce [99].
I provided the initial impetus for this project, and did the implementation, experiments and analysis. Kevin provided guidance, and he and I wrote the paper.

Chapter 6 was co-authored with Omer Lev, Jeffrey Rosenschein and Kevin Leyton-Brown and was published at the International Conference on Autonomous Agents and Multiagent Systems [101]. I designed the algorithm and the experiments (with some guidance from Omer Lev). Omer and I shared in the analysis of the data. Omer wrote the majority of the paper, with substantial contributions from Jeffrey Rosenschein, Kevin Leyton-Brown and me.

Table of Contents

Abstract
Preface
Table of Contents
List of Tables
List of Figures
Glossary
Acknowledgments
Dedication
1 Introduction
2 Background
  2.1 Compact Games
  2.2 Nash-Equilibrium-Finding Algorithms
    2.2.1 The Global Newton Method and Simplicial Subdivision
  2.3 Other Computational Mechanism Analysis
3 The Support-Enumeration Method for Action-Graph Games
  3.1 Introduction
  3.2 Technical Background
  3.3 SEM for AGGs
    3.3.1 Conditional Dominance
    3.3.2 TGS Feasibility Program
    3.3.3 Asymptotic Analysis of SEM for AGGs
    3.3.4 Further Speedups for k-Symmetric Games
    3.3.5 Examples
  3.4 Experimental Evaluation
    3.4.1 Experimental Setup
    3.4.2 Results
  3.5 The Hardness of Generalizing AGG-SEM
  3.6 Conclusion
4 Application: Position Auctions for Internet Advertising
  4.1 Introduction
  4.2 Background
    4.2.1 Metrics for Evaluating Auction Outcomes
    4.2.2 Models of Bidder Valuations
  4.3 Representing Position Auctions
    4.3.1 Representing No-Externality GFPs as AGGs
    4.3.2 Representing No-Externality uGSPs and wGSPs as AGGs
    4.3.3 Representing Auctions with Externalities
  4.4 Experimental Setup
    4.4.1 Problem Instances
    4.4.2 Equilibrium Computation
    4.4.3 Benchmarks: VCG and Discretized VCG
    4.4.4 Statistical Methods
  4.5 Results
    4.5.1 Main Comparison
    4.5.2 Basic Models: EOS and V
    4.5.3 Position-Preference Models
    4.5.4 Externality Models
  4.6 Sensitivity Analysis
    4.6.1 Sensitivity to Bid Increment Size
    4.6.2 Sensitivity to Tie-Breaking Rules
    4.6.3 Sensitivity to Rounding Rules
    4.6.4 Sensitivity to the Number of Bidders
    4.6.5 Sensitivity to the Number of Slots
  4.7 Conclusions and Future Work
  4.8 Summary Tables and Statistical Comparisons
5 Application: Maximizing Internet Advertising Revenue
  5.1 Introduction
  5.2 Background
    5.2.1 Model
    5.2.2 Related Work
  5.3 First Analysis: Single-Slot Auctions with Known Quality Scores
  5.4 Second Analysis: Multiple Slots, All Pure Nash Equilibria
  5.5 Third Analysis: Multiple Slots, Equilibrium Refinement
  5.6 Conclusions and Future Work
6 Application: Strategic Voting in Elections
  6.1 Introduction
    6.1.1 Related Work
  6.2 Definitions
  6.3 Method
  6.4 Pure-Strategy Nash Equilibrium Results
    6.4.1 Selectiveness of the Truthfulness Incentive
    6.4.2 Equilibrium Outcomes
    6.4.3 Scaling Behavior and Stability
    6.4.4 Richer Mechanisms
  6.5 Bayes-Nash Equilibria Results
  6.6 Discussion and Future Work
7 The Positronic Economist
  7.1 Introduction and Motivation
    7.1.1 Related work
  7.2 Representing Games in PosEc
    7.2.1 Mechanisms and Settings
    7.2.2 The PosEc Modeling Language
    7.2.3 Special Functions in PosEc
    7.2.4 Projection
  7.3 Structure Inference in PosEc
    7.3.1 White-Box Structure Inference
    7.3.2 Black-Box Structure Inference
  7.4 Experimental Results
  7.5 Conclusion and Future Work
Bibliography
A Using the PosEc API
  A.1 Representing Settings
  A.2 Set-Like Classes for Outcome Spaces
  A.3 Representing Mechanisms
  A.4 Accessor Methods
B Documentation of the PosEc API
  B.1 Module posec.bbsi
    B.1.1 Functions
    B.1.2 Variables
  B.2 Module posec.mathtools
    B.2.1 Functions
    B.2.2 Variables
  B.3 Module posec.posec core
    B.3.1 Functions
    B.3.2 Variables
    B.3.3 Class ActionProfile
    B.3.4 Class TypeProfile
    B.3.5 Class RealSpace
    B.3.6 Class CartesianProduct
    B.3.7 Class Setting
    B.3.8 Class ProjectedSetting
    B.3.9 Class BayesianSetting
    B.3.10 Class ProjectedBayesianSetting
    B.3.11 Class Mechanism
    B.3.12 Class ProjectedMechanism
    B.3.13 Class Distribution
    B.3.14 Class UniformDistribution
  B.4 Module posec.pyagg
    B.4.1 Functions
    B.4.2 Variables
    B.4.3 Class AGG File
    B.4.4 Class BAGG File
    B.4.5 Class AGG
    B.4.6 Class BAGG
C Documentation of the Included PosEc Applications
  C.1 Package applications
    C.1.1 Modules
    C.1.2 Variables
  C.2 Module applications.basic auctions
    C.2.1 Functions
    C.2.2 Variables
    C.2.3 Class SingleGoodOutcome
    C.2.4 Class ProjectedOutcome
    C.2.5 Class FirstPriceAuction
    C.2.6 Class AllPayAuction
  C.3 Module applications.position auctions
    C.3.1 Functions
    C.3.2 Variables
    C.3.3 Class Permutations
    C.3.4 Class NoExternalitySetting
    C.3.5 Class NoExternalityPositionAuction
  C.4 Module applications.position auctions externalities
    C.4.1 Functions
    C.4.2 Variables
    C.4.3 Class HybridSetting
    C.4.4 Class ExternalityPositionAuction
  C.5 Module applications.voting
    C.5.1 Functions
    C.5.2 Variables
    C.5.3 Class AbstractVotingMechanism
    C.5.4 Class VoteTuple
    C.5.5 Class Plurality
    C.5.6 Class Approval
    C.5.7 Class kApproval
    C.5.8 Class Veto
    C.5.9 Class Borda
    C.5.10 Class InstantRunoff

List of Tables

Table 3.1 Mean runtimes of AGG-SEM and the three incumbent algorithms
Table 4.1 The distributions we considered for our position-auction experiments
Table 4.2 Comparing Efficiency (EOS-UNI distribution)
Table 4.3 Comparing Revenue (EOS-UNI distribution)
Table 4.4 Comparing Envy (EOS-UNI distribution)
Table 4.5 Comparing Efficiency (EOS-LN distribution)
Table 4.6 Comparing Revenue (EOS-LN distribution)
Table 4.7 Comparing Envy (EOS-LN distribution)
Table 4.8 Comparing Efficiency (V-UNI distribution)
Table 4.9 Comparing Revenue (V-UNI distribution)
Table 4.10 Comparing Relevance (V-UNI distribution)
Table 4.11 Comparing Envy (V-UNI distribution)
Table 4.12 Comparing Efficiency (V-LN distribution)
Table 4.13 Comparing Revenue (V-LN distribution)
Table 4.14 Comparing Relevance (V-LN distribution)
Table 4.15 Comparing Envy (V-LN distribution)
Table 4.16 Comparing Efficiency (BHN-UNI distribution)
Table 4.17 Comparing Revenue (BHN-UNI distribution)
Table 4.18 Comparing Relevance (BHN-UNI distribution)
Table 4.19 Comparing Envy (BHN-UNI distribution)
Table 4.20 Comparing Efficiency (BSS distribution)
Table 4.21 Comparing Revenue (BSS distribution)
Table 4.22 Comparing Envy (BSS distribution)
Table 4.23 Comparing Efficiency (CAS-UNI distribution)
Table 4.24 Comparing Revenue (CAS-UNI distribution)
Table 4.25 Comparing Relevance (CAS-UNI distribution)
Table 4.26 Comparing Efficiency (CAS-LN distribution)
Table 4.27 Comparing Revenue (CAS-LN distribution)
Table 4.28 Comparing Relevance (CAS-LN distribution)
Table 4.29 Comparing Efficiency (HYB-UNI distribution)
Table 4.30 Comparing Revenue (HYB-UNI distribution)
Table 4.31 Comparing Relevance (HYB-UNI distribution)
Table 4.32 Comparing Efficiency (HYB-LN distribution)
Table 4.33 Comparing Revenue (HYB-LN distribution)
Table 4.34 Comparing Relevance (HYB-LN distribution)
Table 4.35 Comparing Efficiency (GIM-UNI distribution)
Table 4.36 Comparing Revenue (GIM-UNI distribution)
Table 4.37 Comparing Relevance (GIM-UNI distribution)
Table 4.38 Comparing Efficiency (GIM-LN distribution)
Table 4.39 Comparing Revenue (GIM-LN distribution)
Table 4.40 Comparing Relevance (GIM-LN distribution)
Table 5.1 Comparing GSP variants for two bidders
Table 5.2 Comparing the various auction variants given their optimal parameter settings (best- and worst-case equilibria)
Table 5.3 Comparing the various auction variants given their optimal parameter settings (incentive-compatible equilibrium)

List of Figures

Figure 1.1 Process flow in the Positronic Economist system
Figure 1.2 An example of equilibrium analysis of voting
Figure 2.1 An example of an action graph: the ice-cream game
Figure 3.1 The Test-Given-Support (TGS) feasibility program for n-player games
Figure 3.2 Iterative Removal of Strictly Dominated Strategies
Figure 3.3 A visualization of Porter et al.'s [84] tree-search through support space
Figure 3.4 ConditionallyDominatedAGG
Figure 3.5 RecursiveCD
Figure 3.6 Visualizing the outer-loop tree search process followed by AGG-SEM when enumerating PSNEs
Figure 3.7 Visualizing a ConditionallyDominatedAGG search
Figure 3.8 Scatterplot contrasting the runtimes of AGG-SEM and NFG-SEM
Figure 3.9 An example 3SAT instance reduced to dominance in an AGG-FNA
Figure 3.10 An example INDEPENDENTSET instance reduced to dominance in a BAGG
Figure 3.11 Runtime CDFs for AGG-SEM and the three incumbent algorithms
Figure 3.12 Per-distribution runtime CDFs for all four algorithms
Figure 4.1 A weighted GFP represented as an AGG
Figure 4.2 An algorithm for converting a no-externality auction setting into an action graph representing a (weighted) GFP
Figure 4.3 A weighted GSP represented as an AGG
Figure 4.4 An algorithm for converting an auction setting into an action graph representing a wGSP
Figure 4.5 Creating the action graph for a GIM position auction
Figure 4.6 Performance of different position auction types, averaged across all 13 distributions
Figure 4.7 Empirical cumulative probability distributions over the number of equilibria in EOS and V models
Figure 4.8 Empirical box plot of wGSP's envy under different equilibrium selection criteria
Figure 4.9 Empirical box plot of wGSP's social welfare under different equilibrium selection criteria
Figure 4.10 Empirical box plot of wGSP's revenue under different equilibrium selection criteria
Figure 4.11 Empirical box plot of uGSP and wGSP's revenue under different equilibrium selection criteria
Figure 4.12 Empirical box plot of uGSP and wGSP's social welfare under different equilibrium selection criteria
Figure 4.13 Comparing the average performance of different position auction types in EOS and V settings
Figure 4.14 Comparing wGFP to wGSP
Figure 4.15 Empirical CDF of economic efficiency in BHN models
Figure 4.16 Comparing the average performance of different position auction types in BHN settings
Figure 4.17 Empirical CDF of economic efficiency in BSS models
Figure 4.18 Comparing the average performance of different position auction types in BSS settings
Figure 4.19 Empirical CDF of economic efficiency in the cascade models
Figure 4.20 Empirical CDF of revenue in the cascade models
Figure 4.21 Comparing the average performance of different position auction types in cascade settings
Figure 4.22 Comparing the average performance of wGSP and cwGSP in cascade settings
Figure 4.23 Empirical CDF of revenue in the hybrid model
Figure 4.24 Empirical CDF of economic efficiency in the hybrid model
Figure 4.25 Comparing the average performance of different position auction types in hybrid settings
Figure 4.26 Empirical CDF of economic efficiency in the GIM model
Figure 4.27 Empirical CDF of revenue in the GIM model
Figure 4.28 wGSP tended to have good worst-case efficiency when the top position produced the majority of the surplus
Figure 4.29 Comparing the average performance of different position auction types in GIM settings
Figure 4.30 Comparing different auction designs as the number of bid increments varies
Figure 4.31 Comparing different auction designs as the number of bid increments varies (continued)
Figure 4.32 How tie breaking affects auction outcomes
Figure 4.33 How price rounding affects auction outcomes
Figure 4.34 Comparing different auction designs as the number of agents varies
Figure 4.35 Comparing different auction designs as the number of agents varies (continued)
Figure 4.36 Comparing different auction designs as the number of slots varies
Figure 4.37 Comparing different auction designs as the number of slots varies (continued)
Figure 5.1 Visualizing the allocation functions of VCG, anchoring and squashing
Figure 5.2 Visualizing the allocation functions of UWR and QWR
Figure 5.3 Visualizing the allocation functions of UWR+Sq and QWR+Sq
Figure 5.4 Visualizing the allocation function of the Myerson auction (log-normal distribution)
Figure 5.5 Revenue, parameter settings and equilibrium selection for six GSP variants
Figure 5.6 Visualizing incentive-compatible price computation in position auctions
Figure 5.7 Effect of squashing on revenue in incentive-compatible equilibrium
Figure 5.8 Effect of UWR and QWR on revenue in incentive-compatible equilibrium
Figure 5.9 Marginal effect of squashing on revenue in incentive-compatible equilibrium (with UWR present)
Figure 5.10 Marginal effect of squashing on revenue in incentive-compatible equilibrium (with QWR present)
Figure 5.11 Effect on revenue when squashing is only applied to QWR's reserve
Figure 5.12 Effect on revenue when squashing is only applied to QWR's post-reserve allocation
Figure 5.13 Comparing GSP variants while varying the number of bidders
Figure 6.1 An action graph game encoding of a simple two-candidate plurality vote. Each round node represents an action a voter can choose. Dashed-line boxes define which actions are open to a voter given his preferences; in a Bayesian AGG, an agent's type determines the box from which he is allowed to choose his actions. Each square is a sum node, tallying the number of votes a candidate received.
Figure 6.2 Even with the truthfulness incentive, many different outcomes were still possible in equilibrium.
Figure 6.3 With the truthfulness incentive, some outcomes occur much more frequently than others.
Figure 6.4 CDF of social welfare.
Figure 6.5 The average proportion of equilibria won by candidates with average rank of 0–1, 1–2, etc.
Figure 6.6 Percentage of games with a Nash equilibrium of a given type with 3 candidates and varying number of voters.
Figure 6.7 Varying the number of voters and candidates.
Figure 6.8 Empirical CDF of counts of equilibria.
Figure 6.9 Every instance had many equilibria, most of which only involved a few candidates.
Figure 7.1 The Three Laws of Positronic Economics
Figure 7.2 White-Box Structure Inference
Figure 7.3 Measuring the performance of WBSI on efficiently-represented games
Figure 7.4 Measuring the performance of the BBSI-ILS algorithm on tuned inputs
Figure 7.5 Measuring the performance of the BBSI-ILS algorithm on untuned inputs
Figure 7.6 Measuring the scales of BAGGs using structure learned by BBSI
Glossary

• AGG – Action-graph game
• AGG-SEM – The support-enumeration algorithm, using action-graph-game structure
• BBSI – Black-box structure inference
• BNE – Bayes-Nash equilibrium
• BNIC – Bayes-Nash incentive compatible
• GFP – The generalized first-price position auction
• GNM – Govindan and Wilson's global Newton method for finding Nash equilibria
• GSP – The generalized second-price position auction
• IC – Incentive compatible (a property of some mechanisms)
• IRSDS – Iterative removal of strictly dominated strategies
• MSNE – Mixed-strategy Nash equilibrium
• NE – Nash equilibrium
• NFG – Normal-form game
• NFG-SEM – The support-enumeration algorithm, using normal-form-game structure
• NP – The set of all computational problems that can be solved in polynomial time by a non-deterministic Turing machine
• P – The set of all computational problems that can be solved in polynomial time by a deterministic Turing machine
• PSNE – Pure-strategy Nash equilibrium
• SEM – Porter, Nudelman and Shoham's support-enumeration method for finding Nash equilibria (see Section 3.2)
• SimpDiv – The simplicial subdivision method for finding Nash equilibria
• TGS – Test given support subroutine
• uGSP – The unweighted generalized second-price position auction
• VCG – The Vickrey-Clarke-Groves mechanism
• WBSI – White-box structure inference
• wGSP – The (quality-)weighted generalized second-price position auction

Acknowledgments

I would like to thank my supervisor, Kevin Leyton-Brown, and my committee members, Sergei Severinov and Holger Hoos, for their many hours of support and guidance. I also owe a debt to the rest of the faculty at UBC, and at the University of Guelph, who helped to shape and inspire me as a researcher.

I am grateful to my collaborators and peers, and for the great communities of AAAI, AAMAS, ACM EC, IJCAI, INFORMS, and the Algorithmic Game Theory semester at the Hebrew University of Jerusalem.

My research was generously supported by grants from Google, Microsoft and NSERC.

I am thankful for my friends and family, who were a constant source of support and joy.

Lastly, I would like to thank my beloved wife, Fern.

Dedication

To Fern

Chapter 1
Introduction

A mechanism is a formal protocol that a group of agents can use to arrive at a decision, even when those agents disagree about which choice is best. A simple example is a committee voting to choose a single recommendation from a small set of candidates. In computer science terms, a mechanism can be thought of as an algorithm whose inputs can be distorted by agents who want to influence the output.
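As a small, concrete illustration of this point, the sketch below implements plurality voting as an ordinary Python function and shows how one agent can change its output by misreporting. The five-voter profile and the alphabetical tie-breaking rule are assumptions invented for this example; they are not taken from the thesis.

    # Plurality voting as an algorithm whose inputs agents can distort (toy example).
    def plurality(ballots):
        """Return the candidate with the most votes; ties broken alphabetically."""
        tally = {}
        for ballot in ballots:
            tally[ballot] = tally.get(ballot, 0) + 1
        return max(sorted(tally), key=lambda c: tally[c])

    # Five voters; the last one truly ranks C > B > A.
    truthful = ["A", "A", "B", "B", "C"]
    manipulated = ["A", "A", "B", "B", "B"]   # the C-supporter misreports B
    print(plurality(truthful))      # A wins when everyone reports truthfully
    print(plurality(manipulated))   # B wins, an outcome the misreporter prefers to A

Understanding the mechanism therefore requires more than tracing the function on truthful inputs; it requires predicting how self-interested agents will distort those inputs.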
Mechanism analysis is an increasingly important topic in computer science research. As the internet brings agents with competing interests together, many traditional computer science problems are now being investigated as mechanisms. Two important example problems are routing messages through a network [90], and scheduling tasks across a set of processors [78]; both are made more complicated by the presence of multiple users with competing interests. Mechanisms also provide important non-traditional application domains for artificial intelligence. One example application is AI agents that participate in mechanisms, e.g., automated auction bidders [72]. Another is AI systems that design novel mechanisms for specific, unusual settings [16].

The study of mechanisms also has many important applications outside the realm of computer science. Mechanisms are a model of many important social institutions, including auctions [58], elections [33, chapter 5] and matching systems [34] such as those used to match medical students to internships or to match organ donors with recipients.

Mechanism analysis is the investigation of what happens when a mechanism is run. However, treating a mechanism as a simple mapping from inputs to outputs misses a critical point: the inputs to the mechanism can be distorted by agents hoping to manipulate the output. To understand a mechanism one needs to understand the relationship between the outcome and the agents' true preferences. For example, when considering the question "who will win an election, if we use voting rule X?," the answer "whoever gets the most votes" is much less satisfactory than "whoever best reflects the actual preferences of the majority of voters." This kind of investigation is especially difficult because agents will not merely try to outwit the mechanism designer, but will also try to outwit each other.

Mechanism analysis means studying the interaction between a setting, i.e., the agents and their preferences, and a mechanism, but also involves a third element: a formal model of the strategic choices made by the agents. The models of choice come from game theory: the mechanism and setting specify a game, and game-theoretic solution concepts, e.g., Nash equilibrium, are used to identify the strategies that rational agents would follow in that game. Thus, mechanism analysis means answering the question "Given mechanism M and setting S, what happens in an equilibrium of type E?"

One could object to this way of framing the problem, arguing that researchers should focus on mechanisms where agents do not have any incentive to manipulate the outcome. Such mechanisms are called incentive compatible (IC) or truthful. In fact, the revelation principle shows that incentive compatibility is without loss of generality, in the sense that every mechanism is equivalent to some incentive compatible mechanism [75]. However, the revelation principle does not eliminate the need for study of non-IC mechanisms for several reasons. One reason is that non-IC mechanisms are widely used in practice, e.g., simultaneous ascending auctions [18], plurality votes [33], and the generalized second-price auction [103]. To replace any of these mechanisms with an equivalent IC mechanism first requires that one completely understand how agents behave in the original mechanism. IC mechanisms also have many inherent weaknesses that could make them impractical for many applications, including exponentially large computation and communication costs and a need for the revelation of confidential information. Another reason is that IC mechanisms are often only IC under restrictive assumptions about the agents' preferences and abilities. When these assumptions are relaxed, the mechanism may be as manipulable as any other non-IC mechanism, e.g., when the Vickrey auction is used to sell to agents with common values [58], when multiple single-good Vickrey auctions sell goods to agents with combinatorial preferences [10], or when the combinatorial Vickrey auction is used to sell to agents who are capable of bidding under multiple identities [112].

My thesis is that mechanism analysis can be done computationally, leveraging compact representations of games. (See Figure 1.1.)
Equilibrium computation has made substantial strides in the last decade. Importantly for my thesis, two major areas of progress are (1) designing and identifying algorithms that are fast in practice [80, 84], and (2) compactly representing games, along with algorithms that exploit this compactness for more efficient computation [51, 55, 91]. Using a compact game representation allows me to decouple the problem of representation, i.e., storing the game in a computationally useful form, from the problem of solving the game, i.e., computing a solution concept such as a Nash or correlated equilibrium. It also allows me to leverage existing algorithms and implementations for solving compact games, as well as any new algorithms that are developed concurrently. For a compact representation, I chose action-graph games (AGGs) [51], and their incomplete-information counterpart Bayesian action-graph games (BAGGs) [47], because they are more compact than nearly any other representation [Footnote: MAIDs [57] are more compact than AGGs for some games. However, MAIDs are lacking in practical implemented algorithms, and even basic operations like calculating expected utility are typically NP-hard because they generalize an existing NP-hard problem: Bayes-Net inference [17].] and there are preexisting fast implementations of state-of-the-art solvers for AGGs.

[Figure 1.1: My thesis is that it is possible to do mechanism analysis, i.e., identifying equilibrium outcomes of non-truthful mechanisms, computationally. Computational mechanism analysis (CMA) is different from most other mechanism analysis in that the equilibrium-finding is done entirely by computer. To do this, I use compact game representations, action-graph games [51] and Bayesian action-graph games [47], as intermediate data structures. The diagram shows the process flow: a mechanism-based game (agents with preferences plus a mechanism) is passed to an encoder, which produces a compact game representation; a solver then computes an equilibrium outcome.]
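The decoupling described above can be summarized in a few lines. The sketch below is not the PosEc API; encoder and solver are placeholders for interchangeable components (for example, an AGG encoder and any AGG-capable equilibrium solver), reflecting the two-stage shape of Figure 1.1.

    # A schematic sketch of computational mechanism analysis as a two-stage pipeline.
    def computational_mechanism_analysis(mechanism, setting, encoder, solver):
        """Encode a (mechanism, setting) pair as a compact game, then solve it."""
        compact_game = encoder(mechanism, setting)   # stage 1: e.g., build an AGG or BAGG
        return solver(compact_game)                  # stage 2: e.g., find Nash equilibria

Because the compact game is an ordinary data structure, the solver (or even the solution concept) can be swapped without touching the encoder, and vice versa.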
Beyond the freedom to choose among solver algorithms, our two-stage design also provides freedom to choose among solution concepts. To date, (Bayes) Nash equilibrium has tended to dominate the game-theoretic analysis of mechanisms. (For example, see the works cited in Chapter 4, which cover a broad range of analyses of internet advertising auctions.) However, arguments can be made for many other solution concepts such as correlated equilibrium, perfect equilibrium and rationalizability, as well as behavioural solution concepts such as quantal-response equilibrium or cognitive hierarchy. Computing these solution concepts often requires computing game-theoretic "building blocks" such as expected utility, best response and dominance. Thus, the (B)AGG representation language, which allows for exponential speedup of these building blocks, can provide a substantial computational advantage. (See Roughgarden and Papadimitriou [91] for an example of how fast expected utility calculations can lead to exponentially faster correlated equilibrium computation.)

One could argue that any hard-to-compute solution concept is implausible, precisely because of its hardness. If an equilibrium is hard for researchers to find, then surely it must be hard for the agents to find as well; thus, the argument follows, researchers could better predict agent behaviour by using only easy-to-compute solution concepts, such as the limits of learning dynamics. Our approach could still be useful, even for a researcher studying such a solution concept. Learning dynamics can converge to many different outcomes depending on learning parameters (and random seeds), and it could be infeasible to bound the range of possible outcomes. However, many kinds of learning dynamics are guaranteed to converge to correlated equilibrium, and thus, an algorithm to compute optimal or extreme correlated equilibrium (e.g., the algorithm from Jiang and Leyton-Brown [49]) can be used to identify which outcomes are possible, while remaining agnostic about learning parameters.

Another significant, and fair, criticism of this approach is that it involves analyzing one game at a time, and thus, any result it produces will be sensitive to the exact parameters of the mechanism, setting and equilibrium-selection criteria that produced that specific game and equilibrium. [Footnote: The first part of the issue—that computational mechanism analysis (CMA) can only handle a single game at a time—seems to be intrinsic to our way of framing the problem. The second part—that CMA only works with arbitrary sample equilibria—can be overcome with better equilibrium-finding algorithms. Although most research into equilibrium-finding algorithms focuses on finding sample Nash equilibria, itself a computationally hard problem, there are algorithms that can go further, finding all equilibria of a particular type (see Chapter 3), or finding equilibria that provably optimize some objective (as in [48] for example).] These results can be called "existential" results; i.e., they show that a game with a particular property exists. (See Figure 1.2 for an example.) In contrast, the literature on mechanisms contains many "universal" results, i.e., results showing that some property holds for every mechanism-setting pair in some space. One example of a famous universal result is Myerson's revenue equivalence theorem [75], which identifies a wide range of settings in which all economically efficient single-good auctions generate the same amount of revenue, in symmetric Bayes-Nash equilibrium. These results are not dependent on parameters in the same way that existential results are, but they are nevertheless limited, typically covering only a small space of mechanisms and settings. Using CMA it is often possible to explore much larger spaces. For example, the famous EOS/Varian [30, 103] results about economic efficiency in advertising auctions only apply to a very narrow range of settings, a single mechanism and a specific family of equilibria. In contrast, the CMA tools in Chapter 4 can evaluate economic efficiency for a wide range of settings, a large parameterized family of auctions, and the entire space of Nash equilibria, using the algorithm from Chapter 3.

[Figure 1.2:
    49 : A ≻ B ≻ C
    47 : B ≻ C ≻ A
     4 : C ≻ B ≻ A
An example setting for demonstrating that truthful voting can be a Nash equilibrium in plurality voting, but that it can lead to the "wrong" outcome. The top line can be read as "49 voters prefer candidate A over candidate B, and candidate B over candidate C." Voting truthfully, i.e., for your most preferred candidate, is a Nash equilibrium in this setting: because the final tallies are separated by more than one vote, no agent has an incentive to deviate and vote strategically. In this Nash equilibrium, candidate A wins. However, candidate A is the wrong candidate in the sense that he is a "Condorcet loser," i.e., he would lose in a pairwise vote against any other candidate.]
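The claims in Figure 1.2 are easy to check mechanically, which is exactly the kind of single-game computation that CMA automates at scale. The helper names below are invented for this sketch; only the 49/47/4 preference profile comes from the figure.

    # Verifying Figure 1.2: A wins the truthful plurality vote but is a Condorcet loser.
    profile = [(49, ["A", "B", "C"]), (47, ["B", "C", "A"]), (4, ["C", "B", "A"])]

    def plurality_tally(profile):
        tally = {}
        for count, ranking in profile:
            tally[ranking[0]] = tally.get(ranking[0], 0) + count
        return tally

    def pairwise_winner(profile, x, y):
        x_votes = sum(c for c, r in profile if r.index(x) < r.index(y))
        total = sum(c for c, r in profile)
        return x if x_votes > total - x_votes else y

    tally = plurality_tally(profile)
    print(max(tally, key=tally.get), tally)    # A wins the truthful vote, 49 to 47 to 4
    print(pairwise_winner(profile, "A", "B"))  # B beats A head-to-head, 51 to 49
    print(pairwise_winner(profile, "A", "C"))  # C beats A head-to-head, 51 to 49
    # The 49-to-47 margin exceeds one vote, so no single truthful voter can change
    # the winner by deviating, matching the Nash-equilibrium claim in the caption.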
Despite only producing existential results, CMA can make a wide range of contributions to the study of mechanisms:

• Additional analysis of even a single game can yield new insights. For example, the work in Chapter 4 includes one of the first analyses of what happens when the envy-freeness assumption is relaxed in the advertising auction games of Edelman et al. [30] and Varian [103].

• Although CMA cannot directly produce universal results, it can help human researchers to do so, through its ability to cheaply analyze large numbers of games. CMA helps to disprove hypotheses, by enumerating many games searching for a counterexample. CMA results can also inspire researchers to find new universal results based on unexpected patterns that appear in sample games. For example, we were able to partially characterize the space of Bayes-Nash equilibria in plurality voting games (Theorem 14) based on patterns found in a small set of sample games.

• By sampling settings, it is also possible to compute expected properties of a mechanism, e.g., expected revenue given an arbitrary sample-able distribution, possibly derived from real-world data. This also makes it possible to compare the expected performance of different mechanisms, even in cases where neither mechanism dominates the other. (A small illustration of this idea appears after this list.)

• Computational mechanism analysis enables automated mechanism design; in this approach, mechanism design is framed as a constrained optimization where the search space is the parameter space of a parameterized mechanism and the objective is some property of equilibrium outcomes, computed using CMA. Vorobeychik et al. [108] have had success with this approach, even using iterative best response, an extremely simple Nash-equilibrium-finding algorithm. Chapter 5 shows an extended example of revenue optimization using CMA. As in the case of voting, surprising experimental results have led to new theoretical insights.

• Lastly, CMA can facilitate the computation of empirical bounds, such as on the price of anarchy. [Footnote: "Price of anarchy," a worst-case bound on economic efficiency, has recently become popular as a way of doing mechanism analysis. (See Chapters 17–21 of [79].) Price of anarchy results typically consist of an existential result, i.e., an example of a setting and equilibrium where this mechanism is a factor of x away from economic efficiency, and a corresponding universal result, i.e., a proof that no setting-equilibrium pair exists where this mechanism is more than x away from economic efficiency.]
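As a toy illustration of the sampling idea in the third bullet above, the snippet below estimates expected revenue for a truthful second-price auction with and without a reserve price, for three bidders with i.i.d. uniform values. The value distribution, the reserve of 0.5 and the mechanisms compared are assumptions chosen for this example, not models analyzed in the thesis.

    # Estimating expected revenue by sampling settings (Monte Carlo over valuations).
    import random

    def second_price_revenue(values, reserve=0.0):
        """Revenue of a truthful second-price auction with a reserve price."""
        top, second = sorted(values, reverse=True)[:2]
        if top < reserve:
            return 0.0                      # the good goes unsold
        return max(second, reserve)

    random.seed(0)
    settings = [[random.random() for _ in range(3)] for _ in range(100000)]
    for reserve in (0.0, 0.5):
        average = sum(second_price_revenue(v, reserve) for v in settings) / len(settings)
        print(reserve, round(average, 3))   # roughly 0.50 without and 0.53 with the reserve

The same sampling loop works for any property of outcomes and any sample-able distribution, which is what makes head-to-head comparisons possible even when neither mechanism dominates the other.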
This document proceeds as follows. Chapter 2 contains formal models of the concepts described above, as well as relevant background. Chapter 3 describes a novel solver algorithm that became a key tool for my CMA techniques. Chapters 4 and 5 describe my first application domain, internet advertising auctions. Chapter 4 shows how my CMA approach can be used to compare different advertising auctions across a wide range of models. Chapter 5 demonstrates how mechanism analysis can facilitate mechanism design, searching for a profit-maximizing mechanism from a parameterized space of auction designs. Chapter 6 describes my second application domain, strategic voting in elections. Each application domain required building novel encoder algorithms, which involved substantial cognitive effort. Chapter 7 describes the Positronic Economist system, which simplifies, unifies and generalizes the CMA software used in the preceding chapters.

Chapter 2
Background

A full review of game theory and mechanisms is far beyond the scope of this document; this chapter is written with the assumption that the reader is familiar with both topics. (See Chapters 3–6 and 9–11 of [96] for a detailed treatment.) Thus, this chapter discusses only more specialized areas of game theory. The first area is compact game representations, particularly action-graph games. These are essential for computational mechanism analysis: without them I would not be able even to store many games of interest. The second area is solvers, specifically Nash-equilibrium-finding algorithms. One of the major advantages of using an existing compact game representation is that I can directly apply these algorithms, rather than having to rely entirely on novel solvers. The third area is computational mechanism analysis: there is a very small existing literature on making general-purpose algorithms for finding equilibrium outcomes of mechanisms.

Because each application domain has its own extensive body of research, I provide application-specific literature surveys in their respective chapters.

2.1 Compact Games

The first barrier to computational mechanism analysis is game representation. Since normal-form games grow exponentially in the number of players, they yield infeasibly large encodings of all but the simplest interactions. The literature contains many compact representations for simultaneous-move games, for example congestion games [89], graphical games [55] and action-graph games [51]. For my purposes, action-graph games are the most useful, because they combine two important features. They are very compactly expressive, i.e., if other representations can encode a game in polynomial space then so can AGGs, and there are existing, empirically fast tools for working with them.

Action-graph games achieve compactness by exploiting two kinds of structure in the payoffs. (There are AGG extensions that can exploit additional types of structure.)
• Anonymity means that an agent's payoff depends only on his own action and the "configuration" induced by the other agents. A configuration is a tuple of counts of how many agents played each action. In other words, the agent's payoff does not depend on who played which action. Therefore, instead of storing an action's payoff for every pure-strategy profile of the other agents, AGGs only need to store one payoff for every configuration.

• Context-specific independence means that an agent's payoff for playing some action depends only on the counts on a subset of the actions. These independences are encoded in an action graph, a directed graph where each node corresponds to an action. The payoff for playing an action depends only on the counts on its neighboring nodes. Instead of storing an action's payoff for every configuration, AGGs only need to store one for every "projected configuration," a tuple of counts on the neighbors of a vertex, denoted C(v) for vertex v.

See Figure 2.1 for an example.

[Figure 2.1: An action graph for an "ice-cream game" [51]. In an ice-cream game, each player has either chocolate or vanilla ice-cream to sell and must choose from a set of possible locations to set up his stand. The action graph encodes that a player's payoff depends only on his location, and the type and number of competitors within one step of his location.]

Formally, an action-graph game is a 4-tuple ⟨N, A, G, u⟩ where N is the set of agents ⟨1, 2, ..., n⟩; A = Π_{i∈N} A_i is the set of action profiles; G = ⟨V, E⟩ is a directed graph with vertices V, where V = ∪_{i∈N} A_i, and edges E; and u = ⟨u_1, u_2, ..., u_{|V|}⟩ is a tuple of utility functions, where u_v : C(v) → R.
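To make the definition concrete, here is a minimal Python encoding of a three-location ice-cream game in the spirit of Figure 2.1. The data layout and the payoff numbers are assumptions made for readability; they are not the thesis's encoding or the AGG software's file format.

    # A toy action-graph game: vendors pick a flavour and a location; payoffs depend
    # only on the projected configuration (counts on neighbouring action nodes).
    locations = [0, 1, 2]
    flavours = ["chocolate", "vanilla"]
    actions = [(f, loc) for f in flavours for loc in locations]      # the nodes V

    def neighbours(action):
        """Action-graph edges: an action is affected by vendors at the same or an adjacent location."""
        _, loc = action
        return [a for a in actions if abs(a[1] - loc) <= 1]

    def node_utility(action, config):
        """u_v: maps a projected configuration (counts on neighbours) to a payoff."""
        flavour, _ = action
        same = sum(n for a, n in config.items() if a[0] == flavour) - 1   # rival vendors nearby
        other = sum(n for a, n in config.items() if a[0] != flavour)
        return 2 * other - 3 * same          # nearby rivals hurt, the other flavour helps

    def utility(i, action_profile):
        """An agent's payoff depends only on counts over its own action's neighbourhood."""
        a_i = action_profile[i]
        config = {v: sum(1 for a in action_profile if a == v) for v in neighbours(a_i)}
        return node_utility(a_i, config)

    print([utility(i, [("chocolate", 0), ("vanilla", 1), ("chocolate", 2)]) for i in range(3)])

Because node_utility is keyed by counts on a node's neighbours rather than by full action profiles, the number of stored payoffs grows with the number of projected configurations instead of exponentially with the number of players.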
Formally, an action-graph game is a 4-tuple ⟨N, A, G, u⟩ where N = {1, 2, ..., n} is the set of agents; A = ∏_{i∈N} A_i is the set of action profiles; G = ⟨V, E⟩ is a directed graph with vertices V = ⋃_{i∈N} A_i and edges E; and u = ⟨u_1, u_2, ..., u_{|V|}⟩ is a tuple of utility functions, where u_v : C(v) → ℝ.

The other advantage of AGGs, besides their size, is that they can be reasoned about efficiently. In particular, given a mixed-strategy profile it is possible to compute the expected utility (for an agent i playing a particular action a_i) in polynomial time using dynamic programming [51]. This is important because expected utility is the main bottleneck in two important equilibrium-finding algorithms. The dynamic program proceeds through n iterations, where at the kth iteration, it computes the marginal distribution over the projected configurations C(a_i) given the strategies of the first k agents.

There are two other variants of AGGs that will be relevant to my work:

• AGGs with function nodes (AGG-FNs) [51] — These have additional nodes that correspond to functions rather than only actions. The projected configuration on a function node is a function of the projected configuration on its neighbourhood. These are sometimes exponentially smaller than AGGs.

• Bayesian AGGs (BAGGs) [47] — In Bayesian AGGs, each agent has a private random type, typically drawn independently from the other agents' types, and each type has its own action set which is a subset of the available action nodes.

2.2 Nash-Equilibrium-Finding Algorithms

Nash equilibrium finding is one of the central problems in algorithmic game theory and, as such, the literature on it is extensive, including numerous algorithms and hardness results. (See Chapters 4–6 of [96] and Chapters 1–9 of [79] for an overview.) Because the literature is so broad, I will focus specifically on Nash equilibrium finding in AGGs. The most powerful hardness result—i.e., that finding a sample Nash equilibrium is PPAD-complete¹ [14, 20]—applies directly to all fully expressive compact representations, including AGGs. There are also applicable NP-hardness results: finding a pure-strategy Nash equilibrium of a symmetric AGG with unbounded tree-width is NP-hard [19]; finding a pure-strategy Nash equilibrium of a graphical game² is NP-hard [42]. Thus, it is unlikely that there exists a polynomial-time algorithm for finding Nash equilibria of general action-graph games. There have been some polynomial-time algorithms for finding Nash equilibria of restricted AGGs [19, 46], but these have gone unimplemented, and often the restrictions make them unsuitable for my purposes, e.g., bounded treewidth, full symmetry, no function nodes.

¹ PPAD stands for "Polynomial Parity Arguments on Directed graphs." This class also includes many other fixed-point problems, and there is no known polynomial-time algorithm for any PPAD-complete problem.

² Graphical games are a specialization of AGGs. Graphical games only exploit a very strong form of independence, where two players are independent iff their actions can never affect each other's payoffs.

In contrast, there have been promising results taking existing normal-form algorithms and making them work on AGGs [51]. These modified algorithms have exponentially faster asymptotic runtime than their normal-form counterparts when inner-loop operations are optimized for AGGs. They also often have very good performance in practice, quickly finding Nash equilibria of games that would be infeasible to even store on a modern hard drive if represented in the normal form. Below, I describe two algorithms, simplicial subdivision and the global Newton method, that were previously made exponentially faster for AGGs. In both cases, the algorithm's bottleneck was calculating expected utility, and performance was improved by using AGG-specific expected-utility routines.
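To make the dynamic program described above concrete, the following is a minimal Python sketch of AGG expected-utility computation. The data layout (dictionaries keyed by actions and by configuration tuples) and all names are hypothetical; this is a sketch under those assumptions, not the implementation used in GAMBIT or in the experiments reported below.

from collections import defaultdict

def expected_utility(a_i, neighbors, utility, sigma_others):
    """Expected utility to an agent for playing action a_i in an AGG, given the
    mixed strategies (dicts from action to probability) of the other agents."""
    neigh = tuple(sorted(neighbors[a_i]))        # fixed ordering of a_i's neighbors
    index = {v: k for k, v in enumerate(neigh)}  # neighbor -> slot in the count vector
    start = [0] * len(neigh)
    if a_i in index:                             # a self-edge: i's own choice of a_i counts
        start[index[a_i]] = 1
    # Distribution over projected configurations, built up one opponent at a time;
    # this is the marginal distribution given the strategies of the first k agents.
    dist = {tuple(start): 1.0}
    for sigma_j in sigma_others:
        new_dist = defaultdict(float)
        for config, p in dist.items():
            for a_j, q in sigma_j.items():
                if q == 0.0:
                    continue
                c = list(config)
                if a_j in index:                 # only neighbors of a_i can affect u_{a_i}
                    c[index[a_j]] += 1
                new_dist[tuple(c)] += p * q
        dist = new_dist
    return sum(p * utility[a_i][config] for config, p in dist.items())

Because all actions outside a_i's neighborhood collapse onto the same projected count vector, the number of distinct keys in dist stays polynomial for bounded in-degree, which is the source of the polynomial runtime claimed above.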
A third state-of-the-art Nash-equilibrium-finding algorithm, the support-enumeration method [84], is covered in detail in Chapter 3. It has different bottlenecks from the other two algorithms, but I was still able to make it exponentially faster.

2.2.1 The Global Newton Method and Simplicial Subdivision

The global Newton method (GNM) [43] and simplicial subdivision (SimpDiv) [102] are two algorithms for finding sample Nash equilibria with many common features. Most critically, both use expected-utility calculations as a critical inner-loop step, meaning that when analyzing AGGs they can be made exponentially faster by computing expected utility according to Jiang et al's dynamic-programming method [51]. Both have been implemented in GAMBIT [68] with AGG-specific extensions by Jiang.

The global Newton method (GNM) of Govindan and Wilson is a homotopy method. This means that it works by tracing a continuous path from an easy-to-solve problem instance through to the actual problem instance of interest, maintaining a solution throughout the process. This particular algorithm works by tracing a space of games, starting from a game with strict dominant strategies and ending with the original input game, with interpolated payoffs on the games in between. GNM is a Las Vegas algorithm³: the starting game with the dominant strategy is generated at random, but the algorithm is always guaranteed to terminate eventually.

³ A Las Vegas algorithm is an algorithm that has random behavior and runtime, but that never produces incorrect output [6].

Simplicial subdivision works by exploring the space of mixed-strategy profiles in the game. Specifically, it divides that space according to successively smaller triangular grids, i.e., dividing the simplex into smaller simplices, while tracing a fixed path along the vertices of this grid so that the current vertex is adjacent to a simplex that contains a fixed point. SimpDiv is also a Las Vegas algorithm, because the starting point of the path determines how much time the exploration will take and which fixed point will be found.

2.3 Other Computational Mechanism Analysis

There is a small existing literature on general-purpose techniques for analysing mechanisms. This is in contrast with special-purpose techniques that only work for a specific small family of mechanisms, settings and equilibrium concepts. For example, [103] and [5] include special-purpose algorithms for finding Nash equilibria of internet-advertising auctions. However, both are tied to specific auctions, settings and equilibrium refinements.

Vorobeychik et al [108] have experimented with applying iterative best response to simple two-bidder auction problems. Iterative best response is a very straightforward algorithm (sketched below): each agent initially plays some pure strategy, and in each iteration one agent changes to a pure strategy that is a best response given the current pure strategies of the other agents.
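As a concrete illustration (not Vorobeychik et al's implementation), a minimal Python sketch of iterative best response might look as follows; the game interface (a utility callback over pure-strategy profiles) and all names are hypothetical.

def iterative_best_response(actions, utility, profile, max_iters=1000):
    """actions[i]: list of pure strategies available to agent i.
    utility(i, profile): agent i's payoff for a pure-strategy profile (a tuple).
    profile: an initial pure-strategy profile, one action per agent.
    Returns a pure-strategy Nash equilibrium if the dynamics converge, else None."""
    n = len(actions)
    profile = list(profile)
    for _ in range(max_iters):
        improved = False
        for i in range(n):
            best_action = profile[i]
            best_value = utility(i, tuple(profile))
            for a in actions[i]:                 # search i's best response to the others
                candidate = profile[:i] + [a] + profile[i + 1:]
                value = utility(i, tuple(candidate))
                if value > best_value:
                    best_action, best_value = a, value
            if best_action != profile[i]:
                profile[i] = best_action
                improved = True
        if not improved:
            return tuple(profile)    # no profitable deviation: a pure-strategy Nash equilibrium
    return None                      # cycled (or hit the iteration cap) without converging

The None return value anticipates the two failure modes discussed next: the dynamics may cycle, and mixed-strategy equilibria are simply out of reach for this procedure.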
Unfortunately, this algorithm can only find pure-strategy Nash equilibria. Further, even in games in which such equilibria exist, iterative best response can cycle forever without finding one.

Gerding et al [35, 86, 95] have explored refinements of the fictitious play algorithm applied to simultaneous auction games and double auctions. Fictitious play is similar to iterative best response; however, when players update their strategies, they best respond to the empirical distribution over previous iterations. In the limit, these empirical distributions can converge to mixed Nash equilibria. However, fictitious play is not guaranteed to converge.

Neither of these approaches, nor my approach, dominates the others in terms of expressiveness: each can model games that are impossible to model for the other two. One advantage of using AGGs, apart from being able to model scenarios that are impossible for the other two, is that it allows for more sophisticated algorithms, i.e., ones that are guaranteed to converge to equilibrium on any input, and that can be targeted to search for specific types of equilibria.

Vorobeychik et al have also studied interactions such as position auctions, approximating Nash equilibria by simulation [105–107]. In principle, a simulator can model an arbitrarily complicated game. However, in practice their approach involves sampling a small number of strategies (and a small number of random seeds capturing any randomness in the game) and then finding an equilibrium of a game defined by those samples. This approach is currently only practical when the number of players and samples is small. Unfortunately, a game defined by a small number of samples is unlikely to encode the full complexity of the simulator.

Chapter 3

The Support-Enumeration Method for Action-Graph Games

3.1 Introduction

In this chapter, we introduce a novel equilibrium-finding algorithm for AGGs, based on the support-enumeration method (SEM) of Porter, Nudelman and Shoham [84]. Similarly to the SimpDiv and GNM variants produced by Jiang et al [51], our SEM variant works by replacing the main inner-loop operations with faster AGG-specific alternatives. SEM has two distinguishing features compared to those other algorithms: 1) for many important families of games, SEM is much faster at finding sample Nash equilibria, and 2) SEM can enumerate Nash equilibria, rather than just finding a single sample equilibrium. (Further, this search can be directed towards specific types of Nash equilibria, particularly small-support Nash equilibria. No other algorithm can do this.) Although the first feature originally motivated us to develop SEM for AGGs, the second has proved even more valuable. As subsequent chapters will show, we often found that games of interest have many Nash equilibria, and that considering only the sample equilibria found by other solvers could be misleading. Thus, SEM's equilibrium enumeration ultimately became the main game-solving algorithm for all our subsequent analyses.

A key reason that SEM has not been extended to work with compact game representations is that it operates very differently from other equilibrium-finding algorithms, and hence existing techniques could not be applied to it directly. In this chapter we show how SEM can be extended to work with AGGs (and hence with other game families compactly encodable as AGGs, such as graphical games and congestion games).
Specifically, we show how three of SEM's subroutines can be made exponentially faster. Experimentally, we show that these optimizations dramatically improve SEM's performance, rendering it competitive with, and often faster than, other state-of-the-art algorithms for computing equilibria of AGGs.

\sum_{a_{-i} \in S_{-i}} p(a_{-i}) \, u_i(a_i, a_{-i}) = v_i \quad \forall i \in N, \forall a_i \in S_i \qquad (3.1)
\sum_{a_{-i} \in S_{-i}} p(a_{-i}) \, u_i(a_i, a_{-i}) \le v_i \quad \forall i \in N, \forall a_i \in A_i \setminus S_i \qquad (3.2)
\sum_{a_i \in S_i} p_i(a_i) = 1 \quad \forall i \in N \qquad (3.3)
p_i(a_i) \ge 0 \quad \forall i \in N, \forall a_i \in S_i \qquad (3.4)

Figure 3.1: The Test-Given-Support (TGS) feasibility program for n-player games. For any given support profile S ∈ ∏_{i∈N} 2^{A_i}, we can construct a TGS feasibility program where any feasible solution (p, v) is a Nash equilibrium with support S,¹ where the players randomize according to the probabilities in p and get the payoffs specified by v. The constraints on line 3.1 specify that each player is indifferent between all the actions in the support of his strategy. Those on line 3.2 specify that each player weakly prefers the actions in the support of his strategy. The remaining lines specify that each mixed strategy is a probability distribution. Note that p(a_{-i}) = \prod_{j \in N \setminus \{i\}} p_j(a_j) because agents randomize independently in Nash equilibria.

¹ Note that this formulation allows for actions in the support to be played with zero probability. This does not adversely affect SEM's behavior when looking for a single Nash equilibrium; if such a Nash equilibrium existed, SEM would have found it already. However, this can have adverse effects when searching for multiple mixed Nash equilibria, e.g., on the game below:

        A        B
A     0, 0    −1, 0
B    0, −1     0, 0

Although this game has no mixed Nash equilibria, TGS is feasible given full supports. This issue is easily addressed by adding an objective max x subject to x ≤ p_i(a_i) for all actions in the support. Although supports strictly smaller than S are still feasible, the optimal solution will have smaller support iff no MSNE with support S exists (which is indicated by x being equal to zero).

3.2 Technical Background

The support-enumeration method (SEM) is a brute-force-search method for finding Nash equilibria. Rather than searching through all mixed-strategy profiles, it searches through support profiles (specifying which actions each agent plays with positive probability) and tests whether there is a Nash equilibrium with that particular support. This test can be performed using the polynomial feasibility program given in Figure 3.1, which is feasible if and only if a Nash equilibrium with that support profile exists. Though several algorithms have been proposed for searching in the space of supports to find Nash equilibria [24, 63, 65], we will focus on the most recent SEM variant, due to Porter, Nudelman and Shoham [84]. This variant introduces two important features designed to improve empirical performance. First, it uses heuristics to order its exploration of the space of supports, searching from smallest to largest, breaking ties in favor of more balanced support profiles. This order can speed up equilibrium finding for several reasons. First, there are fewer small support-size profiles to search through. Second, the corresponding feasibility programs have fewer variables and simpler constraints. And third, in many games of interest (see, e.g., [80]), Nash equilibria with small, balanced supports are common [84]. The second important feature is that instead of simply iterating through the complete set of support profiles of a given size, SEM explores this space by tree search (see Figure 3.3 for an example). This search works by, at each level, selecting a support for a single additional player. At the leaves of the tree, the support is specified for every agent, and that support profile can be tested using TGS for the existence of a Nash equilibrium. (See Section 3.3.5 for an example of this.) The advantage of using this tree search comes from pruning: after an agent's support is selected, SEM performs iterative removal of strictly dominated strategies (IRSDS — see Figure 3.2), conditional on agents only playing actions in their supports. This has the effect of eliminating many support profiles from consideration. The search backtracks whenever an agent has an empty support or the TGS feasibility program is infeasible.

Input: D = (D_1, ..., D_n): profile of domains
Output: Updated domains D, or failure
repeat
    changed ← false
    foreach i ∈ N do
        foreach a_i, a′_i ∈ ⋃_{d_i ∈ D_i} d_i do
            if ConditionallyDominated(a_i, a′_i, D_{−i}) then
                D_i ← D_i \ {d_i ∈ D_i : a_i ∈ d_i}
                changed ← true
            if D_i = ∅ then
                return failure
until ¬changed
return D

Figure 3.2: Iterative Removal of Strictly Dominated Strategies

While SEM can find (or enumerate) exact PSNEs, there are two important caveats when it comes to mixed-strategy Nash equilibria. The first is that mixed-strategy Nash equilibria can involve irrational probabilities, making them impossible to store in any rational representation. Thus, the best any implementation working with rational or floating-point representations can guarantee is to find ε-Nash equilibria. (For our experiments, we used an additive ε of 10^{−10}.) Second, games can have multiple, even infinitely many, Nash equilibria for a single support profile; enumerating all MSNEs requires enumerating all feasible solutions to the TGS system. For our enumeration experiments, we restricted our attention to PSNEs, which are finite in number and can be represented exactly using rational numbers.

Figure 3.3: Porter et al's [84] tree search works by instantiating strategies from agents' supports, one agent at a time, and removing strategies that are strictly dominated conditional on the agents playing within their given supports. The search backtracks whenever an agent has an empty support or the TGS feasibility program is infeasible.

3.3 SEM for AGGs

Observe that we can trivially make a version of SEM that takes an AGG as input, simply replacing the normal-form game (NFG) utility lookups (u_i(a)) with the AGG equivalents (u_i(a) = u_{a_i}(c(a_i)), where c(a_i) is the projected configuration given a). We denote this algorithm NFG-SEM, because its behavior is exactly the same as that of SEM for strategically equivalent normal-form games. However, while SEM and NFG-SEM will do the same sequence of operations, their asymptotic performance analyses are not equivalent. This is because NFG-SEM's inputs can be exponentially smaller, due to the representational efficiency of AGGs. Specifically, we show that NFG-SEM's inner-loop operations—iterative removal of strictly dominated strategies and the TGS feasibility program—are at least worst-case exponential in the AGG input length (denoted ℓ).² The outer-loop search over supports also requires exponential time, even for games with PSNEs. However, we can do better if we construct a version of SEM that explicitly takes AGG structure into account. We present such an extension of SEM, denoted AGG-SEM, and its asymptotic analysis.
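Before turning to the AGG-specific subroutines, the following simplified Python sketch shows the support-enumeration outer loop that both variants share. It omits the tree search and IRSDS pruning of Figure 3.3, orders support profiles by total size and then by balance as in Porter et al [84], and treats the TGS solver as a black box; solve_tgs and the other names are hypothetical, so this is only a sketch under those assumptions.

from itertools import combinations, product

def sem_sample_equilibrium(actions, solve_tgs):
    """actions[i]: agent i's action set (a list).
    solve_tgs(support_profile): returns a feasible (p, v) for the TGS program of
    Figure 3.1, or None if no equilibrium with that support profile exists."""
    # All nonempty supports for each agent.
    supports = [
        [frozenset(c) for size in range(1, len(acts) + 1)
                      for c in combinations(acts, size)]
        for acts in actions
    ]
    # Brute-force enumeration of support profiles (the real algorithm instead uses
    # tree search with pruning): smallest total size first, most balanced first.
    ordered = sorted(
        product(*supports),
        key=lambda s: (sum(len(x) for x in s),
                       max(len(x) for x in s) - min(len(x) for x in s)))
    for support_profile in ordered:
        solution = solve_tgs(support_profile)
        if solution is not None:
            return support_profile, solution   # a Nash equilibrium with this support
    return None  # every finite game has a Nash equilibrium, so unreachable with an exact solver

AGG-SEM keeps exactly this outer structure; the exponential savings come from how the dominance checks inside IRSDS and the TGS program itself are constructed, as shown next.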
Overall, we show that AGG-SEM's worst-case performance is exponentially faster than that of NFG-SEM.³

² AGGs' representation sizes are usually dominated by the size of their payoff tables. Intuitively, one might conclude that each table has size O(n^d) (where d is the in-degree of the corresponding action node). However, this bound is often extremely loose for AGGs, and does not apply to AGG-FNs at all. Thus, it is both simpler and more accurate to express asymptotic performance in terms of actual input size, ℓ, rather than in terms of properties of the graph such as in-degree.

³ Note that our theoretical results only show that AGG-SEM has exponentially faster worst-case runtime than NFG-SEM. There may exist other SEM-based algorithms that are as fast as (or faster than) AGG-SEM.

3.3.1 Conditional Dominance

Because SEM makes extensive use of iterative removal of strictly dominated strategies, efficiently identifying dominated strategies is critical. For normal-form games, testing whether or not some pure strategy a_i is dominated by some other a′_i is straightforward: one can exhaustively search through the pure-strategy profiles of the other agents, looking for the existence of some a_{−i} to which a_i is a weakly better response. This trivial algorithm only requires time linear in the size of a normal-form game. However, it can require time exponential in the size of an action-graph game.

Lemma 1. NFG-SEM's dominance check has a worst-case running time of Θ(2^ℓ).

Proof. Consider the family of action-graph games with two actions per player and no edges. For this family, there are at most 2n nodes, the payoff table for each of which contains only a single value. Thus, ℓ is Θ(n), while |A_{−i}| = 2^{n−1} and therefore is Θ(2^ℓ). In the worst case (where an agent has a dominated strategy), exhaustive search iterates over every a_{−i} ∈ A_{−i} to confirm that a_i is not a best response to any action profile.

However, we can do better: a straightforward, polynomial-time algorithm for AGG dominance checking can be derived using an idea similar to that of Jiang et al's [51] dynamic-programming algorithm. To determine whether or not a_i is dominated by a′_i, we do not need to search through the entirety of A_{−i}; we only need to search over the set of possible projected configurations on the joint neighborhoods of a_i and a′_i. This adaptation guarantees polynomial runtime. However, empirically we observed that it often gave rise to poor performance, compared to exhaustive search over A_{−i}. We attribute this to stopping conditions: the exhaustive search can stop as soon as it finds any case where a_i is a better response, while the dynamic-programming algorithm must build up all configurations before it ever encounters a better response, effectively performing a breadth-first search. Based on this insight, we created a depth-first-tree-search-based algorithm that combines the best of both approaches: like exhaustive search, it can find a better response without needing to compute the entire set of projected configurations; like our adaptation above, it exploits AGG structure and so needs only to evaluate a polynomial number of projected configurations. It works as follows. At each level, the search fixes the action of some agent, giving a search tree that potentially includes every A_{−i}. (Recall that m denotes the maximum number of possible actions for any agent.)
However, we also perform multiple-path pruning: a search refinement in which previously visited nodes are recorded, and the search backtracks whenever it re-encounters a node along a different search path [83]. In our case, the algorithm backtracks whenever it encounters a previously visited projected configuration, based on a lookup from a trie map. (See Figure 3.4 and Section 3.3.5 for an example of the algorithm's execution.)

Lemma 2. AGG-SEM's dominance check has a worst-case running time of O(nmℓ³).

Proof. The search traverses a tree with a depth of n and a branching factor of m. However, at every level, at most ζ² nodes are expanded (where ζ denotes the largest set of possible projected configurations for any node), because there are at most ζ² distinct projected configurations on the neighborhood of a_i, a′_i. In the worst case, when it traverses the whole tree, the search must follow each of m arcs from O(ζ²) nodes at each of n levels, or O(nmζ²) arcs. For each arc, the search may perform a trie-map lookup and insert; these operations each require runtime that grows like the maximum in-degree of the graph, ι, and so the total cost is O(nmζ²ι). Because ℓ is Ω(ζ + ι), nmζ²ι is O(nmℓ³).

Input: Two actions for player i: a_i, a′_i
       A domain for each of the other players: D_{−i}
Output: True iff a′_i dominates a_i conditional on −i only playing actions from D_{−i}
T ← an empty trie
c ← an empty projected configuration on the neighbors of a_i, a′_i
return RecursiveCD(a_i, a′_i, D_{−i}, T, c, 1)

Figure 3.4: ConditionallyDominatedAGG

Input: Two actions for player i: a_i, a′_i
       A domain for each of the other players: D_{−i}
       A trie map T (passed by reference)
       A projected configuration c on the neighbors of a_i and a′_i
       A player j
Output: True iff a′_i dominates a_i conditional on −i only playing actions from D_{−i}
if j = i then
    return RecursiveCD(a_i, a′_i, D_{−i}, T, c, j+1)
if j = n+1 then
    if u_i(a_i, c) ≥ u_i(a′_i, c) then
        return false
    return true
foreach a_j ∈ {a_j ∈ A_j : ∃ s_j ∈ D_j s.t. a_j ∈ s_j} do
    c′ ← c ⊕ a_j
    if (j, c′) ∉ T then
        T ← T ∪ {(j, c′)}
        if ¬RecursiveCD(a_i, a′_i, D_{−i}, T, c′, j+1) then
            return false
return true

Figure 3.5: RecursiveCD

3.3.2 TGS Feasibility Program

SEM's asymptotic performance is dominated by the Test-Given-Support feasibility program. (Polynomial feasibility is NP-hard; e.g., polynomial constraints generalize 0–1 integrality constraints [104].) For NFG-SEM, this complexity obstacle is particularly severe: directly representing the TGS feasibility program requires space exponential in the size of the AGG. Thus, TGS could require doubly exponential time.

Lemma 3. The NFG-SEM TGS feasibility program has worst-case size of Θ(nm2^ℓ).

Proof sketch, similar to Lemma 1's proof. TGS can have |A_{−i}| terms in each constraint, and this quantity is Θ(2^ℓ) in the worst case. There are O(nm) such constraints.

The essential challenge is in the expected-utility constraints (3.1 and 3.2 of Figure 3.1), which can be exponentially long. We already have Jiang et al's [51] dynamic-programming algorithm for computing expected utility given a specific mixed-strategy profile. Now we want to compute expected utility symbolically, without specifying the probabilities beforehand. This can be accomplished by "unrolling" the dynamic program: every update in the dynamic program is expressed as a polynomial equality constraint in the TGS program. This set of new constraints is polynomial in the size of the AGG.

Lemma 4. The AGG-SEM TGS feasibility program has worst-case size of O(n²m²ℓ²).

Proof. For each j ∈ {1, ..., n} we introduce O(ζ) new constraints, each corresponding to the probability of a projected configuration given the strategies of the first j agents. This gives O(nζ) constraints. Each contains at most O(ζm) terms, corresponding to the possible projected configurations and actions that could lead to some new projected configuration when another agent is added. Since ζ is O(ℓ), the output requires O(nmℓ²) space. It must be run once for each agent i and for each action in A_i: O(nm) times in total. Thus, the TGS feasibility program requires O(n²m²ℓ²) space.

Although this optimization speeds up the worst case exponentially, it is not guaranteed to be helpful on average. This is because the symbolic representation of the TGS system is made exponentially smaller by replacing each exponentially long expected-utility constraint with multiple small constraints. How this change affects runtime in the average case depends on the (black-box) feasibility solver.

3.3.3 Asymptotic Analysis of SEM for AGGs

We are now ready to compare the overall asymptotic runtime of AGG-SEM and NFG-SEM. We assume that both algorithms make use of a polynomial feasibility solver with worst-case runtime O(2^x), where x is the length of the feasibility program. Unfortunately, without knowing which instances are hard for a given polynomial feasibility solver, we cannot know whether the inputs produced by either SEM variant will actually require that much time. Thus, we can only upper-bound the runtime of each algorithm.

Theorem 5. NFG-SEM's worst-case runtime requires the solving of O(2^ℓ) feasibility programs of size O(nm2^ℓ). Thus, NFG-SEM's worst-case runtime is bounded above by O(2^{nm2^ℓ + ℓ}), assuming a feasibility solver with worst-case runtime of O(2^x) where x is the length of the feasibility program.

Proof sketch. In the worst case, AGGs can have Ω(2^ℓ) support profiles (as in Lemma 1), even for symmetric games. Thus, the search must traverse a tree with O(2^ℓ) leaf nodes where TGS is solved, and O(2^ℓ) interior nodes where iterative removal of strictly dominated strategies (IRSDS) is performed. Each TGS system requires O(nm2^ℓ) space by Lemma 3. The time to solve each one is bounded above by O(2^{nm2^ℓ}), which dominates the time required by IRSDS. Thus the total runtime is bounded by O(2^{nm2^ℓ + ℓ}).

Theorem 6. AGG-SEM's worst-case runtime requires the solving of O(2^ℓ) feasibility programs of size O(n²m²ℓ²). Thus, AGG-SEM's worst-case runtime is bounded above by O(2^{n²m²ℓ² + ℓ}), assuming a feasibility solver with worst-case runtime of O(2^x) where x is the length of the feasibility program.

Proof sketch. AGG-SEM still searches O(2^ℓ) interior nodes and leaf nodes. At each leaf node, a TGS system requiring O(n²m²ℓ²) space (by Lemma 4) must be solved. The time to solve each one is bounded above by O(2^{n²m²ℓ²}), which dominates the time required by IRSDS. Thus the total runtime is bounded by O(2^{n²m²ℓ² + ℓ}).

Given complexity results known in the literature, it is unsurprising that AGG-SEM requires exponential time in the worst case. In particular, finding even PSNEs of AGGs in polynomial time would imply P=NP. The reasoning behind this observation is that AGG-SEM always returns a PSNE, if one exists. Thus, if AGG-SEM could find a sample Nash equilibrium in polynomial time, it could be used to determine whether a given AGG has any PSNEs in polynomial time. AGGs generalize graphical games (and have the same asymptotic size), and finding a PSNE of an arbitrary graphical game is NP-hard [42].
Further, finding a PSNE of a symmetric AGG with unbounded m is NP-hard [19].

3.3.4 Further Speedups for k-Symmetric Games

We now show that the search over supports can be sped up in the case of AGGs with k-symmetry, i.e., where the players can be partitioned into k classes such that all players in a class are identical. (We describe the algorithm for the case of k = 1, or full symmetry. The generalization is straightforward.) We saw in the proof of Theorem 5 that symmetry does not help for NFG-SEM. Here we strengthen that result, showing that NFG-SEM can take exponential time even when PSNEs exist in k-symmetric AGGs with bounded m and k.

Theorem 7. NFG-SEM requires exponential time to find a sample PSNE in a k-symmetric AGG with bounded m and k.

Proof sketch. For games with PSNEs, we never need to solve TGS: any support profile that survives IRSDS is a Nash equilibrium. The tree search must still expand O(2^ℓ) nodes to explore all O(2^ℓ) pure support profiles. At each interior node, IRSDS is called, requiring O(n²m³) calls to the conditional dominance test, which requires O(2^ℓ) time (by Lemma 1). For bounded m, the total runtime to find a PSNE is O(n²2^{2ℓ}). Further, recall from Lemma 1 that NFG-SEM's conditional dominance test requires Θ(2^ℓ) when the strategy is dominated. Thus, NFG-SEM's worst-case runtime is Ω(2^ℓ).

Next, we show that we can achieve an improvement on such games for AGG-SEM. This optimization works by skipping any support profile that is a permutation of a previously explored support profile. At every stage of the tree search, we explore a support S_i iff S_i ⪰ S_j, where j is any player with support selected higher in the tree, and where ⪰ is the order in which supports are explored at each level of the tree.

Lemma 8. AGG-SEM's search evaluates poly(n) support profiles in the worst case, even for games without PSNEs, given a k-symmetric AGG with bounded k and m.

Proof. Every distinct support profile can be identified by a vector of O(k2^m) integers in the range [0, n], where each element indicates how many agents of a given class have a given support. There are at most O(n^{k2^m}) such vectors. For bounded k and m, this quantity is poly(n).

Theorem 9. AGG-SEM requires poly(ℓ) time to either
1. find a sample PSNE, or
2. determine that no PSNE exists
in a k-symmetric AGG with bounded m and k.

Proof sketch. For bounded m and k, AGG-SEM's search expands polynomially many nodes (by Lemma 8), each of which requires running IRSDS. IRSDS performs O(n²m³) conditional dominance tests, requiring O(nmℓ³) time (by Lemma 2). Thus, AGG-SEM has poly(ℓ) runtime on such games.

3.3.5 Examples

In this section, we show two examples of key operations of AGG-SEM.

Outer-Loop Search Example

First, we show an example of AGG-SEM enumerating PSNEs, for the following simple symmetric game (the pure-strategy Nash equilibria are (A,A), (B,C) and (C,B)):

        A       B       C
A     1, 1    1, 0    1, 0
B     0, 1    2, 2    3, 3
C     0, 1    3, 3    2, 2

Figure 3.6: Visualizing the outer-loop tree search process followed by AGG-SEM when enumerating PSNEs. Each edge specifies that one more agent's support has been determined. Edges that can be pruned by IRSDS are marked in gray, while edges that can be pruned by symmetry are marked by dashes. Shaded (leaf) nodes correspond to PSNEs. Note that AGG-SEM does not need to visit node 12, because this is just a permutation of the equilibrium identified at node 9.

AGG-SEM's enumeration of supports proceeds in a depth-first fashion. Each arc represents choosing the support of a single agent. (See Figure 3.6.) For PSNEs, only supports of size one are considered. First, the algorithm selects that agent one plays action A (node 2), then IRSDS is performed, which removes B and C from the space of possible actions for agent two. The algorithm proceeds to node 3, and identifies a PSNE, (A,A). Because the other children of node 2 have been pruned by IRSDS, the algorithm backtracks and goes to node 6. Here, IRSDS can prune both A and B from the space of possible actions. a2 = A will also not be explored because this is a permutation of a profile that was previously explored or ruled out. The algorithm proceeds to node 9, another PSNE, (B,C). Then the algorithm backtracks and proceeds to node 10. However, all the children of node 10 can be pruned: A and C are dominated given that a1 = C, and profile (C,B) can be pruned by symmetry. With the space of pure-strategy profiles exhausted, the algorithm ends.

Conditional Dominance

Next we show a worked example of conditional dominance testing in a simple 3-agent, 4-resource congestion game with resources denoted by (A, B, C, D), where each agent can only choose a single resource, and where the payoff for using resource x is 6/c_x, where c_x denotes the number of agents using that resource. Note that this corresponds to a symmetric AGG where there are 4 nodes, and the only edges are self-edges. ConditionallyDominatedAGG works by searching for a partial strategy profile a_{−i} where a_i is a weak best response, which would prove that a′_i does not strictly dominate a_i, or by confirming that no such profile exists (meaning that a_i is strictly dominated and can be pruned without any risk of failing to find a Nash equilibrium).

Figure 3.7: Visualizing a ConditionallyDominatedAGG search. Each edge is labeled with an assignment of an action to an agent. Each node is labeled with an integer identifying the order in which ConditionallyDominatedAGG explores, and with a 2-tuple identifying the projected configuration given the actions leading to that node. Leaf nodes are square and come in pairs, representing actions a_i and a′_i, and are also labeled with agent i's utility. If i's utility is weakly greater for action a_i (as in the shaded nodes), then a_i is not dominated by a′_i and the search can exit successfully.

This search proceeds in a depth-first fashion, fixing the actions of all the other agents, one at a time. (See Figure 3.7.) At node 2, agent 1 plays A. This increases the projected configuration on A, denoted c_A, to 1. At node 3, agent 3 does the same, increasing c_A to 2. Nodes 4 and 5 correspond to agent 2's choice of A and D respectively, which increases c_A and c_D respectively. Because D has strictly higher utility for agent 2, the algorithm cannot yet confirm that D does not dominate A. Backtracking to node 6, we consider the case where agent 3 plays B. Because B is not a neighbor to either of the actions under consideration, the projected configuration does not change between node 2 and node 6. As in nodes 4 and 5, agent 2 has a higher payoff for D in nodes 7 and 8. Thus, the search backtracks to node 9. Because node 9 has the same projected configuration as node 6, it can be pruned and the search backtracks to node 10 (where agent 3 plays D) and its children, nodes 11 and 12.
In this case (where agents 1 and 3 play A and D respectively), agent 2 gets equal utility for A and for D, showing that D does not strictly dominate A.

3.4 Experimental Evaluation

So far, our analysis has concentrated on the worst case. However, improvements to the worst case do not necessarily imply improvements on instances of interest. As we are motivated by developing practical methods for computing Nash equilibria, we conducted an experimental evaluation to compare the performance of NFG-SEM and AGG-SEM.

3.4.1 Experimental Setup

We sampled 50 instances from each of 11 different game distributions. Nine distributions were from GAMUT [80]; each had n = 10 players, m = 10 actions per player, and action graphs with in-degree at most five. The remaining two distributions were over position auction games with n = 10 players and up to m = 11 actions per player (though weakly dominated actions, which occurred frequently, were omitted by the generator). The two position auction distributions were based on 1) the generalized first-price auction (GFP), which often has no pure-strategy Nash equilibrium, and 2) the weighted generalized second-price auction (GSP), which often has many pure-strategy Nash equilibria [98]. (See Table 3.1.)

On each game, we compared AGG-SEM to three other algorithms: NFG-SEM, and the two existing state-of-the-art Nash-equilibrium-finding algorithms: GNM, the global Newton method [43], and SimpDiv, simplicial subdivision [102], both using Gambit implementations [68] extended to work efficiently with AGGs by Jiang et al [51]. All algorithms were given an error tolerance of 10^{−10}. For AGG-SEM and NFG-SEM, we used MINOS [74] to solve the TGS feasibility problems.

We performed a blocking mean-of-means test [15] (with p ≤ 0.05) to compare mean runtimes across game distributions. This is a blocking test (i.e., rather than testing whether the mean runtime of algorithm A was significantly less than the mean runtime of algorithm B, we tested whether zero was significantly greater than the mean difference between the runtimes of algorithms A and B), to account for the fact that some instances may be harder than others, e.g., because they are lacking in small-support equilibria. The bootstrapping process for testing these differences was run with 20,000 samples per test. Because the same instances were used in multiple tests (e.g., comparing algorithm A to algorithm B and to algorithm C), we applied a Bonferroni correction.

In three distributions (see Table 3.1), we were not able to conclude that differences were significant because of high runtime variation. Such problems can be overcome by obtaining additional data; thus, for these distributions we generated an additional 150 instances (i.e., 200 total). We included the resulting additional tests in our Bonferroni correction. In the end, we were able to identify a significantly faster algorithm for every distribution, except for D1, the coffee-shop game.

Our experiments were performed on machines with dual Intel Xeon 3.2GHz CPUs, 2MB cache and 2GB RAM, running Suse Linux 11.1 (Linux kernel 2.6.27.48-0.3-pae). Each run was limited to 12 CPU hours; we report runs that did not complete as having taken 12 hours. In total, our experiments required about 420 CPU days.

Figure 3.8: Scatterplot contrasting the runtimes of AGG-SEM and NFG-SEM (both axes in CPU seconds, log scale).
The left plot shows runtimes for computing a sample Nash equilibrium; the right plot shows runtimes for computing all pure-strategy Nash equilibria.

3.4.2 Results

Overall, we found that AGG-SEM provided a substantial performance improvement over NFG-SEM, outperforming it on the vast majority of instances. (See Figure 3.8.) As we expected, AGG-SEM was not faster on absolutely every instance. Nevertheless, AGG-SEM achieved significantly faster mean performance in every game distribution. Its largest speedup over NFG-SEM was 280× (on D1 — Figure 3.12a), its smallest speedup was 1.45× (on D4 — Figure 3.12d), and its median speedup was 10× (on D10 — Figure 3.12j). While the biggest speedups were on games where AGG-SEM could leverage k-symmetry, ranging from 280× (on D1 — Figure 3.12a) to 7× (on D2 — Figure 3.12b), we still achieved substantial speedups on asymmetric games (those with n player classes), ranging from 10× (on D10 — Figure 3.12j) to 1.45× (on D4 — Figure 3.12d). AGG-SEM stochastically dominated NFG-SEM overall (see Figure 3.11), but on a per-distribution basis, it only stochastically dominated on four distributions (D3 and D9–11 — Figure 3.12c and i–k). AGG-SEM's failure to stochastically dominate on the remaining seven distributions was due to the fact that all contained instances that both methods solved extremely quickly (in less than one second), but that NFG-SEM finished more quickly. Considering runtimes over a second, AGG-SEM stochastically dominated NFG-SEM on every distribution.

AGG-SEM significantly outperformed SimpDiv and GNM on 8 of the 11 game distributions (see Table 3.1), and furthermore stochastically dominated GNM overall (see Figure 3.11). Comparing the runtimes of AGG-SEM and SimpDiv, we found that the two were correlated: they both solved many (338) of the same instances in under 600s, with SimpDiv having better mean runtime on these instances (µ = 12.71s vs µ = 108.32s). On the remaining instances, SEM finished far more often (87.74% vs 23.11%). Thus, SimpDiv was only fastest on D2 (ice-cream games — Figure 3.12b), which contained almost exclusively instances that were easy for both algorithms. AGG-SEM only stochastically dominated SimpDiv on D10 and D11 (Figure 3.12j and k), which contained no instances that were easy for SimpDiv. AGG-SEM's runtime was less correlated with that of GNM than it was with that of SimpDiv. For example, GNM solved every instance in D4 (GFP position auctions — Figure 3.12d), which contained instances that were not solved by either SimpDiv or AGG-SEM. Overall, however, GNM had the worst performance (and was stochastically dominated by AGG-SEM on every distribution but D3 and D4). Although AGG-SEM had the fastest overall average runtime, there were at least a few instances for which each of AGG-SEM, SimpDiv and GNM was hundreds of times faster than the others. The best practical approach may thus be a portfolio of all three algorithms, following e.g. [111].

Like Porter et al [84], we found that in many (7 of 11) distributions, most games (over 90%) had PSNEs. AGG-SEM finished on every such game. Thus the four distributions (D4 and D9–11 — Figure 3.12d and i–k) in which PSNEs were least common were also those on which AGG-SEM was most likely to time out. (We verified that AGG-SEM terminated on every game with PSNEs by checking the support size AGG-SEM was considering when it timed out.
In every case, it ruled out all pure support profiles before running out of time.)

One advantage of SEM over other Nash-equilibrium-finding algorithms is its ability to enumerate all Nash equilibria (or all equilibria with support sizes not more than some constant). This is particularly useful when we want to understand the range of possible outcomes. For example, in [98] one of the goals was to identify the minimum and maximum revenue possible in equilibrium of position auction games. At the time, we were only able to compute empirical bounds on revenue in any given auction game (e.g., asserting the best-case equilibrium revenue must be at least $5 because we found an equilibrium where the revenue is $5). Tools that could search through or enumerate the full set of equilibria (e.g., to find the equilibrium that provably maximizes revenue) did not exist at the time. We have since tested equilibrium enumeration by searching for all PSNEs on a representative subset of these games (20 from each distribution). We found that AGG-SEM was significantly faster than NFG-SEM (see Figure 3.8) at enumerating the set of all PSNEs. Notably, for every position auction game, AGG-SEM was able to find all PSNEs in under one CPU minute. (These games each have ten bidders, eight positions and eleven bid increments per bidder.)

3.5 The Hardness of Generalizing AGG-SEM

In the previous sections, we have focused on AGGs with function nodes (or AGG-FNs). We introduced the AGG-SEM algorithm for AGG-FNs, and demonstrated its advantages both theoretically and experimentally. However, this algorithm depends on solving many conditional-dominance problems as a key inner-loop step. In this section, we show that conditional dominance is computationally hard for two key extensions of AGGs: AGGs with function nodes and additivity (or AGG-FNAs) and Bayesian AGGs (BAGGs, with or without function nodes). Thus, SEM (and other algorithms depending on dominance checking) may not be feasible for such generalized AGGs.

No.   Game type            Player    Mean runtime (CPU s)
                           classes   AGG-SEM     NFG-SEM      GNM          SimpDiv
D1    Coffee shop          1         18.00       5032.38*     3309.73*     362.63†
D2    Ice cream            3         131.64*     957.42*      151.59*      0.39
D3    Job market           1         249.02      6070.61*     372.96*†     1536.45*†
Position auctions:
D4    GFP                  n         7519.90*    10878.19*    75.73        10750.93*
D5    Weighted GSP         n         45.10       96.78*       723.19*      734.56*†
Random AGGs:
D6    Random graph         1         68.02       7005.34*     10580.58*    5188.02*
D7    Road graph           1         441.11      32103.15*    41814.79*    9507.58*
D8    Small-world graph    1         596.75      31750.79*    28195.09*    4665.58*
Random graphical games:
D9    Random graph         n         11953.48    20469.50*    24337.47*    27002.81*
D10   Road graph           n         3244.50     32052.36*    43200.00*    43200.00*
D11   Small-world graph    n         11356.47    29861.96*    43200.00*    40677.67*
Overall:                             3244.28     16520.09*    18265.93*    13176.94*

Table 3.1: Mean runtimes. * denotes significantly slower than the fastest solver (α = 0.05). Capped runs count as 43200s. † denotes distributions where, due to high variance, more instances (200) were necessary for statistical significance.

Formally, an AGG-FNA is similar to an AGG-FN, except for the space of pure strategies: in an AGG-FNA, each agent chooses a pure strategy from a list of subsets of action nodes. His payoff for a given pure strategy is simply the sum of the payoff values for all the action nodes in his selection.

We define the decision problem as follows: the input is an AGG-FNA, an agent i, and a pure strategy s_i for that agent; the decision property is that s_i is not strictly dominated by any pure strategy.

Theorem 10. UNDOMINANCE(AGG-FNA) is NP-complete.

Proof.
Part 1: UNDOMINANCE(AGG-FNA) is in NP: A certificate for this problem is a pure-strategy profile for all the agents except i. A certificate can be checked by simply enumerating all of i's pure strategies, computing the expected utility for each. The certificate is valid iff s_i's utility is weakly maximal.

Part 2: UNDOMINANCE(AGG-FNA) is NP-hard, by reduction from 3SAT: In 3SAT, the input is a set of k clauses of length at most three, involving m boolean variables denoted x_1, ..., x_m. The decision property is that there exists a satisfying assignment, i.e., an assignment to all m variables where every clause is satisfied. This reduction creates an (m+1)-player AGG-FNA (as in Figure 3.9) where player i (for i ≤ m) corresponds to variable x_i. Each of these players has two pure strategies, corresponding to the possible assignments of his variable. Each of these pure strategies corresponds to a single action node, not shared with any other player. The payoffs of these players are not important, and can therefore be constant (meaning the corresponding action nodes have no in-arcs). Player m+1 also has two pure strategies, which we call SAFE and SAT. SAFE consists of a single action node with no in-arcs; player m+1 gets a payoff of k−1/2 whenever he plays SAFE. SAT consists of k action nodes, corresponding to the k clauses in the 3SAT problem instance. Each of these action nodes has at most three in-arcs corresponding to single-variable assignments that could satisfy the clause. The payoff for each of these action nodes is one if the assignment would satisfy the clause and zero otherwise.

Figure 3.9: An example 3SAT instance reduced to dominance in an AGG-FNA. Dashed boxes denote action sets for agents, ellipses denote action nodes and shaded regions denote pure strategies. Every pure strategy except SAT has a single action node; the payoff of SAT is the sum of the payoffs of its two action nodes.

Sub-claim (testing for undominance can be used to test for satisfiability): There is a one-to-one mapping between assignments in the 3SAT problem instance and pure-strategy profiles for the first m players. Given such a pure-strategy profile, the payoff for player m+1 playing SAT is equal to the number of clauses satisfied by the assignment. Thus, player m+1 cannot get a payoff of k unless an assignment exists which satisfies all k clauses. Thus, SAT is dominated by SAFE (with a guaranteed payoff of k−1/2) unless the 3SAT problem instance is satisfiable.

Sub-claim (the reduction is polynomial in time and size): This game has 2m action nodes for the variable players, and k+1 action nodes for the SAFE/SAT player. Each variable-assignment action node (and the SAT node) has no in-arcs, and thus their payoff tables each have only a single entry. Each of the k clause action nodes has 3 in-arcs, which can only be zero or one (because no action node is shared between any two agents). Thus, each of those action nodes has a payoff table with 2³ entries. Each action node only appears in a single pure strategy. The entire AGG-FNA can be created in a single pass (iterating through variables then through clauses, creating the relevant parts of the AGG-FNA as they arise).
Therefore, the entire reduction can be accomplished in O(m+k) time and space.

Therefore, UNDOMINANCE(AGG-FNA) is NP-hard.

A similar proof can be used with Bayesian action-graph games (BAGGs).

Bayesian action-graph games are a combination of action-graph games and epistemic-type Bayesian games. As in Bayesian games, each agent has a private epistemic type drawn from a discrete distribution with a commonly known prior. (Types need not be probabilistically independent.) Each type maps to a subset of action nodes that the agent may choose from. BAGGs can also include function nodes and additivity, but these features are not necessary for our proof of hardness.

We define UNDOMINANCE(BAGG) as follows: the input is a BAGG, an agent i, a type t_i for that agent and an action node a_i for that agent given that type; the decision property is that a_i is not ex interim strictly dominated by any pure strategy.

Theorem 11. UNDOMINANCE(BAGG) is NP-complete.

Our proof of NP-completeness for BAGGs follows the same structure as the previous AGG-FNA proof. In particular, the hardness reduction also involves a subset of the players whose strategies correspond to certificates and a player who must choose between a safe action and an action that corresponds to the decision property (which will be strictly dominated when the decision property is false).

Proof. Part 1: UNDOMINANCE(BAGG) is in NP: A certificate for this problem is a behavioral-strategy profile for all the agents except i. A certificate can be checked by simply enumerating all of i's possible actions, computing the ex interim expected utility for each. The certificate is valid iff a_i's utility is weakly maximal.

Part 2: UNDOMINANCE(BAGG) is NP-hard, by reduction from INDEPENDENTSET: In INDEPENDENTSET, the input is an undirected graph with n vertices, and a non-negative integer k. The decision property is that there exists an independent set (i.e., a set of nodes where no two nodes in that set share an edge) with cardinality k.

This reduction creates a 3-player BAGG. (See Figure 3.10.) The first two players each have n types (with uniform probability), each of which corresponds to one of the nodes v_i in the graph. Given each of these types, an agent has a choice of two action nodes, labeled IN and OUT. Again, the payoffs of these players are irrelevant, so these action nodes have no inputs. The third player has only one type, with two action nodes, SAFE and INDSET. If he plays SAFE, he gets a constant utility of k−1/2. If he plays INDSET, his ex post utility is given by the table below:

case   t1 and t2     a1 and a2    u3
i      equal         both IN      n²
ii     equal         both OUT     0
iii    equal         otherwise    −n³
iv     neighbours    both IN      −n³
v      neighbours    otherwise    0
vi     otherwise     otherwise    0

Figure 3.10: An example INDEPENDENTSET instance reduced to dominance in a BAGG. Dashed boxes denote action sets for agents given their types.

Note that these payoffs can be computed by connecting every IN node to the INDSET node.

Sub-claim (testing for undominance can be used to test for independent set cardinality): For agents 1 and 2, every pure strategy is a mapping from the set of vertices to {IN, OUT}, and thus corresponds to a subset of the vertices. If agents 1 and 2 play strategies that are not identical, then agent 3's ex interim expected utility of INDSET is at most zero. (This is because agent 3 gets a payoff of −n³ with probability at least 1/n² (case iii); the only positive payoff agent 3 can get is n², and this happens with probability at most n/n².)
If agents 1 and 2 play strategies that are identical but that do not correspond to an independent set, agent 3's ex interim expected utility of INDSET is at most zero, by the same reasoning as with non-identical strategies (but using the penalty from case iv). If agents 1 and 2 play strategies that are identical, and that correspond to an independent set with cardinality j, then agent 3's ex interim expected utility of INDSET is exactly j (because he gets payoff n² (case i) with probability j/n²). Thus, INDSET is dominated by SAFE (which gets a payoff of k−1/2) unless there is an independent set with cardinality k.

Sub-claim (the reduction is polynomial in time and size): This BAGG has 2n+2 action nodes, and all but INDSET have no inputs (and therefore have payoff tables of size 1). INDSET has n inputs, but those inputs must be non-negative integers and must sum to at most two. Thus, the payoff table for INDSET can have only n² possible configurations, and can only require O(n³) space. This O(n³) factor dominates the other costs in both time and space.

Therefore, UNDOMINANCE(BAGG) is NP-hard.

Figure 3.11: Runtime CDFs for AGG-SEM and the three incumbent algorithms (fraction of instances solved as a function of runtime in CPU seconds).

3.6 Conclusion

We have shown that the support enumeration method can be extended to games compactly represented as AGGs. Our approach outperforms the original SEM algorithm for such games both asymptotically and in practice. Theoretically, we showed that SEM's worst-case runtime can be reduced exponentially. Our work in this vein may also be of independent interest, as it shows novel ways of exploiting AGG structure in theoretical analyses. In particular, the polynomial-time algorithm for removing dominated strategies could be useful, e.g., as a preprocessing step for other equilibrium-finding algorithms. Empirically, we observed that our new algorithm was substantially (often orders of magnitude) faster than the original SEM algorithm, and that it almost always outperformed current state-of-the-art algorithms. Beyond this, our algorithm offers substantial advantages over existing algorithms, such as the ability to enumerate equilibria and to identify pure-strategy Nash equilibria or prove their non-existence.

We envision several extensions to AGG-SEM. One promising direction is to search for specific types of (e.g., symmetric or social-welfare-maximizing) equilibria, for example by replacing the depth-first search with branch-and-bound search. This could be particularly valuable for computational mechanism analysis in cases with many equilibria; sometimes the main bottleneck in finding a specific equilibrium is that it belongs to an exponentially large set, which SEM exhaustively enumerates. Performance could also be improved by using good heuristics to choose the order in which supports are instantiated, or even by exploring the space of supports using stochastic local search rather than tree search.
(However, using local search would require giving up the ability to enumerate equilibria, one of SEM's most unusual and valuable features.)

Figure 3.12: Per-distribution runtime CDFs for all four algorithms (AGG-SEM, SimpDiv, GNM and NFG-SEM). Panels: (a) D1: Coffee shop; (b) D2: Ice cream; (c) D3: Job market; (d) D4: GFP; (e) D5: Weighted GSP; (f) D6: AGG - random; (g) D7: AGG - road; (h) D8: AGG - small-world; (i) D9: GG - random; (j) D10: GG - road; (k) D11: GG - small-world.

Chapter 4

Application: Position Auctions for Internet Advertising

4.1 Introduction

This chapter covers the first application domain for my computational mechanism analysis techniques, position auctions. Position auctions are a relatively new family of mechanisms in which bidders place a single bid for a set of goods of varying quality, and the ith-highest bidder wins the ith-most-desirable good. Each year, these auctions yield billions of dollars selling advertising space on search-engine results pages. Various position auction designs have been considered over the years; e.g., advertisers who attract more clicks may be given advantages over weaker bidders, and bidders may have to pay their own bid amounts or a smaller amount computed from the bids of others. After some initial experimentation with these design dimensions, the major search engines have converged on a single design: the weighted, generalized second-price auction (which we denote wGSP; we define it formally in what follows). The main question that this chapter seeks to address is whether wGSP represents a good choice, as compared both to the auctions it has replaced and to theoretical benchmarks. Specifically, we ask whether wGSP is more economically efficient, whether it generates more revenue, whether it yields results that users find more relevant, and whether it produces low-envy allocations.

There is an enormous literature on auction analysis that seeks to answer such questions. Overwhelmingly, this literature proceeds by modeling a setting as a (Bayesian; perfect-information) game and then using theoretical analysis to describe what occurs in (Bayes–Nash; dominant-strategy; locally envy-free) equilibrium.
There ismuch to like about this approach: it is often capable of determining that a given mechanism optimizes anobjective function of interest or proving that no mechanism can satisfy a given set of properties. Indeed, mostof what we know about mechanism design was established through such analysis. However, the approach31also has limitations: in order to obtain clean theoretical results it can be necessary to make strong assumptionsabout bidder preferences and to simplify tie-breaking and bid discretization rules. Even when consideringsuch a simplified version of a given problem, it is often extremely difficult to make quantitative comparisonsbetween non-optimal mechanisms (e.g., which non-revenue-optimizing mechanism yields higher revenue onexpectation)? The current state of affairs in the literature is thus that we know a great deal about auctions, butthat many open questions appear to be resistant to analysis by known techniques.In the case of position auctions, most research has used perfect-information Nash equilibrium as the solutionconcept of choice. This choice is justified by the fact that advertisers interact repeatedly: nearly identicalgoods—user views of a particular search result page—are sold up to millions of times per day, and advertiserscan continuously adjust their bids and observe the effects. A variety of other technical assumptions arecommonly made, characterizing advertiser preferences (e.g., a click has the same value regardless of position,and regardless of which other ads are shown), user behavior (e.g., a user’s response to an ad is independent ofwhich other ads she has seen), and advertiser behavior (e.g., an advertiser will act to reduce his envy evenwhen doing so does not increase his utility). With all these assumptions in place, various strong results havebeen obtained (e.g., wGSP is efficient and generates weakly more revenue than VCG), but other importantquestions remain open (e.g., does wGSP generate more revenue than other position auctions?). Relaxing anyone of these assumptions can lead to many further open questions.This chapter shows that my computational mechanism analysis techniques are able to address a wide variety ofopen questions that have not proven amenable to theoretical analysis. We maintain the approach of modeling amechanism as a game and of reasoning about (exact) solution concepts of interest; we depart from traditionalanalysis by allowing only a discrete set of bids and by answering questions by providing statistical evidencerather than theorems. Specifically, we sample advertiser preferences from a given distribution, computean exact Nash equilibrium of the resulting game between advertisers, and reason about this equilibrium tocompute properties of the outcome, such as expected revenue or social welfare. By repeatedly sampling,we can make quantitative statistical claims about our position auction setting (e.g., that one auction designgenerates significantly more expected revenue than another). The results of this chapter demonstrate thatmymethods are able to yield qualitatively new findings about a widely studied setting.Computational mechanism analysis differs in many ways from theoretical methods. As already mentioned,one clear disadvantage is that our approach only produces statistical results (e.g., given distribution D, Aperforms significantly better than B on expectation) rather than theorems (e.g., A always performs better thanB). 
Conversely, our methods have the advantage that they are able to produce results in settings for which suchsimple patterns do not exist. For example, we have observed distributions over advertiser preferences underwhich the wGSP auction sometimes generates far less revenue than its predecessor, uGSP, and sometimesgenerates far more. Thus, we know that any comparison of these auctions must necessarily be statisticaland distribution dependent, rather than guaranteeing that one of these auctions always yields more revenue.Our computational approach also allows us to consider arbitrary preference distributions, possibly derived32from real-world data, as opposed to being restricted to distributions with convenient theoretical propertieslike monotone hazard rates. A further property of computational mechanism analysis is both a benefit and aweakness: bidders must be restricted to a finite set of discrete bids, unlike the vast majority of literature onauction theory, which assumes that bids are continuous. While we depart from this tradition, we do not seediscreteness as necessarily disadvantageous; real-world position auctions tend to be rather coarsely discrete.For example, position auctions often clear for tens of cents per click, while bids are required to be placed ininteger numbers of cents. Finally, some auction features, like rules for tie-breaking and rounding, are difficultto analyze in the continuous case, but pose no obstacle to our approach.The bulk of this chapter shows the effectiveness of computational mechanism analysis by demonstratingwhat it can tell us about position auctions. In Section 4.5 we consider both simple models (such as those ofVarian and Edelman et al) and richer models (such as cascade) in which advertisers have position-dependentvaluations and externalities, in each case using our techniques to shed light on open problems.1 Somehigh-level findings emerged from our analysis. Most strikingly, we found that wGSP consistently showedthe best ads of any position auction, measured both by social welfare and by relevance (expected number ofclicks). Further, even in models where wGSP was already known to have bad worse-case efficiency (either interms of price of stability or price of anarchy), we found that it almost always had very good average-caseperformance. In contrast, we found that revenue was extremely variable across auction mechanisms, andfurthermore was highly sensitive to equilibrium selection, the preference model, and the valuation distribution.In Section 4.6 we consider the extent to which our findings are sensitive to the bid discretization used, thenumber of bidders, and the number of slots sold.4.2 BackgroundAlthough a variety of position auction variants have been proposed, only three have seen large-scale use inpractice. We describe them here and also provide short form names that we will use throughout the chapter.All auctions are pay-per-click; that is, bidders pay every time an end-user clicks on an advertisement, notevery time an advertisement is displayed.GFP The generalized first-price auction, used by Overture and by Yahoo! from 1997–2002. Each biddersubmits a single bid; the highest bidder’s advertisement is displayed in the highest position, the secondbidder gets the second-highest position, and so on. Each bidder pays the amount of his bid.uGSP The unweighted, generalized second-price auction, used by Yahoo! from 2002–2007. 
As before,bidders are ranked by their bids; the bidder winning the ith position pays the i+1st-highest bid.wGSP The weighted, generalized second-price auction, used by Google AdWords and Microsoft adCenter,1We previously presented results for the no-externalities models in a conference paper [98]. The representation for models withexternalities is new to the current paper, as are all experimental results.33and by Yahoo! since 2007. The search engine assigns each bidder a weight or “quality score”; wemodel this as the probability that an end-user would click on each bidder’s advertisement if it wereshown in the highest position. Bidders are scored by the products of their weights and their bids, andare ranked in descending order of their scores. Each bidder pays the smallest bid amount that wouldhave been sufficient to cause him to maintain his position.These auctions have all received theoretical analysis under a variety of models, typically with the assumptionthat bidders will converge, in repeated play, to an equilibrium of the full-information, one-shot game.4.2.1 Metrics for Evaluating Auction OutcomesIt is tempting to believe that search engines have settled on wGSP because it is a better auction design. Toclaim this, we need to decide what we mean by “better.” In this chapter, we will consider four such metrics.1. Most straightforwardly, perhaps search engines gravitated to an auction design that maximizes their(short-term) interests: revenue.2. Perhaps search engines are better off maximizing the welfare of advertisers, to ensure advertisers’ongoing participation, and thus the search-engine’s longer-term revenue. If so, we should assess amechanism according to the sum of advertisers’ valuations for the allocation achieved in equilibrium.3. Both of these measures neglect a third group of agents: the search engine’s end-users, without whomthere would be no profits for advertisers and thus no revenues for the search engine. Some researchersbelieve that our second metric, social welfare among advertisers, is a good proxy for end-user payoffs,because advertisers only get revenue from clicks that satisfy users’ needs [4]. Others have arguedthat click-through rates are a more direct measure of whether advertisements are interesting to users,measuring the relevance of a page of search ads by the expected number of clicks it receives [60]. Notethat this is an expectation because even if the allocation of positions is deterministic, because whetheror not a user clicks on a given ad is (for all the models covered in this paper, at least) a random event.4. Finally, envy is another important measure of the quality of a multi-good allocation. One agent ienvies another agent j if i’s expected utility could be increased by exchanging i’s and j’s allocationsand payments [44]. Allocations that do not give rise to such envy have been considered desirable inthemselves. In the position auction literature, however, it is more common for envy to be used as a toolfor equilibrium selection. Many researchers have restricted their attention to envy-free Nash equilibria(i.e., Nash equilibria in which the total envy across all bidders is zero) [30, 103].2 In this chapter, we2Although these researchers focused on locally envy-free equilibria—under which no bidder envies the bidders in adjacentpositions—this distinction is not important for our purposes. Under their models, local envy-freeness implies global envy-freeness,using the more general definition of envy from [44]. 
We need this richer definition of envy—which allows for randomization—tostudy cases involving randomized tie-breaking or mixed strategies. Even so, this richer definition does not cover externalities—where34also use envy as a tool for equilibrium selection, contrasting envy-free and general Nash equilibria.4.2.2 Models of Bidder ValuationsA wide range of different models have been proposed for bidder valuations in position auctions. There arebroadly two classes of models: those that make a no-externalities assumption—holding an ad’s positionconstant, it will generate the same expected number of clicks and same expected value regardless of whichads are shown in other positions—and those that do not make such an assumption.Models Without ExternalitiesWe consider four no-externalities models. The first two models (which we call EOS and V, after the researcherswho introduced them) have in common the assumption that each advertiser values clicks independently oftheir advertisement’s position. The next two models (which we call BHN and BSS) allow for “positionpreferences”: an advertiser might have different values for clicks when the advertisement appears in differentpositions. In each case, we describe important results from the literature, as well as open questions that wewill address.EOS Model. Edelman, Ostrovsky and Schwarz [30] analyzed the GSP under a preference model in whicheach bidder’s expected value per click is independent of position. The click-through rate is the same for allads in a given position (making uGSP equivalent to wGSP), and decreasing in position (ads that appear loweron the screen get fewer clicks). EOS defined locally envy-free equilibria as Nash equilibria in which no bidderenvies the allocation received by a bidder in a neighboring position, and showed that in such equilibria, uGSPis efficient and revenue dominates the truthful equilibrium of VCG. Caragiannis et al [12] showed that other,lower-efficiency Nash equilibria exist, but that none is worse in terms of social welfare by a factor greaterthan 1.259. (In other words, 1.259 is an upper bound on the price of anarchy.) Given that locally envy-freeequilibria are only guaranteed to exist in the continuous case, while real wGSP uses discrete increments,some natural open questions about this model follow.Question 1: Under EOS preferences, how often does wGSP give rise to envy-free (efficient, VCG-revenue-dominating) Nash equilibria? What happens in other equilibria, and how often do they occur?V Model. Varian [103] analyzed wGSP under a more general model, in which each bidder’s value per click isstill independent of position, but click-through rates are decreasing and “separable.” Separability means thatfor any position/bidder pair, the click-through rate can be factored into a position-specific component that isan advertiser’s utility depends not just on his own allocation, but also on which agents receive which other goods. Thus, we do notconsider envy when working with settings involving externalities.35independent of bidder identity and a bidder-specific component that is independent of position (correspondingto wGSP’s weights). Varian showed that in any “symmetric” (globally envy-free) equilibrium, wGSP isefficient and revenue dominates VCG. The price of anarchy result of Caragiannis et al. also applies to the Vmodel. Lahaie and Pennock studied the problem of what happens in this model in uGSP and wGSP (andpoints in between) [60]. 
Under additional assumptions about the valuations, they found that uGSP was lessefficient than wGSP but generated more revenue. These findings suggest some natural open questions aboutthe V model.Question 2: Under V preferences, how often does wGSP have envy-free (efficient, VCG-revenue-dominating)Nash equilibria? What happens in other equilibria, and how often do they occur? Is uGSP generally betterthan wGSP for revenue, or does their relative performance depend on equilibrium selection, the valuationdistribution, and/or other properties of the game? Is wGSP generally better for efficiency?BHN Model. Blumrosen, Hartline and Nong [11] proposed a model that (like the one that follows) allowsfor the possibility that not all clicks are equally valuable to an advertiser. In this model, click-through ratesare still decreasing and separable. However, a bidder’s expected value per click increases with the bidder’srank in a separable fashion, subject to the constraint that a bidder’s expected value per impression is weaklydecreasing. The authors support their generalization by describing empirical evidence that conversions (e.g.,sales) occur for a higher proportion of clicks on lower-ranked ads. They show that preference profiles existunder their valuation model in which wGSP has no efficient, pure-strategy Nash equilibrium. However, whilewe know that such preference profiles exist, we do not know how much of a problem they pose on average.Question 3: Under BHN preferences, how often does wGSP have no efficient Nash equilibrium? How muchsocial welfare is lost in such equilibria?BSS Model. Benisch, Sadeh and Sandholm [9] proposed another position-preference model that generalizesEOS. In this model, click-through rates are decreasing in position but independent of bidder identities.However, bidders’ values are single peaked in position and strictly decreasing from that peak. For example,“brand” bidders might prefer the prestige of top positions, while “value” bidders prefer positions further down.Benisch, Sadeh and Sandholm analyzed this model in an imperfect-information setting, and showed boththat uGSP and wGSP ranking rules can be arbitrarily inefficient for such models and that more expressivebidding languages can improve efficiency. For different valuation distributions consistent with their model,they bounded the loss of efficiency in the best-case Bayes-Nash equilibrium. We observe that this boundderives entirely from GSP’s inexpressiveness—the fact that agents lack the ability to communicate their truepreferences—rather than from agents’ incentives. Under the more common assumption of perfect-informationNash equilibria, expressiveness cannot cause an inefficient outcome (because for any order of the bidders,strategy profiles exist that will rank the bidders in that order), but incentives can (some bidder might want todeviate from any efficient strategy profile).36Question 4: Under BSS preferences, how often does wGSP have no efficient perfect-information Nashequilibrium? How much social welfare is lost in such equilibria?Models with ExternalitiesSo far we have assumed that advertisers do not care about which ads appear above and below their own. Wenow consider three models that relax this assumption: cascade and two further models that generalize cascade(hybrid, GIM).Cascade Model. The most widely studied model of position auction preferences with externalities is the“cascade model ” [2, 36, 56]. 
It captures the idea that users scan and click ads in the order they appear; a goodad can make lower ones less desirable, while a bad ad can cause a user to give up entirely. More specifically,users scan the ads starting from the top, and each time the user looks at (and possibly clicks on) an ad, shemay subsequently decide to stop scanning. In this model, it is possible for wGSP to have low-efficiencyequilibria, with price of anarchy 4 [92]. If weights are modified to take into account the probability that auser will continue reading ads, wGSP has a revenue-optimal equilibrium, but this equilibrium may requirebidders to bid above their own valuations, which is a weakly dominated strategy.3Question 5: Under cascade preferences, how often are there low-efficiency equilibria? In cases wherelow-efficiency equilibria exist, are these the only equilibria? When agents play only undominated strategies,how much revenue can wGSP generate? How do the modified weights affect wGSP’s efficiency?Hybrid Model. A richer model, which we call “hybrid”, combines features of the separable (V) and cascademodels. Users decide whether or not to continue scanning ads based on the number of ads they have alreadyscanned (as in V), and the content of each of those ads (as in cascade) [56]. Under this model, auctionsthat allocate according to greedy heuristics—including GSP—are not economically efficient; indeed, noeconomically efficient, polynomial-time allocation algorithm is known [56]. Further, strategic behavior inGSP can lead to greater efficiency loss. When k slots are being sold, the worst Nash equilibrium can be aslittle as 1/k-efficient, while the best can be as little as 2/k-efficient [39].Question 6: With hybrid preferences, how much social welfare is lost? On typical instances (as opposed toworst-case ones), how much difference is there between best- and worst-case equilibria?GIM Model. Beyond their work on cascade, Gomes, Immorlica and Markakis [40] also introduced an evenricher model (which we call GIM) which generalizes the hybrid model in two important ways. First, it allowsfor arbitrary pairwise externalities. For example, after looking at—and possibly clicking on—the first ad,3This reweighted wGSP mechanism is only optimal among mechanisms that do not use reserve prices. Also, because someagents’ strategies may be dominated—bidding strictly more than their valuations—the equilibria tend to not be rationalizable.37a user might update her beliefs about which ads are promising, affecting some ads positively and othersnegatively. Second, this model allows for the possibility that a user’s behavior might depend on which sets ofads she has seen and clicked on before. Its authors argue that this richer model should have similar propertiesto the cascade model.4Question 7: Are the efficiency and revenue achieved by wGSP substantially different under the cascade andGIM models?4.3 Representing Position AuctionsWe now present algorithms for succinctly encoding various position auction settings as AGGs. Implementa-tions of all our algorithms are freely available at http://www.cs.ubc.ca/research/position auctions.4.3.1 Representing No-Externality GFPs as AGGsWe can naively represent a no-externality GFP as an AGG as follows. We begin by creating a distinct actionset for each agent i containing nodes for each of his possible bids bi. Other agents will push agent i below thetop position if they bid strictly more than i’s bid bi or if they bid exactly bi and are favored by the tie-breakingrule. 
Thus, we create directed edges in the action graph pointing to each b_i from every b_j that could potentially be awarded a higher position: every b_j for which b_j ≥ b_i and j ≠ i. (If the tie-breaking rule is randomized, we consider all of the different positions the agent could achieve and return his expected utility.) The price an agent pays is simply the amount of his bid, and so can be determined without adding any more edges.

How much space does this representation require? Let the number of agents be n and the number of bid amounts for each agent be k. Consider the action node b_i^0 corresponding to i's lowest bid amount. This node has edges incoming from each action node belonging to another agent, meaning that the utility table corresponding to b_i^0 must store a value for every configuration over these nodes. There are k possible configurations over each agent j's action nodes, and n−1 other agents who bid independently, so a total of k^{n−1} such configurations. This expression is exponential, and so we consider the AGG to be intractably large.

However, notice that our naive representation failed to capture key regularities of the no-externalities GFP setting. Bidder i does not care about the amount(s) by which he is outbid, nor does he care about the identities of the bidders who outbid him. We can capture these regularities by introducing function nodes. For every action node b_i, we can create one summation function node (formally, a node whose count is defined as the sum of the counts of its parents in the action graph) counting the number of other bids greater than or equal to b_i, and another counting the number equal to b_i. These two quantities are sufficient for computing the position of an agent who bids b_i. Each node in the action graph is thus connected to only two other nodes.

4 This is clearly true for the issue of whether or not clicking on an ad affects the click-through rates of subsequent ads: before the user first clicks, the auction must already have allocated all the ad space. Nevertheless, GIM's increased range of possible externalities could lead to dramatic differences from cascade in terms of equilibrium outcomes.

Figure 4.1: A weighted GFP represented as an AGG. (Square nodes represent summation function nodes. β denotes the quality score of each advertiser.)

Consider again the table encoding the utility for agent i's bid b_i^0. There are n−1 possible configurations over the ≥ function node, and for each of these configurations, up to n−1 possible configurations over the = function node, yielding a total of O((n−1)^2) configurations—a polynomial number. There are nk nodes in the action graph (and none has a larger table), so the graph's total representation size is O(n^3 k). An example of such an AGG is given in Figure 4.1. (As it does not complicate the representation, and is useful both to our exposition and in Section 4.5.2, we build weighted GFPs, which we define in the natural way. Of course, setting all bidders' weights to 1 returns us to the unweighted case.)

More formally, we define a no-externalities position auction setting as a 4-tuple 〈N, v, c, q〉 where N is a set of agents numbered 1, ..., n; v_{i,j} is agent i's value per click in position j; c_{i,j} is agent i's probability of receiving a click in position j; and q_i is agent i's quality (typically, the probability that i will receive a click in the top position). To specify a position auction game we additionally specify the range of allowed bid amounts K and the tie breaking rule T.
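To make the role of these two counts concrete, the following sketch (ours, for illustration; the function and variable names are not taken from the thesis implementation) computes the quantity that Algorithm 4.2 tabulates: bidder i's expected utility in a (weighted) GFP, given only g, the number of rivals with strictly higher effective bids, and ell, the number of bidders (including i) tied at i's effective bid, under uniform-random tie-breaking.

# A minimal sketch, assuming positions are 0-indexed and that c_i[j] = 0 for
# any position j that is not actually sold. With g higher effective bids and
# ell tied bidders (including i), uniform tie-breaking places i in each of
# positions g+1, ..., g+ell (1-indexed) with probability 1/ell.
def gfp_expected_utility(v_i, c_i, bid, g, ell):
    # v_i[j]: agent i's value per click in position j
    # c_i[j]: agent i's click probability in position j
    # bid:    i's own bid, which he pays per click in a first-price design
    total = 0.0
    for j in range(g, g + ell):
        total += c_i[j] * (v_i[j] - bid)
    return total / ell

Because this value depends only on g and ell, the payoff table attached to each action node needs only a polynomial number of entries, as argued above.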
(For now we will consider only the case where T is uniform-random; in Section 4.6.2we revisit this choice.) We can now more precisely define our construction as Algorithm 4.2.4.3.2 Representing No-Externality uGSPs and wGSPs as AGGsGSPs are similar to GFPs in that each agent’s payoff depends on a small number of values. To determinean agent’s position (or possible range of positions under randomized tie breaking), we start with a graphstructure similar to the one we built for GFPs. The first difference is that we must allow for bidder weights.We do this by creating function nodes for each “effective bid”—each distinct value that can be obtained bymultiplying a bid amount by a bidder weight—rather than one for each bid amount.5 Of course, we can5We can also derive AGG representations of Lahaie and Pennock’s ranking rules [60] by adjusting the values of q appropriately.39foreach agent i ∈ N doforeach bid k ∈ K docreate an action node representing i bidding k;foreach effective bid e ∈ {k ·qi | ∀i ∈ N,∀k ∈ K} docreate a summation function node (=,e) counting the bidders bidding exactly e;create a summation function node (≥,e) counting the bidders bidding above e;add an arc from (=,e) to (≥,e) ;if e > 0 thenadd an arc from (≥,e) to (≥,e′) (where e′ is the next largest effective bid);foreach action node ai,k representing i bidding k doadd an arc from ai,k to (=,k ·qi);add an arc from (=,k ·qi) to ai,k, and denote the value of (=,k ·qi) as `;add an arc from (≥,k ·qi) to ai,k, and denote the value of (≥,k ·qi) as g;instantiate the utility table for ai,k asuai,k(`,g) =1`g+`∑j=g+1ci, j(vi, j− k).Figure 4.2: An algorithm for converting a no-externality auction setting into an action graph representinga (weighted) GFP. Inputs are an AGG setting (N,v,c,q), a set of allowable bid values K and thetie-breaking rule T ; this algorithm assumes that T corresponds to uniform randomization.recover the unweighted case (and thus represent uGSP) by setting all weights to 1. Because effective bids arereal numbers while the set of possible bids and payments K is discrete, our definition of a position auctiongame must now also include a “rounding rule” R that is used to determine payments. For most of the chapterwe will only consider the case where R corresponds to rounding up (meaning that agents pay the minimumamount that they could have bid to maintain their positions); we will revisit this choice in Section 4.6.3.We also need to augment the action graph to capture the GSP pricing rule. We do this by adding “pricenodes”: function nodes that identify the next-highest bid below the bid amount encoded by each given actionnode. We use the term argmax node to refer to a function node whose value is equal to the largest in-arccarrying a non-zero value, based on a given, fixed ordering of the in-arcs. By ordering action nodes accordingto the values of their effective bids (i.e., bids multiplied by bidder weights), an argmax node identifies thehighest effective bid among the subset of action nodes connected to it. Our encoding is defined more preciselyin Algorithm 4.4. An example of the resulting action graph is illustrated in Figure 4.3.6 This representationresults in a graph containing nm action nodes, each of which stores a payoff table with at most O(n2|E|)entries, where E is the set of effective bids and |E| ≤ nm. 
Thus, this representation requires O(n4m2) space.6Note that although the in-degree of the argmax nodes can get large—O(nm)—the computational complexity of solving an AGGonly depends on the in-degrees of the action nodes.40Figure 4.3: To represent a GSP as an AGG, we add price nodes (argmax nodes denoted by hexagons)to a GFP representation. For clarity only one price node is pictured; a full GSP representationrequires one price node for each effective bid.4.3.3 Representing Auctions with ExternalitiesWe now describe an algorithm for representing GIM auctions as AGGs; this also suffices for cascade andhybrid, which GIM generalize. We define a GIM position auction setting by the 4-tuple 〈N,v,q, f 〉 where Nis a set of agents numbered 1, . . . ,n; vi is agent i’s value per click; qi is agent i’s quality (the probability that iwill receive a click in the top position); and fi : 2N →R encodes the externalities that affect i. When agent i’sad is shown below the ads of the agents in set S, his probability of receiving a click is qi fi(S). We assumethat f is monotone decreasing (i.e., S′ ⊆ S implies f (S′)≥ f (S)), and normalized so that f ( /0) = 1.A GSP auction in a GIM setting can be converted to an AGG using Algorithm 4.5. This representation resultsin a graph containing nk action nodes, where each node has 2n−1 in-arcs. The total size of a payoff table forany node is O(22nnk), so the full representation requires O(22nn2k2) space. However, storing a single agent’spreferences requires O(2n) values (to encode f (·)), so the AGG representation is only quadratically largerthan the input. Nevertheless, GIM settings with large n are impractical.4.4 Experimental SetupBroadly speaking, the method we employ in this chapter is to generate many preference-profile instancesfrom distributions over each of the preference models, to build AGGs encoding the corresponding perfect-information auction problems for each auction design, to solve these AGG computationally, and then tocompare the outcomes against each other and against VCG.41foreach agent i ∈ N doforeach bid k ∈ K docreate an action node representing i bidding k;foreach effective bid e ∈ {k ·qi | ∀i ∈ N,∀k ∈ K} docreate a summation function node (=,e) counting the bidders bidding exactly e;create a summation function node (≥,e) counting the bidders bidding above e;add an arc from (=,e) to (≥,e) ;if e > 0 thenadd an arc from (≥,e) to (≥,e′) (where e′ is the next largest effective bid);create a weighted argmax function node (argmax,e) identifying the next-highest effective bidbelow e;foreach action node a corresponding to effective bid e′ doif e′ < e thenadd an arc from a to (argmax,e) with arc weight of e′ ;if e′ = e thenadd an arc from (argmax,e) to a, and denote the value of (argmax,e) as ρ ;foreach action node ai,k representing i bidding k doadd an arc from ai,k to (=,k ·qi);add an arc from (=,k ·qi) to ai,k, and denote the value of (=,k ·qi) as `;add an arc from (≥,k ·qi) to ai,k, and denote the value of (≥,k ·qi) as g;add an arc from (argmax,k ·qi) to ai,k, and denote the value of (argmax,k ·qi) as ρ;instantiate the utility table for ai,k asuai,k(`,g,ρ) =1`(ci,g+`(vi,g+`−dρ/qie))+g+`−1∑j=g+1ci, j(vi, j− k).Figure 4.4: An algorithm for converting an auction setting into an action graph representing a wGSP.(Specifically, this algorithm encodes wGSP with uniform random tie-breaking and prices thatare rounded up to whole increments. 
These choices are revisited in Sections 4.6.2 and 4.6.3,respectively.)4.4.1 Problem InstancesWe generated our preference profiles by (1) imposing a probability distribution over a preference modeland then (2) drawing instances from those distributions. Except for BSS, whose definition already includesa distribution, we take two approaches to imposing distributions: one is to assume that all variables areuniformly distributed over their acceptable ranges, and the other is to draw variables from the log-normaldistributions of Lahaie and Pennock [60] that have been fitted to real-world data. Thus, we used a total of 13distributions (see Table 4.1). For each we generated 200 preference-profile instances for each of the threeposition-auction types, GFP, uGSP and wGSP, yielding 13×200×3 = 7800 perfect-information games in42foreach agent i ∈ N doforeach bid k ∈ K docreate an action node representing i bidding k;foreach pair of agents i, j ∈ N doforeach ai ∈ Ai docreate an OR function note (=,ai, j) representing that j having the same effective bid as i;create an OR function node (>,ai, j) representing that j having a strictly greater effective bidthan i;create a weighted argmax function node (ρ,ai) representing the price i pays given ai;create arcs from all three function nodes to ai;foreach a j ∈ A j doif a j > ai thencreate an arc from a j to (>,ai, j);if a j = ai thencreate an arc from a j to (=,ai, j);if a j < ai thencreate an arc from a j to (ρ,ai);foreach agent i ∈ N doforeach action node ai,k representing i bidding k doLet L denote the set of agents with the same effective bid as ai,k;Let G denote the set of agents with greater effective bids than ai,k;Let ρ denote the next highest effective bid after ai,k;instantiate the utility table for ai,k asuai,k(L,G,ρ) =fi(G∪L)2|L|(vi,g+`−dρ/qie})+ ∑S∈2L\Lfi(G∪S)2|L|(vi, j− k).Figure 4.5: Creating the action graph for a GIM position auctionWithout externalities With externalitiesEOS-UNI Cascade-UNIEOS-LN Cascade-LNV-UNI Hybrid-UNIV-LN Hybrid-LNBHN-UNI GIM-UNIBHN-LN GIM-LNBSSTable 4.1: The distributions we considered for our experiments. UNI denotes distributions that are uni-form across all values. LN denotes distributions with parameters log-normal distributed following[60], with additional parameters not discussed in that work again uniformly distributed.43total. Each of these games had 5 bidders, 5 positions and 30 bid increments. In Section 4.6 we describefurther experiments in which we investigated the our findings’ sensitivity to these and other parameters.We normalized values in each game so that the highest was equal to the highest possible bid, to ensure thatthe full number of bid increments was potentially useful. We also removed weakly dominated strategiesfrom each game, both for computational reasons and to select against implausible equilibria. Specifically, weeliminated strategies in which agents bid strictly more than (the ceilings of) their valuations.7.4.4.2 Equilibrium ComputationWe performed our experiments on WestGrid’s Orcinus cluster—384 machines with dual Intel Xeon E54503.0GHz CPUs, 12MB cache and 16GB RAM, running 64-bit Red Hat Enterprise Linux Server 5.3. 
Tocompute Nash equilibria, we used three algorithms:8 (1) simpdiv, the simplicial subdivision algorithm ofvan der Laan et al [102], adapted to AGGs by Jiang, Bhat and Leyton-Brown [51]; (2) gnm, the global Newtonmethod of Govindan and Wilson [43], adapted to AGGs by Jiang, Bhat and Leyton-Brown [51]; and (3) sem,the support-enumeration method of Porter, Nudelman and Shoham [84], adapted to AGGs by Thompson,Leung and Leyton-Brown [100]. We ran sem to enumerate all pure-strategy Nash equilibria. For findingsample mixed-Nash equilibria, we ran simpdiv from ten different pure-strategy-profile starting points chosenuniformly at random, and also ran gnm ten times with random seeds one through ten. We limited runs ofsimpdiv and gnm to five CPU minutes. In total, we spent about 15 CPU months on equilibrium computation.4.4.3 Benchmarks: VCG and Discretized VCGAs well as comparing GSP and GFP to each other, we also compared these position auctions to VCG. Thereare two ways of doing this. First, we considered VCG’s truthful equilibrium given agents’ actual (i.e.,non-discretized) preferences. However, this prevents us from determining whether differences arise becauseof the auction mechanism or discretization. To answer this question, we also compared to an alternate versionof VCG in which we discretized bids to the same number of increments as in the position auctions. In thiscase, we assume that bidders report the discrete value nearest to their true value. (Observe that this is alwaysan ε-Nash equilibrium with ε equal to half a bid increment.)7In past work, for computational reasons, we also eliminated very weakly dominated strategies. In particular, this included bidsthat led to identical outcomes regardless of the actions of other agents (e.g., from an agent with an extremely low quality score). Wedid not do this here because we now enumerate equilibria: although the elimination of very weakly dominated strategies does notchange the set of equilibrium outcomes, it can change the relative frequency of these outcomes.8Implementations of all three algorithms are available at http://agg.cs.ubc.ca.444.4.4 Statistical MethodsIn order to justify claims that one auction achieved better performance than another according to a givenmetric (e.g., revenue), we ensure that this difference was judged significant by a statistical test. Specifically,we performed blocking, means-of-means, bootstrapping tests [15] as follows:1. For each setting instance, find the difference in the metric across that pair of auctions on that instance.Each value is normalized by the achievable social welfare in that instance. Call this set of values S.2. Draw |S| samples from S (with replacement), and compute the mean. Perform this procedure 20,000times. Let M denote the set of means thus computed.3. Our estimated performance difference is the mean of M (the mean-of-means of S).4. This difference has significance level α if the α th quantile of M is weakly greater than zero.When reporting results, we use the symbol ∗ to denote that a result has a significance level of at least α = 0.05and ∗∗ to denote that a result has a significance level of at least α = 0.01. For each group of data points (i.e.,for a specific size and preference model) we perform many simultaneous tests, comparing revenue, welfare,relevance and envy between all pairs of auctions. 
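As a concrete illustration of this test, the following sketch (illustrative only; the function name, default parameters and use of Python's standard random module are our assumptions) implements the resampling procedure of steps 1-4 above for a single pair of auctions and a single metric.

import random

def bootstrap_means_of_means(diffs, n_resamples=20000, alpha=0.05, seed=0):
    # diffs: per-instance difference in a metric between two auctions,
    # normalized by that instance's achievable social welfare (step 1).
    rng = random.Random(seed)
    means = []
    for _ in range(n_resamples):
        resample = [rng.choice(diffs) for _ in diffs]   # step 2: resample with replacement
        means.append(sum(resample) / len(resample))
    means.sort()
    estimate = sum(means) / len(means)                  # step 3: mean of means
    # Step 4: significant at level alpha if the alpha-quantile of the
    # bootstrap means is weakly greater than zero.
    significant = means[int(alpha * len(means))] >= 0.0
    return estimate, significant

For example, a positive estimate together with significant == True supports the claim that the first auction outperforms the second on that metric at level alpha.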
To avoid spurious claims of significance, we thus performBonferroni multiple-testing correction (effectively, dividing the desired significance level by the number oftests performed) [73].To avoid undermining the statistical reliability of our data, we did not drop individual games that we couldnot solve within our chosen time budget: we worried that the features that made some instances hard tosolve could also make their equilibrium outcomes qualitatively different from those of easier-to-solve games.Instead, when we were not able to identify any Nash equilibria of a particular game (typically involving GFPauctions), we replaced the metric value of interest with an suitable upper or lower bound (e.g., a positionauction’s revenue is trivially guaranteed to be between 0.0 and the maximum possible social welfare). Whenwe have incomplete data about one or both of auctions A and B, we do not claim that auction A achievessignificantly better performance according to some metric than auction B unless the lower bound on A’sperformance is significantly better than the upper bound on B’s performance.4.5 ResultsWe now turn to our experimental results. Our goal is to demonstrate the effectiveness of computationalmechanism analysis in general and to shed light on open questions about position auctions in particular. Inthe latter vein our main aim is to justify the search industry’s convergence on the wGSP auction; we alsoconsider the seven model-specific questions we asked in the introduction. This section thus begins with abroad comparison of the different position auction mechanisms, followed by a more detailed examination of45each individual model. We also describe some follow-up experiments prompted by model-specific claims inthe research literature. The last section of this chapter gives all of our results in tabular form (i.e., providingnumerical values rather than just graphs) and also reports the results of statistical significance tests for allcomparisons.4.5.1 Main ComparisonWe begin by looking at the relative performance of GFP, uGSP, and wGSP position auctions, averagedacross all of our different models and distributions, and compared to the VCG and discrete VCG baselines asappropriate (see Figure 4.6). In a nutshell, the industry’s choice of wGSP appears to be justified in terms ofefficiency, relevance and (to a lesser extent) envy, but not in terms of revenue. More specifically, in termsof both efficiency and relevance, we found that wGSP was clearly the best position auction design; it alsoexhibited relatively little variation in these metrics (less than 10%) across equilibria. wGSP ranked loweron efficiency and relevance than VCG (with or without discretization), but was surprisingly close, giventhat in many of our models wGSP is known to be inefficient. wGSP also clearly outperformed GFP anduGSP in terms of envy, but also exhibited very substantial variation across equilibria. Revenue comparisonswere much more ambiguous: all auctions achieved fairly similar median revenues, and variation from oneequilibrium to another could be very large for both GSP variants (with best-case equilibrium revenues ofabout twice worst-case equilibrium revenues). We also saw substantial revenue variation across models,distributions, and equilibrium selection criteria, which we will discuss in detail in what follows.4.5.2 Basic Models: EOS and VWe consider the EOS and V models together because they are very similar, both in terms of what wasknown from previous work, and in terms of our findings. 
The main difference between EOS and V is that Vintroduces quality scores, which can be used to weight advertisers’ bids. Thus, in EOS, wGSP and uGSP areidentical, while in V they are not. Earlier we asked the following two questions.Question 1: Under EOS preferences, how often does wGSP give rise to envy-free (efficient, VCG-revenue-dominating) Nash equilibria? What happens in other equilibria, and how often do they occur?Question 2: Under V preferences, how often does wGSP have envy-free (efficient, VCG-revenue-dominating)Nash equilibria? What happens in other equilibria, and how often do they occur? Is uGSP generally betterthan wGSP for revenue, or does their relative performance depend on equilibrium selection, the valuationdistribution, and/or other properties of the game? Is wGSP generally better for efficiency?We begin by investigating how often envy-free equilibria exist in wGSP (with discrete bids) and how many46Grand Average0.00.20.40.60.81.0EfficiencyGFPuGSPwGSPDiscrete VCGGrand Average0.00.20.40.60.81.0RevenueGFPuGSPwGSPVCGDiscrete VCG(a) Efficiency (b) RevenueGrand Average0.00.20.40.60.81.0RelevanceGFPuGSPwGSPVCGDiscrete VCGGrand Average (No Externalities)0.00.20.40.60.81.0EnvyGFPuGSPwGSP(c) Relevance (d) EnvyFigure 4.6: Performance of different position auction types, averaged across all 13 distributions (effi-ciency, revenue, and relevance) and the 7 no-externality distributions (envy, which does not applyto settings with externalities). The “whiskers” in this plot indicate equilibrium selection effects:the bar is the median equilibrium, while the top and bottom whiskers correspond to best- andworst-case equilibria.such equilibria there are, as compared to pure Nash equilibria. Surprisingly, we found that envy-free equilibriadid not exist in the majority of games, despite most games having hundreds or thousands of pure-strategyNash equilibria (see Figure 4.7), and that even when they did exist there were orders of magnitude more Nashequilibria than envy-free equilibria. This led us to investigate the amount of envy present in wGSP equilibria.We found that this quantity varied substantially across equilibria, but that envy-minimizing Nash equilibriatended to get very close to zero envy (see Figure 4.8).We know that in the EOS and V models, envy-free equilibria are economically efficient and yield highrevenue; however, we have just observed that such envy-free equilibria did not reliably exist in our games. Wenext investigated the efficiency and revenue properties of general Nash equilibria. We found that wGSP was4710-1 100 101 102 103 104 105 106Number of equilibria0.00.20.40.60.81.0pEOS-UNI pureEOS-UNI envy-freeEOS-LN pureEOS-LN envy-freeV-UNI pureV-UNI envy-freeV-LN pureV-LN envy-freeFigure 4.7: Empirical cumulative probability distributions over the number of equilibria. Many lines donot begin at p = 0, due to our use of a log scale; a line beginning at p = 0.6 indicates that 60% ofgames had zero envy-free Nash equilibria.Min MedEOS-UNI Max Min MedEOS-LN Max Min MedV-UNI Max Min MedV-LN Max0.00.20.40.60.81.01.2EnvyFigure 4.8: Empirical box plot of wGSP’s envy under different equilibrium selection criteria (minimum,median and maximum). The envy-minimizing Nash equilibrium always had very little envy, butother equilibria can have substantial envy. Note that we report envy normalized by total possiblesocial welfare. 
Even normalized, envy can occasionally exceed 1, as it is possible for an agent tobe envied by many other agents.48Min MedEOS-UNI Max Min MedEOS-LN Max Min MedV-UNI Max Min MedV-LN Max0.750.800.850.900.951.00EfficiencyFigure 4.9: Empirical box plot of wGSP’s social welfare under different equilibrium selection criteria(minimum, median and maximum). Note that the majority of equilibria (indicated by the median)tend to be very nearly efficient.very efficient, not just in the best-case equilibrium, but in the majority of equilibria; however, the worst-caseequilibria could be substantially less efficient (see Figure 4.9).9Concerning revenue, we found that equilibrium selection has a very large effect. Further, this effect is notmerely confined to a few very bad equilibria. In every distribution, we found that wGSP’s best-case revenuewas better than VCG, worst-case was much worse than VCG and median was slightly worse in EOS settingsand approximately equal in V settings (see Figure 4.10).It remains to compare uGSP with wGSP under the V model in terms of both revenue and efficiency. First,recall that Lahaie and Pennock [60] found that uGSP generated much more revenue but moderately lesssocial welfare, but only for the specific case of “symmetric equilibria” and log-normal distributions. Forboth V distributions, we found that uGSP was better than wGSP for revenue (provided that we comparedthe mechanisms under the same equilibrium selection criterion, e.g., median), but that both mechanismswere very sensitive to equilibrium selection: the difference between mechanisms (∼ 1.2×) was very smallcompared to the difference between good and bad equilibria (at least ∼ 2×) (see Figure 4.11). The gainsfrom uGSP were much larger in the log-normal distribution than in the uniform distribution. Concerningefficiency, wGSP was indeed better overall. We also found that uGSP’s social welfare was far more sensitiveto equilibrium selection (at least ∼ 1.2×) than wGSP’s (∼ 1.04×), and even uGSP’s best equilibria were9We note that in our previous work we found almost no instances with such low efficiency [98]; we attribute this discrepancyto the fact that we did not previously have a practical algorithm for enumerating Nash equilibria, and so we previously reportedminimum and maximum efficiency based only on the most extreme equilibria our sample-Nash-finding algorithms could find. Incontrast, our current work uses the SEM algorithm, and hence guarantees that we have truly found the best/worse case PSNEs, andfurthermore allows us to identify the median equilibrium (which we could not do before).49Min MedEOS-UNIMax VCG Min MedEOS-LNMax VCG Min MedV-UNI Max VCG Min MedV-LN Max VCG0.00.20.40.60.81.0RevenueFigure 4.10: Empirical box plot of wGSP’s revenue under different equilibrium selection criteria(minimum, median and maximum). 
The median equilibrium of wGSP performs similarly toVCG, while the worst-case and best-case equilibria are substantially different.Min MeduGSPV-UNIMax Min MedwGSPV-UNIMax Min MeduGSPV-LNMax Min MedwGSPV-LNMax0.00.20.40.60.81.0RevenueFigure 4.11: Empirical box plot of uGSP and wGSP’s revenue under different equilibrium selectioncriteria (minimum, median and maximum).50Min MeduGSPV-UNIMax Min MedwGSPV-UNIMax Min MeduGSPV-LNMax Min MedwGSPV-LNMax0.00.20.40.60.81.0EfficiencyFigure 4.12: Empirical box plot of uGSP and wGSP’s social welfare under different equilibriumselection criteria (minimum, median and maximum).sometimes very inefficient (see Figure 4.12).Finally, we come to the overarching question of this chapter, “which position auction design is best?” Weconducted a general 4-way comparison between GFP, uGSP, wGSP and VCG for each metric. In terms ofsocial welfare, we found that wGSP was clearly the most efficient position design in the V model, regardlessof equilibrium selection. In the EOS model, u/wGSP was more efficient than GFP in best- and median-caseequilibria, but not in the worst case (see Figure 4.13(a)). Comparing revenue, we found that equilibrium-selection effects were very large for GSP auctions—much larger than differences between auction designs (seeFigure 4.13(b)). In terms of relevance (i.e., expected number of clicks),10 we see that wGSP was consistentlybetter than the other designs and roughly comparable to VCG (see Figure 4.13(c)). Finally, treating envy asan objective to minimize, in EOS settings we see that in GFP performed very well in all equilibria, whileGSP designs were more sensitive to equilibrium selection. In the V model, unweighted designs performedvery poorly compared to wGSP (see Figure 4.13(d)).Weighted GFPUnder the EOS model, we found that GFP had high efficiency in its worst-case equilibria, compared to theother two position auctions. This finding nicely compliments another by Chawla and Hartline [13]: GFP’sworst-case Bayes-Nash equilibrium is efficient in games with independent, identically distributed privatevalues. Because the EOS model does not include bidder-specific click probabilities, GFP’s unweighted10We do not consider the EOS model for relevance: because it is unweighted, it predicts that every complete allocation—where allpositions are sold—will produce the same expected number of clicks.51EOS-UNI EOS-LN V-UNI V-LN0.00.20.40.60.81.0EfficiencyGFPuGSPwGSPDiscrete VCGEOS-UNI EOS-LN V-UNI V-LN0.00.20.40.60.81.0RevenueGFPuGSPwGSPVCGDiscrete VCG(a) Efficiency (b) RevenueV-UNI V-LN0.00.20.40.60.81.0RelevanceGFPuGSPwGSPVCGDiscrete VCGEOS-UNI EOS-LN V-UNI V-LN0.00.20.40.60.81.0EnvyGFPuGSPwGSP(c) Relevance (d) EnvyFigure 4.13: Comparing the average performance of different position auction types in EOS and Vsettings. The “whiskers” in this plot indicate equilibrium selection effects: the bar is the medianequilibrium, while the top and bottom whiskers correspond to best- and worst-case equilibria.allocation is not a problem. We thus wondered whether adding weights to GFP would produce a mechanismwith good worst-case Nash equilibria under the V model, which does include bidder-specific click probabilities.Recall that it is straightforward to include weights in our AGG reduction for GFP position auctions. 
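To make the two weighted designs concrete before describing these experiments, the following sketch (our illustration; it ignores ties and reserve prices, and the names are not from our implementation) shows that wGFP and wGSP share the same ranking rule and differ only in pricing.

import math

def rank_by_effective_bid(bids, qualities):
    # Returns bidder indices in display order (highest effective bid first).
    return sorted(range(len(bids)), key=lambda i: bids[i] * qualities[i], reverse=True)

def wgfp_price(bids, qualities, i):
    # First price: each winner pays his own bid per click.
    return bids[i]

def wgsp_price(bids, qualities, order, p, increment=1):
    # Price for the bidder in position p: the smallest bid (rounded up to a
    # whole increment) sufficient to maintain his position above the next
    # effective bid.
    if p + 1 >= len(order):
        return 0
    i, j = order[p], order[p + 1]
    next_effective = bids[j] * qualities[j]
    return math.ceil(next_effective / qualities[i] / increment) * increment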
Weperformed two experiments, one at the same scale as our others (where enumerating all mixed-strategy Nashequilibria (MSNEs) of weighted GFP is computationally infeasible) and one at a scale small enough to permitsuch enumeration.In the smaller experiment, we found that 98.5% of games had a single mixed Nash equilibrium. Whenmultiple equilibria existed, they tended all to have nearly identical revenue and social welfare (within 4% and1% respectively). In our full-sized experiments, we also observed little variability across equilibria. In 97% ofgames, revenue and social welfare varied less than 0.1% from the best case to the worst case. In the remaining3% of games, social welfare never differed by more than 0.3% across equilibria while revenue never differed52V-UNI V-LN0.00.20.40.60.81.0EfficiencywGSPwGFPDiscrete VCGV-UNI V-LN0.00.20.40.60.81.0RevenuewGSPwGFPVCGDiscrete VCG(a) Efficiency (b) RevenueFigure 4.14: Comparing wGFP to wGSP. We found that wGFP’s efficiency varied extremely little acrossequilibria, but that its worst case was not better than wGSP’s. wGFP’s revenue again varied little,with worst-case revenue much larger than wGSP’s worst case and roughly comparable to wGSP’smedian case.by more than 3.5%. Comparing wGFP to wGSP, we found that wGFP did not yield significantly betterworst-case efficiency. However, wGFP did achieve much higher worst-case revenue, roughly equal to wGSP’smedian case (see Figure 4.14).4.5.3 Position-Preference ModelsWe now consider the richer BHN and BSS models, which allow advertisers’ values to depend on ad positions.The BHN ModelBlumrosen, Hartline and Nong’s [11] key motivation was that advertisers may prefer clicks in the lowestposition, because the few users who actually click on low-position ads are likely to buy. They found thatwGSP sometimes has no efficient Nash equilibrium under their model.Question 3: Under BHN preferences, how often does wGSP have no efficient Nash equilibrium? How muchsocial welfare is lost in such equilibria?We observed two kinds of efficiency failures (see Figure 4.15). The first kind involved a complete failureto allocate, with every agent bidding zero. This equilibrium arises when the top-most position has so manyspurious (non-converting) clicks that paying even one increment per click is too much. (This scenario alsoleads to an interesting outcome under discrete VCG: the advertiser who gets the top position can actually530.0 0.2 0.4 0.6 0.8 1.0EfficiencyBHN-UNI0.00.20.40.60.81.0pwGSP MinimumwGSP MedianwGSP MaximumVCG Discrete0.0 0.2 0.4 0.6 0.8 1.0EfficiencyBHN-LN0.00.20.40.60.81.0pwGSP MinimumwGSP MedianwGSP MaximumVCG Discrete(a) Uniform Distribution (b) Log-Normal DistributionFigure 4.15: Empirical CDF of economic efficiency in BHN models.be paid, since the externality he imposes on other bidders is to allow them to reach the higher-value lowerpositions.) A similar phenomenon can arise in lower positions as well: e.g., if exactly one advertiser valuesthe second position (and by monotonicity, the first as well) at weakly more than one bid increment, then in anypure equilibrium all the other bidders bid zero. The second kind of efficiency failure involves mis-orderingthe advertisers (e.g., placing the highest-valuation advertiser in some slot other than the highest), as wealready saw in the V and EOS settings. However, the magnitude of inefficiency can be more than we saw inthose models, because different positions have different conversion rates. 
In equilibria of EOS and V models,the high-valuation bidder might prefer to avoid the top position because the top positions’ greater numbers ofclicks are offset by disproportionately greater prices per click. In the BHN model, top-position clicks canalso be too expensive, and this is compounded by the fact that top-position clicks are least likely to convert.Discretization can exacerbate this problem. Because per-click valuations for the top positions are smallcompared to lower positions, discretization has a greater effect on top positions, leading to inefficiencies dueto rounding. Because high positions get the majority of clicks, this causes a large loss of welfare.We now consider how different auction designs performed on BHN (see Figure 4.16). First, we observedthat all position auctions designs were very inefficient (≤ 69%), and substantially less efficient than discreteVCG (which achieved 91.6% and 90.5% efficiency in BHN-UNI and BHN-LN respectively.) wGSP wasdramatically better than GFP; wGSP’s worst-case equilibria were more efficient that GFP’s best case. Thecomparison between uGSP and wGSP was less clear, because uGSP’s efficiency varied much more acrossequilibria. In the case of BHN-UNI, this made uGSP’s best-case equilibria slightly (but not significantly)better than wGSP’s. In other distributions and for other criteria, uGSP was clearly worse.Turning to revenue, we found that no auction design extracted more than 30% of the surplus, but thatevery position auction design was dramatically better at generating revenue than VCG. This finding has a54BHN-UNI BHN-LN0.00.20.40.60.81.0EfficiencyGFPuGSPwGSPDiscrete VCGBHN-UNI BHN-LN0.00.20.40.60.81.0RevenueGFPuGSPwGSPVCGDiscrete VCG(a) Efficiency (b) RevenueBHN-UNI BHN-LN0.00.20.40.60.81.0RelevanceGFPuGSPwGSPVCGDiscrete VCGBHN-UNI BHN-LN0.00.20.40.60.81.0EnvyGFPuGSPwGSP(c) Relevance (d) EnvyFigure 4.16: Comparing the average performance of different position auction types in BHN settings.The “whiskers” in this plot indicate equilibrium selection effects: the bar is the median equilib-rium, while the top and bottom whiskers correspond to best- and worst-case equilibria.straightforward explanation: in BHN models, conversion rates are higher in low positions. In VCG, an agentpays his externality, i.e., the amount he decreases the welfare of the lower-ranked agents by displacing themone position. In GSP, an agent pays the amount of the next highest bid, as if he had entirely removed the nexthighest bidder entirely. In EOS, V and BHN models, the VCG payment (for a given bid profile) is alwaysweakly less than GSP, because the VCG payment takes into account that next highest agent gets a positionof value. In BHN, the displaced agents are moved into positions that are likely to be closer to the value ofpre-displacement positions (compared with EOS and V settings). We also observe that, as we saw before,revenue variation across equilibria was dramatically larger than revenue variation across position auctiondesigns. Unlike the V model, under BHN uGSP did not provide dramatically more revenue than wGSP whenvalues were lognormally distributed.All position auction designs achieved dramatically worse relevance than VCG (both with and without550.0 0.2 0.4 0.6 0.8 1.0EfficiencyBSS0.00.20.40.60.81.0pwGSP MinimumwGSP MedianwGSP MaximumVCG DiscreteFigure 4.17: Empirical CDF of economic efficiency in BSS modelsdiscretization). 
wGSP was clearly the best position auction in terms of relevance..Envy varied substantially among both GSP designs, but nevertheless wGSP outperformed the other positionauction designs in minimizing this metric.The BSS ModelNext we investigated to the model of Benish, Sadeh and Sandholm [9]. This model is similar to the BHNmodel in that different advertisers can have different per-click valuations. However, in this case, valuationsare “single-peaked” in the sense that each advertiser has a most preferred position and their per-click valuationdecreases as the ad is moved further from that position. Recall that BSS is an unweighted model (i.e., all theadvertisers have identical quality scores and click probabilities). Thus, wGSP and uGSP are equivalent in thismodel.Question 4: Under BSS preferences, how often does wGSP have no efficient perfect-information Nashequilibrium? How much social welfare is lost in such equilibria?We found that efficiency failures were very common under BSS, and could be very large, though zero-efficiency outcomes were rare and never happened in best-case equilibria (see Figure 4.17). This was due inpart to randomized tie-breaking. When none of the agents values the top slot, all the agents can bid zero inequilibrium. However, they can also all bid a small but nonzero amount and tie for top bid; this leads them allto get more valuable intermediate positions often enough to justify the cost.Comparing auction designs (see Figure 4.18), we observed that uGSP was significantly more efficient thanGFP, but fell well behind discrete VCG. In revenue, uGSP and GFP both varied substantially across equilibria,56BSS0.00.20.40.60.81.0EfficiencyGFPuGSPDiscrete VCGBSS0.00.20.40.60.81.0RevenueGFPuGSPVCGDiscrete VCG(a) Efficiency (b) RevenueBSS0.00.20.40.60.81.0RelevanceGFPuGSPVCGDiscrete VCGBSS0.00.20.40.60.81.0EnvyGFPuGSP(c) Relevance (d) EnvyFigure 4.18: Comparing the average performance of different position auction types in BSS settings.The “whiskers” in this plot indicate equilibrium selection effects: the bar is the median equilib-rium, while the top and bottom whiskers correspond to best- and worst-case equilibria.and both were significantly worse than VCG in their median equilibria. uGSP slightly outperformed VCH inbest-case equilibrium, but not significantly so. Both uGSP and GFP achieved more relevant outcomes thanVCG, largely due to the tieing effect described earlier, where agents were often sold clicks that they did notparticularly value. Both auctions had little envy in absolute terms, though GSP was worse than GFP.570.0 0.2 0.4 0.6 0.8 1.0Efficiency0.00.20.40.60.81.0pwGSP Min (µ=0.855)wGSP Med (µ=0.947)wGSP Max (µ=0.981)VCG Disc (µ=1.000)0.0 0.2 0.4 0.6 0.8 1.0Efficiency0.00.20.40.60.81.0pwGSP Min (µ=0.861)wGSP Med (µ=0.951)wGSP Max (µ=0.984)VCG Disc (µ=1.000)(a) Uniform Distribution (b) Log-Normal DistributionFigure 4.19: Empirical CDF of economic efficiency in the cascade models.4.5.4 Externality ModelsWe now consider Cascade, Hybrid and GIM, three models in which bidders can care about the positionsawarded to other agents.The Cascade ModelCascade is the simplest of our models involving externalities. Cascade settings are similar to V settings inthat each bidder has a quality (proportional to click probability) and a per-click valuation. The differenceis that each advertiser also has a continuation probability: the probability that a user looks at subsequentads. Thus, in both models, lower positions are less valuable because they receive fewer clicks. 
However, incascade models, this reduction depends on what is shown in higher positions.Question 5: Under cascade preferences, how often are there low-efficiency equilibria? In cases wherelow-efficiency equilibria exist, are these the only equilibria? When agents play only undominated strategies,how much revenue can wGSP generate? How do the modified weights affect wGSP’s efficiency?We found that inefficiency was common, though the magnitudes of efficiency losses were smaller than weexpected (see Figure 4.20). We found that wGSP’s efficiency was always greater than 50% and was almostalways greater than 80%. In contrast, Giotis and Karlin [39] showed that the price of anarchy is k and thus,for k = 5 there exist instances (which we evidently did not sample) with as little as 20% efficiency.11 The gapbetween best- and worst-case equilibria was typically quite large, though a few instances had poor (∼ 60%)11Although they studied the more general model that we refer to as hybrid, the construction of their worst-case example uses apart of the hybrid model space that is also consistent with the more restrictive cascade model.580.0 0.2 0.4 0.6 0.8 1.0Revenue0.00.20.40.60.81.0pwGSP Min (µ=0.189)wGSP Med (µ=0.337)wGSP Max (µ=0.498)VCG (µ=0.312)VCG Disc (µ=0.312)0.0 0.2 0.4 0.6 0.8 1.0Revenue0.00.20.40.60.81.0pwGSP Min (µ=0.172)wGSP Med (µ=0.317)wGSP Max (µ=0.474)VCG (µ=0.283)VCG Disc (µ=0.283)(a) Uniform Distribution (b) Log-Normal DistributionFigure 4.20: Empirical CDF of revenue in the cascade models.CAS-UNI CAS-LN0.00.20.40.60.81.0EfficiencyGFPuGSPwGSPDiscrete VCG CAS-UNI CAS-LN0.00.20.40.60.81.0RevenueGFPuGSPwGSPVCGDiscrete VCGCAS-UNI CAS-LN0.00.20.40.60.81.0RelevanceGFPuGSPwGSPVCGDiscrete VCG(a) Efficiency (b) Revenue (c) RelevanceFigure 4.21: Comparing the average performance of different position auction types in cascade set-tings. The “whiskers” in this plot indicate equilibrium selection effects: the bar is the medianequilibrium, while the top and bottom whiskers correspond to best- and worst-case equilibria.efficiency even in the best case.wGSP’s revenue varied substantially across equilibria (see Figure 4.20), with the best-case equilibriumgenerating at least 2.5 times as much revenue as the worst case.Comparing position auctions, we observed relative performance similar to that under the no-externality models(see Figure 4.21). First, we found that wGSP was the most efficient position auction design. Surprisingly,wGSP’s best-case equilibria tended to be close to fully efficient (at least ∼ 95%). However, variationacross equilibria was greater than in no-externality settings like EOS and V. Both uGSP and wGSP variedsubstantially in their revenue across equilibria, with wGSP’s median equilibrium achieving slightly morerevenue than VCG. Like in the V model, wGSP was roughly comparable to uGSP in terms of revenuewhen values were uniformly distributed, and noticeably worse when values were log-normally distributed.Concerning relevance, we found that wGSP was clearly superior to other position auction designs, but not59CAS-UNI CAS-LN0.00.20.40.60.81.0EfficiencywGSPcwGSPDiscrete VCGCAS-UNI CAS-LN0.00.20.40.60.81.0RevenuewGSPcwGSPVCGDiscrete VCG(a) Efficiency (b) RevenueFigure 4.22: Comparing the average performance of wGSP and cwGSP in cascade settings. The“whiskers” in this plot indicate equilibrium selection effects: the bar is the median equilibrium,while the top and bottom whiskers correspond to best- and worst-case equilibria.quite as good as VCG. 
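To make the cascade click model described above concrete, the following sketch computes click probabilities and welfare for a given ordering of ads, under the natural reading of the model in which the user scans ads from the top and continues past advertiser i's ad with probability equal to i's continuation probability. It is an illustrative sketch only; the helper names and toy numbers are ours, not part of our implementation.

    def cascade_click_probs(ranking, quality, cont):
        """Click probability of each ad in `ranking` (top slot first).

        quality[i]: probability that i's ad is clicked if the user looks at it.
        cont[i]:    probability that the user keeps scanning below i's ad.
        """
        probs = []
        reach = 1.0  # probability that the user looks at the current slot
        for i in ranking:
            probs.append(reach * quality[i])
            reach *= cont[i]
        return probs

    def welfare(ranking, quality, cont, value):
        """Expected welfare: sum over slots of click probability times per-click value."""
        probs = cascade_click_probs(ranking, quality, cont)
        return sum(p * value[i] for p, i in zip(probs, ranking))

    # Toy instance: bidder 1 has the higher quality-weighted value (0.9 vs. 0.72),
    # but ranking bidder 2 first is more efficient because her high continuation
    # probability preserves clicks for the slot below.
    q = {1: 0.9, 2: 0.8}
    c = {1: 0.3, 2: 0.9}
    v = {1: 1.0, 2: 0.9}
    print(welfare([1, 2], q, c, v))  # 1.116
    print(welfare([2, 1], q, c, v))  # 1.530

This dependence of lower slots' click rates on who is shown above is exactly the externality that distinguishes cascade from the V model, and it is what the cascade-specific weighting discussed next tries to account for.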
Recall that envy is not well defined in settings with externalities; thus, we do not discuss it here or in what follows.

wGSP with Cascade-Specific Weights

To address some of the shortcomings of wGSP in cascade, Gomes et al. proposed an alternative way of calculating advertiser qualities for wGSP. In their alternative weighting, an advertiser's weight is equal to her top-position click probability divided by the probability that a user will stop looking at ads after seeing hers (i.e., one minus her continuation probability). This simple reweighting scheme has the nice property that an advertiser with a continuation probability of one will always be ranked first, a desirable property given that the efficient ranking will always rank such bidders first. Gomes et al. also showed that this weighting gives rise to a revenue-optimal equilibrium. (There are two important caveats about this equilibrium. (1) It is only revenue-optimal among mechanisms that always allocate every position; mechanisms that use reserve prices can get strictly more revenue. (2) It might not be rationalizable, requiring some agents to play the weakly dominated strategy of bidding strictly above their own valuations.)

We experimented with this alternative mechanism, which we call cwGSP, and found that it was dramatically more efficient than wGSP (see Figure 4.22). However, we also found that it was noticeably worse than wGSP in terms of revenue. Thus, we conclude that Gomes et al.'s revenue claims about cwGSP stem mainly from their unusual equilibrium-selection criteria.

Figure 4.23: Empirical CDF of revenue in the hybrid model. (a) Uniform Distribution: wGSP Min (µ=0.321), wGSP Med (µ=0.469), wGSP Max (µ=0.615), VCG (µ=0.534), VCG Disc (µ=0.534). (b) Log-Normal Distribution: wGSP Min (µ=0.151), wGSP Med (µ=0.274), wGSP Max (µ=0.390), VCG (µ=0.325), VCG Disc (µ=0.326).

Figure 4.24: Empirical CDF of economic efficiency in the hybrid model. (a) Uniform Distribution: wGSP Min (µ=0.955), wGSP Med (µ=0.987), wGSP Max (µ=0.994), VCG Disc (µ=1.000). (b) Log-Normal Distribution: wGSP Min (µ=0.982), wGSP Med (µ=0.997), wGSP Max (µ=1.000), VCG Disc (µ=1.000).

The Hybrid Model

The hybrid model generalizes both the V and cascade models; lower positions can get fewer clicks either due to the number of higher-placed ads (as in V) or to the content of those ads (as in cascade). Hybrid settings can have a very large price of stability and an even larger price of anarchy [56].

Question 6: With hybrid preferences, how much social welfare is lost? On typical instances (as opposed to worst-case ones), how much difference is there between best- and worst-case equilibria?

We again found that wGSP's revenue varied substantially across equilibria (see Figure 4.23). However, we were surprised to find that while wGSP's efficiency was quite low (∼ 60%) in a few games, wGSP was often efficient, even in its worst-case equilibria (see Figure 4.24). This result stands in contrast with our previous findings on cascade. The very small difference between best- and worst-case equilibria did not arise because games had few Nash equilibria: wGSP still had many Nash equilibria in just about every game. In the next subsection, we compare all three externality models and investigate this anomaly more deeply.

Comparing the different position auction designs (see Figure 4.25), we again found that wGSP was the most efficient and produced the most relevant results. Concerning revenue, we found that both uGSP and wGSP varied substantially across their equilibria, but that uGSP was consistently better than wGSP. wGSP substantially outperformed GFP and uGSP in terms of relevance.

Figure 4.25: Comparing the average performance of different position auction types in hybrid settings. The "whiskers" in this plot indicate equilibrium selection effects: the bar is the median equilibrium, while the top and bottom whiskers correspond to best- and worst-case equilibria. Panels: (a) Efficiency, (b) Revenue, (c) Relevance; bars grouped by distribution (HYB-UNI, HYB-LN) for GFP, uGSP, wGSP, VCG and Discrete VCG.

The GIM Model

Finally, we turn to the model of Gomes, Immorlica and Markakis, which is the most general of our externality models. In the GIM model, every bidder has a quality score, which corresponds to her probability of getting a click in the top position, but the click probability in lower positions can depend arbitrarily on which advertisers are shown in higher positions, as long as this probability weakly decreases with position. Gomes et al. conjectured that this model was similar to cascade, which motivated our questions.

Question 7: Are the efficiency and revenue achieved by wGSP substantially different under the cascade and GIM models?

Recall that in the cascade model, revenue and efficiency both varied substantially across equilibria, and that the majority of games exhibited moderately large inefficiency (∼ 90%) in worst-case equilibrium. In GIM, we observed similar effects of similar magnitude (see Figures 4.26 and 4.27). We found it surprising that GIM, which generalizes the hybrid model (which in turn generalizes cascade), should have outcomes that are more similar to cascade than to hybrid. To explain the anomaly, we looked for features that were common to cascade and GIM, but not to hybrid. There are many such features (e.g., in wGSP's equilibrium rankings, the effect of position on CTR is much greater under hybrid). In the end we concluded that the feature that best explains our findings is the relative value of the top position—which is higher in hybrid settings—because in all three models, instances where this fraction was large tended to have very good worst-case efficiency in wGSP (see Figure 4.28).

Figure 4.26: Empirical CDF of economic efficiency in the GIM model. (a) Uniform Distribution: wGSP Min (µ=0.858), wGSP Med (µ=0.950), wGSP Max (µ=0.981), VCG Disc (µ=1.000). (b) Log-Normal Distribution: wGSP Min (µ=0.881), wGSP Med (µ=0.956), wGSP Max (µ=0.981), VCG Disc (µ=1.000).

Figure 4.27: Empirical CDF of revenue in the GIM model. (a) Uniform Distribution: wGSP Min (µ=0.248), wGSP Med (µ=0.405), wGSP Max (µ=0.569), VCG (µ=0.384), VCG Disc (µ=0.385). (b) Log-Normal Distribution: wGSP Min (µ=0.225), wGSP Med (µ=0.354), wGSP Max (µ=0.493), VCG (µ=0.339), VCG Disc (µ=0.339).
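The relationship plotted in Figure 4.28 can be quantified directly from per-instance statistics. A minimal sketch of the computation, using SciPy's Spearman test and a Bonferroni correction over the number of distributions tested, is shown below; the toy numbers stand in for the per-instance values produced by our solver and are purely illustrative.

    from scipy import stats

    def top_position_share(slot_welfare):
        """Fraction of efficient-allocation welfare contributed by the top slot,
        given the per-slot welfare contributions (top slot first)."""
        return slot_welfare[0] / sum(slot_welfare)

    # Toy data: (per-slot welfare of the efficient allocation, worst-case wGSP efficiency).
    toy = [([5.0, 1.0, 0.5], 0.99), ([2.0, 1.8, 1.5], 0.80), ([4.0, 0.5, 0.2], 0.97),
           ([2.2, 2.0, 1.9], 0.75), ([3.5, 1.0, 0.4], 0.95)]
    shares = [top_position_share(w) for w, _ in toy]
    worst_eff = [eff for _, eff in toy]

    rho, p = stats.spearmanr(shares, worst_eff)
    num_tests = 6  # one test per distribution (CAS/HYB/GIM x UNI/LN)
    p_bonferroni = min(1.0, p * num_tests)
    print(rho, p_bonferroni)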
This correlation between the fraction of welfare produced by the top position and wGSP's worst-case efficiency was moderately strong (ρ > 0.4 using Spearman's rank correlation) and highly significant (p < 0.01 with Bonferroni correction) for every distribution except HYB-LN, in which the majority of equilibria achieved almost perfect efficiency.

Lastly, we compare different position auctions in the GIM setting (see Figure 4.29). As under other externality models, we found that wGSP produced more efficient and relevant rankings than GFP and uGSP, but performed significantly worse than VCG. Again, revenue was ambiguous, with GSP revenue varying substantially across equilibria and all median revenues falling fairly close to VCG's revenue.

Figure 4.28: wGSP tended to have good worst-case efficiency when the top position produced the majority of the surplus (in an efficient allocation). Thus, in hybrid distributions, where this occurred extremely frequently, wGSP tended to have better worst-case efficiency than in cascade or GIM. Each panel plots worst-case efficiency against the fraction of welfare produced by the top position: (a) Cascade: Uniform (ρ=0.501 ∗∗); (b) Hybrid: Uniform (ρ=0.583 ∗∗); (c) GIM: Uniform (ρ=0.417 ∗∗); (d) Cascade: Log-Normal (ρ=0.614 ∗∗); (e) Hybrid: Log-Normal (ρ=0.219); (f) GIM: Log-Normal (ρ=0.580 ∗∗).

Figure 4.29: Comparing the average performance of different position auction types in GIM settings. The "whiskers" in this plot indicate equilibrium selection effects: the bar is the median equilibrium, while the top and bottom whiskers correspond to best- and worst-case equilibria. Panels: (a) Efficiency, (b) Revenue, (c) Relevance; bars grouped by distribution (GIM-UNI, GIM-LN) for GFP, uGSP, wGSP, VCG and Discrete VCG.

4.6 Sensitivity Analysis

Computational analysis techniques oblige us to make concrete choices about discretization and the sizes of problems that we study. This section examines the extent to which our findings varied under five such details about the position auction setting that we have held constant until now. (1) Although real-world auctions are discrete, the coarseness of the discretization can vary greatly from keyword to keyword or advertiser to advertiser, depending on the size of each advertiser's valuation relative to the bid increment. Thus, it is important to understand how sensitive our findings are to the scale of the discretization. When bids and payments are discrete, issues also arise that are not significant in continuous games, such as (2) how ties will be broken and (3) how payments will be rounded or aggregated. Furthermore, computational analysis requires setting many parameters that can often be left unspecified in a theoretical analysis, such as (4) the number of bidders and (5) the number of slots for sale. We chose two widely studied distributions to consider in our scaling studies: one simple (V) and one with externalities (cascade).
In both cases we used the morerealistic valuation distribution, log-normal.4.6.1 Sensitivity to Bid Increment SizeFirst, we considered the problem of increment size. In addition to V-LN and CAS-LN we also consideredthe BHN-UNI distribution, because we found BHN’s interaction with increment size particularly interesting.Recall that earlier we found that the single-increment reserve price was sufficient to cause dramatic efficiencyloss. We anticipated that as increment size decreased, this would occur less frequently. For each distribution,we varied the number of bid increments from 5 to 40, at each step generating 100 instances and computingall relevant metrics (see Figures 4.30 and 4.31). In the V and cascade distributions, we found that (withone exception) the relative performance of different mechanisms became steady once k was greater than 10,and that absolute performance became stable once k was greater than 20. The one exception was the GFPauction; in both distributions, GFP’s performance (both in relative and absolute terms) varied dramatically ask grew. BHN’s behavior was particularly interesting. As the number of increments increased, the efficiencyand relevance of all auctions dramatically increased (because the problem of unallocated slots became lesscommon). Worst-case envy and revenue tended to get worse as the number of increments increased, whilemedian- and best-case revenue and envy remained fairly flat.655 10 15 20 25 30 35 40kV-LN0.00.20.40.60.81.0EfficiencyGFP (µ=0.913)uGSP (µ=0.926)wGSP (µ=0.992)Discrete VCG (µ=0.999)5 10 15 20 25 30 35 40kV-LN0.00.20.40.60.81.0RevenueGFP (µ=0.377)uGSP (µ=0.393)wGSP (µ=0.375)VCG (µ=0.354)Discrete VCG (µ=0.354)(a) V: Log-Normal, Efficiency (d) V: Log-Normal, Revenue5 10 15 20 25 30 35 40kBHN-UNI0.00.20.40.60.81.0EfficiencyGFP (µ=0.477)uGSP (µ=0.494)wGSP (µ=0.510)Discrete VCG (µ=0.870)5 10 15 20 25 30 35 40kBHN-UNI0.00.20.40.60.81.0RevenueGFP (µ=0.204)uGSP (µ=0.211)wGSP (µ=0.217)VCG (µ=0.259)Discrete VCG (µ=0.148)(b) BHN: Uniforml, Efficiency (e) BHN: Uniform, Revenue5 10 15 20 25 30 35 40kCAS-LN0.00.20.40.60.81.0EfficiencyGFP (µ=0.900)uGSP (µ=0.911)wGSP (µ=0.943)Discrete VCG (µ=0.999)5 10 15 20 25 30 35 40kCAS-LN0.00.20.40.60.81.0RevenueGFP (µ=0.380)uGSP (µ=0.407)wGSP (µ=0.365)VCG (µ=0.301)Discrete VCG (µ=0.301)(c) Cascade: Log-Normal, Efficiency (f) Cascade: Log-Normal, RevenueFigure 4.30: Comparing different auction designs as the number of bid increments varies.665 10 15 20 25 30 35 40kV-LN0.00.20.40.60.81.0RelevanceGFP (µ=0.827)uGSP (µ=0.825)wGSP (µ=0.928)VCG (µ=0.942)Discrete VCG (µ=0.939)5 10 15 20 25 30 35 40kV-LN0.00.20.40.60.81.0EnvyGFP (µ=0.150)uGSP (µ=0.179)wGSP (µ=0.057)(g) V: Log-Normal, Relevance (j) V: Log-Normal, Envy5 10 15 20 25 30 35 40kBHN-UNI0.00.20.40.60.81.0RelevanceGFP (µ=0.401)uGSP (µ=0.413)wGSP (µ=0.472)VCG (µ=0.908)Discrete VCG (µ=0.751)5 10 15 20 25 30 35 40kBHN-UNI0.00.20.40.60.81.0EnvyGFP (µ=0.134)uGSP (µ=0.144)wGSP (µ=0.085)(h) BHN: Uniforml, Relevance (k) BHN: Uniform, Envy5 10 15 20 25 30 35 40kCAS-LN0.00.20.40.60.81.0RelevanceGFP (µ=0.764)uGSP (µ=0.758)wGSP (µ=0.814)VCG (µ=0.933)Discrete VCG (µ=0.930)(i) Cascade: Log-Normal, RelevanceFigure 4.31: Comparing different auction designs as the number of bid increments varies (continued).674.6.2 Sensitivity to Tie-Breaking RulesDiscretization also invites the possibility that ties will occur between bids with positive probability. 
Thus,while tie-breaking rules are typically unimportant to the analysis of auctions with continuous action spaces,they can be significant in discrete settings. Observe, however, that this problem does not arise under most ofour distributions; although bids are integral, each bid is weighted by a real number, with the property that forany pair of weights the probability of their ratio being integral is zero. Thus, we only needed to examinesensitivity to tie breaking in the two models that did not have weights: EOS-LN and BSS-UNI.Broadly, we found that tie breaking made almost no difference in the EOS-LN distribution; however,lexicographic tie breaking consistently decreased performance in every metric in the BSS distribution (seeFigure 4.32). This problem arose when multiple agents preferred a lower slot to a higher slot, and strictlypreferred not to be shown in the high position. With random tie breaking, each agent could win their morepreferred slot often enough to justify paying a high price. However, when ties are broken lexicographically,some agent must pay a higher price and get a less desirable slot, inducing him to deviate. Thus, random tiebreaking achieved much higher relevance, selling clicks even in cases where continuous VCG did not.4.6.3 Sensitivity to Rounding RulesThe last problem discretization poses for an auctioneer is the question of how to round prices in wGSPauctions. Because each bidder’s integral bid is scaled by an arbitrary real constant, the next-highest bidmight not correspond to any integer price. So far, we have assumed that prices are rounded up (i.e., that abidder must pay the minimum amount that she could bid to win the position she won). In this section wealso consider rounding down, rounding to the nearest integer, and rounding up plus 1 increment (which wasused by Yahoo! [30]). Here our experiments consisted of 100 instances each from the V-LN and CAS-LNdistributions (see Figure 4.33). We found, somewhat unsurprisingly, that more “aggressive” rounding rules(i.e., rules that favor higher prices) tended to produce higher revenue regardless of the distribution and theequilibrium-selection criterion (the difference between the best and worst rounding rules ranged from 10%to 25%), with a corresponding improvement in envy. More surprisingly, we found that rounding rules hadnoticeably smaller effects on economic efficiency and relevance (not more than 2.5%). Thus, the aggressiverounding rules used in practice appear to increase revenue at very little cost in terms of ad quality.4.6.4 Sensitivity to the Number of BiddersTo study whether our findings were sensitive to the number of bidders, we used the same two distributions,V-LN and CAS-LN. For each, we solved 100 instances, and studied how each auction performed as thenumber of bidders varied from two to ten. Throughout, we assumed that there was no artificial limit on supply;68EOS-LN BSS0.00.20.40.60.81.0EfficiencyGFP (random)GFP (lexicographic)uGSP (random)uGSP (lexicographic)Discrete VCGEOS-LN BSS0.00.20.40.60.81.0RevenueGFP (random)GFP (lexicographic)uGSP (random)uGSP (lexicographic)VCGDiscrete VCG(a) EOS-LN and BSS: Efficiency (b) EOS-LN and BSS: RevenueEOS-LN BSS0.00.20.40.60.81.0RelevanceGFP (random)GFP (lexicographic)uGSP (random)uGSP (lexicographic)VCGDiscrete VCGEOS-LN BSS0.00.20.40.60.81.0EnvyGFP (random)GFP (lexicographic)uGSP (random)uGSP (lexicographic)(c) EOS-LN and BSS: Relevance (d) EOS-LN and BSS: EnvyFigure 4.32: How tie breaking affects auction outcomes. 
The “whiskers” in this plot indicate equilibriumselection effects: the bar is the median equilibrium, while the top and bottom whiskers correspondto best- and worst-case equilibria.i.e., the search engine would allocate a slot to every advertiser who bid more than zero. (We investigatethe impact of this assumption next.) We found that the relative ranking between position auction variantsremained consistent as the number of bidders varied (see Figures 4.34 and 4.35). As the number of biddersincreased, all position auctions became progressively less efficient and relevant (although in V-LN, wGSPexperienced less such decrease than the other position auctions), and all auctions (including VCG) sawincreasing normalized revenue (i.e., fraction of the surplus going to the seller).4.6.5 Sensitivity to the Number of SlotsSo far, all of our experiments have assumed that the search engine can allocate space to every single advertiser(as, indeed, is often the case). Here we consider what happens when the supply of slots is limited. This is69V-LN CAS-LN0.00.20.40.60.81.0EfficiencywGSP (floor)wGSP (nearest)wGSP (ceiling)wGSP (ceiling+1)Discrete VCGV-LN CAS-LN0.00.20.40.60.81.0RevenuewGSP (floor)wGSP (nearest)wGSP (ceiling)wGSP (ceiling+1)VCGDiscrete VCG(a) Efficiency (b) RevenueV-LN CAS-LN0.00.20.40.60.81.0RelevancewGSP (floor)wGSP (nearest)wGSP (ceiling)wGSP (ceiling+1)VCGDiscrete VCGV-LN0.00.20.40.60.81.0EnvywGSP (floor)wGSP (nearest)wGSP (ceiling)wGSP (ceiling+1)(c) Relevance (d) EnvyFigure 4.33: How price rounding affects auction outcomes. The “whiskers” in this plot indicateequilibrium selection effects: the bar is the median equilibrium, while the top and bottomwhiskers correspond to best- and worst-case equilibria.particularly important for cascade and hybrid models, under which both price of anarchy and computationalcomplexity are known to depend on the total supply (m) [56]. We again considered V-LN and CAS-LN, ineach case varying the number of slots from one to five.We present our results in Figures 4.36 and 4.37. Our first striking observation is that the case of m = 1 (i.e.,selling a single good to “quality-weighted” bidders) is distinctly different from m > 1. When m = 1 thereis almost no variability in the allocation across different auctions, and therefore likewise no variability inefficiency and relevance. (The one exception is relevance in wGSP: if two bidders have quality-weightedvaluations within an increment of each other but different click-through rates, either one could win inequilibrium.) However, in GSP auctions revenue (and therefore envy) was nevertheless extremely variableacross equilibria. (Indeed in the continuous case, any outcome having revenue between zero and the VCGpayment is possible in conservative equilibrium.) 
When two or more slots were available, competition for the702 3 4 5 6 7 8 9 10nV-LN0.00.20.40.60.81.0EfficiencyGFP (µ=0.912)uGSP (µ=0.935)wGSP (µ=0.994)Discrete VCG (µ=0.999)2 3 4 5 6 7 8 9 10nCAS-LN0.00.20.40.60.81.0EfficiencyGFP (µ=0.889)uGSP (µ=0.902)wGSP (µ=0.937)Discrete VCG (µ=0.999)(a) V: Log-Normal, Efficiency (b) Cascade: Log-Normal, Efficiency2 3 4 5 6 7 8 9 10nV-LN0.00.20.40.60.81.0RevenueGFP (µ=0.388)uGSP (µ=0.389)wGSP (µ=0.379)VCG (µ=0.342)Discrete VCG (µ=0.342)2 3 4 5 6 7 8 9 10nCAS-LN0.00.20.40.60.81.0RevenueGFP (µ=0.391)uGSP (µ=0.397)wGSP (µ=0.350)VCG (µ=0.280)Discrete VCG (µ=0.280)(c) V: Log-Normal, Revenue (d) Cascade: Log-Normal, RevenueFigure 4.34: Comparing different auction designs as the number of agents varies.lower slots dramatically decreased the range of prices that the top bidder could be charged, and hence therange of possible revenues. However, these lower slots could attract a bidder who should (in the efficientallocation) have appeared in a high position. Thus, with two or more slots, efficiency (and relevance) becamemore variable for all position auction designs. The relative performance of all auctions remained consistent,with wGSP being slightly worse than VCG and substantially better than uGSP and GFP. Revenue was adifferent story: as m increased, all auctions extracted a smaller fraction of the possible surplus because theavailability of lower positions decreased competition for the higher positions. However, in GSP auctions, thisdecline occurred more gradually than in GFP and VCG.712 3 4 5 6 7 8 9 10nV-LN0.00.20.40.60.81.0RelevanceGFP (µ=0.824)uGSP (µ=0.829)wGSP (µ=0.921)VCG (µ=0.931)Discrete VCG (µ=0.929)2 3 4 5 6 7 8 9 10nCAS-LN0.00.20.40.60.81.0RelevanceGFP (µ=0.753)uGSP (µ=0.753)wGSP (µ=0.815)VCG (µ=0.933)Discrete VCG (µ=0.931)(e) V: Log-Normal, Relevance (f) Cascade: Log-Normal, Relevance2 3 4 5 6 7 8 9 10nV-LN0.00.20.40.60.81.0EnvyGFP (µ=0.159)uGSP (µ=0.173)wGSP (µ=0.048)(g) V: Log-Normal, EnvyFigure 4.35: Comparing different auction designs as the number of agents varies (continued).4.7 Conclusions and Future WorkThis chapter demonstrates the feasibility of computational, rather than theoretical, mechanism analysis,applying algorithms for equilibrium computation to address a wide range of open questions about position-auction games. The main technical obstacle we faced was representing such games in a computationallyusable form. We accomplished this by identifying encoder algorithms that take as input a position-auctionsetting (based on a wide variety of models drawn from the literature) and the parameters of the positionauction (e.g., pricing rule, tie-breaking rule) and produce as output an action-graph game (AGG). Theseencoder algorithms, when combined with state-of-the-art equilibrium-finding algorithms, allow us to veryquickly compute exact Nash equilibria of realistic position-auction games, and hence to provide quantitativestatistical answers to many open questions in the literature. 
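To give a sense of the pipeline's overall shape, the toy sketch below builds the payoff function of a very small discretized wGSP game directly and enumerates its pure-strategy Nash equilibria by brute force. It is only an illustration: our actual system encodes such games compactly as action-graph games and uses specialized solvers, and all parameter values here are made up.

    import math
    from itertools import product

    def wgsp_utilities(bids, quality, value, ctr):
        """Payoffs of a small discretized weighted GSP game.

        Bids are integer increments; ads are ranked by bid*quality (ties broken
        lexicographically) and each shown ad pays the smallest integer bid that
        would keep its slot (the "ceiling" rounding rule). ctr[j] is the click
        rate of slot j for a quality-1 bidder.
        """
        n = len(bids)
        order = sorted(range(n), key=lambda i: (bids[i] * quality[i], -i), reverse=True)
        utils = [0.0] * n
        for slot, i in enumerate(order):
            if slot >= len(ctr) or bids[i] == 0:
                continue  # not shown, or not participating
            nxt = order[slot + 1] if slot + 1 < n else None
            price = math.ceil(bids[nxt] * quality[nxt] / quality[i]) if nxt is not None else 0
            utils[i] = ctr[slot] * quality[i] * (value[i] - price)
        return utils

    def pure_nash(num_bids, quality, value, ctr):
        """Enumerate pure-strategy Nash equilibria by checking all unilateral deviations."""
        equilibria = []
        for profile in product(range(num_bids), repeat=len(quality)):
            utils = wgsp_utilities(list(profile), quality, value, ctr)
            def has_profitable_deviation(i):
                for b in range(num_bids):
                    dev = list(profile)
                    dev[i] = b
                    if wgsp_utilities(dev, quality, value, ctr)[i] > utils[i] + 1e-9:
                        return True
                return False
            if not any(has_profitable_deviation(i) for i in range(len(quality))):
                equilibria.append(profile)
        return equilibria

    print(pure_nash(num_bids=6, quality=[1.0, 0.8, 0.6], value=[5, 4, 2], ctr=[1.0, 0.5]))

The brute-force check above grows exponentially in the number of bidders and bid increments, which is precisely the kind of blow-up the action-graph encoding is designed to avoid.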
For example, where existing research into the721 2 3 4 5mV-LN0.00.20.40.60.81.0EfficiencyGFP (µ=0.909)uGSP (µ=0.920)wGSP (µ=0.999)Discrete VCG (µ=1.000)1 2 3 4 5mCAS-LN0.00.20.40.60.81.0EfficiencyGFP (µ=0.911)uGSP (µ=0.922)wGSP (µ=0.965)Discrete VCG (µ=1.000)(a) V: Log-Normal, Efficiency (b) Cascade: Log-Normal, Efficiency1 2 3 4 5mV-LN0.00.20.40.60.81.0RevenueGFP (µ=0.480)uGSP (µ=0.424)wGSP (µ=0.376)VCG (µ=0.450)Discrete VCG (µ=0.450)1 2 3 4 5mCAS-LN0.00.20.40.60.81.0RevenueGFP (µ=0.470)uGSP (µ=0.420)wGSP (µ=0.366)VCG (µ=0.406)Discrete VCG (µ=0.406)(c) V: Log-Normal, Revenue (d) Cascade: Log-Normal, RevenueFigure 4.36: Comparing different auction designs as the number of slots varies.widely-studied V and EOS models has focused on locally envy-free Nash equilibria, we found that suchequilibria are a small minority among the set of equilibria, and often do not exist when bids are restrictedto integer increments. These two sets also had interesting qualitative differences: while every envy-freeequilibrium is known to yield more revenue than VCG, we found that the majority of Nash equilibria yield lessrevenue than VCG. These techniques also allowed us to make direct, apples-to-apples to comparisons betweendifferent auction designs—varying the auction while holding the set of bidders and the equilibrium-selectioncriteria constant—yielding some striking results. In particular, while wGSP is known to be inefficient undermany models of bidder valuations, we found that it was the most efficient position auction under nearly everymodel and valuation distribution, and often close to fully efficient in expectation even in models where itsworst-case efficiency is very poor. Similarly, wGSP tended to outperform other position auction designs interms of relevance and envy. On the other hand, we found that different position auction mechanisms couldnot easily be distinguished in terms of their revenues, with relative performance varying dramatically across731 2 3 4 5mV-LN0.00.20.40.60.81.0RelevanceGFP (µ=0.791)uGSP (µ=0.793)wGSP (µ=0.923)VCG (µ=0.922)Discrete VCG (µ=0.923)1 2 3 4 5mCAS-LN0.00.20.40.60.81.0RelevanceGFP (µ=0.763)uGSP (µ=0.763)wGSP (µ=0.844)VCG (µ=0.912)Discrete VCG (µ=0.913)(e) V: Log-Normal, Relevance (f) Cascade: Log-Normal, Relevance1 2 3 4 5mV-LN0.00.20.40.60.81.0EnvyGFP (µ=0.188)uGSP (µ=0.241)wGSP (µ=0.097)(g) V: Log-Normal, EnvyFigure 4.37: Comparing different auction designs as the number of slots varies (continued).models, distributions, and equilibria.Our techniques for computational mechanism analysis are applicable to many other open problems in positionauctions. One very general direction is using computational mechanism analysis to facilitate mechanismdesign (e.g., searching through the space of reserve prices and quality-score distortions in order to find aposition auction variant that optimizes some objective such as revenue or relevance, as indeed we do inChapter 5). Other promising ideas include investigating nonlinear value for money (representing bidderswho are not risk neutral, or who have budgets or return-on-investment targets), and Bayesian games. 
Wealso expect that it would be possible to extend our encoders to deal with richer auction rules (e.g., auctionswhere bidders specify budgets, or auctions with greater expressiveness as in [9]), or to encompass strategicinteractions between multiple heterogeneous auctions (e.g., arising due to advertisers using broad match, orlocation targeting, or having a single budget that spans a multi-keyword campaign).744.8 Summary Tables and Statistical ComparisonsIn this section we provide detailed numerical summaries of our main experiments, as well as statisticalcomparisons between different auctions. For each pair of auctions A,B and metric (e.g., efficiency), we firsttest whether A is robustly superior to B, meaning that A’s worst case is significantly better than B’s bestcase. (This is signified by †.) Next, we test a looser condition, whether A is better than B up to the limits ofequilibrium selection, meaning that A is significantly better than B when comparing best case to best case,median to median and worst case to worst case. Next, we test whether or not one auction’s performance“spans” the other, denoted by ⊆. That is, A⊆ B indicates that A’s worst-case performance is significantlybetter than B’s, but that B’s best-case performance is significantly better than A’s. ∼ is used to indicate thatnone of these conditions is true with sufficient statistical confidence.Mechanism Worst Median Best nGFP 0.979 (σ = 0.011) 0.979 (σ = 0.011) 0.979 (σ = 0.011) 161uGSP 0.958 (σ = 0.039) 0.998 (σ = 0.003) 1.000 (σ = 0.001) 200VCG discrete 1.000 (σ = 0.001) 1.000 (σ = 0.001) 1.000 (σ = 0.001) 200GFP uGSP VCG dVCGGFP ∼ ≤ †?? ≤ †??uGSP ≤ †?? ∼VCG ≥ †??dVCGTable 4.2: Comparing Efficiency (EOS-UNI distribution)Mechanism Worst Median Best nGFP 0.487 (σ = 0.168) 0.487 (σ = 0.168) 0.487 (σ = 0.168) 161uGSP 0.365 (σ = 0.172) 0.498 (σ = 0.161) 0.635 (σ = 0.170) 200VCG 0.572 (σ = 0.190) 0.572 (σ = 0.190) 0.572 (σ = 0.190) 200VCG discrete 0.573 (σ = 0.191) 0.573 (σ = 0.191) 0.573 (σ = 0.191) 200GFP uGSP VCG dVCGGFP ∼ ∼ ∼uGSP ⊇?? ⊇??VCG ∼dVCGTable 4.3: Comparing Revenue (EOS-UNI distribution)75Mechanism Worst Median Best nGFP 0.021 (σ = 0.013) 0.021 (σ = 0.013) 0.021 (σ = 0.013) 161uGSP 0.003 (σ = 0.005) 0.092 (σ = 0.083) 0.282 (σ = 0.230) 200GFP uGSPGFP ∼uGSPTable 4.4: Comparing Envy (EOS-UNI distribution)76Mechanism Worst Median Best nGFP 0.971 (σ = 0.010) 0.971 (σ = 0.009) 0.971 (σ = 0.010) 176uGSP 0.935 (σ = 0.048) 0.995 (σ = 0.005) 1.000 (σ = 0.000) 198VCG discrete 1.000 (σ = 0.001) 1.000 (σ = 0.001) 1.000 (σ = 0.001) 200GFP uGSP VCG dVCGGFP ∼ ≤ †?? ≤ †??uGSP ∼ ∼VCG ≥ †??dVCGTable 4.5: Comparing Efficiency (EOS-LN distribution)Mechanism Worst Median Best nGFP 0.352 (σ = 0.083) 0.352 (σ = 0.083) 0.352 (σ = 0.082) 176uGSP 0.236 (σ = 0.062) 0.387 (σ = 0.078) 0.547 (σ = 0.114) 198VCG 0.422 (σ = 0.111) 0.422 (σ = 0.111) 0.422 (σ = 0.111) 200VCG discrete 0.422 (σ = 0.111) 0.422 (σ = 0.111) 0.422 (σ = 0.111) 200GFP uGSP VCG dVCGGFP ⊆?? ∼ ∼uGSP ⊇?? 
⊇??VCG ∼dVCGTable 4.6: Comparing Revenue (EOS-LN distribution)Mechanism Worst Median Best nGFP 0.036 (σ = 0.015) 0.037 (σ = 0.015) 0.037 (σ = 0.015) 176uGSP 0.001 (σ = 0.002) 0.066 (σ = 0.041) 0.279 (σ = 0.109) 198GFP uGSPGFP ∼uGSPTable 4.7: Comparing Envy (EOS-LN distribution)77Mechanism Worst Median Best nGFP 0.830 (σ = 0.157) 0.830 (σ = 0.157) 0.830 (σ = 0.157) 146uGSP 0.736 (σ = 0.218) 0.826 (σ = 0.208) 0.883 (σ = 0.195) 200wGSP 0.964 (σ = 0.040) 0.999 (σ = 0.005) 1.000 (σ = 0.001) 200wGFP 0.974 (σ = 0.014) 0.974 (σ = 0.014) 0.974 (σ = 0.014) 181VCG discrete 1.000 (σ = 0.001) 1.000 (σ = 0.001) 1.000 (σ = 0.001) 200GFP uGSP wGSP wGFP VCG dVCGGFP ∼ ≤ †?? ∼ ≤ †?? ≤ †??uGSP ≤ †?? ∼ ≤ †?? ≤ †??wGSP ∼ ≤ †?? ∼wGFP ≤ †?? ≤ †??VCG ≥ †??dVCGTable 4.8: Comparing Efficiency (V-UNI distribution)Mechanism Worst Median Best nGFP 0.393 (σ = 0.132) 0.393 (σ = 0.132) 0.393 (σ = 0.131) 146uGSP 0.267 (σ = 0.152) 0.407 (σ = 0.160) 0.570 (σ = 0.182) 200wGSP 0.255 (σ = 0.142) 0.389 (σ = 0.152) 0.525 (σ = 0.179) 200wGFP 0.393 (σ = 0.173) 0.393 (σ = 0.173) 0.393 (σ = 0.173) 181VCG 0.443 (σ = 0.187) 0.443 (σ = 0.187) 0.443 (σ = 0.187) 200VCG discrete 0.442 (σ = 0.187) 0.442 (σ = 0.187) 0.442 (σ = 0.187) 200GFP uGSP wGSP wGFP VCG dVCGGFP ∼ ∼ ∼ ∼ ∼uGSP ∼ ⊇?? ⊇?? ⊇??wGSP ⊇?? ⊇?? ⊇??wGFP ∼ ∼VCG ∼dVCGTable 4.9: Comparing Revenue (V-UNI distribution)78Mechanism Worst Median Best nGFP 0.701 (σ = 0.211) 0.701 (σ = 0.211) 0.701 (σ = 0.211) 146uGSP 0.630 (σ = 0.243) 0.697 (σ = 0.245) 0.751 (σ = 0.247) 200wGSP 0.866 (σ = 0.136) 0.901 (σ = 0.128) 0.915 (σ = 0.127) 200wGFP 0.888 (σ = 0.118) 0.889 (σ = 0.118) 0.889 (σ = 0.118) 181VCG 0.901 (σ = 0.129) 0.901 (σ = 0.129) 0.901 (σ = 0.129) 200VCG discrete 0.902 (σ = 0.128) 0.902 (σ = 0.128) 0.902 (σ = 0.128) 200GFP uGSP wGSP wGFP VCG dVCGGFP ∼ ≤ †?? ∼ ≤ †?? ≤ †??uGSP ≤ †?? ∼ ≤ †?? ≤ †??wGSP ∼ ⊇?? ⊇??wGFP ∼ ∼VCG ∼dVCGTable 4.10: Comparing Relevance (V-UNI distribution)Mechanism Worst Median Best nGFP 0.335 (σ = 0.348) 0.335 (σ = 0.348) 0.335 (σ = 0.348) 146uGSP 0.242 (σ = 0.350) 0.409 (σ = 0.422) 0.627 (σ = 0.452) 200wGSP 0.002 (σ = 0.004) 0.076 (σ = 0.067) 0.247 (σ = 0.183) 200wGFP 0.026 (σ = 0.014) 0.026 (σ = 0.014) 0.026 (σ = 0.014) 181GFP uGSP wGSP wGFPGFP ∼ ∼ ≥ †??uGSP ≥?? ≥ †??wGSP ⊇??wGFPTable 4.11: Comparing Envy (V-UNI distribution)79Mechanism Worst Median Best nGFP 0.921 (σ = 0.059) 0.922 (σ = 0.059) 0.922 (σ = 0.059) 174uGSP 0.810 (σ = 0.157) 0.941 (σ = 0.092) 0.993 (σ = 0.033) 200wGSP 0.938 (σ = 0.056) 0.997 (σ = 0.005) 1.000 (σ = 0.000) 200wGFP 0.970 (σ = 0.012) 0.970 (σ = 0.012) 0.970 (σ = 0.012) 198VCG discrete 1.000 (σ = 0.001) 1.000 (σ = 0.001) 1.000 (σ = 0.001) 200GFP uGSP wGSP wGFP VCG dVCGGFP ∼ ∼ ∼ ≤ †?? ≤ †??uGSP ≤?? ⊇?? ≤ †?? ≤ †??wGSP ∼ ∼ ∼wGFP ≤ †?? ≤ †??VCG ≥ †??dVCGTable 4.12: Comparing Efficiency (V-LN distribution)Mechanism Worst Median Best nGFP 0.333 (σ = 0.076) 0.333 (σ = 0.076) 0.333 (σ = 0.076) 174uGSP 0.190 (σ = 0.054) 0.362 (σ = 0.079) 0.558 (σ = 0.128) 200wGSP 0.172 (σ = 0.074) 0.328 (σ = 0.096) 0.489 (σ = 0.138) 200wGFP 0.302 (σ = 0.107) 0.302 (σ = 0.107) 0.302 (σ = 0.107) 198VCG 0.351 (σ = 0.126) 0.351 (σ = 0.126) 0.351 (σ = 0.126) 200VCG discrete 0.351 (σ = 0.126) 0.351 (σ = 0.126) 0.351 (σ = 0.126) 200GFP uGSP wGSP wGFP VCG dVCGGFP ⊆?? ⊆?? ∼ ∼ ∼uGSP ≥?? ⊇?? ⊇?? ⊇??wGSP ⊇?? ⊇?? ⊇??wGFP ≤ †?? 
≤ †??VCG ∼dVCGTable 4.13: Comparing Revenue (V-LN distribution)80Mechanism Worst Median Best nGFP 0.842 (σ = 0.114) 0.842 (σ = 0.114) 0.842 (σ = 0.114) 174uGSP 0.750 (σ = 0.147) 0.848 (σ = 0.132) 0.921 (σ = 0.104) 200wGSP 0.882 (σ = 0.104) 0.938 (σ = 0.082) 0.979 (σ = 0.046) 200wGFP 0.925 (σ = 0.065) 0.925 (σ = 0.065) 0.925 (σ = 0.065) 198VCG 0.938 (σ = 0.083) 0.938 (σ = 0.083) 0.938 (σ = 0.083) 200VCG discrete 0.939 (σ = 0.081) 0.939 (σ = 0.081) 0.939 (σ = 0.081) 200GFP uGSP wGSP wGFP VCG dVCGGFP ∼ ≤ †?? ≤ †?? ≤ †?? ≤ †??uGSP ≤?? ∼ ≤ †?? ≤ †??wGSP ⊇?? ⊇?? ⊇??wGFP ≤ †?? ≤ †??VCG ∼dVCGTable 4.14: Comparing Relevance (V-LN distribution)Mechanism Worst Median Best nGFP 0.148 (σ = 0.149) 0.148 (σ = 0.149) 0.148 (σ = 0.150) 174uGSP 0.033 (σ = 0.077) 0.168 (σ = 0.174) 0.429 (σ = 0.259) 200wGSP 0.001 (σ = 0.002) 0.057 (σ = 0.045) 0.257 (σ = 0.123) 200wGFP 0.033 (σ = 0.015) 0.033 (σ = 0.015) 0.033 (σ = 0.015) 198GFP uGSP wGSP wGFPGFP ⊆?? ∼ ≥ †??uGSP ≥?? ∼wGSP ⊇??wGFPTable 4.15: Comparing Envy (V-LN distribution)81Mechanism Worst Median Best nGFP 0.611 (σ = 0.398) 0.638 (σ = 0.387) 0.640 (σ = 0.388) 200uGSP 0.584 (σ = 0.392) 0.657 (σ = 0.400) 0.682 (σ = 0.412) 200wGSP 0.648 (σ = 0.427) 0.671 (σ = 0.440) 0.675 (σ = 0.443) 200VCG discrete 0.916 (σ = 0.151) 0.916 (σ = 0.151) 0.916 (σ = 0.151) 200GFP uGSP wGSP VCG dVCGGFP ⊆?? ∼ ≤ †?? ≤ †??uGSP ∼ ≤ †?? ≤ †??wGSP ≤ †?? ≤ †??VCG ≥ †??dVCGTable 4.16: Comparing Efficiency (BHN-UNI distribution)Mechanism Worst Median Best nGFP 0.225 (σ = 0.180) 0.252 (σ = 0.193) 0.257 (σ = 0.198) 200uGSP 0.196 (σ = 0.170) 0.258 (σ = 0.197) 0.295 (σ = 0.220) 200wGSP 0.231 (σ = 0.201) 0.270 (σ = 0.218) 0.296 (σ = 0.234) 200VCG 0.269 (σ = 0.160) 0.269 (σ = 0.160) 0.269 (σ = 0.160) 200VCG discrete 0.219 (σ = 0.236) 0.219 (σ = 0.236) 0.219 (σ = 0.236) 200GFP uGSP wGSP VCG dVCGGFP ⊆?? ∼ ∼ ∼uGSP ∼ ∼ ∼wGSP ∼ ∼VCG ≥ †??dVCGTable 4.17: Comparing Revenue (BHN-UNI distribution)82Mechanism Worst Median Best nGFP 0.510 (σ = 0.351) 0.528 (σ = 0.342) 0.530 (σ = 0.344) 200uGSP 0.464 (σ = 0.338) 0.542 (σ = 0.356) 0.581 (σ = 0.377) 200wGSP 0.604 (σ = 0.413) 0.626 (σ = 0.423) 0.638 (σ = 0.430) 200VCG 0.903 (σ = 0.113) 0.903 (σ = 0.113) 0.903 (σ = 0.113) 200VCG discrete 0.788 (σ = 0.256) 0.788 (σ = 0.256) 0.788 (σ = 0.256) 200GFP uGSP wGSP VCG dVCGGFP ⊆?? ≤ †?? ≤ †?? ≤ †??uGSP ≤?? ≤ †?? ≤ †??wGSP ≤ †?? ≤ †??VCG ≥ †??dVCGTable 4.18: Comparing Relevance (BHN-UNI distribution)Mechanism Worst Median Best nGFP 0.152 (σ = 0.173) 0.174 (σ = 0.190) 0.176 (σ = 0.191) 200uGSP 0.121 (σ = 0.180) 0.191 (σ = 0.220) 0.287 (σ = 0.311) 200wGSP 0.097 (σ = 0.185) 0.110 (σ = 0.185) 0.146 (σ = 0.198) 200GFP uGSP wGSPGFP ⊆?? ∼uGSP ∼wGSPTable 4.19: Comparing Envy (BHN-UNI distribution)Mechanism Worst Median Best nGFP 0.775 (σ = 0.221) 0.835 (σ = 0.127) 0.854 (σ = 0.120) 189uGSP 0.799 (σ = 0.208) 0.902 (σ = 0.118) 0.918 (σ = 0.109) 200VCG discrete 0.999 (σ = 0.003) 0.999 (σ = 0.003) 0.999 (σ = 0.003) 200GFP uGSP VCG dVCGGFP ∼ ≤ †?? ≤ †??uGSP ≤ †?? 
≤ †??VCG ≥ †??dVCGTable 4.20: Comparing Efficiency (BSS distribution)83Mechanism Worst Median Best nGFP 0.255 (σ = 0.171) 0.306 (σ = 0.154) 0.360 (σ = 0.170) 189uGSP 0.210 (σ = 0.140) 0.327 (σ = 0.152) 0.436 (σ = 0.193) 200VCG 0.408 (σ = 0.188) 0.408 (σ = 0.188) 0.408 (σ = 0.188) 200VCG discrete 0.412 (σ = 0.186) 0.412 (σ = 0.186) 0.412 (σ = 0.186) 200GFP uGSP VCG dVCGGFP ∼ ∼ ∼uGSP ∼ ∼VCG ≤ †?dVCGTable 4.21: Comparing Revenue (BSS distribution)Mechanism Worst Median Best nGFP 0.015 (σ = 0.020) 0.017 (σ = 0.021) 0.022 (σ = 0.021) 189uGSP 0.016 (σ = 0.023) 0.046 (σ = 0.057) 0.107 (σ = 0.113) 200GFP uGSPGFP ∼uGSPTable 4.22: Comparing Envy (BSS distribution)Mechanism Worst Median Best nGFP 0.837 (σ = 0.150) 0.837 (σ = 0.150) 0.837 (σ = 0.150) 153uGSP 0.650 (σ = 0.230) 0.819 (σ = 0.191) 0.916 (σ = 0.145) 200wGSP 0.855 (σ = 0.118) 0.947 (σ = 0.081) 0.981 (σ = 0.048) 200cwGSP 0.956 (σ = 0.040) 0.995 (σ = 0.010) 1.000 (σ = 0.002) 164VCG discrete 1.000 (σ = 0.000) 1.000 (σ = 0.000) 1.000 (σ = 0.000) 200GFP uGSP wGSP cwGSP VCG dVCGGFP ∼ ∼ ∼ ≤ †?? ≤ †??uGSP ≤?? ∼ ≤ †?? ≤ †??wGSP ∼ ≤ †?? ≤ †??cwGSP ≤ †?? ≤ †??VCG ≥ †??dVCGTable 4.23: Comparing Efficiency (CAS-UNI distribution)84Mechanism Worst Median Best nGFP 0.344 (σ = 0.114) 0.344 (σ = 0.114) 0.344 (σ = 0.114) 153uGSP 0.189 (σ = 0.117) 0.370 (σ = 0.137) 0.575 (σ = 0.157) 200wGSP 0.189 (σ = 0.108) 0.337 (σ = 0.127) 0.498 (σ = 0.157) 200cwGSP 0.156 (σ = 0.098) 0.270 (σ = 0.119) 0.396 (σ = 0.158) 164VCG 0.312 (σ = 0.159) 0.312 (σ = 0.159) 0.312 (σ = 0.159) 200VCG discrete 0.312 (σ = 0.158) 0.312 (σ = 0.158) 0.312 (σ = 0.158) 200GFP uGSP wGSP cwGSP VCG dVCGGFP ⊆?? ∼ ∼ ∼ ∼uGSP ∼ ∼ ⊇?? ⊇??wGSP ∼ ⊇?? ⊇??cwGSP ∼ ∼VCG ∼dVCGTable 4.24: Comparing Revenue (CAS-UNI distribution)Mechanism Worst Median Best nGFP 0.669 (σ = 0.211) 0.669 (σ = 0.211) 0.669 (σ = 0.211) 153uGSP 0.516 (σ = 0.224) 0.645 (σ = 0.224) 0.757 (σ = 0.214) 200wGSP 0.718 (σ = 0.167) 0.813 (σ = 0.167) 0.878 (σ = 0.140) 200cwGSP 0.876 (σ = 0.120) 0.918 (σ = 0.115) 0.945 (σ = 0.097) 164VCG 0.909 (σ = 0.117) 0.909 (σ = 0.117) 0.909 (σ = 0.117) 200VCG discrete 0.906 (σ = 0.120) 0.906 (σ = 0.120) 0.906 (σ = 0.120) 200GFP uGSP wGSP cwGSP VCG dVCGGFP ∼ ∼ ∼ ≤ †?? ≤ †??uGSP ≤?? ∼ ≤ †?? ≤ †??wGSP ∼ ≤ †?? ≤ †??cwGSP ∼ ∼VCG ∼dVCGTable 4.25: Comparing Relevance (CAS-UNI distribution)85Mechanism Worst Median Best nGFP 0.905 (σ = 0.096) 0.905 (σ = 0.096) 0.905 (σ = 0.096) 180uGSP 0.788 (σ = 0.161) 0.905 (σ = 0.119) 0.973 (σ = 0.062) 200wGSP 0.861 (σ = 0.125) 0.951 (σ = 0.074) 0.984 (σ = 0.049) 200cwGSP 0.952 (σ = 0.044) 0.996 (σ = 0.010) 1.000 (σ = 0.002) 170VCG discrete 1.000 (σ = 0.000) 1.000 (σ = 0.000) 1.000 (σ = 0.000) 200GFP uGSP wGSP cwGSP VCG dVCGGFP ∼ ∼ ∼ ≤ †?? ≤ †??uGSP ∼ ∼ ≤ †?? ≤ †??wGSP ∼ ≤ †?? ≤ †??cwGSP ≤ †?? ∼VCG ≥ †??dVCGTable 4.26: Comparing Efficiency (CAS-LN distribution)Mechanism Worst Median Best nGFP 0.344 (σ = 0.112) 0.344 (σ = 0.112) 0.344 (σ = 0.112) 180uGSP 0.214 (σ = 0.101) 0.376 (σ = 0.112) 0.558 (σ = 0.146) 200wGSP 0.172 (σ = 0.096) 0.317 (σ = 0.123) 0.474 (σ = 0.167) 200cwGSP 0.143 (σ = 0.086) 0.249 (σ = 0.114) 0.363 (σ = 0.160) 170VCG 0.283 (σ = 0.166) 0.283 (σ = 0.166) 0.283 (σ = 0.166) 200VCG discrete 0.283 (σ = 0.166) 0.283 (σ = 0.166) 0.283 (σ = 0.166) 200GFP uGSP wGSP cwGSP VCG dVCGGFP ⊆?? ⊆?? ∼ ∼ ∼uGSP ≥?? ∼ ⊇?? ⊇??wGSP ∼ ⊇?? 
⊇??cwGSP ∼ ∼VCG ∼dVCGTable 4.27: Comparing Revenue (CAS-LN distribution)86Mechanism Worst Median Best nGFP 0.784 (σ = 0.166) 0.784 (σ = 0.166) 0.784 (σ = 0.166) 180uGSP 0.658 (σ = 0.182) 0.764 (σ = 0.181) 0.873 (σ = 0.156) 200wGSP 0.755 (σ = 0.168) 0.842 (σ = 0.150) 0.912 (σ = 0.126) 200cwGSP 0.904 (σ = 0.090) 0.953 (σ = 0.068) 0.980 (σ = 0.046) 170VCG 0.942 (σ = 0.079) 0.942 (σ = 0.079) 0.942 (σ = 0.079) 200VCG discrete 0.942 (σ = 0.079) 0.942 (σ = 0.079) 0.942 (σ = 0.079) 200GFP uGSP wGSP cwGSP VCG dVCGGFP ∼ ∼ ∼ ≤ †?? ≤ †??uGSP ≤?? ∼ ≤ †?? ≤ †??wGSP ∼ ≤ †?? ≤ †??cwGSP ∼ ∼VCG ∼dVCGTable 4.28: Comparing Relevance (CAS-LN distribution)Mechanism Worst Median Best nGFP 0.821 (σ = 0.160) 0.821 (σ = 0.160) 0.821 (σ = 0.160) 89uGSP 0.711 (σ = 0.263) 0.812 (σ = 0.242) 0.867 (σ = 0.225) 200wGSP 0.955 (σ = 0.083) 0.987 (σ = 0.041) 0.994 (σ = 0.029) 200VCG discrete 1.000 (σ = 0.001) 1.000 (σ = 0.001) 1.000 (σ = 0.001) 200GFP uGSP wGSP VCG dVCGGFP ∼ ≤ †?? ≤ †?? ≤ †??uGSP ≤ †?? ≤ †?? ≤ †??wGSP ≤ †?? ≤ †??VCG ≥ †??dVCGTable 4.29: Comparing Efficiency (HYB-UNI distribution)87Mechanism Worst Median Best nGFP 0.503 (σ = 0.141) 0.503 (σ = 0.141) 0.504 (σ = 0.142) 89uGSP 0.342 (σ = 0.179) 0.508 (σ = 0.188) 0.670 (σ = 0.208) 200wGSP 0.321 (σ = 0.158) 0.469 (σ = 0.152) 0.615 (σ = 0.175) 200VCG 0.534 (σ = 0.185) 0.534 (σ = 0.185) 0.534 (σ = 0.185) 200VCG discrete 0.534 (σ = 0.185) 0.534 (σ = 0.185) 0.534 (σ = 0.185) 200GFP uGSP wGSP VCG dVCGGFP ∼ ∼ ∼ ∼uGSP ∼ ⊇?? ⊇??wGSP ⊇?? ⊇??VCG ∼dVCGTable 4.30: Comparing Revenue (HYB-UNI distribution)Mechanism Worst Median Best nGFP 0.638 (σ = 0.214) 0.638 (σ = 0.214) 0.638 (σ = 0.214) 89uGSP 0.570 (σ = 0.257) 0.652 (σ = 0.261) 0.708 (σ = 0.255) 200wGSP 0.829 (σ = 0.164) 0.864 (σ = 0.152) 0.888 (σ = 0.139) 200VCG 0.886 (σ = 0.136) 0.886 (σ = 0.136) 0.886 (σ = 0.136) 200VCG discrete 0.888 (σ = 0.132) 0.888 (σ = 0.132) 0.888 (σ = 0.132) 200GFP uGSP wGSP VCG dVCGGFP ∼ ∼ ≤ †? ≤ †?uGSP ≤ †?? ≤ †?? ≤ †??wGSP ∼ ∼VCG ∼dVCGTable 4.31: Comparing Relevance (HYB-UNI distribution)88Mechanism Worst Median Best nGFP 0.775 (σ = 0.210) 0.775 (σ = 0.210) 0.776 (σ = 0.210) 171uGSP 0.698 (σ = 0.305) 0.759 (σ = 0.298) 0.812 (σ = 0.275) 200wGSP 0.982 (σ = 0.046) 0.997 (σ = 0.014) 1.000 (σ = 0.001) 200VCG discrete 1.000 (σ = 0.004) 1.000 (σ = 0.004) 1.000 (σ = 0.004) 200GFP uGSP wGSP VCG dVCGGFP ∼ ≤ †?? ≤ †?? ≤ †??uGSP ≤ †?? ≤ †?? ≤ †??wGSP ≤ †?? ∼VCG ≥ †??dVCGTable 4.32: Comparing Efficiency (HYB-LN distribution)Mechanism Worst Median Best nGFP 0.365 (σ = 0.133) 0.366 (σ = 0.133) 0.366 (σ = 0.133) 171uGSP 0.192 (σ = 0.116) 0.320 (σ = 0.160) 0.472 (σ = 0.229) 200wGSP 0.151 (σ = 0.125) 0.274 (σ = 0.176) 0.390 (σ = 0.238) 200VCG 0.325 (σ = 0.222) 0.325 (σ = 0.222) 0.325 (σ = 0.222) 200VCG discrete 0.326 (σ = 0.221) 0.326 (σ = 0.221) 0.326 (σ = 0.221) 200GFP uGSP wGSP VCG dVCGGFP ∼ ∼ ∼ ∼uGSP ∼ ⊇?? ⊇??wGSP ⊇?? ⊇??VCG ∼dVCGTable 4.33: Comparing Revenue (HYB-LN distribution)89Mechanism Worst Median Best nGFP 0.613 (σ = 0.268) 0.613 (σ = 0.268) 0.613 (σ = 0.268) 171uGSP 0.543 (σ = 0.315) 0.596 (σ = 0.325) 0.651 (σ = 0.321) 200wGSP 0.864 (σ = 0.206) 0.895 (σ = 0.180) 0.921 (σ = 0.152) 200VCG 0.904 (σ = 0.168) 0.904 (σ = 0.168) 0.904 (σ = 0.168) 200VCG discrete 0.900 (σ = 0.171) 0.900 (σ = 0.171) 0.900 (σ = 0.171) 200GFP uGSP wGSP VCG dVCGGFP ∼ ≤ †?? ≤ †?? ≤ †??uGSP ≤ †?? ≤ †?? ≤ †??wGSP ⊇?? 
⊇??VCG ∼dVCGTable 4.34: Comparing Relevance (HYB-LN distribution)Mechanism Worst Median Best nGFP 0.829 (σ = 0.131) 0.829 (σ = 0.131) 0.829 (σ = 0.131) 130uGSP 0.684 (σ = 0.214) 0.828 (σ = 0.189) 0.905 (σ = 0.135) 200wGSP 0.858 (σ = 0.112) 0.950 (σ = 0.073) 0.981 (σ = 0.043) 200VCG discrete 1.000 (σ = 0.001) 1.000 (σ = 0.001) 1.000 (σ = 0.001) 200GFP uGSP wGSP VCG dVCGGFP ∼ ∼ ≤ †?? ≤ †??uGSP ≤?? ≤ †?? ≤ †??wGSP ≤ †?? ≤ †??VCG ≥ †??dVCGTable 4.35: Comparing Efficiency (GIM-UNI distribution)90Mechanism Worst Median Best nGFP 0.424 (σ = 0.120) 0.424 (σ = 0.120) 0.424 (σ = 0.120) 130uGSP 0.274 (σ = 0.129) 0.462 (σ = 0.147) 0.650 (σ = 0.153) 200wGSP 0.248 (σ = 0.111) 0.405 (σ = 0.122) 0.569 (σ = 0.156) 200VCG 0.384 (σ = 0.164) 0.384 (σ = 0.164) 0.384 (σ = 0.164) 200VCG discrete 0.385 (σ = 0.165) 0.385 (σ = 0.165) 0.385 (σ = 0.165) 200GFP uGSP wGSP VCG dVCGGFP ∼ ∼ ∼ ∼uGSP ∼ ⊇?? ⊇??wGSP ⊇?? ⊇??VCG ∼dVCGTable 4.36: Comparing Revenue (GIM-UNI distribution)Mechanism Worst Median Best nGFP 0.642 (σ = 0.190) 0.642 (σ = 0.190) 0.642 (σ = 0.190) 130uGSP 0.536 (σ = 0.208) 0.654 (σ = 0.222) 0.736 (σ = 0.203) 200wGSP 0.728 (σ = 0.153) 0.818 (σ = 0.160) 0.869 (σ = 0.145) 200VCG 0.900 (σ = 0.124) 0.900 (σ = 0.124) 0.900 (σ = 0.124) 200VCG discrete 0.900 (σ = 0.126) 0.900 (σ = 0.126) 0.900 (σ = 0.126) 200GFP uGSP wGSP VCG dVCGGFP ∼ ∼ ≤ †?? ≤ †??uGSP ≤?? ≤ †?? ≤ †??wGSP ≤ †?? ≤ †??VCG ∼dVCGTable 4.37: Comparing Relevance (GIM-UNI distribution)91Mechanism Worst Median Best nGFP 0.891 (σ = 0.091) 0.892 (σ = 0.091) 0.892 (σ = 0.091) 169uGSP 0.802 (σ = 0.165) 0.906 (σ = 0.128) 0.954 (σ = 0.089) 200wGSP 0.881 (σ = 0.104) 0.956 (σ = 0.064) 0.981 (σ = 0.039) 200VCG discrete 1.000 (σ = 0.001) 1.000 (σ = 0.001) 1.000 (σ = 0.001) 200GFP uGSP wGSP VCG dVCGGFP ∼ ∼ ≤ †?? ≤ †??uGSP ≤?? ≤ †?? ≤ †??wGSP ≤ †?? ≤ †??VCG ≥ †??dVCGTable 4.38: Comparing Efficiency (GIM-LN distribution)Mechanism Worst Median Best nGFP 0.398 (σ = 0.119) 0.398 (σ = 0.119) 0.398 (σ = 0.118) 169uGSP 0.277 (σ = 0.101) 0.428 (σ = 0.120) 0.582 (σ = 0.160) 200wGSP 0.225 (σ = 0.118) 0.354 (σ = 0.148) 0.493 (σ = 0.191) 200VCG 0.339 (σ = 0.185) 0.339 (σ = 0.185) 0.339 (σ = 0.185) 200VCG discrete 0.339 (σ = 0.186) 0.339 (σ = 0.186) 0.339 (σ = 0.186) 200GFP uGSP wGSP VCG dVCGGFP ⊆?? ∼ ∼ ∼uGSP ≥?? ⊇?? ⊇??wGSP ⊇?? ⊇??VCG ∼dVCGTable 4.39: Comparing Revenue (GIM-LN distribution)92Mechanism Worst Median Best nGFP 0.755 (σ = 0.156) 0.755 (σ = 0.155) 0.755 (σ = 0.155) 169uGSP 0.661 (σ = 0.182) 0.762 (σ = 0.179) 0.830 (σ = 0.160) 200wGSP 0.765 (σ = 0.137) 0.844 (σ = 0.132) 0.897 (σ = 0.113) 200VCG 0.933 (σ = 0.093) 0.933 (σ = 0.093) 0.933 (σ = 0.093) 200VCG discrete 0.934 (σ = 0.092) 0.934 (σ = 0.092) 0.934 (σ = 0.092) 200GFP uGSP wGSP VCG dVCGGFP ∼ ∼ ≤ †?? ≤ †??uGSP ≤?? ≤ †?? ≤ †??wGSP ≤ †?? ≤ †??VCG ∼dVCGTable 4.40: Comparing Relevance (GIM-LN distribution)93Chapter 5Application: Maximizing InternetAdvertising Revenue5.1 IntroductionThe previous chapter introduced internet advertising auctions, and showed that computational mechanismanalysis could be used to analyze and compare different auction designs. This chapter also focuses on internetadvertising as an application domain. However, in this chapter, the focus is on computational mechanismanalysis as a tool for mechanism design, specifically with the objective of maximizing search-engine revenue.As described in the previous chapter, the major search engines generate most of their revenue from advertisingauctions and all have converged on the weighted generalized second-price auction (wGSP). 
Notably, thisauction design is not incentive compatible—it is typically not in an advertiser’s best interest to truthfullyreport her per-click willingness to pay. In contrast, incentive compatible designs have been proposed by theresearch community [e.g., 1], but not adopted by any search engine.Revenue optimization is typically framed as a question of finding the revenue-maximizing incentive-compatible auction design. We instead ask “how can we optimize revenue within the basic GSP design?”Specifically, we consider a range of different GSP variants that have been used in practice. Each has adjustableparameters; we consider how these parameters should be set in order to optimize revenue.Our most striking finding is that a new reserve price scheme used by major search engines (“quality-weightedreserves”: low-quality advertisers must pay more per click) is not a good choice from a revenue perspective;the old scheme (“unweighted reserves”: all advertisers have the same per-click reserve price) is substantiallybetter. Indeed, this finding is not just striking, but also robust: we offer evidence for it across three different94analysis methods and two different sets of assumptions about bidder valuations. We observed that richer GSPvariants sometimes outperformed GSP with unweighted reserves, but these variants tended to incorporate(approximately) unweighted reserves. Three of this chapter’s other findings also deserve mention here. First,we identify a new GSP variant that is provably revenue optimal in some restricted settings, and that achievesvery good revenue in settings where it is not optimal. Second, we perform the first systematic investigationof the interaction between reserve prices and another revenue-optimization technique called “squashing”.Interestingly, we find that squashing can substantially improve the performance of quality-weighted reserveprices, but this effect arises because squashing undoes the quality weighting. Third, we perform the firstsystematic investigation of how equilibrium selection interacts with revenue optimization techniques likereserve prices. Again, unweighted reserve prices prove to be superior, especially from the perspective ofoptimizing the revenue of worst-case equilibria.This chapter proceeds as follows. Section 5.2 describes the GSP variants and how they are used in practice,the model of advertiser preferences, and finally a review of related work. In Section 5.3 we perform atheoretical analysis of the problem of selling only a single position. The simplicity of this setting allowsus to identify properties of the optimal auction, and to contrast them with the GSP variants. Section 5.4covers our first computational analysis of auctions involving multiple ad positions. Here, we we used efficientgame representations to enumerate the entire set of pure-strategy equilibria. Section 5.5 describes a furthercomputational analysis, where we performed very large-scale experiments, focusing on a commonly studiedrestricted class of equilibria. Section 5.6 discusses some implications of our results and directions for futurework.5.2 BackgroundRecall the weighted generalized second-price auction (wGSP) from the previous chapter. (Hereafter, we referto it as the “vanilla GSP” to distinguish it from the other wGSP-based auction designs that we discuss laterEach advertiser i is assigned a quality score qi ∈ (0,1] by the search engine. 
We assume that this quality scoreis equal to the probability that the advertiser’s ad would be clicked on if it were shown in the top position.Each advertiser submits a bid bi that specifies the maximum per-click price that she would be willing to pay.Ads are shown ordered by biqi from highest to lowest. When a user clicks on an ad, the advertiser pays theminimum amount she could have bid while maintaining her position in the ordering.Three simple variants of vanilla GSP have been proposed—and used in practice—to improve revenue.1. Squashing changes vanilla GSP by changing how bidders are ranked, according to a real-valuedparameter s: bidders are ranked by biqsi . When s = 1, squashing GSP is identical to regular GSP.When s = 0, squashing GSP throws away all quality information and ranks advertisers by their bids,promoting lower-quality advertisers and forcing higher-quality advertisers to pay more to retain the95same position. Intermediate values of s ∈ (0,1) smoothly interpolate between these two extremes [60].2. An unweighted reserve price (UWR) specifies a minimum per-click price r that every advertiser mustbid (and pay) in order for her ad to be shown.1 We call this per-click reserve price “unweighted”because it is constant across all advertisers, regardless of their qualities.3. A quality-weighted reserve price (QWR) also specifies a minimum price that advertisers must pay, butnow this price is increased for higher-quality bidders. Specifically, each advertiser i must pay at leastr/qi per click.Squashing and both kinds of reserve prices have been used in practice (and in combination). Google initiallyused a UWR of $0.05 across all keywords [103], but switched to QWR in 2005 [45]. Yahoo! also had a UWR($0.10), but in a well-documented field experiment switched to QWR while both tailoring reserve prices tospecific keywords and dramatically increasing them overall [81]. Yahoo! researchers have publicly confirmedthat their auctions use squashing [71]. However, since search engines withhold many auction details (e.g., themethods used to calculate quality scores and minimum bids), it is impossible to be certain of current practice.Finally, we consider three richer GSP variants. The first two, UWR+Sq and QWR+Sq, combine squashingwith the two reserve price variants. The third—which to our knowledge had not previously been discussedin the literature, and which we dub anchoring2—imposes unweighted reserve prices, but ranks advertisersbased only on the portion of bids that exceeds the reserve price, multiplied by the quality score: (bi− r)qi.Anchoring is interesting because (as we show in Section 3) it is the optimal auction for some very simplesettings; we also found that it performed well in settings where it was not provably optimal.5.2.1 ModelFor this chapter, we focus on Varian’s model (V) [103], which is one of the most widely studied models,particularly among researchers studying revenue optimization.3 In this model, a setting is specified by a4-tuple 〈N,v,q,α〉: N is the set of agents (numbered 1, . . . ,n); vi specifies agent i’s value for a single click; qispecifies agent i’s quality score, which is equal to the probability that i’s ad would receive a click if shown inthe top position (qi > 0); and α j is the probability that an agent with quality of 1.0 will receive a click inposition j. 
Observe that α1 = 1.0; we further assume that α is decreasing, meaning that higher positions get more clicks. Thus, if agent i's ad is shown in position j with a price of p per click, then her expected payoff is αj qi (vi − p).

1 Note: we assume that advertisers who do not bid above their minimum bid do not affect the outcome in any way. Particularly, this means that an advertiser who bids below her minimum bid does not affect the prices that other agents must pay.
2 Roberts et al., studying much the same problem as this chapter, independently invented the same auction design [88].
3 Among the models considered in the previous chapter, V is the most important for this analysis. EOS [30] is unsuitable, because it lacks quality scores. This lack means that we cannot compare many GSP variants that differ in how they handle quality scores. The other models include multi-parameter agents (BHN and BSS) or externalities (cascade, hybrid and GIM). Although these features do not prevent computational analysis using our methods (as in Section 5.4), they do rule out more conventional methods (such as those in Section 5.5). Perhaps because these models are harder to analyze, there is also essentially no literature on revenue optimization for them. Revenue optimization in richer models remains an important open problem where CMA can make a substantial contribution.

As in Chapter 4 and in most of the literature on GSP [e.g., 30, 103], we assume that the setting is common knowledge to all the advertisers. This assumption is motivated by the fact that in practice, advertisers can learn a great deal about each other through repeated interactions. We assume that the search engine is able to directly observe and condition on the number of bidders and their quality scores. As is common in the literature on revenue optimization (going back to [75] and continuing in work related to GSP [81, 97]), we further assume that details of the setting (v, q, α) are drawn from a distribution which is known to the auctioneer. Thus while the search engine is unable to choose a reserve price conditional on the advertisers' valuations, it can base this decision on the distribution from which these values are drawn. We further assume that the auctioneer can observe and condition on the number of agents, and on each agent's quality score. To understand this last assumption, note that the search engine directly observes every click on an ad. Thus, even if qi begins as private information, it is impossible for an advertiser to misreport this value. This assumption is a key distinction between our work and previous work on revenue optimization (most notably, [97]).

5.2.2 Related Work

Revenue has been a major consideration since the earliest equilibrium analysis of vanilla GSP. Foundational research in the area [30, 103] analyzed a specific equilibrium refinement, under which they found that GSP was at least as good at generating revenue as VCG. Subsequent research has shown that VCG (and therefore GSP) achieves revenue close to that of the optimal auction [25]—though in practice, this could mean that search engines leave billions of dollars on the table. Other work has looked into general Nash equilibria (without the refinements mentioned earlier) and found that GSP has many equilibria, ranging from some much worse than VCG [98] to others that are significantly better [64, 98].

Lahaie and Pennock [60] first introduced the concept of squashing.
While squashing behaves similarly to the virtual values of Myerson, in that it tends to promote weak bidders, they proved that no squashing scheme (or indeed any other manipulation of quality scores) can yield an optimal auction. (Our Figure 5.1 gives some visual intuition as to why this is true.) They also performed substantial simulation experiments, demonstrating the effectiveness of squashing as a means of sacrificing efficiency for revenue. Recently—and departing from the model we consider in our own work—it has been shown that squashing can improve efficiency when quality scores are noisy [59].

Reserve prices have been used in GSP auctions for years, but our theoretical understanding of their effects is still relatively limited. Since weighted GSP has the same outcome as VCG in many analyses (e.g., [1] and the Bayesian analyses of [103] and [41]), one could infer that weighted GSP with a weighted reserve would have the same outcome as VCG with an unweighted reserve, which often corresponds to the optimal auction of [75]. [81] showed that GSP with weighted reserves is not quite equivalent to the optimal auction in the case of asymmetric bidders. Recently, [97] showed that GSP with weighted reserves is optimal when the quality score is part of the advertisers' private information. However, the question of how to optimize revenue in the arguably more realistic case where quality scores are known to the auctioneer remains open. Despite the fact that squashing and reserve prices have been used together in practice, we are aware of no studies of how they interact. Further, little is known about how equilibrium selection affects the revenue of GSP with reserves or squashing.

5.3 First Analysis: Single-Slot Auctions with Known Quality Scores

For our first analysis of the problem of revenue optimization in GSP, we consider an extremely restricted case—selling a single slot to advertisers with independent, identically distributed per-click valuations, but known quality scores. We restrict ourselves to a single slot because it allows us to rely directly upon Myerson's characterization to identify the optimal auction. Observe that our use of Varian's model in this setting is less restrictive than it might appear: richer models such as cascade [e.g., 36, 56] and position preferences [e.g., 9, 11] all collapse to Varian's model in the single-slot case.

First, we consider the problem of which kind of reserve prices are optimal.

Proposition 12. The optimal auction uses the same per-click reserve price for all bidders in any one-position setting for which all agents' per-click valuations (v) are independently drawn from a common, regular distribution g.

Proof. First, observe that although per-click valuations are identically distributed in this setting, agents' per-impression valuations (denoted V) are not. If an agent i's per-click valuation is vi, then her per-impression valuation (given qi) is qivi = Vi. Because the auctioneer is effectively selling impressions, it is the latter value that matters. Let fi and Fi denote the probability density function and cumulative distribution function of Vi. As was shown by Myerson [75], when f is regular the optimal auction allocates by virtual values ψ:

    ψi(Vi) = Vi − (1 − Fi(Vi)) / fi(Vi).    (5.1)

For any per-click valuation distribution g, we can identify the per-impression valuation distribution for an agent with quality qi: f(qivi) = g(vi)/qi and F(qivi) = G(vi). Substituting these into (5.1) gives

    ψi(qivi) = qivi − (1 − G(vi)) / (g(vi)/qi) = qi (vi − (1 − G(vi))/g(vi)).    (5.2)

The value vi that makes this expression equal to zero is independent of qi, and so the optimal per-click reserve is independent of qi.

Auction      Revenue   Parameter(s)
VCG/GSP      0.208     —
Squashing    0.255     s = 0.19
QWR          0.279     r = 0.375
UWR          0.316     r = 0.549
QWR+Sq       0.321     r = 0.472, s = 0.24
UWR+Sq       0.322     r = 0.505, s = 0.32
Anchoring    0.323     r = 0.5

Table 5.1: Comparing GSP variants for two bidders with q1 = 1, q2 = 1/2, and v1, v2 ∼ U(0,1).

Next, we consider a simple value distribution: the uniform one. Because this case has such a simple functional form, it is easy to identify the optimal auction for such bidders. In fact, the optimal auction for the uniform distribution is precisely the anchoring rule described earlier.

Proposition 13. The anchoring GSP auction is optimal in any one-position setting for which (1) all the agents' per-click valuations (v) are independently drawn from a uniform distribution on [0, v] (hereafter U(0, v)), and (2) each agent i's quality score qi is known to the auctioneer.

Proof. For valuations from U(0, x), f(v) = 1/x and F(v) = v/x. Note that for every agent i, x corresponds to i's maximum possible per-impression valuation qiv. Substituting these into Equation (5.1) gives

    ψi(qivi) = qivi − (1 − qivi/(qiv)) / (1/(qiv)) = qi (2vi − v).    (5.3)

Thus, the optimal per-click reserve price r∗i satisfies ψi(qir∗i) = 0; in this case, r∗i = v/2. In the optimal auction, advertisers are ranked by ψ(qivi) = qi(2vi − v) ∝ qi(vi − r∗i), and so the anchoring auction is optimal.

However, not all value distributions give rise to optimal auctions with simple (e.g., linear) forms. Consider the log-normal distribution, which some researchers have argued is a good model of real-world bidder valuations [60, 81]. The optimal auction for log-normal distributions uses unweighted reserve prices, and behaves similarly to anchoring when bids are close to the reserve. However, far from the reserves, the optimal auction's allocation more closely resembles vanilla GSP. (See Figure 5.1 for a visualization of the optimal auction for uniform valuations and Figure 5.4 for a visualization of the optimal auction for log-normal valuations.)

The uniform distribution also makes it easy to calculate the optimal auction's expected revenue. We considered the case of two bidders, one with high quality (q1 = 1) and one with lower quality (q2 = 1/2), and calculated the optimal parameter settings for each of the GSP variants defined above (see Table 5.1).4 While anchoring (of course) generated the most revenue, other mechanisms could also be configured to achieve very nearly optimal revenues: reserves and squashing used together were ∼99% optimal, and UWR was ∼98% optimal. Squashing and QWR were far behind (at ∼79% and ∼83% respectively).

4 Because of the simplicity of the model, we could calculate the expected revenues numerically (to 10−5 accuracy). To find the optimal parameter settings, we used grid search with increments of 0.001 for r and increments of 0.01 for s.

Figure 5.1: Allocation functions visualized (regions in which agent 1 wins are dark green; regions where agent 2 wins are yellow). Left: the efficient (VCG) auction; middle: the revenue-optimal auction; right: the optimal squashing auction. Note that squashing changes the slope of the dividing line, while the optimal auction transposes it without changing the slope. (Specifically, the optimal auction "anchors" the dividing line to the point where every agent bids his reserve price.) This provides intuition for the result by [60] that no combination of squashing and reserve prices can implement the optimal auction.

We can gain insight into the GSP variants' similarities and differences by visualizing their allocation functions, for the same setting with two bidders with uniformly distributed valuations and different quality scores (see Figures 5.1–5.3). In each case, the x and y axes correspond to the per-click valuations of agents 1 and 2 respectively. The green region, yellow region and white region respectively indicate joint values for which agent 1 wins, agent 2 wins, and neither agent wins. The dashed line indicates the dividing line of the efficient allocation, with agent 1 winning below the line and agent 2 winning above it.

In summary, this section considered single-slot auctions with i.i.d. per-click values, and obtained the following main findings:
1. the optimal auction uses unweighted reserve prices;
2. when values are uniform, anchoring GSP is optimal;
3. in a very restricted uniform setting, the richer mechanisms (anchoring, QWR+sq, and UWR+sq) achieve approximately equal revenue when optimized, and are slightly better than UWR, which is better than QWR and squashing.

Figure 5.2: Left: the optimal UWR auction; right: the optimal QWR auction. Note that the optimal unweighted reserve is higher than the reserve used by the optimal auction (0.588 rather than 0.5), and that agent 2 only wins when agent 1 does not meet his reserve, because 1's quality is much higher. Also, note the compromise involved in quality-weighted reserve prices: because the reserve prices must correspond to a point on the efficient dividing line, obtaining a reasonable reserve for agent 1 (relative to the optimal auction) results in a much-too-high reserve for agent 2. Because of this compromise, QWR generates ∼13% less revenue than UWR. Note that whenever agent 1 exceeds his reserve price in UWR, he wins regardless of agent 2's bid. For multi-slot UWR auctions, this can have an unexpected side effect: it can be impossible for a high-quality advertiser to win the second position.

Figure 5.3: Left: the UWR+Sq auction; right: the QWR+Sq auction. Both have reserves that are much closer to the reserves of the optimal auction (and both are within ∼1% of revenue-optimal), but both use substantial squashing (0.2 and 0.3 respectively, where squashing of 0 indicates completely disregarding quality scores).

Figure 5.4: The optimal auction for log-normal valuations (µ = 0 and σ = 0.25), plotting valuations up to the 99.9th percentile. This auction resembles anchoring (shown with the solid black line) when values are below or near the reserve price. As values get further from the reserve, the allocation tends towards the efficient auction.

5.4 Second Analysis: Multiple Slots, All Pure Nash Equilibria

As mentioned earlier, it is widely known that GSP has multiple Nash equilibria that can yield substantially different revenue and social welfare.
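(As an aside, the single-slot calculations of the previous section are easy to reproduce numerically. The following short sketch is our own illustration, assuming only that NumPy is available; it evaluates the virtual value of Equation (5.1) for uniform per-click values and confirms Propositions 12 and 13: the optimal per-click reserve is v/2 regardless of the quality score, and ranking by virtual value coincides with anchoring's ranking rule.)

```python
import numpy as np

def virtual_value(v, q, vbar=1.0):
    """psi_i(q*v) for per-click values v ~ U(0, vbar) and quality score q."""
    # Per-impression value V = q*v has F(V) = V/(q*vbar) and f(V) = 1/(q*vbar),
    # so psi(V) = V - (1 - F(V)) / f(V), which simplifies to q*(2v - vbar).
    V = q * v
    return V - (1.0 - V / (q * vbar)) * (q * vbar)

vs = np.linspace(0.0, 1.0, 100001)
for q in (1.0, 0.5, 0.1):
    psi = virtual_value(vs, q)
    reserve = vs[np.argmax(psi >= 0.0)]   # smallest per-click value with psi >= 0
    print(f"q = {q:.1f}: optimal per-click reserve ~ {reserve:.3f}")
# All three lines print ~0.500, and psi = q*(2v - 1) is proportional to
# q*(v - 0.5), i.e., the anchoring auction's ranking score with r = 0.5.
```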
For our second analysis, we investigated how equilibrium selection affects GSP and the six GSP variants by directly calculating their pure strategy Nash equilibria. We use the wGSP encoding algorithm from Chapter 4 to represent these auction games as AGGs, and AGG-SEM (from Chapter 3) to enumerate all the PSNEs of each game.5

5 As is common in worst-case analysis of equilibria of auctions [e.g., 12, 92], we assume that bidders are conservative, i.e., no bidder ever follows the weakly dominated strategy of bidding more than his valuation. Without this assumption, many implausible equilibria are possible; e.g., even single-good Vickrey auctions have equilibria that are unboundedly far from efficient, and unboundedly far from the revenue of truthful bidding.

For this set of experiments we used a uniform distribution6 over settings: drawing each agent's valuation from U(0,25), each agent's quality score from U(0,1), and αk+1 from U(0,αk). We generated 100 5-bidder, 5-slot settings. (Observe that our use of reserves implies that not all ads will be shown for every realization of bidder values.) Every auction has a minimum bid of 1 bid increment per click, with ties broken randomly and prices rounded up to the next whole increment. We used grid search to explore the space of possible parameter settings. We varied reserve prices between 0.1 and 1 in steps of 0.1 (this range only affects low-quality advertisers and only in QWR), and between 1 and 25 in steps of 2. We varied squashing power between 0 and 1 in steps of 0.2.

6 We could not get meaningful results for log-normal distributions. In a log-normal distribution, a large fraction of the expected revenue is contributed by a small fraction of instances involving bidders with exceptionally high quality scores and valuations. Thus, accurate expected-revenue estimates require many more samples than we could practically generate.

As with our previous two-bidder, one-slot analysis, we used grid search to explore the space of possible reserve prices and squashing factors. Specifically, we computed all pure-strategy Nash equilibria of every one of our 100 perfect-information auction settings, and every discretized setting of each GSP variant's parameter(s). For each variant we identified the parameter settings that maximized: (i) the revenue of the worst-case equilibrium, averaged across settings; and (ii) the revenue of the best-case equilibrium, again averaged across settings.

Worst-case equilibrium:
Auction       Revenue   Parameter(s)
Vanilla GSP   3.814     —
Squashing     4.247     s = 0.4
QWR           9.369     r = 9.0
Anchoring     10.212    r = 13.0
QWR+Sq        10.217    r = 15.0, s = 0.2
UWR           11.024    r = 15.0
UWR+Sq        11.032    r = 15.0, s = 0.6

Best-case equilibrium:
Auction       Revenue   Parameter(s)
Vanilla GSP   9.911     —
QWR           10.820    r = 5.0
Squashing     11.534    s = 0.2
UWR           11.686    r = 11.0
Anchoring     12.464    r = 11.0
QWR+Sq        12.627    r = 7.0, s = 0.2
UWR+Sq        12.745    r = 9.0, s = 0.2

Table 5.2: Comparing the various auction variants given their optimal parameter settings (left: worst-case equilibrium; right: best-case equilibrium). Bold indicates variants that are significantly better than all other variants, but not significantly different from each other (based on p ≤ 0.05 with Bonferroni correction).

Broadly, we found that every reserve price scheme dramatically improved worst-case revenue, though UWR was particularly effective. Squashing did not help appreciably with the revenue of worst-case equilibria (see Figure 5.5). Comparing the mechanisms (Table 5.2), we found that UWR and UWR+Sq were among the best and were dramatically better than any other mechanism in terms of worst-case equilibria. Also, we noticed that optimizing for worst-case equilibria consistently yielded higher reserve prices than optimizing for best-case equilibria.

In summary, for this experiment we enumerated the equilibria of perfect-information, 5-slot, 5-bidder ad auction settings with independent, uniform valuations, and found that:
1. There is a huge gap between best- and worst-case equilibria (over 2.5× for vanilla GSP). Squashing does not help to close this gap, but reserve prices do.
2. The optimal reserve price is much higher (for any GSP variant) when optimizing worst-case revenue than when optimizing best-case revenue.
3. When considering best-case equilibria, the revenue ranking remains roughly the same as in our simple 2-bidder, 1-slot analysis (Anchoring ≈ QWR+sq ≈ UWR+sq > UWR > Squashing > QWR). When considering the worst-case equilibria, the revenue ranking changes slightly (UWR ≈ UWR+sq > Anchoring ≈ QWR+sq > QWR > Squashing).

Figure 5.5: Depending on parameter settings, revenue can vary dramatically between worst- and best-case equilibria. Squashing has almost no effect on worst-case equilibrium, while any reserve price scheme can substantially improve it. (Top left: squashing; top right: anchoring; middle left: UWR; middle right: QWR; bottom left: UWR+sq (s = 0.2); bottom right: QWR+sq (s = 0.2).)

Figure 5.6: It is easy to compute incentive compatible pricing for a position auction. For every marginal increase in click probability that an agent gains by moving up a position, she must pay that probability times the minimum she must bid to be shown that position. Thus, her payment must be equal to the shaded area.

5.5 Third Analysis: Multiple Slots, Equilibrium Refinement

For our third analysis, we again considered multiple-slot settings. In this case we solved the equilibrium selection problem by considering the perfect-information Nash equilibrium in which each agent's expected payment is equal to what she would pay in a dominant-strategy truthful mechanism with the same allocation function as the corresponding GSP variant. This refinement has been used extensively in the analysis of vanilla GSP, where it is the unique Nash equilibrium equivalent to VCG's truthful equilibrium (i.e., one that chooses the same outcome and charges the same expected payments). When applied to vanilla GSP, this equilibrium has a number of desirable properties:
• It is guaranteed to exist (provided that bids are continuous) and is computable in polynomial time [1].
• The outcome is a competitive, symmetric and envy-free equilibrium [30, 103].
• The equilibrium is impersonation-proof [53].
• It does not violate the non-contradiction criterion (i.e., we should not be interested in equilibria of the perfect-information game that generate more expected revenue than the optimal auction) [29].

This equilibrium refinement can also be applied to other GSP variants ([60] used it to analyze squashing, and [29] used it to analyze reserve prices).
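The payment rule illustrated in Figure 5.6 is also easy to state in code. The sketch below is our own rendering, not the implementation of [1]: given the agent's click probabilities at her position and at every position below it, together with the minimum per-click bid needed to hold each of those positions, her truthful expected payment is the sum, over positions, of the marginal clicks gained by moving up, times the minimum bid for that position.

```python
def truthful_payment(click_probs, min_bids):
    """Expected per-impression payment (the shaded area of Figure 5.6).

    click_probs: the agent's click probabilities at her own position and at each
                 position below it, in decreasing order (alpha_j * q_i, ...).
    min_bids:    the minimum per-click bid needed to be shown in each of those
                 positions (same order as click_probs).
    """
    probs = list(click_probs) + [0.0]            # below the last slot: no clicks
    payment = 0.0
    for k, m in enumerate(min_bids):
        marginal_clicks = probs[k] - probs[k + 1]
        payment += marginal_clicks * m
    return payment

# Example: quality 0.8, alpha = (1.0, 0.6, 0.3), and minimum bids (4.0, 2.5, 1.0)
# for the three slots at or below the agent's position.
q = 0.8
clicks = [1.0 * q, 0.6 * q, 0.3 * q]
print(truthful_payment(clicks, [4.0, 2.5, 1.0]))  # 0.32*4.0 + 0.24*2.5 + 0.24*1.0 = 2.12
```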
This equilibrium is guaranteed to exist for all of our GSP variantsbecause they are all monotonic (i.e., increasing an agent’s bid weakly increases his position and therefore hisexpected number of clicks).Although we focus on this equilibrium for mainly economic reasons, this choice also has computationaladvantages. Specifically, it is possible to compute the payments very quickly using the algorithm of Aggarwalet al [1]. (See Figure 5.6.)105For the experiments presented in this section, we considered two distributions:• Uniform, where each agent’s valuation is drawn from U(0,25), each agent’s quality score is drawnfrom U(0,1), and αk+1 is drawn from U(0,αk) (as in Section 5.4);• Log-normal, where each agent’s valuation and quality are drawn from log-normal distributions, andvaluation and quality are positively correlated using a Gaussian copula [77]. The exact parameterswere provided to us by Se´bastien Lahaie of Yahoo! Research, who derived them from confidential biddata. (To maintain confidentiality, the bids were scaled by arbitrary positive constants. Thus, the bids,prices and revenue in our results are not in any standard unit of currency.) Although we cannot disclosethem, we can say that the distribution is a refinement of the distribution studied in [60].From the uniform distribution, we sampled 1000 5-bidder, 5-slot settings; for the log-normal distribution wesampled 10000 5-bidder, 5-slot settings.7 To explore auction parameters, we used a simple grid search. Wevaried reserve prices between 0 and 30 in steps of 2, between 30 and 100 in steps of 10, between 100 to 1000in steps of 100, and from 1000 to 10000 in steps of 1000. We varied squashing power between 0 and 1 insteps of 0.25. Our objective was expected revenue averaged across all samples.We began by investigating the optimal parameter settings for each mechanism. For squashing, the optimalsquashing power was 0 for log-normal distributions (as was also observed in previous work [60]), but greaterthan zero for uniform distributions, where squashing was also somewhat less effective (see Figure 5.7). UWRand anchoring had similar optimal reserve prices, which were dramatically different from QWR’s optimalreserve (see Figure 5.8). Adding squashing to UWR produced some improvements and had little effect on theoptimal reserve price (see Figure 5.9). QWR greatly benefited from squashing, but the optimal reserve pricewas extremely sensitive to the squashing parameter (see Figure 5.10).We then compared GSP variants (summarized in Table 5.3). We found that among simple variants UWR wasclearly superior to both squashing and QWR. The richer variants—anchoring, QWR+Sq and UWR+Sq—allperformed comparably well (within ∼ 2% of each other), though QWR+Sq was consistently the worst.Interestingly, QWR+Sq was only competitive with the other top mechanisms when squashing power was setclose to zero. Observe that squashing has two effects when added to QWR: it changes the ranking amongbidders who exceed their reserve prices, but it also changes the reserve prices. As squashing power gets closerto zero, reserve prices tend towards UWR. We hypothesized that QWR+Sq only performed as well as it didbecause of this second effect. To tease apart these two properties, we tested them in isolation. 
Specifically, we tested (1) a GSP variant in which reserve prices were weighted by squashed quality but ranking among reserve-exceeding bidders was performed according to their true quality scores, and (2) a GSP variant in which reserve prices were weighted by true quality scores but rankings were done using squashed quality scores.

7 A substantial fraction of the expected revenue in log-normal settings comes from rare, high-valuation bidders. Thus, the expected revenue had much higher variance than in the uniform case; we thus needed many more samples to reduce noise and obtain statistically significant results.

Figure 5.7: The optimal squashing power is zero or close to it. Squashing offers modest revenue gains given a uniform value distribution (left) and substantial gains given a log-normal value distribution (right).

Figure 5.8: All three reserve-based variants (anchoring, QWR and UWR) provide substantial revenue gains. Anchoring is slightly better than UWR, and both are substantially better than QWR. (Left: uniform; right: log-normal.)

Figure 5.9: Adding squashing to UWR provides modest marginal improvements (compared to the optimal unweighted reserve price with no squashing) and does not substantially affect the optimal reserve price. (Left: uniform; right: log-normal.)

Figure 5.10: Adding squashing to QWR provides dramatic improvements. However, the higher the squashing power, the less the reserve prices are actually weighted by quality. In the case of a log-normal value distribution, the optimal parameter setting (s = 0.0) removes quality scores entirely and is thus equivalent to UWR. Note that different values of squashing power lead to dramatically different optimal reserve prices. (Left: uniform; right: log-normal.)

Uniform distribution:
Auction       Revenue   Parameter(s)
Vanilla GSP   7.737     —
Squashing     9.123     s = 0.25
QWR           10.598    r = 8.0
UWR           12.026    r = 14.0
QWR+Sq        12.046    r = 12.0, s = 0.25
UWR+Sq        12.220    r = 12.0, s = 0.25
Anchoring     12.279    r = 12.0

Log-normal distribution:
Auction       Revenue   Parameter(s)
Vanilla GSP   20.454    —
QWR           48.071    r = 400.0
Squashing     53.349    s = 0.0
QWR+Sq        79.208    r = 20.0, s = 0.0
UWR           80.050    r = 20.0
Anchoring     80.156    r = 20.0
UWR+Sq        81.098    r = 20.0, s = 0.5

Table 5.3: Comparing the various auction variants given their optimal parameter settings (left: uniform distribution; right: log-normal distribution). Note that in this analysis, vanilla GSP is equivalent to VCG. Bold indicates variants that are significantly better than all other variants, but not significantly different from each other (p ≤ 0.05 with Bonferroni correction).

Figure 5.11: When squashing is only applied to reserve prices, it can dramatically increase QWR's revenue. However, there has to be a lot of squashing (i.e., s close to 0), and the optimal reserve price is very dependent on the squashing power. In fact, for both distributions, the optimal parameters set s = 0, in which case the mechanism is identical to UWR.

These experiments confirmed our hypothesis: the first variant (Figure 5.11)—which uses squashing to make the mechanism behave more like UWR—was much more effective at increasing revenue than the second (Figure 5.12).

Next, we investigated the effect of varying the number of bidders. Broadly, we found that the ranking among auctions remained consistent (see Figure 5.13). The optimal reserve price tended to increase with the number of bidders, particularly in the case of UWR. We found this especially interesting because a heuristic often described in the literature is to take Myerson's optimal reserve prices and add them to GSP [e.g., 81]. Our results show that Myerson's finding that the optimal reserve price does not vary with the number of bidders is only a property of the optimal auction, not of other auctions such as these GSP variants.

Figure 5.12: When squashing is only applied to ranking, but not to the reserve prices, the marginal gains from squashing over QWR (with the optimal reserve) are very small.

Figure 5.13: As the number of agents varies, the relative ranking of GSP variants remains mostly unchanged. The one notable exception is squashing and QWR: their ranking can be reversed depending on the distribution and number of bidders.

In summary, this section considered the equilibria of multi-slot ad auctions which are revenue equivalent to the truthful equilibria of corresponding dominant-strategy mechanisms with the same allocation rules, and considered both uniform and log-normal valuations. Our main conclusions were that:
1. As in the previous analyses, the mechanisms best able to optimize revenue were Anchoring, QWR+sq, UWR+sq and UWR; either squashing or QWR took fifth place, with the other in sixth. Within these groups, the exact ranking depended on the distribution and the number of bidders.
2. QWR+sq only performed as well as it did when squashing was configured to make its reserve prices behave like UWR. When squashing is applied to the allocation in QWR, but not to the reserve prices, very little revenue improvement was possible.

5.6 Conclusions and Future Work

This chapter demonstrates how computational mechanism analysis can enable computational mechanism design. Despite the fact that we only explored a small space of mechanisms, we were able to discover new and important things about revenue-optimizing position auctions. First, we observed that the highest revenues were produced by mechanisms that (directly or effectively) use unweighted per-click reserve prices, an observation that we were able to replicate with two more-conventional methods of analysis. Second, we found that any kind of reserve price dramatically improved worst-case-equilibrium revenue, while vanilla GSP and squashing were very sensitive to equilibrium selection. However, we found that squashing could provide extra benefits when used in conjunction with UWR or QWR. This finding could not be made with existing techniques; CMA was necessary to explore the full space of equilibria.
Third, our experimentalfindings also inspired new theoretical insights, including a characterization of the optimal mechanism forsome settings (a novel auction called anchoring that is also nearly-optimal in settings where it is not optimal)and a characterization of the optimal reserve prices for a wide range of settings.Our rather robust conclusion that UWR achieves higher revenues than QWR raises the question of whyGoogle and Yahoo! both made the transition from unweighted to quality-weighted reserves. One likelyexplanation is that search engines do not aim to optimize their short-term revenue, but instead optimizelong-term revenue via other short term objectives such as efficiency, user satisfaction, revenue under aconstraint that ads are costly to show, etc. Other possibilities are that search engines are motivated by otherbusiness considerations entirely,8 that they have simply acted in error, or that our findings expose a flaw in thestandard model of position auctions. Finally, it is possible that the premise of our question is wrong: perhapssearch engines do not in fact use QWR, but instead use some other (secret) approach to setting reserves.We believe that the most pressing open problem stemming from our work is to attempt to resolve these8[81] report that they were explicitly instructed to use weighted reserve prices in their experiments because these are moreconsistent with an otherwise weighted auction and are perceived to be more fair by bidders.111questions by examining richer models that allow short-term revenue to be contrasted with longer-term revenue.Considering short-term revenue, we conjecture that in the field experiment of [81], where Yahoo! increasedrevenue by increasing reserve prices and simultaneously switching to QWR, the revenue increases wouldhave been even greater if Yahoo! had retained optimized reserve prices but maintained UWR. To analyzelonger-term revenue, a richer model could include quality and click probability that are determined by theadvertiser’s choice of ad text, rather than being exogenous. In equilibrium, this choice must be a best responseto the rules of the auction and the choices of the other agents. For example, consider the problem of anadvertiser who has two choices of ad text. One choice will yield 1000 clicks per hour, leading to 11 sales perhour. The other choice yields 10 clicks per hour, but every click produces a sale. With weighted reserve prices(and no squashing) the advertiser will always choose the first text, since it produces more sales per hour forthe same price. With appropriate quality-weighted reserve prices (or squashing), the advertiser would chosethe second, which generates nearly as many sales, and requires him to pay the reserve price far less often. Itis not immediately clear which text the search engine should prefer: the first satisfies more users, but alsowastes the time of many users who click through but do not buy.112Chapter 6Application: Strategic Voting in Elections6.1 IntroductionThis chapter shows a dramatically different application for computational mechanism analysis: the study ofstrategic voting in elections. As with the auction applications we have discussed so far, the kinds of votingmechanisms that are used in practice tend to be very simple and structured in ways that make them amenableto compact representations (e.g., anonymity is a common feature). 
Unlike auctions, incentive compatiblemechanisms are nearly non-existent (and those that do exist are so undesirable as to almost never be used inpractice1), so researchers tend to focus on mechanisms that are not incentive compatible.Since voters may have an incentive vote strategically to influence an election’s results, according to theirknowledge or perceptions of others’ preferences, much research has considered ways of limiting manipulation.This can be done by exploiting the computability limits of manipulations (e.g., finding voting mechanisms forwhich computing a beneficial manipulation is NP-hard [7, 8, 110]), by limiting the range of preferences (e.g.,if preferences are single peaked, there exist non-manipulable mechanisms [31]), randomization [38, 85], etc.When studying the problem of vote manipulation, nearly all research falls into two categories: coalitionalmanipulation and equilibrium analysis. Much research into coalitional manipulation considers models inwhich a group of truthful voters faces a group of manipulators who share a common goal. Less attentionhas been given to Nash equilibrium analysis which models the (arguably more realistic) situation where allvoters are potential manipulators. One reason is that it is difficult to make crisp statements about this problem:strategic voting scenarios give rise to a multitude of Nash equilibria, many of which involve implausibleoutcomes. For example, even a candidate who is ranked last by all voters can be unanimously elected in a1A simple majority vote is incentive compatible, provided there are only two candidates. For settings with more than twocandidates, the Gibbard-Satterthwaite theorem [37, 94] shows that every incentive compatible mechanism either 1) is a dictatorshipwhere only a single agent’s vote matters, or 2) is not onto, in the sense that it disqualifies all but at most two candidates a priori.113Nash equilibrium—observe that when facing this strategy profile, no voter gains from changing his vote.Another problem is that finding even a single Nash equilibrium of a game can be computationally expensive,and plurality votes can have exponentially many equilibria (in the number of voters).Given the tools of CMA, this chapter naturally focuses on the second approach, analysis using the Nash (andsubsequently, Bayes-Nash) equilibria of voting games. We focus on plurality, as it is by far the most commonvoting mechanism used in practice. We refine the set of equilibria by adding a small additional assumption:that agents realize a very small gain in utility from voting truthfully; we call this restriction a truthfulnessincentive. We ensure that this incentive is small enough that it is always overwhelmed by the opportunityto be pivotal between any two candidates: that is, a voter always has a greater preference for swinging anelection in the direction of his preference than for voting truthfully. All the same, this restriction is powerfulenough to rule out the bad equilibrium described above, as well as being, in our view, a good model of reality,as voters might reasonably have a preference for voting truthfully.The computational approach has not previously been used in the literature on strategic voting, because theresulting normal-form games are enormous. For example, representing some of our games (e.g., with 20players and 3 candidates) in the normal form would require billions of payoff-table entries (20× 320 '6.97×1010).Our first contribution is an equilibrium analysis of full-information models of plurality elections. 
We analyzethe number of Nash equilibria that exist when truthfulness incentives are present. We also examine thewinners, asking questions like how often they also win the election in which all voters vote truthfully, orhow often they are also Condorcet winners. We also investigate the social welfare of equilibria; for example,we find that it is very uncommon for the worst-case result to occur in equilibrium. Our approach can begeneralized to richer mechanisms where agents vote for multiple candidates (i.e., approval, k-approval, andveto).Our second contribution involves the arguably more realistic scenario in which the information availableto voters is incomplete. We assume that voters know only a probability distribution over the preferenceorders of others, and hence identify Bayes-Nash equilibria. We found that although the truthfulness incentiveeliminates the most implausible equilibria (i.e., where the vote is unanimous and completely independentof the voters preferences), many other equilibria remain. Similarly to Duverger’s law (which claims thatplurality election systems favor a two-party result [28], but does not directly apply to our setting), we foundthat a close race between almost any pair of candidates was possible in equilibrium. Equilibria supportingthree or more candidates were possible, but less common.1146.1.1 Related WorkAnalyzing equilibria in voting scenarios has been the subject of much work, with many researchers proposingvarious frameworks with limits and presumptions to deal with both the sheer number of equilibria, and todeal with more realistic situations, where there is limited information. Early work in this area, by McKelveyand Wendell [67], allowed for abstention, and defined an equilibrium as one with a Condorcet winner. As thisis a very strong requirement, such an equilibrium does not always exist, but they established some criteria forthis equilibrium that depend on voters’ utilities.Myerson and Weber [76] wrote an influential article dealing with the Nash equilibria of voting games. Theirmodel assumes that players only know the probability of a tie occurring between each pair of players, and thatplayers may abstain (for which they have a slight preference). They show that multiple equilibria exist, andnote problems with Nash equilibrium as a solution concept in this setting. The model was further studied andexpanded in subsequent research [21, 54]. Assuming a slightly different model, Messner and Polborn [70],dealing with perturbations (i.e., the possibility that the recorded vote will be different than intended), showedthat equilibria only includes two candidates (Duverger’s law). Our results, using a different model of partialinformation (Bayes-Nash), show that with the truthfulness incentive, there is a certain tendency towards suchequilibria, but it is far from universal.Looking at iterative processes makes handling the complexity of considering all players as manipulatorssimpler. Dhillon and Lockwood [23] dealt with the large number of equilibria by using an iterative processthat eliminates weakly dominated strategies (a requirement also in Feddersen and Pesendorfer’s definition ofequilibrium [32]), and showed criteria for an election to result in a single winner via this process. Using adifferent process, Meir et al. 
[69] and Lev and Rosenschein [62] used an iterative process to reach a Nashequilibrium, allowing players to change their strategies after an initial vote with the aim of myopicallymaximizing utility at each stage.Dealing more specifically with the case of abstentions, Desmedt and Elkind [22] examined both a Nashequilibrium (with complete information of others’ preferences) and an iterative voting protocol, in whichevery voter is aware of the behavior of previous voters (a model somewhat similar to that considered by Xiaand Contizer [109]). Their model assumes that voting has a positive cost, which encourages voters to abstain;this is similar in spirit to our model’s incentive for voting truthfully, although in this case voters are driven towithdraw from the mechanism rather than to participate. However, their results in the simultaneous vote aresensitive to their specific model’s properties.Rewarding truthfulness with a small utility has been used in some research, though not in our settings. Laslierand Weibull [61] encouraged truthfulness by inserting a small amount of randomness to jury-type games,resulting in a unique truthful equilibrium. A more general result has been shown in Dutta and Sen [27], wherethey included a subset of participants which, as in our model, would vote truthfully if it would not change the115result. They show that in such cases, many social choice functions (those that satisfy the No Veto Power) areNash-implementable, i.e., there exists a mechanism in which Nash equilibria correspond to the voting rule.However, as they acknowledge, the mechanism is highly synthetic, and, in general, implementability does nothelp us understand voting and elections, as these involve a predetermined mechanism. The work of Dutta andLaslier [26] is more similar to our approach. They use a model where voters have a lexicographic preferencefor truthfulness, and study more realistic mechanisms. They demonstrated that in plurality elections withodd numbers of voters, this preference for truthfulness can eliminate all pure-strategy Nash equilibria. Theyalso studied a mechanism strategically equivalent to approval voting (though they used an unusual namingconvention), and found that when a Condorcet winner exists, there is always a pure-strategy Nash equilibriumwhere the Condorcet winner is elected.6.2 DefinitionsElections are made up of candidates, voters, and a mechanism to decide upon a winner.Definition 1. Let C be a set of m candidates, and let A be the set of all possible preference orders over C.Let V be a set of n voters. Every voter vi ∈V has some element in A which is his true, “real” value (which weshall mark as ai), and some element of A that he announces as his value, which we shall denote as a˜i.The voting mechanism is a function f : An→C.Note that our definition of a voter incorporates the possibility of him announcing a value different than histrue value (strategic voting).In this work, we restrict our attention to scoring rules, i.e., voting rules in which each voter assigns a certainnumber of points to each candidate, and the candidate with the most points wins. Specifically, we studyscoring rules in which each candidate can get at most 1 point from each voter. 
We focus on four mechanisms:• Plurality: A single point is given to one candidate.• Veto: A point is given to everyone except one candidate.• k-approval: A point is given to exactly k candidates.• Approval: A point is given to as many candidates as each voter chooses.Another important concept is that of a Condorcet winner.Definition 2. A Condorcet winner is a candidate c ∈C such that for every other candidate d ∈C (d 6= c) thenumber of voters that rank c over d is at least bn2c+1.Condorcet winners do not exist in every voting scenario, and many voting rules—including plurality—arenot Condorcet-consistent (i.e., even when there is a Condorcet winner, that candidate may lose).116To reason about the equilibria of voting systems, we need to formally describe them as games, and hence tomap agents’ preference relations to utility functions. More formally, each agent i must have a utility functionui : An 7→ R, where ui(aV )> ui(a′V ) indicates that i prefers the case when all the agents have voted aV overthe case when the agents vote a′V . Representing preferences as utilities rather than explicit rankings allowsfor the case where i is uncertain about what outcome will occur. This can arise either because he is uncertainabout the outcome given everyone’s actions (because of random tie-breaking rules), or because he is uncertainabout the actions the other agents will take (e.g., agents behaving randomly; agents playing strategies thatcondition on information that i does not observe). Here we assume that an agent’s utility only depends on thecandidate that gets elected and on his own actions (e.g., an agent can get some utility for voting truthfully).Thus, we obtain simpler utility functions ui : C×A 7→R, with an agent i’s preference for outcome aV denotedui( f (aV ), a˜i).In this work, we consider two models of games, full-information games and symmetric Bayesian games. Inboth models, each agent must choose an action a˜i without conditioning on any information revealed by thevoting method or by the other agents. In a full-information game, each agent has a fixed utility functionwhich is common knowledge to all the others. In a symmetric Bayesian game, each agent’s utility function(or “type”) is an independent, identically distributed draw from a commonly known distribution of the spaceof possible utility functions, and each agent must choose an action without knowing the types of the otheragents, while seeking to maximize his expected utility.We consider a plurality voting setting with voters’ preferences chosen randomly. We show detailed results forthe case of 10 voters and 5 candidates (numbers chosen to give a setting both computable and with a range ofcandidates), but we also show that changing these numbers results in qualitatively similar equilibria.Suppose voter i has a preference order of a5  a4  . . . a1, and the winner when voters voted aV is a j. Wethen define i’s utility function asui( f (aV ), a˜i) = ui(a j, a˜i) =j ai 6= a˜ij+ ε ai = a˜i,with ε = 10−6.Note that we use utilities because we need, when computing an agent’s best response, to be able to comparenearly arbitrary distributions over outcomes (e.g., for mixed strategies or Bayesian games). This is not meantto imply that utilities are transferable in this setting. Most of our equilibria would be unchanged if we movedto a different utility model, provided that the preferences were still strict, and the utility differences betweenoutcomes were large relative to ε . 
The one key distinction is that agents are more likely to be indifferent to lotteries (e.g., an agent that prefers A ≻ B ≻ C is indifferent between {A, B, C} and {B}) than under some other utility models.

As with perfect-information games, we consider Bayesian games with a fixed number of candidates (m) and voters (n). The key difference is that the agents' preferences are not ex ante common knowledge. Instead, each agent's preferences are drawn from a distribution pi : A → R. Here we consider the case of symmetric Bayesian games, where every agent's preferences are drawn independently from the same distribution, p. Due to computational limits, we cannot study games where p has full support; each agent would have 5^(5!) ≈ 7.5 × 10^83 pure strategies. Instead, we consider distributions where only a small subset of preference orders are in the support of p. We generate distributions by choosing six preference orderings, uniformly at random (this gives a more reasonable 5^6 = 15625 pure strategies). For each of these orderings a, we draw p(a) from a uniform [0,1] distribution. These probabilities are then normalized to sum to one. This restricted support only affects what preference orders the agents can have; we do not restrict agents' action sets in any way.

Note that formally the ε truthfulness incentive represents a change to the game, rather than a change in the solution concept. However, there is an equivalence between the two approaches: for any sufficiently small ε, the set of pure-strategy Nash equilibria in the game with ε truthfulness incentives is identical to the set of pure-strategy Nash equilibria (of the game without truthfulness incentives) that also satisfy the property that only the pivotal agents (i.e., agents who, were their vote to change, the outcome would change) deviate from truthfulness. The meaning of sufficiently small depends on the agents' utility functions, and on the tie-breaking rule. If u is the difference in utility between two outcomes, and t is the minimum joint probability of any type profile (in a Bayesian game), then ε must be less than ut/|C| (the 1/|C| factor comes from the fact that uniform tie-breaking can select some candidate with that probability).

6.3 Method

Encoding plurality games as action-graph games is relatively straightforward. For each set of voters with identical preferences, we create one action node for each possible way of voting. For each candidate, we create a sum node that counts how many votes the candidate receives. Directed edges encode which vote actions contribute to a candidate's score, and that every action's payoff can depend on the scores of all the candidates (see Figure 6.1). The same approach generalizes to approval-based mechanisms; if an action involves approving more than one candidate, then there must be an edge from that action node to each approved candidate's sum node. Similarly, positional scoring rules (e.g., Borda) can be encoded by using weighted sum nodes.

A variety of Nash-equilibrium-finding algorithms exist for action-graph games [19, 51]. In this work, we used the support enumeration method (see [84, 100] and Chapter 3) exclusively, because it allows Nash equilibrium enumeration. This algorithm works by iterating over possible supports, testing each for the existence of a Nash equilibrium.
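To give a flavor of what such a test involves, the toy sketch below brute-forces the pure-strategy Nash equilibria of a tiny plurality game with the ε incentive. It is only an illustration: AGG-SEM avoids this exhaustive enumeration by exploiting the action-graph structure, symmetry, and conditional dominance, and, unlike this sketch (which breaks ties deterministically for simplicity), the games we analyze break ties uniformly at random.

```python
from itertools import product

EPS = 1e-6

def winner(votes, m):
    counts = [votes.count(c) for c in range(m)]
    return counts.index(max(counts))          # ties broken by lowest index (toy choice)

def payoff(i, votes, prefs, m):
    """prefs[i][c] is voter i's utility for candidate c (higher is better)."""
    w = winner(votes, m)
    favorite = max(range(m), key=lambda c: prefs[i][c])
    return prefs[i][w] + (EPS if votes[i] == favorite else 0.0)

def is_pure_nash(votes, prefs, m):
    for i in range(len(votes)):
        current = payoff(i, votes, prefs, m)
        for dev in range(m):
            if dev != votes[i]:
                alt = list(votes)
                alt[i] = dev
                if payoff(i, tuple(alt), prefs, m) > current:
                    return False
    return True

# Three voters, three candidates; prefs[i][c] = voter i's value for candidate c.
prefs = [(3, 2, 1), (1, 3, 2), (1, 2, 3)]
psne = [v for v in product(range(3), repeat=3) if is_pure_nash(v, prefs, 3)]
print(len(psne), "pure-strategy Nash equilibria with the truthfulness incentive")
```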
In the worst case, this requires exponential time, but in practice SEM's heuristics (exploiting symmetry and conditional dominance) enable it to find all the pure-strategy Nash equilibria (PSNEs) of a game quickly.

Figure 6.1: An action-graph game encoding of a simple two-candidate plurality vote. Each round node represents an action a voter can choose. Dashed-line boxes define which actions are open to a voter given his preferences; in a Bayesian AGG, an agent's type determines the box from which he is allowed to choose his actions. Each square is a sum node, tallying the number of votes a candidate received.

We represented our symmetric Bayesian games using a Bayesian game extension to action-graph games [47]. Because we were concerned only with symmetric pure Bayes-Nash equilibria, it remained feasible to search for every equilibrium with SEM. (No pruning of supports is possible because dominance checking for BAGGs is NP-hard. Thus, SEM in this context essentially amounts to pure brute-force search.)

6.4 Pure-Strategy Nash Equilibrium Results

To examine pure strategies, we ran 1000 voting experiments using plurality with 10 voters and 5 candidates.

6.4.1 Selectiveness of the Truthfulness Incentive

The logical first question to ask about our truthfulness incentive is whether or not it is effective as a way of reducing the set of Nash equilibria to manageable sizes. As a baseline, when we did not use any equilibrium selection method, each plurality game had over a million PSNEs. A stronger baseline is the number of PSNEs that survive removal of weakly dominated strategies (RDS). RDS reduces the set by an order of magnitude, but still allows over 100,000 PSNEs per game to survive. In contrast, the truthfulness incentive reduced the number of PSNEs down to 30 or fewer, with the median game having only 3. Interestingly, a handful of games (1.1%) had no PSNEs. Laslier and Dutta [26] had shown that PSNEs were not guaranteed to exist, but only when the number of voters is odd (and at least 5). Our results show that the same phenomenon can occur, albeit infrequently, when the number of voters is even.

One of the problems with unrestricted Nash equilibria is that there are so many of them; the other problem is that they are compatible with any outcome. Given that the truthfulness incentive (TI) is so effective at reducing the set of PSNEs, one could wonder whether or not the TI is helpful for the second problem. Unfortunately, the TI had only limited effectiveness in ruling out outcomes as impossible: only 4.4% of games support exactly one outcome in equilibrium. However, nearly always (> 99% of the time) some outcomes occur more frequently than others. See Figures 6.2 and 6.3.

Figure 6.2: Even with the truthfulness incentive, many different outcomes were still possible in equilibrium.

6.4.2 Equilibrium Outcomes

With a workable equilibrium selection method, we can now consider the question of what kinds of outcomes occur in plurality. We shall examine two aspects of the results: the preponderance of equilibria whose victors are the voting method's winners, and Condorcet winners. Then, moving to the wider concept of the social welfare of the equilibria (which we can consider since we work with utility functions), we examine both the social welfare of the truthful voting rule vs.
best and worse possible Nash equilibria and the average rank ofthe winners in the various equilibria.The first issues to consider are to what extent truthful voting is an equilibrium, and to what extent the agentscancel out each other’s manipulations (i.e., when there are non-truthful Nash equilibria that lead to the sameoutcome as truthful voting). We call a candidate the “truthful winner” iff that candidate wins when voters votetruthfully. For 63% of the games, the truthful preferences were a Nash equilibrium, but more interestingly,many of the Nash equilibria reached the same result as the truthful preferences: 80% of the games had atleast one equilibrium where the truthful winner wins, and looking at the multitude of equilibria, 42% electedthe truthful winner (out of games with a truthful result as an equilibrium, the share was 52%). Without the120Least frequent Most frequentOutcome0.00.10.20.30.40.5Frequencywith TIwithout TIFigure 6.3: With the truthfulness incentive, some outcomes occur much more frequently than others.truthfulness incentive, the truthful winner only won 22% of equilibria.Next, we turn to the question of whether or not strategic voting leads to the election of good candidates,starting with Condorcet winners. 55% of games had Condorcet winners, which would be elected by truthfulvoting in 49% of the games (not a surprising result; plurality is known not to be Condorcet consistent).However, the combination of truthful voting both being an equilibrium and electing a Condorcet winner onlyoccurred in 42% of games. In contrast, a Condorcet winner could win in some Nash equilibrium in 51% ofgames (though only 36% of games would elect a Condorcet winner in every equilibrium).Turning to look at the social welfare of equilibria, once again, the existence of the truthfulness incentiveenabled us to reach “better” equilibria. In 93% of the cases, the worst-case outcome was not possible at all(recall that without the truthfulness incentive, every result is possible in some Nash equilibrium), while onlyin 30% of cases, the best outcome was not possible. While truthful voting led to the best possible outcomein 59% of cases, truthful voting was still stochastically dominated by the best-case Nash equilibrium (seeFigure 6.4).When looking at the distribution of welfare throughout the multitudes of equilibria, one can see that theconcentration of the equilibria is around high-ranking candidates, as the average share of equilibria bycandidates with an average ranking (across all voters in the election) of less than 1 was 56%. (See Figure 6.5.)Fully 72%, on average, of the winners in every experiment had above (or equal) the median rank, and in morethan half the experiments (52%) all equilibria winners had a larger score than the median. As a comparison,121Figure 3: Empirical CDF of social welfare3 Social Welfare ResultsWithout the ✏ preference for truthful voting, every outcome is always possiblein some PSNE. (This implies that the price of anarchy is unbounded, whilethe price of stability is one.) With it, the worst case-outcome is almost alwaysimpossible in PSNE (92.8%). Sometimes (29.7%) the best case outcome is alsoimpossible (29.7%). The gap between best and worst PSNEs can be very large,though both can lead to the worst-case outcome. (Thus, the price of anarchy andprice of stability are unbounded if I normalize social welfare from worst to bestoutcome. I think I need a new way of normalizing.) In the majority of games(59%), truthful voting will lead to the best possible outcome. 
the numbers from experiments without the truthfulness incentive are quite different: candidates—whatever their average rank—won, with minor fluctuations, about the same number of equilibria (57% of winners were, on average, at or above the median rank).

6.4.3 Scaling Behavior and Stability

We next varied the number of voters and candidates. Our main finding was that when the number of voters was odd, the probability that a game had no equilibria at all increased dramatically, as the truthfulness incentive caused many such situations to be unstable (see Figure 6.6). Less surprisingly, as the number of candidates increased, PSNEs were less likely to exist.

Nevertheless, these equilibria retained the properties we have seen—a concentration of equilibria around “quality” candidates, such as truthful and Condorcet winners, as can be seen in Figure 6.7a for truthful winners (never below 40%) and in Figure 6.7b for Condorcet winners. These effects were even more pronounced with an odd number of voters, as the number of equilibria was so small.

Figure 6.5: The average proportion of equilibria won by candidates with average rank of 0–1, 1–2, etc.

6.4.4 Richer Mechanisms

Approval (and variants such as k-approval and veto) are straightforward extensions of plurality, so it is natural to consider whether our approach will work similarly well for these mechanisms. Also, Laslier and Dutta [26] were able to resolve the existence problem for plurality and approval, but noted that the existence of PSNEs for k-approval was an open problem.

Thus, we started by investigating k-approval and veto. As was the case with plurality, the truthfulness incentive kept the number of equilibria manageable. As can be seen from Figure 6.8, in all cases more than 75% of games had 35 equilibria or fewer, and the number of equilibria roughly increases with the number of candidates. Our data also allowed us to resolve the open problem of equilibrium existence: for every value of k and m we observed at least one instance without any PSNE.

Figure 6.6: Percentage of games with a Nash equilibrium of a given type with 3 candidates and a varying number of voters.

Figure 6.7: Varying the number of voters and candidates. (a) Percentage of equilibria that elect the truthful winner. (b) Percentage of equilibria that elect the Condorcet winner.

We also considered approval voting.
Laslier and Dutta have already shown that a Condorcet consistentequilibrium is always guaranteed to exist. However, this raises an interesting question: are there other Nashequilibria? In our experiments, we found that approval voting had an extremely large number of equilibria(over 200,000 per game), so it seems that the addition of another dimension (allowing each voter to decidehow many candidates to vote for), reduces the effectiveness of the truthfulness incentive.Looking at the equilibrium outcomes, we found that they maintained some of the qualities that were present124100 101 102 103Number of PSNEs0.00.20.40.60.81.0Cumulative p4-approval (5 candidates)3-approval (4 candidates)3-approval (5 candidates)2-approval (3 candidates)2-approval (4 candidates)2-approval (5 candidates)Figure 6.8: Empirical CDF of counts of equilibria.in plurality equilibrium outcomes: truthful winners won, on average, in over 30% of the equilibria in everysetting, and sometimes more (e.g., 2-approval with 4 candidates resulted in almost 50% of equilibria, onaverage, electing truthful winners). However, Condorcet winners were not similarly common in equilibrium,and it seems that as the number of candidates grows, and as the number of candidates to which voters allotpoints to increases, the percent of equilibria with a Condorcet winner drops (so in 2-approval such equilibriaare common, in 3-approval somewhat less so, and in 4-approval even less).6.5 Bayes-Nash Equilibria ResultsMoving beyond the full-information assumption, we considered plurality votes where agents have incompleteinformation about each other’s preferences. In particular, we assumed that agents have independent, identicallydistributed (but not necessarily uniformly distributed) preferences, and that each agent knows only his own125preferences and the (commonly-known) prior distribution. Again, we considered the case of 10 voters and 5candidates, but now also introduced 6 possible types for each voter. For each of 50 games, we computed theset of all symmetric pure-strategy Bayes-Nash equilibria, both with and without the ε-truthfulness incentive.2Our first concern was studying how many equilibria each game had and how the truthfulness incentiveaffected the number of equilibria. The set of equilibria was small (< 28 in every game) when the truthfulnessincentive was present. Surprisingly though, removing the truthfulness incentive added only a few equilibria.In fact, in the majority of games (76%), there were exactly five new equilibria: one for each strategy profilein which all types vote for a single candidate. Looking into the structure of these equilibria, we found twointeresting, and seemingly contradictory, properties. First, most equilibria (95%) only ever involved twoor three candidates (i.e., voters only voted for a limited set of candidates). Second, every candidate wasinvolved in some equilibrium. Thus, we can identify an equilibrium by the number of candidates it involves(see Figure 6.9). Notably, most equilibria involved only two candidates, with each type voting for their mostpreferred candidate of the pair. Further, most games had 10 such equilibria, one for every possible pair. Therewere two reasons why some pairs of candidates did not have corresponding equilibria in some games. First,sometimes one candidate Pareto-dominated the other (i.e., was preferred by every type). 
Second, sometimesthe types that liked one candidate were so unlikely to be sampled that close races occurred with extremelylow probability (relative to ε); in such cases, agents preferred to be deterministically truthful than pivotalwith very small probability. This observation allowed us to derive the following theoretical result about whena 2-candidate equilibrium will exist.Theorem 14. In any symmetric Bayesian plurality election game (with n≥ 2), for any pair of candidatesc1,c2, one of the following conditions is true:• with some positive ε truthfulness incentive, there exists a Bayes-Nash equilibrium where each votervotes for his most preferred of c1,c2; or• one of the candidates Pareto-dominates the other ex ante (i.e., the probability that a voter prefers thesecond candidate to the first is zero).Sketch. If c1 Pareto dominates c2, then every agent would vote for c1 and no agent could influence theoutcome. For any non-zero ε , voters would deviate to honest voting instead of c1.So as long as there is non-zero probability of some agent preferring c2 to c1, every agent has a non-zeroprobability of being pivotal between those two outcomes (and a zero probability of being pivotal between anytwo other outcomes). For a sufficiently small ε , the value of influencing the outcome overwhelms the valueof truthful voting.2We omitted two games from our results. The omitted games each have a type with very low probability. For someprofiles, the probability of agents with these types being pivotal was less than machine-ε . This led to SEM finding“Bayes-Nash equilibria” that were actually only ε-Bayes-Nash.1261 2 3 4 5Number of supported candidates0246810Average number of PS-BNEswithout TIwith TIFigure 6.9: Every instance had many equilibria, most of which only involved a few candidates.These two-candidate equilibria have some interesting properties. Because they can include any two candidateswhere one does not Pareto-dominate the other, they can exist even when a third candidate Pareto-dominatesboth. Thus, it is possible for two-candidate equilibria to fail to elect a Condorcet winner. However,because every two-candidate equilibrium is effectively a pairwise runoff, it is impossible for a two-candidateequilibrium to elect a Condorcet loser.Equilibria supporting three or more candidates are less straightforward. Which 3-candidate combinations arepossible in equilibrium (even without ε-truthful incentives) can depend on the specific type distribution andthe agents’ particular utilities. Also, in these equilibria, agents do not always vote for their most preferred ofthe three alternatives (again, depending on relative probabilities and utilities). Finally, 3-candidate equilibriacan elect a Condorcet loser with non-zero probability.1276.6 Discussion and Future WorkOur work shows that computational mechanism analysis can be applicable to mechanisms other than positionauctions. Using the AGG representation and the AGG-SEM algorithm, we were able to explore the entirespace of Nash equilibria for games where conventional representations and algorithms would be far too costlyto use. As was the case with position auctions, AGG-SEM was particularly valuable because of its ability toenumerate all PSNEs of a game. 
The issue of multiple equilibria arose even with the added assumption of the truthfulness incentive; further, it is only by enumeration that we were able to measure exactly how many equilibria a given game had with and without the truthfulness incentive.

We saw several interesting results, beyond a reduction in the number of equilibria due to our truthfulness incentive. One of the most significant was the “clustering” of many equilibria around a subset of candidates that reflects the voters’ aggregated preferences (e.g., Condorcet winners). A very large share of each game’s equilibria resulted in winners that were either truthful winners (according to plurality) or Condorcet winners. Truthful winners were selected in a larger fraction of equilibria when the total number of equilibria was fairly small (as was the case in a large majority of our experiments), and their share decreased as the number of equilibria increased (where we saw, in cases where there were Condorcet winners, that those equilibria took a fairly large share of the total). Furthermore, these results held up even when varying the number of candidates and voters, and many of them appear to also hold for other voting systems, such as veto and k-approval.

Looking at social welfare enabled us to compare equilibrium outcomes to all other possible outcomes. We observed that plurality achieved nearly the best social welfare possible (a result that did not rely on our truthfulness incentive). Another metric showed the same “clustering” we noted above: most equilibrium results concentrated around candidates that were ranked, on average, very highly (on average, more than 50% of winners in every experiment had a rank of less than 1). This suggests that one should question whether it is even important to minimize the amount of manipulation, as we found that manipulation by all voters very often leads to socially beneficial outcomes.

In the Bayes-Nash results, we saw that lack of information often pushed equilibria to be a “battle” between a subset of the candidates—usually two candidates (as Duverger’s law would indicate), but occasionally more.

There is much more work to be done in the vein we introduced in this chapter. This includes examining the effects of changing utility functions, as well as looking at more voting rules and determining properties of their equilibria. Voting rules can be ranked according to their level of clustering, how socially good their truthful results are, and similar criteria. Furthermore, it would be worthwhile to examine other distributions of preferences and preference restrictions, such as single-peaked preferences.

Chapter 7

The Positronic Economist

7.1 Introduction and Motivation

The work of the previous chapters clearly demonstrates that computational mechanism analysis is technologically feasible, and can produce novel, scientifically interesting results.
The major thread that connects this body of work, and differentiates it from other work that uses computational methods to analyze mechanisms, is that the equilibrium computation is done by state-of-the-art, general-purpose equilibrium-finding algorithms. This chapter focuses on laying out a way forward for our approach to computational mechanism analysis, codifying some of the desiderata that motivated our past design decisions, and ultimately providing a general, easy-to-use system that other researchers can build on to answer their own mechanism-analysis questions. We call this system “the Positronic Economist.”

Our desiderata for a computational-mechanism-analysis system are spelled out in our “Three Laws of Positronic Economics,” inspired by Asimov’s [1942] “Three Laws of Robotics.” (See Figure 7.1.)

In order for a CMA system to address real open questions, fidelity is key: any imprecision in either the game representation or the equilibrium could compromise the relevance of the system’s findings. While approximate equilibria are a major subject of research in algorithms and complexity, researchers in mechanism design and auction theory overwhelmingly favor exact equilibria.

Our second desideratum is speed, with a strong emphasis on empirical performance over worst-case asymptotic guarantees. While polynomial-time guarantees are strongly associated with good empirical performance, they do not always provide a complete picture; e.g., see the discussion of different IRSDS variants in Chapter 3. For our game representation of choice, Bayesian action-graph games (BAGGs), Nash equilibrium finding is an NP- or PPAD-hard problem, depending on the equilibrium type, but good heuristics can often find exact equilibria quickly (see Chapter 3 and Jiang et al. [51]).

1. Precision: Games are represented exactly. Equilibria are either exact, or are ε-equilibria where ε is on the order of machine-ε.[a]

2. Speed: Algorithms must be fast enough to be used in practice, except when this is in conflict with the first law. Algorithms must provably require only polynomial time and space, except when this
   • is in conflict with the first law, or
   • cannot be done without answering a major open complexity-theory problem, such as P = NP.

3. Autonomy: Human effort should be minimized, except when this is in conflict with the first two laws.

[a] For all the analyses in preceding chapters, ε = 10^-10. Since the games typically involve payoffs that can be 1 or more, this represents a multiplicative error of at most one part in 10^10. In contrast, IEEE 32-bit floating point numbers have 23 significand bits, and thus a multiplicative error of up to one part in 2^23 ≈ 8.4 × 10^6. Rational numbers represented as a ratio of signed 32-bit integers have a multiplicative error of one part in 2^31 ≈ 2.1 × 10^9.

Figure 7.1: The Three Laws of Positronic Economics

Our last desideratum for a CMA system is that it be autonomous, requiring minimal human effort and ingenuity. While the previous two desiderata are substantially addressed by previous work, this one is not, and thus it receives the most attention in this chapter. In our pipeline (Figure 1.1), analysis is broken down into a two-stage process: encoding followed by solving. Due to our use of BAGGs, solving is straightforward—we rely on off-the-shelf Nash equilibrium finding algorithms.
On the other hand, until now, encoding has requiredingenuity and effort, first for devising a suitable graphical-model representation of a game and then forworking with lower-level APIs or file specifications to produce a BAGG encoding for use with an existingsolver.The main aim of the Positronic Economist system is to produce compact BAGG representations of econom-ically relevant settings while preserving the main advantages of such BAGG representations: fidelity andspeed.This chapter makes two major contributions. The first is the Positronic Economist API (PosEc), which isa high-level python-based declarative language for describing games in a form that resembles their naturalmathematical representation as closely as possible. The second is a pair of complementary algorithms thatautomatically infer the structure of a game specified in PosEc and produce a compact BAGG.The structure of this chapter is as follows. First, we survey related work. Next, we describe the PosEcrepresentation language. We go on to describe projection, an extension of our mathematical model that can beused to describe some common forms of game structure. We then describe our structure inference algorithmsand prove some properties about their performance. We move on to experimental results, demonstrating that130our algorithms can work in practice to quickly produce compact games. Finally, we discuss some directionsfor future work.7.1.1 Related workThere is a small body of work on the use of equilibrium computation for analysis of mechanisms, but noneof it interfaces with the compact games literature in the same way as our work. Nevertheless, we nowreconsider the three other general-purpose CMA systems of which we are aware, assessing how well theysatisfy our desiderata. In brief: each of the systems we discuss can represent games that cannot be representedby the other systems, including ours. However, our approach is best able to leverage high-performanceequilibrium-finding algorithms.The first system, due to Rabinovich et al. [87], does not explicitly specify how games are to be represented.Instead, a user has to provide code for computing expected utility given a strategy profile. This could requiresubstantial human effort; observe that one of the main benefits of action-graph games is that they providesuch an efficient computational procedure. This also gives rise to some risk of lost fidelity; for example, intheir application to simultaneous auctions, Rabinovich et al. were forced to approximate the tie-breakingrule. Further, their system only supports the fictitious play algorithm (FP), which is a relatively weak Nashequilibrium computation algorithm, being prone to getting stuck in cycles. Thus, FP can consume extremeamounts of time without ever finding a high-fidelity (i.e., small ε) equilibrium.Second, the system of Vorobeychik et al. [108] represents mechanisms and settings as piecewise linearequations. Given that many single-parameter mechanisms and settings are described algebraically in theliterature, this representation requires very little human effort and has great fidelity. However, the onlyequilibrium-finding algorithm available for this representation is iterative best response, which is a degeneratecase of fictitious play and converges even less often.The third system, called empirical algorithmic game theory (EAGT), developed by Wellman and others[52, 66, etc.], involves writing a software simulator of a given game. 
At the simulator stage, perfect fidelity ispossible, and human effort is mitigated since the user can build the simulator in whatever language is mostconvenient. However, these simulator-based games are then reduced to normal-form games by sampling fromthe simulator based on a representative set of strategies. Devising these strategies requires substantial usereffort; it also risks loss of fidelity. The method does offer the advantage that high-fidelity equilibrium-findingalgorithms can be used once a normal-form game representation is obtained.1317.2 Representing Games in PosEcThe Positronic Economist system consists of two key parts: a declarative language for specifying utilityfunctions and mechanisms, and a structure inference system for automatically constructing compact BayesianAction Graph Games (BAGGs)1 for settings described in this language. We describe both of these elementsin turn, beginning with the language. Our goal is to follow the natural mathematical formalization of amechanism as closely as possible.7.2.1 Mechanisms and SettingsRecall the definitions of mechanisms and settings, from Chapter 2.An epistemic-type Bayesian game is specified by 〈N,A,Θ, p,U 〉, where• N is a set of agents, numbered 1 to n,• A = A1×A2×·· ·×An and Ai is a set of actions that agent i can perform,• Θ is the a set of private types that an agent can have,• p is the joint type distribution, p ∈ ∆Θn, and• U is a profile of n utility functions where Ui : A×Θn→ R.This paper considers “mechanism-based games,” and so splits games into two parts, a mechanism and asetting. A mechanism is given by 〈A,M〉 where• A = A1×A2×·· ·×An and Ai is a set of actions that agent i can perform, and• M is the choice function, M : A→ ∆O.A Bayesian setting is given by 〈N,O,Θ, p,u〉 where• N is a set of agents, numbered 1 to n,• O is a set of outcomes,• Θ is the a set of private types that an agent can have,• p is the joint type distribution, p ∈ ∆Θn,2 and12We will assume that p can be factored into independent pi’s where p(θ) = ∏n1 pi(θi). This assumption is not without lossof generality, but is present in the current reference implementation of BAGGs. Thus, any setting that is not consistent with thisassumption cannot be represented by the Positronic Economist.132• u is a utility function u : N×Θn×O→ R.3Perfect information settings are a special case of Bayesian settings where each agent’s type distribution is apoint mass. Any mechanism and setting that both use the same n and O can be combined to form a gamewhere Ui(θN ,aN) = u(i,θN ,M(aN)) and where aN ∈ A denotes an action profile.7.2.2 The PosEc Modeling LanguagePosEc is a language that aims to make it easy for users to describe mechanisms and settings. This sectionprovides examples and discussion of some of the key decisions that went into its design. The appendicesgo much further, giving an introduction to the API as well as exhaustive documentation of the API’sspecifications.Our modeling language is based on python. Thus, the tuples, sets and utility functions of the mathematicalrepresentation become tuples, set and functions in python. To specify a mechanism-based game, a user mustdefine a choice function M and a setting. 
For example, here is the choice function for a plurality vote with deterministic tie-breaking:

def M(setting, a_N):
    # Return the first candidate (in the order given by setting.O) that is
    # not beaten by any other candidate; a_N.count(c) is the number of
    # agents voting for c.
    for c1 in setting.O:
        c1Wins = True
        for c2 in setting.O:
            if a_N.count(c2) > a_N.count(c1):
                c1Wins = False
        if c1Wins:
            return c1

A corresponding setting follows, where the elements of Theta represent descending preference orderings.

n = 10
O = ("c1", "c2", "c3")
Theta = [("c1", "c2", "c3"), ("c2", "c3", "c1"), ("c3", "c1", "c2"), ("c1", "c3", "c2")]
P = [UniformDistribution(Theta)] * n

def u(i, theta, o, a_i):
    # theta[i] lists candidates in descending order of preference, so
    # more-preferred outcomes receive higher utility.
    return len(theta[i]) - 1 - theta[i].index(o)

setting = Setting(n, O, Theta, P, u)

Once the user has defined the mechanism and setting in this way, the Positronic Economist system creates a BAGG of that game (described in the next section), allowing it to be analyzed using a variety of high-performance algorithms.

3 Note that it is equivalent to say either (1) u is a profile of n utility functions u_i : Θ^n × O → R; or (2) u is a utility function that takes an agent number as one of its parameters, u : N × Θ^n × O → R. While form (1) is more common, form (2) is dramatically easier for a novice Python user to produce correctly. Thus, we prefer form (2).

7.2.3 Special Functions in PosEc

One of our goals was to let the user implement their utility and choice functions however they liked. Indeed, PosEc will convert any valid Python functions into a valid BAGG. However, the way that the functions are implemented can affect the size of the BAGG and the amount of time required to produce it. In this section, we outline the key PosEc constructs that can be used to signal game structure.

First, consider an anonymous mechanism with a constant number of actions c, such as our voting game. The corresponding normal-form game requires O(n c^n) space, while the corresponding AGG is O(n^c). We can signal that the mechanism is anonymous by using the a_N.count() function, as in the plurality-vote code above.

Second, many important settings involve exponentially large outcome spaces (consider, e.g., stable matching settings such as kidney exchange), which require exponential space when represented using standard Python classes. The situation is even worse if the natural specification of a setting involves an infinite outcome space (e.g., in much of the auction literature, valuations and payments can be arbitrary real values), as there are no standard Python classes for infinite sets.4 Both problems have the same solution: introducing special-purpose set-like classes. The PosEc system includes several such set-like classes, allowing for outcome spaces that include permutations (for use in position auctions), real values, and partition matroids, as well as their Cartesian products.

4 Of course, PosEc only generates finite-action games. Nevertheless, PosEc encourages the specification of infinite outcome spaces when it is natural to do so; this allows the user to vary the discretization imposed by a mechanism while holding the setting constant.

Third, randomized mechanisms also need to be handled carefully, in order to avoid violating the first law. In particular, if a choice function performed its own randomizing, e.g., via the Python random module, PosEc would need to sample the choice function in order even to approximate the distribution over outcomes. Instead, mechanisms should be implemented with the choice function returning a distribution over outcomes, so that PosEc can compute the expected utilities as accurately as possible.

The overriding goal for the PosEc API is to allow the user to specify games precisely and as easily as possible. Thus, our design decisions emphasize an extremely simple, general language for the user to build with, rather than a palette of options for the user to choose from. For example, there is no special keyword to specify that an agent has quasilinear utility with linear value for money.
Instead, we make it easy for the user to specify such a utility function directly:

def u(i, theta, o, a_i):
    # The outcome o is an (allocation, payments) pair; theta[i] is agent i's
    # value for the good. Utility is quasilinear: value minus payment if the
    # agent wins, and the negated payment otherwise.
    alloc, payments = o
    if alloc == i:
        return theta[i] - payments[i]
    return -payments[i]

Naturally, we hope that users will produce a library of reusable mechanisms and settings. We provide code for position auctions and voting.

7.2.4 Projection

In many single-good auction settings (including settings with the utility function shown in the previous section), each agent’s payoff depends only on whether he is allocated the good and on his own payment. Such structure is computationally useful: an agent’s utility may be computed without deriving a distribution over the entire outcome space. We can generalize this concept via the idea of projection.

Formally, a projected setting is ⟨N, O, Θ, Ψ, u, π⟩, where N, O and Θ are defined as before, Ψ is the space of projected outcomes, u is overloaded so that u : N × Θ^n × Ψ → R, and π is a projection function π : N × O → Ψ (where for any o, o′ ∈ O, π(i, o) = π(i, o′) implies u(i, θN, o) = u(i, θN, o′)). Projected settings do not necessarily lead to computational savings, but are useful because they make it possible to define mechanisms that only compute projected outcomes for a single agent at a time. Formally, a projected mechanism is specified by ⟨A, M′⟩, where A is again a set of action profiles and M′ is a choice function over (distributions over) projected outcomes, M′ : N × A → ∆Ψ (where M′(i, aN) = π(i, M(aN))). Any projected setting and projected mechanism that both use the same n and Ψ can be combined to form a Bayesian game where Ui(θN, aN) = u(i, θN, M′(i, aN)) for each action profile aN ∈ A. Such Bayesian games can be much more compact than Bayesian games based on the equivalent (unprojected) settings and mechanisms. PosEc’s black-box structure inference algorithm can discover these more compact BAGGs by itself; however, this can take significant time: e.g., Ω(2^n) computation for digital-good settings and Ω(n!) for position-auction settings. Thus, while expressing projection structure can be more work for the user, doing so can yield exponentially faster computation.

7.3 Structure Inference in PosEc

The second main component of PosEc is structure inference: automatically generating compact BAGGs given setting and mechanism descriptions in PosEc’s own modeling language. We provide two approaches for doing this. First, “white-box structure inference” uses structure made explicit in the PosEc representation—e.g., via use of the count operator, randomization via distributions, projection, etc.—to generate a BAGG. (In the degenerate case, no such structure is explicitly given, and we obtain an exponential-size BAGG.)
Second, “black-box structure inference” takes the BAGG generated in the first step and probes it to find additional structure and hence to obtain a more compact BAGG. This procedure is completely automatic, but (1) depends on the size of the initial representation, and (2) relies upon heuristics to avoid getting stuck in local minima. Thus, black-box structure inference should be seen as a procedure for further refining the compact BAGGs obtained via white-box structure inference, not as a replacement for white-box structure inference.

7.3.1 White-Box Structure Inference

We aim to obtain what we call the “straightforward BAGG”: a BAGG that contains only those function nodes and edges that are necessary to compute features used by the input game. Let ℓs denote the representation length of this game. Our goal is to compute the straightforward BAGG using only poly(ℓs) calls to the utility function of the input game. We do so via a relatively direct algorithm. Essentially, it works by beginning with a totally disconnected action graph, and progressively adding function nodes and edges whenever their absence means that the utility function cannot compute a payoff. (See Algorithm 7.2.)

Input: Bayesian game, utility represented as a function
Output: Bayesian action-graph game
Create a BAGG with agents N, actions A, types Θ, type distribution p, and no edges or function nodes
foreach action node a do
    create payoff table for action node a
    repeat
        f ← 0
        foreach projected configuration c on neighbors of a do
            try to compute payoff given c
            if success then add c and payoff to table
            else
                v ← a missing variable needed to compute payoff
                if no function node computes v then
                    add function node and edges to compute v
                add edge from function node to a
                erase contents of payoff table
                f ← 1
                break
    until f = 0

Figure 7.2: White-Box Structure Inference

Theorem 15. For any game parameterized by the number of agents n, the number of actions per agent m, and the number of types t, where the choice and utility functions each can make a constant number of different calls to accessor functions, and where any weighted-sum calls involve weights bounded by poly(nmt), the straightforward BAGG will require only poly(nmt) space.

Proof sketch. WBSI introduces at most one function node per accessor call. A weighted max node can have at most O(nmt) different projected configurations. A weighted sum node with maximum weight w can have at most O(wn) different projected configurations. For each action node, there are a constant number of neighbors, all of which are function nodes. Thus the space of possible projected configurations on the neighborhood of any action node is at most the Cartesian product of the possible projected configurations of each neighbor. Since these spaces are all poly(nmt) and there are boundedly many of them, the total configuration space in the neighborhood of every action is at most poly(nmt).

Small outputs are important because the BAGGs produced by WBSI are typically used as inputs to game-solving algorithms, which often require worst-case exponential time. Nevertheless, these algorithms often have good empirical performance (see, e.g., Chapter 3 and Jiang et al. [51]). Thus, it is important that WBSI also be fast, so as not to become the main bottleneck.

Theorem 16.
The white-box structure-inference algorithm (Algorithm 7.2) runs in O(c · ℓs^2) time, where ℓs denotes the size of its output, the straightforward BAGG, and c denotes the amount of time that the input code requires to compute a single agent’s payoff for a single type-action profile.

Proof sketch. The runtime is dominated by the computation of payoff tables. The outer for loop and repeat loop jointly take O(ℓs) time: the for loop runs once per action node, each iteration of the repeat loop after the first one involves creating a new edge, and both actions and edges take up space in the BAGG representation. The inner for loop iterates over projected configurations, where one payoff per projected configuration is also part of the BAGG representation. Because this loop only deals with BAGGs that contain weakly fewer edges than the straightforward BAGG, it can only iterate over projected-configuration spaces that are weakly smaller than the projected configuration space of the straightforward BAGG. Thus, this inner loop also takes O(ℓs) time.

Provided the outcome and utility functions passed to WBSI are polynomial time, and make bounded numbers of calls to accessor functions, WBSI will run in polynomial time and produce a polynomial-sized BAGG as output.

7.3.2 Black-Box Structure Inference

The goal of black-box structure inference is to take a BAGG obtained from white-box structure inference—in the degenerate case, a completely unstructured BAGG—and return another BAGG that more efficiently represents the same game. In other words, BBSI is a constrained optimization problem where the feasible region is the set of all BAGGs that are equivalent to the input game and the objective is to find the smallest BAGG, measured by input length, in the feasible region. We can also define a decision version of BBSI: BAGGREDUCIBILITY takes a BAGG G and an integer k, and asks whether there exists a BAGG G′ that is strategically equivalent to G and that has representation size no larger than k. We conjecture that BAGGREDUCIBILITY is NP-hard.

Our focus in this paper is thus on heuristics for performing BBSI. There are some natural local operations that can produce a smaller, strategically equivalent BAGG. A simple, polynomial-time example is to try cutting an edge to an action node, reducing by one the dimension of that action node’s payoff table. (E.g., in a GFP auction, an agent’s payoff is unaffected by each bid less than his own.) Another example is aggregating two or more actions, replacing their inputs with a single sum node. (E.g., in a no-externality GFP auction, an agent’s payoff only depends on the number of higher bids, not these bids’ values.) Unfortunately, such local operations can get stuck in local optima. Consider, for example, a five-candidate vote using the two-approval mechanism where voters have the option to abstain. This setting can be encoded by summing the number of approvals each candidate gets using five sum nodes, for an O(n^5) representation. However, a naive anonymous encoding could count how many agents approved each pair of candidates, for an O(n^10) representation. It is not possible to get from the naive encoding to the efficient encoding using the two operations defined above; none of the counts can be cut and no pair of counts can be replaced with their sum.

Given that locally improving moves can get stuck in local optima, we opted to implement BBSI using an iterative local-search (ILS) algorithm that sometimes makes non-improving moves.
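To make the overall shape of such a search concrete, the following is a minimal, generic sketch of the iterated-local-search pattern described here; it is not the PosEc implementation, and the move operators (a perturbation that adds an aggregating function node, and an improvement step that cuts edges while preserving strategic equivalence) are passed in as abstract callables rather than spelled out.

from typing import Callable, TypeVar

Encoding = TypeVar("Encoding")  # stands in for a candidate BAGG encoding

def iterated_local_search(initial: Encoding,
                          perturb: Callable[[Encoding], Encoding],
                          improve: Callable[[Encoding], Encoding],
                          size: Callable[[Encoding], int],
                          rounds: int = 100) -> Encoding:
    # Alternate non-improving perturbations with locally improving moves,
    # remembering the smallest strategically equivalent encoding seen so far.
    best = initial
    current = initial
    for _ in range(rounds):
        candidate = improve(perturb(current))
        if size(candidate) < size(best):
            best = candidate
        # Continue the search from the perturbed candidate, even if it is
        # larger, so that the search can escape local optima.
        current = candidate
    return best

In the BBSI setting, perturb would introduce a (weighted) SUM node aggregating existing inputs, improve would repeatedly cut edges into an action node whenever the resulting payoff table still encodes the same game, and size would measure the BAGG's representation length; the specifics of the actual algorithm follow.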
Our ILS algorithmoptimizes the size of the payoff table for each action node independently. For each action node, the algorithmstarts from the initial neighborhood of that node (in order to preserve information already encoded in theBAGG by WBSI), and alternates between non-improving perturbations (introducing function nodes thataggregate existing inputs, even when those inputs cannot subsequently be cut) and locally improving moves(cutting edges to an action node when doing so maintains strategic equivalence).Our BBSI-ILS algorithm is a heuristic search with several free parameters, which were manually configuredto exploit the kinds of structure that are commonly present in mechanism-based games. Initially, the focus ison finding and exploiting anonymity structure, so the mechanism introduces SUM nodes to compute actioncounts and tries to cut arcs that provide asymmetric information. Once this phase is complete, the algorithmsearches more broadly. The perturbation phase introduces weighted SUMs constructed by adding togetherexisting weighted SUM nodes, biased in favor of not introducing weights greater than 1. The improvementphase cuts any arc, not just those that provide asymmetric information.7.4 Experimental ResultsIn this section, we describe experimental evidence that our structure inference algorithms can producecompact BAGGs in a practical amount of time.We began by recreating games from Chapter 4 and Chapter 6—specifically, GFP and wGSP position auctions,and two-approval votes—which were originally produced using hand-tuned encoders. To test WBSI, wegenerated straightforward specifications of these settings in PosEc. In all cases, WBSI produced exactly thesame output as the hand-tuned encoders, which are dramatically smaller than the corresponding normal-form1382 4 6 8 10 12 14 16 18 20Number of agents10010210410610810101012101410161018Size (payoffs)BNFGAGG32 GB4 TB10010210410610810101012101410161018Time (s)WBSI time(a) GFP2 4 6 8 10 12 14 16 18 20Number of agents1011031051071091011101310151017Size (payoffs)BNFGAGG32 GB4 TB1011031051071091011101310151017Time (s)WBSI time(b) GSP2 4 6 8 10 12Number of agents1001011021031041051061071081091010101110121013Size (payoffs)BNFGAGG32 GB4 TB1001011021031041051061071081091010101110121013Time (s)WBSI time(c) 2-ApprovalFigure 7.3: Measuring the performance of WBSI on efficiently-represented games. Solid lines corre-spond to mean size and runtime while the dashed lines correspond to the 5th percentile, medianand 95th percentile of 50 runs.games. Runtime performance was also good: even extremely large games involving 20 players were producedin less than an hour, except in the case of two-approval. (The largest two-approval games we created had 12players. Even so, these games took roughly three hours each to produce. See Figure 7.3.)To validate black-box structure-inference, we tested the same games, both with and without well-implementedinputs. When the inputs were well implemented (i.e., using appropriate accessor functions), BBSI did notmake substantial, further reductions to representation size, suggesting that the solution obtained by WBSIwas already nearly optimal. In contrast, BBSI was able to make dramatic improvements to the size ofpoorly implemented inputs, shrinking games from their exponential normal-form size to within less thantwice the size of hand-tuned encodings. However, the runtime of BBSI grew with the size of the normalform: exponentially with n. Thus, BBSI could take as much as an hour even for small games (n = 4). 
Figures 7.4 and 7.5.)

Figure 7.4: Measuring the performance of the BBSI-ILS algorithm on tuned inputs. Because these inputs were already somewhat efficient, the compression is mostly small, except in the case of GSP. Runtime was manageable for position-auction games, but prohibitively expensive for larger voting games.

Figure 7.5: Measuring the performance of the BBSI-ILS algorithm on untuned inputs. BBSI was able to produce a dramatic improvement in size for every input, though it was seldom as good as the reference games produced by WBSI with a tuned input.

We have so far motivated BBSI as a refinement of WBSI for computational reasons. However, it has another use: as a sort of automated tutorial to help a user make better use of the PosEc system. Although BBSI cannot practically be applied to large, unstructured games, it can indirectly help in such cases by identifying more efficient representations of smaller, related games. A user can then examine this representation to learn what sort of structure is important, and write a PosEc description of the setting that makes this structure explicit.

This tutorial mode works by taking a PosEc-produced BAGG and explaining, for each action node, which accessor function calls are used to produce the payoff table. By design, our BBSI implementation only makes moves that are consistent with PosEc’s accessors. Specifically, every cut reduces the number of accessor calls and every perturbation introduces a function node that corresponds to a single accessor call. Thus, the output of BBSI is always explainable in terms of PosEc accessors. Because our aim here is to help a user represent a class of games in a way that scales well in the number of agents, the appropriate measure is not the size of the output BAGG, but rather the asymptotic size of the family of BAGGs with the same structure. To measure this, we calculate an upper bound on the size of each payoff table, as a function of n, based on the accessor functions. For example, a call to count() will return an integer between zero and n − 1 (or 1 and n if counting how many agents played the action corresponding to that payoff table entry), while a call to any() has two possible values. Thus, an action node that is explained by these two calls has a payoff table with at most 2n entries. From this expression, it is possible to compute a bound on the size of the BAGG as a function of n; a small illustrative sketch of this calculation appears below.

For each of the games from the previous subsection, we computed these bounds, as a measure of the efficiency of the structure discovered by BBSI. We compared this measure against two benchmarks: the size of the normal form and the size of the efficient representation produced by carefully hand-tuned inputs to WBSI.
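As a concrete illustration of the bound calculation just described, the following sketch multiplies together the number of possible values of each accessor call that explains an action node. The accessor names mirror the two examples in the text (count and any); the helper functions themselves are illustrative and not part of the PosEc API.

def accessor_range(call: str, n: int) -> int:
    # Number of distinct values the accessor can take in an n-agent game.
    if call == "count":
        return n      # a count of other agents lies in {0, ..., n-1}
    if call == "any":
        return 2      # a boolean test has two possible values
    raise ValueError("unknown accessor: " + call)

def payoff_table_bound(calls, n):
    # An action node explained by these accessor calls has at most
    # the product of their ranges as its number of payoff-table entries.
    bound = 1
    for call in calls:
        bound *= accessor_range(call, n)
    return bound

# One count() plus one any() call gives at most 2n entries, as in the text.
print(payoff_table_bound(["count", "any"], 10))  # prints 20

Summing such per-node bounds over all action nodes yields the kind of asymptotic size expression, as a function of n, that is compared against the normal form and the hand-tuned BAGGs in the experiments that follow.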
Inour experiments, extrapolating from the structure that BBSI identified in games with three agents, we foundthat extrapolated games were dramatically larger than hand-tuned games produced by WBSI, but far smallerthan normal-form games. (See Figure 7.6.) Thus, we found that BBSI was capable of producing output thatis not only more compact than unstructured games, but also informative to human users.7.5 Conclusion and Future WorkThis chapter makes two major technical contributions: the Positronic Economist declarative language fordescribing mechanism-based Bayesian games, and the two structure-inference algorithms that make it possibleto compactly represent such games as Bayesian action-graph games. These contributions dramaticallyreduce the human effort necessary to perform computational mechanism analysis, without leading to asubstantial loss of accuracy or speed. There are many potential applications in which PosEc could shed lighton hard-to-analyze economic situations. Even within the limited and well-studied sphere of single-goodauctions, PosEc could be used to study non-linear utility for money (e.g., budgets; risk attitudes), asymmetricvaluation distributions, other-regarding preferences (both altruism and spite), and conditional type dependence(including common and affiliated values).One limitation of the current PosEc system is that it can only describe simultaneous-move games. In contrast,many real-world mechanisms proceed in multiple stages (e.g., sequential auctions; clock-proxy auctions).In both cases, decisions made in one stage can affect which outcomes are possible or desirable in the nextstage. BAGGs are inherently single-stage games, but could be used to analyze multi-stage mechanismsby representing the different stages as individual BAGGs and solving the complete system by a process ofbackward induction. Such a process would likely resemble the special-purpose algorithm that Paes Lemeet al. [82] proposed for computing the equilibria of sequences of single-good auctions. Alternatively, CMAcould be performed using a compact game representation that explicitly supports multi-stage games, such astemporal AGGs [50] or MAIDs [57]. Unfortunately, the algorithms for reasoning about such representationscurrently offer much poorer performance than algorithms for reasoning about BAGGs.Another limitation of the PosEc system is the cost of explicitly representing types and actions; no BAGGcan be asymptotically smaller than its number of types or actions. Thus, games with large type or actionspaces—such as combinatorial auctions—cannot be succinctly represented. There is very little work onrepresenting games with implicitly specified action spaces—Koller and Milch [57] and Ryan et al. [93] arethe two exceptions, unfortunately both without good implementations—but as this literature develops theremay be opportunities for extending our encode-and-solve CMA approach to new game families.1422 4 6 8 10 12 14 16 18 20Number of agents10110310510710910111013101510171019Size (payoffs)BNFGBBSI-structure AGGHand-tuned AGG32 GB4 TB(a) GFP2 4 6 8 10 12 14 16 18 20Number of agents10110310510710910111013101510171019Size (payoffs)BNFGBBSI-structure AGGHand-tuned AGG32 GB4 TB(b) GSP2 4 6 8 10 12Number of agents10210310410510610710810910101011101210131014Size (payoffs)BNFGBBSI-structure AGGHand-tuned AGG32 GB4 TB(c) 2-ApprovalFigure 7.6: Measuring the scales of BAGGs using structure learned by BBSI. 
Note that the extrapolatedBBSI output is a loose upper bound; for example, no (B)AGG representation can ever includemore payoff table entries than the strategically (B)NFG. Solid lines correspond to mean size andruntime while the dashed lines correspond to the 5th percentile, median and 95th percentile of 50runs. In position-auction games, we only included actions up to each bidder’s valuation. Thus, thesize (in terms of number of actions) could vary from game to game. This is the cause of the muchhigher variance on position-auction games, compared to voting games.143Bibliography[1] G. Aggarwal, A. Goel, and R. Motwani. Truthful auctions for pricing search keywords. In EC, 2006.→ pages 94, 97, 105[2] G. Aggarwal, J. Feldman, S. Muthukrishnan, and M. Pal. Sponsored search auctions with Markovianusers. In Workshop on Ad Auctions, 2008. → pages 37[3] I. Asimov. Runaround. In Astounding Science Fiction, 1942. → pages 129[4] S. Athey and G. Ellison. Position auctions with consumer search. Quarterly Journal of Economics,126:1213–1270, 2011. → pages 34[5] S. Athey and D. Nekipelov. A structural model of sponsored search advertising auctions. Workingpaper. → pages 9[6] L. Babai. Monte-carlo algorithms in graph isomorphism testing. Technical report, Universite deMontreal, 1979. → pages 9[7] J. J. Bartholdi III and J. B. Orlin. Single transferable vote resists strategic voting. Social Choice &Welfare, 8:341–354, 1991. → pages 113[8] J. J. Bartholdi III, C. A. Tovey, and M. A. Trick. The computational difficulty of manipulating anelection. Social Choice & Welfare, 6(3):227–241, 1989. → pages 113[9] M. Benisch, N. Sadeh, and T. Sandholm. The cost of inexpressiveness in advertisement auctions. InWorkshop on Ad Auctions, 2008. → pages 36, 56, 74, 98[10] K. Bhawalkar and T. Roughgarden. Welfare guarantees for combinatorial auctions with item bidding.2011. → pages 2[11] L. Blumrosen, J. D. Hartline, and S. Nong. Position auctions and non-uniform conversion rates. InWorkshop on Ad Auctions, 2008. → pages 36, 53, 98[12] I. Caragiannis, C. Kaklamanis, P. Kanellopoulos, M. Kyropoulou, B. Lucier, R. Paes Leme, andE. Tardos. On the efficiency of equilibria in generalized second price auctions. Invited to the Journalof Economic Theory (JET). → pages 35, 102[13] S. Chawla and J. D. Hartline. Auctions with unique equilibria. In EC, 2013. → pages 51[14] X. Chen and X. Deng. Settling the complexity of two-player Nash equilibrium. In FOCS, 2006. →pages 8144[15] M. R. Chernick. Bootstrap Methods, A practitioner’s guide. Wiley, 1999. → pages 22, 45[16] V. Conitzer and T. Sandholm. Self-interested automated mechanism design and implications foroptimal combinatorial auctions. In ACM-EC, 2004. → pages 1[17] G. F. Cooper. The computational complexity of probabilistic inference using Bayesian belief networks.Artificial Intelligence, 42:393–405, 1990. → pages 3[18] P. Cramton. Simultaneous ascending auctions. MIT Press, 2006. → pages 2[19] C. Daskalakis, G. Schoenebeck, G. Valiant, and P. Valiant. On the complexity of Nash equilibria ofaction-graph games. In SODA, 2009. → pages 8, 18, 118[20] C. Daskalakis, P. W. Goldberg, and C. H. Papadimitriou. The complexity of computing a Nashequilibrium. SIAM Journal on Computing, 39:195–259, 2009. → pages 8[21] F. De Sinopoli. On the generic finiteness of equilibrium outcomes in plurality games. Games andEconomic Behavior, 34(2):270–286, February 2001. → pages 115[22] Y. Desmedt and E. Elkind. Equilibria of plurality voting with abstentions. 
Appendix A

Using the PosEc API

Recall that PosEc's main function is to take mechanisms and settings and convert them into (B)AGGs. Thus, the most essential call is agg = makeAGG(setting, mechanism). Then, the AGG object can be stored to a file in a format that Gambit can read: agg.saveToFile(filename). The challenging part of using PosEc is specifying settings and mechanisms.
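To make the workflow concrete before getting into the details, here is a minimal, illustrative sketch of a toy two-agent "dictatorship" mechanism. This example is not part of the PosEc distribution; it assumes that Setting, Mechanism and makeAGG are importable from posec.posec_core as documented in Appendix B, and every argument used here is explained in the sections below.

    from posec.posec_core import Setting, Mechanism, makeAGG

    n = 2
    O = ["A", "B"]                    # the two possible outcomes
    Theta = ["t", "t"]                # both agents have the same (dummy) type

    def u(i, theta, o, a_i):
        # agent 0 prefers outcome "A", agent 1 prefers outcome "B"
        return 1.0 if o == ("A" if i == 0 else "B") else 0.0

    def A(setting, i, theta_i):
        return ["A", "B"]             # each agent names an outcome

    def M(setting, a_N):
        return a_N.action(0)          # the mechanism picks agent 0's announcement

    setting = Setting(n, O, Theta, u)
    mechanism = Mechanism(A, M)
    agg = makeAGG(setting, mechanism)
    agg.saveToFile("dictatorship.agg")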
A.1 Representing Settings

In PosEc, settings are represented by Python objects. Their constructors are designed to mimic the mathematical representation in the body of the paper. The constructors have the following forms:

• Setting(n, O, Theta, u),
• BayesianSetting(n, O, Theta, P, u),
• ProjectedSetting(n, O, Theta, u, Psi, pi), and
• ProjectedBayesianSetting(n, O, Theta, P, u, Psi, pi),

where
n is an integer specifying the number of agents,
O is a "set-like object" containing the set of possible outcomes (we define a set-like object to be any object that has __contains__() and __eq__() methods; this includes list, tuple and set, as well as several special-purpose classes provided by PosEc),
Theta is the (finite) collection of types, which must be hashable (in non-Bayesian games, this is assumed to be an n-length list/tuple where Theta[i] is the type of agent i),
P is an n-length vector of type distributions represented as Distribution objects (essentially mappings from Theta to real numbers),
u is a utility function (described below),
Psi is a set-like object containing the set of projected outcomes, and
pi is a projection function (described below).

Every utility function has the signature u(i, theta, o, a_i) where
i is an integer indicating the agent number,
theta is a TypeProfile object (essentially an n-length vector of types, but with accessor methods described below),
o is an outcome (from set O), and
a_i is agent i's action.
Every utility function returns a real value.

Every projection function has the signature pi(i, o) where
i is the agent number and
o is an outcome.
Every projection function returns a projected outcome (i.e., an element of the set Psi).

A.2 Set-Like Classes for Outcome Spaces

PosEc includes several set-like classes to deal with the fact that mechanism outcome spaces are often too large to represent with conventional Python collections such as list or set.

RealSpace(k) is a class for k-dimensional real vectors.
CartesianProduct(*factors) is a class for Cartesian products of set-like objects. For example, one could create the outcome space for a single-good auction with n bidders (i.e., where the outcome space consists of an allocation between 0 and n−1, and an n-length vector of payments) as follows: O = CartesianProduct(range(n), RealSpace(n)).
Permutations(S) is a class for every permutation of the elements of collection S.
PowerSet(S) is a class containing every subset of S, while PartitionMatroid(S, k) is a class containing every subset of S of length at most k.
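Putting Sections A.1 and A.2 together, the following sketch (again illustrative rather than taken from the PosEc distribution, with arbitrary valuations and an assumed posec.posec_core import path) builds a full-information single-good auction setting whose outcomes pair a winner with a payment vector:

    from posec.posec_core import Setting, CartesianProduct, RealSpace

    n = 3
    valuations = [10, 7, 4]                        # Theta[i] is agent i's type (its valuation)
    O = CartesianProduct(range(n), RealSpace(n))   # outcomes: (winner, n-vector of payments)

    def u(i, theta, o, a_i):
        winner, payments = o
        value = theta.type(i) if winner == i else 0.0   # theta is a TypeProfile
        return value - payments[i]                      # quasilinear utility

    setting = Setting(n, O, valuations, u)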
A.3 Representing Mechanisms

PosEc provides two abstracted mechanism classes: mechanisms with and without projections. These classes have nearly identical constructors: Mechanism(A, M) and ProjectedMechanism(A, M).

A is an action-set function, which has the signature A(setting, i, theta_i) where
setting is any valid setting,
i is the agent number, and
theta_i is that agent's type.
The output of an action-set function is always a container (e.g., list, tuple) of actions.

M is an outcome function. The output of any outcome function is either an outcome (from O) or a Distribution over outcomes.

For a Mechanism, the outcome function has the signature M(setting, a_N) where
setting is a valid setting and
a_N is an ActionProfile object (essentially an n-length vector of actions, but with accessor methods described below).

For a ProjectedMechanism, the outcome function has the signature M(setting, i, theta_i, a_N) where
setting is a valid setting,
i is an agent number,
theta_i is agent i's type and
a_N is an ActionProfile object.

PosEc includes two probability distribution classes: UniformDistribution(events) and Distribution(events, probabilities). Both accept finite collections of hashable events.

A.4 Accessor Methods

Both ActionProfile and TypeProfile objects have many accessor methods. At the most basic level, each is an n-length vector where v[i] returns the ith element.

The accessor methods for an ActionProfile are:
• a_N.action(j) – returns the action played by agent j
• a_N.any(A, agents = None) – returns True iff any agent plays an action in list A. If the agents parameter is not None, it can be a collection of agents to consider.
• a_N.argmax(A, fn = lambda x: x, default = None) – returns some action a where (1) a is in A, (2) a is played by at least one agent, and (3) a maximizes fn() subject to (1) and (2). default is returned if no agent plays an action in A.
• a_N.argmin(A, fn = lambda x: x, default = None) – returns some action in A or default (see argmax)
• a_N.count(a) – returns a count of how many agents played action a
• a_N.max(A, fn = lambda x: x, default = None) – returns fn(a) or default (see argmax)
• a_N.min(A, fn = lambda x: x, default = None) – returns fn(a) or default (see argmax)
• a_N.plays(j, a_j) – returns True iff agent j played action a_j. Optionally, j can be a collection of agents.
• a_N.sum(A) – returns a count of how many agents played any action in collection A.
• a_N.weightedSum(A, W) – returns a weighted sum of how many agents played any action in collection A (the weight for action A[j] is W[j]).

The accessor methods for a TypeProfile are:
• theta.argmax(T, fn = lambda x: x, default = None) – returns some type t where (1) t is in T, (2) t is the type of at least one agent, and (3) t maximizes fn() subject to (1) and (2). default is returned if no agent has any type in T.
• theta.any(T) – returns True iff at least one agent has a type in T
• theta.argmin(T, fn = lambda x: x, default = None) – returns some type in T or default (see argmax)
• theta.count(t) – returns a count of how many agents have type t
• theta.hasType(j, theta) – returns True iff agent j has type theta
• theta.max(T, fn = lambda x: x, default = None) – returns fn(t) or default (see argmax)
• theta.min(T, fn = lambda x: x, default = None) – returns fn(t) or default (see argmax)
• theta.sum(T) – returns a total of how many agents have types in T
• theta.type(j) – returns the type of agent j
• theta.weightedSum(T, W) – returns a weighted sum of how many agents had any type in collection T (the weight for type T[j] is W[j]).
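To illustrate how the accessor methods are used inside an outcome function, here is a sketch of a first-price sealed-bid mechanism for the single-good setting constructed above. This is not the FirstPriceAuction class shipped with PosEc (documented in Appendix C); bids are restricted to an arbitrary integer grid, and ties are broken naively in favour of the lowest-numbered agent.

    from posec.posec_core import Mechanism

    BIDS = range(11)                       # allowable bids: 0, 1, ..., 10 (arbitrary grid)

    def A(setting, i, theta_i):
        # an agent may bid any integer up to its valuation
        return [b for b in BIDS if b <= theta_i]

    def M(setting, a_N):
        winning_bid = a_N.max(BIDS)        # the highest bid actually placed
        winner = [j for j in range(setting.n) if a_N.plays(j, winning_bid)][0]
        payments = [float(winning_bid) if j == winner else 0.0
                    for j in range(setting.n)]
        return (winner, payments)          # an element of the CartesianProduct outcome space

    mechanism = Mechanism(A, M)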
Appendix B

Documentation of the PosEc API

B.1 Module posec.bbsi

B.1.1 Functions

preprocess(agg)
collapseTest(f, C, t)
testCut(agg, act, arcIndex)
tryCutNode(agg, act, node)
tryCut(agg, act, arcIndex, strict=False)
    If possible (and, optionally, only if strictly improving), cut an arc.
findArcIndex(agg, act, otherNode)
makeSum(agg, act, existingSums, weights, functionName)
    Create a new weighted sum node that combines a bunch of existing inputs.
makeOr(agg, act, sumNode, functionName)
    Create a new weighted sum node that combines a bunch of existing inputs.
makeMax(agg, act, existingMaxes, weights, functionName)
compressByILS(agg, seed=None)
anonymityCuts(agg)

B.1.2 Variables

• SUM_LOG — Value: []
• __package__ — Value: 'posec'

B.2 Module posec.mathtools

B.2.1 Functions

isDistribution(d)
    d is a list of probabilities.
isDelta(d)
cartesianProduct(listOfLists)
permutations(S)
subsetPermutations(S)
product(S)
intToBitVector(n, minBits=0)
powerSet(S, cast_fn=<type 'list'>)

B.2.2 Variables

• __package__ — Value: 'posec'

B.3 Module posec.posec_core

B.3.1 Functions

makeAGG(setting, mechanism, symmetry=False, transform=None, quiet=False, bbsi_level=0)
    Takes a Setting and a Mechanism and returns a corresponding pyagg.AGG object.
    transform is a function (setting, i, theta_i, a_i, o, theta_N) that returns a real value. quiet==True produces no standard output. bbsi_level==0 means to do no BBSI; bbsi_level==1 means to do limited BBSI (suitable for fine-tuning games); bbsi_level==2 means to do extensive BBSI (suitable for discovering coarse structure).

explain(agg, acts=None)
    Describes the set of function calls that can be used to produce a strategically equivalent AGG. Optionally, acts can cover a specific subset of action nodes.

B.3.2 Variables

• __package__ — Value: 'posec'

B.3.3 Class ActionProfile

Inherits from: posec.posec_core._InstrumentedDataStore

Representation of an action profile with efficient ways of accessing the data. An ActionProfile is passed to a Mechanism's outcome function as the argument a_N.

Methods

count(self, a)
    Returns a count of how many agents played action a.
sum(self, A)
    Returns a count of how many agents played any action in collection A.
weightedSum(self, A, W)
    Returns a weighted sum of how many agents played any action in collection A (the weight for action A[j] is W[j]).
argmax(self, A, fn=lambda x: x, default=None)
    Returns some action a where (1) a is in A, (2) a is played by at least one agent, and (3) a maximizes fn() subject to (1) and (2). default is returned if no agent plays an action in A.
max(self, A, fn=lambda x: x, default=None)
    Returns fn(a) or default (see argmax).
argmin(self, A, fn=lambda x: x, default=None)
    Returns some action in A or default (see argmax).
min(self, A, fn=lambda x: x, default=None)
    Returns fn(a) or default (see argmax).
any(self, A, agents=None)
    Returns True iff any agent plays an action in list A. If the agents parameter is not None, it can be a list of agents to consider.
plays(self, j, a_j)
    Returns True iff agent j played action a_j. Optionally, j can be a list or tuple of agents.
action(self, j)
    Returns the action played by agent j.
__getitem__(self, j)
    Returns the action of agent j.

Inherited from posec.posec_core._InstrumentedDataStore: __init__()
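As a brief usage note (a sketch that assumes setting and mechanism objects have been built as in Appendix A), the keyword arguments of makeAGG control whether PosEc searches for additional structure itself, and explain() reports how the resulting game was built:

    from posec.posec_core import makeAGG, explain

    # bbsi_level=2 requests extensive BBSI (discovery of coarse structure);
    # quiet=True suppresses progress output.
    agg = makeAGG(setting, mechanism, bbsi_level=2, quiet=True)
    explain(agg)   # describes calls that reproduce a strategically equivalent AGG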
B.3.4 Class TypeProfile

Inherits from: posec.posec_core._InstrumentedDataStore

Representation of a type profile with efficient ways of accessing the data. A TypeProfile is passed to a Setting's utility function as the argument theta_N.

Methods

hasType(self, j, theta)
    Returns True iff agent j has type theta.
type(self, j)
    Returns the type of agent j.
__getitem__(self, j)
    Returns the type of agent j.
count(self, t)
    Returns a count of how many agents have type t.
sum(self, T)
    Returns a total of how many agents have types in T.
weightedSum(self, T, W)
    Returns a weighted sum of how many agents had any type in collection T (the weight for type T[j] is W[j]).
any(self, T)
    Returns True iff at least one agent has a type in T.
argmax(self, T, fn=lambda x: x, default=None)
    Returns some type t where (1) t is in T, (2) t is the type of at least one agent, and (3) t maximizes fn() subject to (1) and (2). default is returned if no agent has any type in T.
max(self, T, fn=lambda x: x, default=None)
    Returns fn(t) or default (see argmax).
argmin(self, T, fn=lambda x: x, default=None)
    Returns some type in T or default (see argmax).
min(self, T, fn=lambda x: x, default=None)
    Returns fn(t) or default (see argmax).

Inherited from posec.posec_core._InstrumentedDataStore: __init__()

B.3.5 Class RealSpace

A set-like object that contains vectors of floating-point numbers. Useful for defining the (projected) outcome space in a Setting.

Methods

__init__(self, dimensions=None)
    If dimensions==None, it contains all floating-point numbers. If dimensions>=0, it contains all dimensions-length lists and tuples of floating-point numbers.
__contains__(self, element)
    If dimensions==None, it contains all floating-point numbers. If dimensions>=1, it contains all dimensions-length lists and tuples of floating-point numbers.
__eq__(self, other)
    Returns True iff other is a RealSpace instance with the same dimensions.
__repr__(self)

B.3.6 Class CartesianProduct

A set-like object that contains vectors whose elements come from the "factor" sets. Useful for defining the (projected) outcome space in a Setting.

Methods

__init__(self, *factors, **options)
    factors is a list of set-like objects (let k denote its length); the object contains only k-length vectors where each element is in the corresponding factor. Options: memberType=<type> — contains only objects of type <type> (also supports tuples of types).
__contains__(self, vector)
    Returns True iff vector is a k-length vector where each element is in the corresponding factor; if memberType!=None, then vector must also have type <memberType>.
__eq__(self, other)
    True iff both inputs are CartesianProducts with the same factors and memberType.
__repr__(self)
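A small, hedged illustration of the membership semantics just described (the expected results follow from the documented __contains__ behaviour, but the exact import path is an assumption):

    from posec.posec_core import RealSpace, CartesianProduct

    payments = RealSpace(3)                      # all 3-dimensional real vectors
    O = CartesianProduct(range(3), payments)     # (winner, payment-vector) pairs

    [0.0, 2.5, 0.0] in payments     # expected: True  (a 3-length list of floats)
    (1, [0.0, 2.5, 0.0]) in O       # expected: True
    (5, [0.0, 2.5, 0.0]) in O       # expected: False (5 is not in range(3))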
B.3.7 Class Setting

Known Subclasses: posec.posec_core.BayesianSetting, posec.posec_core.ProjectedSetting

A data structure representing a (not projected) full-information setting.

Methods

__init__(self, n, O, Theta, u)

Class Variables

• n — Value: TypeCheckDescriptor("n", "integer number of agents", lam...
• O — Value: TypeCheckDescriptor("O", "set-like container of outcomes...
• Theta — Value: TypeCheckDescriptor("Theta", "n-length vector of (hash()...
• u — Value: TypeCheckDescriptor("u", "Utility function; u(i,theta,o,...

B.3.8 Class ProjectedSetting

Inherits from: posec.posec_core.Setting
Known Subclasses: posec.posec_core.ProjectedBayesianSetting, posec.applications.position_auctions_externalities.HybridSetting, posec.applications.position_auctions.NoExternalitySetting

Methods

__init__(self, n, O, Theta, u, Psi, pi)
    Overrides: posec.posec_core.Setting.__init__

Class Variables

• Psi — Value: TypeCheckDescriptor("Psi", "set-like container of projec...
• pi — Value: TypeCheckDescriptor("pi", "Projection function; pi(i,o) ...
• u — Value: TypeCheckDescriptor("u", "Projected utility function; u(...
Inherited from posec.posec_core.Setting (Section B.3.7): O, Theta, n

B.3.9 Class BayesianSetting

Inherits from: posec.posec_core.Setting
Known Subclasses: posec.posec_core.ProjectedBayesianSetting

Methods

__init__(self, n, O, Theta, P, u)
    Overrides: posec.posec_core.Setting.__init__

Class Variables

• P — Value: TypeCheckDescriptor("P", "n-length vector of Distributio...
• Theta — Value: TypeCheckDescriptor("Theta", "Finite collection of (hash...
Inherited from posec.posec_core.Setting (Section B.3.7): O, n, u

B.3.10 Class ProjectedBayesianSetting

Inherits from: posec.posec_core.ProjectedSetting and posec.posec_core.BayesianSetting (both subclasses of posec.posec_core.Setting)

Methods

__init__(self, n, O, Theta, P, u, Psi, pi)
    Overrides: posec.posec_core.BayesianSetting.__init__

Class Variables

Inherited from posec.posec_core.ProjectedSetting (Section B.3.8): Psi, pi, u
Inherited from posec.posec_core.Setting (Section B.3.7): O, Theta, n
Inherited from posec.posec_core.BayesianSetting (Section B.3.9): P

B.3.11 Class Mechanism

Known Subclasses: posec.posec_core.ProjectedMechanism, posec.applications.voting.AbstractVotingMechanism

Methods

__init__(self, A, M)

Class Variables

• A — Value: TypeCheckDescriptor("A", "Action function; A(setting, i,...
• M — Value: TypeCheckDescriptor("M", "Outcome function; M(setting, a...

B.3.12 Class ProjectedMechanism

Inherits from: posec.posec_core.Mechanism
Known Subclasses: posec.applications.position_auctions.NoExternalityPositionAuction, posec.applications.basic_auctions.FirstPriceAuction

Methods

Inherited from posec.posec_core.Mechanism (Section B.3.11): __init__()

Class Variables

• M — Value: TypeCheckDescriptor("M", "Outcome function; M(setting, i...
Inherited from posec.posec_core.Mechanism (Section B.3.11): A

B.3.13 Class Distribution

Known Subclasses: posec.posec_core.UniformDistribution

Representation of a distribution with finite support.

Methods

__init__(self, support, probabilities)
    support and probabilities are equal-length vectors.
__iter__(self)
    Returns an iterator over 2-tuples, each containing the probability of an event and the event.
probability(self, event)
    Returns the probability of event. Returns 0.0 if event is not in support.
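As an illustration of how the Bayesian variants fit together, here is a hedged sketch of a two-agent Bayesian setting with binary types; the numbers are arbitrary and the import path is an assumption.

    from posec.posec_core import BayesianSetting, Distribution, UniformDistribution

    n = 2
    O = ["agent0 wins", "agent1 wins"]
    Theta = ["low", "high"]                              # hashable types
    P = [Distribution(["low", "high"], [0.75, 0.25]),    # agent 0's type distribution
         UniformDistribution(["low", "high"])]           # agent 1's type distribution

    def u(i, theta, o, a_i):
        value = 10.0 if theta.type(i) == "high" else 1.0
        return value if o == "agent%d wins" % i else 0.0

    setting = BayesianSetting(n, O, Theta, P, u)
    P[0].probability("high")    # 0.25; events outside the support return 0.0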
B.3.14 Class UniformDistribution

Inherits from: posec.posec_core.Distribution

Methods

__init__(self, support)
    support and probabilities are equal-length vectors. (Overrides posec.posec_core.Distribution.__init__; inherited documentation.)

Inherited from posec.posec_core.Distribution (Section B.3.13): __iter__(), probability()

B.4 Module posec.pyagg

Python wrapper class for representing action-graph games.

B.4.1 Functions

fromLtoLL(L, aSizes)
fromLLtoLC(LL)
fromLLtoString(LL, actionDelim=' ', agentDelim='\t')
purgeBarren(agg)
    Removes any function nodes that have no children.

B.4.2 Variables

• FN_TYPE_SUM — Value: 0
• FN_TYPE_OR — Value: 1
• FN_TYPE_WEIGHTED_SUM — Value: 10
• FN_TYPE_WEIGHTED_MAX — Value: 12
• gnm — Value: Solver("gambit-gnm -n {runs} {filename}", {"runs": 1})
• simpdiv — Value: Solver("gambit-simpdiv -n {runs} {filename}", {"runs": 1})
• ipa — Value: Solver("gambit-ipa -n {runs} {filename}", {"runs": 1})
• enumpure — Value: Solver("gambit-enumpure {filename}")
• enumpoly — Value: Solver("gambit-enumpoly {filename}")
• SOLVERS — Value: [gnm, simpdiv, ipa, enumpure, enumpoly]
• __package__ — Value: 'posec'

B.4.3 Class AGG_File

Known Subclasses: posec.pyagg.AGG, posec.pyagg.BAGG_File

Methods

__init__(self, filename)
parse(self, strategyString)
interpretProfile(self, sp)
fixStrategy(self, stratStr)
test(self, strategyString)
    Returns an n-length vector of the agents' payoffs.
isNE(self, strategyString)
    Tests whether or not a given strategy profile is a (Bayes) Nash equilibrium.
__del__(self)

B.4.4 Class BAGG_File

Inherits from: posec.pyagg.AGG_File
Known Subclasses: posec.pyagg.BAGG

    >>> import AGG_Examples
    >>> bagg = AGG_Examples.MixedValueChicken()
    >>> bagg.saveToFile("mvc.bagg")
    >>> bagg.isNE("1 0 1 0 1 0 1 0")
    False
    >>> bagg.testExAnte("1 0 1 0 1 0 1 0")
    [0.0, 0.0]
    >>> baggf = BAGG_File("mvc.bagg")
    >>> baggf.Theta
    ['High', 'Low']
    >>> baggf.S["High"]
    ["('High', 'Swerve')", "('High', 'Straight')"]
    >>> baggf.P
    {'1': {'High': 0.5, 'Low': 0.5}, '2': {'High': 0.5, 'Low': 0.5}}
    >>> baggf.isNE("1 0 1 0 1 0 1 0")
    False
    >>> baggf.testExAnte("1 0 1 0 1 0 1 0")
    [0.0, 0.0]

Methods

test(self, strategyString)
    Returns an ex interim expected payoff profile.
    Overrides: posec.pyagg.AGG_File.test
testExAnte(self, strategyString)

Inherited from posec.pyagg.AGG_File (Section B.4.3): __del__(), __init__(), fixStrategy(), interpretProfile(), isNE(), parse()
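The Solver variables listed above wrap command-line Gambit executables, and their solve() methods yield encoded strategy profiles, as in the doctests in Sections B.4.4 and B.4.5. A hedged usage sketch, assuming an AGG object such as the output of makeAGG and a working Gambit installation on the path:

    from posec.pyagg import gnm, simpdiv

    # 'agg' is assumed to be a pyagg.AGG produced by makeAGG (Appendix A).
    for profile in gnm.solve(agg):     # each yielded string encodes one equilibrium
        print(profile)
    # agg.isNE(profileString) can be used to verify a candidate profile.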
B.4.5 Class AGG

Inherits from: posec.pyagg.AGG_File
Known Subclasses: posec.pyagg.BAGG

Methods

sizeAsAGG(self)
sizeAsNFG(self)
__init__(self, N, A, S, F, v, f, u, title=None)
    N is a list of players; A is a list of actions; S is a mapping from players to lists of actions; F is a list of projection (aka function) nodes; v is a list of arcs (2-tuples of start and end nodes); f is a mapping from projection node to type of projection (an integer); u is a mapping from an action node to a payoff mapping (tuples of inputs to real values).

    >>> import AGG_Examples
    >>> agg = AGG_Examples.PrisonersDilemma()
    >>> agg.saveToFile("pd.agg")
    >>> agg.test("NE,1,0,1,0")
    [3.0, 3.0]
    >>> agg.isNE("NE,1,0,1,0")
    False
    >>> string.strip(simpdiv.solve(agg).next())
    'NE,0,1,0,1'
    >>> string.strip(gnm.solve(agg).next())
    'NE,0.000000,1.000000,0.000000,1.000000'
    >>> f = open("pd2.agg","w")
    >>> f.write(open("pd.agg","r").read())
    >>> f.close()
    >>> aggf = AGG_File("pd2.agg")
    >>> aggf.test("NE,1,0,1,0")
    [3.0, 3.0]
    >>> aggf.isNE("NE,1,0,1,0")
    False
    >>> string.strip(list(enumpoly.solve(aggf))[0])
    'NE,0.000000,1.000000,0.000000,1.000000'

    Overrides: posec.pyagg.AGG_File.__init__
arcsTo(self, node)
neighbours(self, node)
saveToFile(self, filename)

Inherited from posec.pyagg.AGG_File (Section B.4.3): __del__(), fixStrategy(), interpretProfile(), isNE(), parse(), test()

Class Variables

• signature — Value: '#AGG\n'

B.4.6 Class BAGG

Inherits from: posec.pyagg.BAGG_File and posec.pyagg.AGG (both subclasses of posec.pyagg.AGG_File)

Methods

sizeAsNFG(self)
    Returns the size of the game as a Bayesian normal-form game.
    Overrides: posec.pyagg.AGG.sizeAsNFG
__init__(self, N, Theta, P, A, S, F, v, f, u, title=None)
    N is a list of players; Theta is a list of types; P is a mapping from players to mappings from types to probability; A is a list of actions; S is a mapping from types to lists of actions; F is a list of function nodes; v is a list of arcs (2-tuples of start and end nodes); f is a mapping from projection node to type of projection (an integer); u is a mapping from an action node to a payoff mapping (tuples of inputs to real values).
    Overrides: posec.pyagg.AGG.__init__

Inherited from posec.pyagg.BAGG_File (Section B.4.4): test(), testExAnte()
Inherited from posec.pyagg.AGG_File (Section B.4.3): __del__(), __init__(), fixStrategy(), interpretProfile(), isNE(), parse()
Inherited from posec.pyagg.AGG (Section B.4.5): arcsTo(), neighbours(), saveToFile(), sizeAsAGG()

Class Variables

• signature — Value: '#BAGG\n'

Appendix C

Documentation of the Included PosEc Applications

C.1 Package applications

C.1.1 Modules

• basic_auctions (Section C.2)
• position_auctions: This is strictly for no-externality position auctions. (Section C.3)
• position_auctions_externalities (Section C.4)
• voting (Section C.5)

C.1.2 Variables

• __package__ — Value: None

C.2 Module applications.basic_auctions

C.2.1 Functions

ProjectedBayesianSetting(typeDistros)
    typeDistros is an n-length vector of vectors of floats; for each of these vectors, a value of x in the ith position denotes that an agent has a valuation of i with probability x.
welfareTransform(setting, i, theta_N, o, a_i)
paymentTransform(setting, i, theta_N, o, a_i)

C.2.2 Variables

• SCALE — Value: 10
• __package__ — Value: 'applications'
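A hedged sketch of the module-level helper just listed: constructing a Bayesian single-good auction setting from explicit valuation distributions (the probabilities are arbitrary, and the import path is an assumption):

    from posec.applications import basic_auctions

    # typeDistros[i][x] is the probability that agent i's valuation equals x.
    typeDistros = [[0.1, 0.2, 0.3, 0.4],
                   [0.25, 0.25, 0.25, 0.25]]
    setting = basic_auctions.ProjectedBayesianSetting(typeDistros)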
C.2.3 Class SingleGoodOutcome

A named tuple: SingleGoodOutcome(allocation, payments).

Properties

• allocation — alias for field number 0
• payments — alias for field number 1

Methods

__new__(cls, allocation, payments)
    Create a new instance of SingleGoodOutcome(allocation, payments).
__repr__(self)
    Return a nicely formatted representation string.
__getnewargs__(self)
    Return self as a plain tuple. Used by copy and pickle.
(The remaining methods are inherited from tuple and object.)

C.2.4 Class ProjectedOutcome

A named tuple: ProjectedOutcome(i_win, my_payment).

Properties

• i_win — alias for field number 0
• my_payment — alias for field number 1

Methods

__new__(cls, i_win, my_payment)
    Create a new instance of ProjectedOutcome(i_win, my_payment).
__repr__(self)
    Return a nicely formatted representation string.
__getnewargs__(self)
    Return self as a plain tuple. Used by copy and pickle.
(The remaining methods are inherited from tuple and object.)

C.2.5 Class FirstPriceAuction

Inherits from: posec.posec_core.ProjectedMechanism (a subclass of posec.posec_core.Mechanism)
Known Subclasses: applications.basic_auctions.AllPayAuction

Methods

__init__(self, scale=10)
    Overrides: posec.posec_core.Mechanism.__init__
A(self, setting, i, theta_i)
    Overrides: posec.posec_core.Mechanism.A
M(self, setting, i, theta_i, a_N)
    Overrides: posec.posec_core.Mechanism.M

C.2.6 Class AllPayAuction

Inherits from: applications.basic_auctions.FirstPriceAuction (a ProjectedMechanism)

Methods

M(self, setting, i, theta_i, a_N)
    Overrides: posec.posec_core.Mechanism.M
Inherited from applications.basic_auctions.FirstPriceAuction (Section C.2.5): A(), __init__()
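Combining the two previous sketches, a game for the bundled first-price auction under the Bayesian setting constructed above might be generated as follows (a sketch; the import paths and the interaction with the default scale=10 bid grid are assumptions, not taken from the PosEc distribution):

    from posec.posec_core import makeAGG
    from posec.applications.basic_auctions import FirstPriceAuction

    mechanism = FirstPriceAuction()        # default scale=10
    game = makeAGG(setting, mechanism)     # 'setting' from the sketch above
    game.saveToFile("fpa.bagg")            # a Bayesian setting presumably yields a BAGG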
C.3 Module applications.position_auctions

This module is strictly for no-externality position auctions.

C.3.1 Functions

normalizeSetting(setting, k)
    Normalizes values and CTRs so that the highest valuation is exactly k and the highest CTR is exactly 1. Makes no change to quality scores.
makeAlpha(n, m, r)
makeLNAlpha(n, m, r)
EOS(n, m, k, seed=None)
Varian(n, m, k, seed=None)
BHN(n, m, k, seed=None)
BSS(n, m, k, seed=None)
BHN_LN(n, m, k, seed=None)
gauss2(rand, rho)
    Sample from a 2D Gaussian (with correlation rho).
LNdistro(rand, k)
LP(n, m, k, seed=None)
EOS_LN(n, m, k, seed=None)

C.3.2 Variables

• LN_CORR — Value: 0.4
• LN_params — Value: [0.0, 1.0, 0.0, 1.0]
• GENERATORS — Value: {"BHN-LN": BHN_LN, "V-LN": LP, "EOS-LN": EOS_LN, "EOS": E...
• __package__ — Value: 'applications'

C.3.3 Class Permutations

Methods

__init__(self, members, lengths=None)
__contains__(self, element)
__eq__(self, other)

C.3.4 Class NoExternalitySetting

Inherits from: posec.posec_core.ProjectedSetting (a subclass of posec.posec_core.Setting)

Methods

__init__(self, valuations, ctrs, qualities)
    Overrides: posec.posec_core.Setting.__init__
pi(self, i, o)
    Overrides: posec.posec_core.ProjectedSetting.pi
u(self, i, theta, po, a_i)
    Overrides: posec.posec_core.Setting.u

Class Variables

Inherited from posec.posec_core.ProjectedSetting (Section B.3.8): Psi
Inherited from posec.posec_core.Setting (Section B.3.7): O, Theta, n

C.3.5 Class NoExternalityPositionAuction

Inherits from: posec.posec_core.ProjectedMechanism (a subclass of posec.posec_core.Mechanism)

Methods

__init__(self, reserve=1, squashing=1.0, reserveType='UWR', rounding=None, tieBreaking='Uniform', pricing='GSP')
    Overrides: posec.posec_core.Mechanism.__init__
q(self, theta_i)
reservePPC(self, i, theta_i)
makeBid(self, theta_i, b, eb)
A(self, setting, i, theta_i)
    Overrides: posec.posec_core.Mechanism.A
projectedAllocations(self, i, theta_i, a_N)
    Returns a list of positions (all the positions that can happen depending on how tie-breaking goes); the last element in the list corresponds to losing every tie-break.
ppc(self, i, theta_i, a_N)
makeOutcome(self, alloc, price)
M(self, setting, i, theta_i, a_N)
    Overrides: posec.posec_core.Mechanism.M
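A hedged sketch of the intended workflow for this module: draw a position-auction instance from one of the generators above and compile a squashed GSP game. The generator, parameter values and import path are illustrative assumptions, and the generator is assumed to return a NoExternalitySetting object.

    from posec.posec_core import makeAGG
    from posec.applications.position_auctions import EOS, NoExternalityPositionAuction

    setting = EOS(5, 3, 8, seed=0)    # presumably 5 bidders, 3 positions, top value 8
    gsp = NoExternalityPositionAuction(reserve=1, squashing=0.5, pricing="GSP")
    agg = makeAGG(setting, gsp)
    agg.saveToFile("eos_gsp.agg")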
C.4 Module applications.position_auctions_externalities

C.4.1 Functions

cascade_UNI(n, m, k, seed=None)
cascade_LN(n, m, k, seed=None)
hybrid_UNI(n, m, k, seed=None)
hybrid_LN(n, m, k, seed=None)

C.4.2 Variables

• GENERATORS — Value: {"cascade_UNI": cascade_UNI, "cascase_LN": cascade_LN, "h...

C.4.3 Class HybridSetting

Inherits from: posec.posec_core.ProjectedSetting (a subclass of posec.posec_core.Setting)

Methods

__init__(self, valuations, ctrs, qualities, continuation_probabilities)
    Overrides: posec.posec_core.Setting.__init__
ctr(self, i, theta, projectedAllocation)
u(self, i, theta, po, a_i)
    Overrides: posec.posec_core.Setting.u

Class Variables

Inherited from posec.posec_core.ProjectedSetting (Section B.3.8): Psi, pi
Inherited from posec.posec_core.Setting (Section B.3.7): O, Theta, n

C.4.4 Class ExternalityPositionAuction

Inherits from: applications.position_auctions.NoExternalityPositionAuction (a ProjectedMechanism)

Methods

makeOutcome(self, alloc, price)
    Overrides: applications.position_auctions.NoExternalityPositionAuction.makeOutcome
makeBid(self, theta_i, b, eb)
    Overrides: applications.position_auctions.NoExternalityPositionAuction.makeBid
projectedAllocations(self, i, theta_i, a_N)
    Returns a list of all the projected allocations (a projected allocation is the list of externality types of agents ranked above i).
    Overrides: applications.position_auctions.NoExternalityPositionAuction.projectedAllocations
Inherited from applications.position_auctions.NoExternalityPositionAuction (Section C.3.5): A(), M(), __init__(), ppc(), q(), reservePPC()

C.5 Module applications.voting

C.5.1 Functions

leastFavorites(setting, i, theta_i)
    Returns a list of i's least favorite outcomes (or an empty list if the computation can't be done, due to CTD).
mostFavorites(setting, i, theta_i)
    Returns a list of i's most favorite outcomes (or an empty list if the computation can't be done, due to CTD).
isRanking(setting, i, theta_i)
    Returns outcomes listed from i's most to least favorite.
settingFromRankings(rankings, truthfulness=True)
urn_model_setting(n, m, a, seed)
uniform_Setting(n, m, seed)
impartialAnonymousCulture_Setting(n, m, seed)
threeUrn_Setting(n, m, seed)
jMajority_Setting(n, m, j, seed)
twoMajority_Setting(n, m, seed)
threeMajority_Setting(n, m, seed)
isSinglePeaked(ranking)
singlePeaked_Setting(n, m, seed)
uniformXY_Setting(n, m, seed)

C.5.2 Variables

• MECHANISM_CLASSES — Value: [Plurality, Approval, kApproval, Veto, Borda, InstantRunoff]
• __package__ — Value: 'applications'

C.5.3 Class AbstractVotingMechanism

Inherits from: posec.posec_core.Mechanism
Known Subclasses: applications.voting.Approval, applications.voting.Borda, applications.voting.InstantRunoff, applications.voting.Plurality

Provides a bunch of support features for real voting mechanisms.

Methods

__init__(self, randomTieBreak=True, removeDominatedStrategies=False, allowAbstain=False)
    Overrides: posec.posec_core.Mechanism.__init__
A(self, setting, i, theta_i)
    Returns self.actions(setting, i, theta_i) after (optionally) removing dominated strategies and adding an abstain action. Allows RDS, etc., to be re-used.
    Overrides: posec.posec_core.Mechanism.A
outcome(self, scores)
    Subroutine for M; identifies the maximal-score candidates and breaks ties appropriately.

Class Variables

• randomTieBreak — Value: True
• allowAbstain — Value: False
• removeDominatedStrategies — Value: False
Inherited from posec.posec_core.Mechanism (Section B.3.11): M

C.5.4 Class VoteTuple

Inherits from: tuple

Like a normal tuple, but it can also mark an action as truthful.

Methods

__repr__(self)
    repr(x). (Overrides object.__repr__; inherited documentation.)
(The remaining methods are inherited from tuple and object.)

Class Variables

• truthful — Value: False

C.5.5 Class Plurality

Inherits from: applications.voting.AbstractVotingMechanism (a subclass of posec.posec_core.Mechanism)

Methods

actions(self, setting, i, theta_i)
truthful(self, setting, i, theta_i, a)
dominated(self, setting, i, theta_i, a)
M(self, setting, a_N)
    Overrides: posec.posec_core.Mechanism.M
Inherited from applications.voting.AbstractVotingMechanism (Section C.5.3): A(), __init__(), outcome()

Class Variables

Inherited from applications.voting.AbstractVotingMechanism (Section C.5.3): allowAbstain, randomTieBreak, removeDominatedStrategies

C.5.6 Class Approval

Inherits from: applications.voting.AbstractVotingMechanism (a subclass of posec.posec_core.Mechanism)
Known Subclasses: applications.voting.Veto, applications.voting.kApproval

Methods

actions(self, setting, i, theta_i)
truthful(self, setting, i, theta_i, a)
dominated(self, setting, i, theta_i, a)
M(self, setting, a_N)
    Overrides: posec.posec_core.Mechanism.M
Inherited from applications.voting.AbstractVotingMechanism (Section C.5.3): A(), __init__(), outcome()

Class Variables

Inherited from applications.voting.AbstractVotingMechanism (Section C.5.3): allowAbstain, randomTieBreak, removeDominatedStrategies
C.5.7 Class kApproval

Inherits from: applications.voting.Approval (a subclass of applications.voting.AbstractVotingMechanism)

Methods

__init__(self, k, randomTieBreak=True, removeDominatedStrategies=False, allowAbstain=False)
    Overrides: posec.posec_core.Mechanism.__init__
actions(self, setting, i, theta_i)
    Overrides: applications.voting.Approval.actions
truthful(self, setting, i, theta_i, a)
    Overrides: applications.voting.Approval.truthful
dominated(self, setting, i, theta_i, a)
    Overrides: applications.voting.Approval.dominated
Inherited from applications.voting.Approval (Section C.5.6): M()
Inherited from applications.voting.AbstractVotingMechanism (Section C.5.3): A(), outcome()

Class Variables

Inherited from applications.voting.AbstractVotingMechanism (Section C.5.3): allowAbstain, randomTieBreak, removeDominatedStrategies

C.5.8 Class Veto

Inherits from: applications.voting.Approval (a subclass of applications.voting.AbstractVotingMechanism)

Methods

actions(self, setting, i, theta_i)
    Overrides: applications.voting.Approval.actions
truthful(self, setting, i, theta_i, a)
    Overrides: applications.voting.Approval.truthful
dominated(self, setting, i, theta_i, a)
    Overrides: applications.voting.Approval.dominated
Inherited from applications.voting.Approval (Section C.5.6): M()
Inherited from applications.voting.AbstractVotingMechanism (Section C.5.3): A(), __init__(), outcome()

Class Variables

Inherited from applications.voting.AbstractVotingMechanism (Section C.5.3): allowAbstain, randomTieBreak, removeDominatedStrategies

C.5.9 Class Borda

Inherits from: applications.voting.AbstractVotingMechanism (a subclass of posec.posec_core.Mechanism)

Methods

actions(self, setting, i, theta_i)
truthful(self, setting, i, theta_i, a)
dominated(self, setting, i, theta_i, a)
M(self, setting, a_N)
    Overrides: posec.posec_core.Mechanism.M
Inherited from applications.voting.AbstractVotingMechanism (Section C.5.3): A(), __init__(), outcome()

Class Variables

Inherited from applications.voting.AbstractVotingMechanism (Section C.5.3): allowAbstain, randomTieBreak, removeDominatedStrategies

C.5.10 Class InstantRunoff

Inherits from: applications.voting.AbstractVotingMechanism (a subclass of posec.posec_core.Mechanism)

Methods

__init__(self, allowAbstain=False)
    FIXME: So far, ties must be broken alphabetically.
    Overrides: posec.posec_core.Mechanism.__init__
actions(self, setting, i, theta_i)
truthful(self, setting, i, theta_i, a)
dominated(self, setting, i, theta_i, a)
    Dominance checking is not implemented by InstantRunoff.
M(self, setting, a_N)
    Overrides: posec.posec_core.Mechanism.M
Inherited from applications.voting.AbstractVotingMechanism (Section C.5.3): A(), outcome()

Class Variables

Inherited from applications.voting.AbstractVotingMechanism (Section C.5.3): allowAbstain, randomTieBreak, removeDominatedStrategies
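Finally, a hedged sketch of generating a plurality-voting game from one of the preference generators above (the parameters are arbitrary; the import path and the exact return type of uniform_Setting are assumptions):

    from posec.posec_core import makeAGG
    from posec.applications.voting import uniform_Setting, Plurality

    setting = uniform_Setting(11, 4, seed=0)   # 11 voters, 4 candidates, uniform-random preferences
    plurality = Plurality(randomTieBreak=True, removeDominatedStrategies=True)
    agg = makeAGG(setting, plurality)
    agg.saveToFile("plurality.agg")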

Cite

Citation Scheme:

        

Citations by CSL (citeproc-js)

Usage Statistics

Share

Embed

Customize your widget with the following options, then copy and paste the code below into the HTML of your page to embed this item in your website.
                        
                            <div id="ubcOpenCollectionsWidgetDisplay">
                            <script id="ubcOpenCollectionsWidget"
                            src="{[{embed.src}]}"
                            data-item="{[{embed.item}]}"
                            data-collection="{[{embed.collection}]}"
                            data-metadata="{[{embed.showMetadata}]}"
                            data-width="{[{embed.width}]}"
                            async >
                            </script>
                            </div>
                        
                    
IIIF logo Our image viewer uses the IIIF 2.0 standard. To load this item in other compatible viewers, use this url:
http://iiif.library.ubc.ca/presentation/dsp.24.1-0166200/manifest

Comment

Related Items