UBC Theses and Dissertations
Essays on individual decision making and recoverability of preferences. Zrill, Lanny Reuben (2015)

Essays on Individual Decision Making and Recoverability of Preferences

by

Lanny Reuben Zrill

B.A. (Honours), Simon Fraser University, 2006
M.A., Queen's University, 2007

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY in The Faculty of Graduate and Postdoctoral Studies (Economics)

THE UNIVERSITY OF BRITISH COLUMBIA (Vancouver)

April 2015

© Lanny Reuben Zrill 2015

Abstract

A decision maker's preferences are revealed through their choices; hence a desirable method for recovering preferences should take into account as much preference information as is available. In the context of choice from convex budget sets, we introduce the Money Metric Index (MMI), which recovers parameters that minimize the inconsistency between a decision maker's revealed preferences and the rankings implied by a given parametric family of utility functions. This approach differs from statistical methods, which discard much of this preference information and select parameters based on a comparison between observed and predicted choices alone. Additionally, the MMI has many practical advantages: it is simple to compute, it can accommodate non-convex preferences, and it can be decomposed into separate measures of inconsistency and mis-specification.

In Chapter 2, we compare these methods for recovering parameters using a two-stage experimental design. We use the data from the first part of the experiment to construct choices in the second part that can be used to evaluate the predictive success of the two methods. We find that, in all cases, the MMI outperforms the statistical method in terms of its ability to accurately predict subject choices.
Additionally, we find substantial evidence of First-Order Risk Aversion, both with respect to the recovered parameters and through direct inspection of subject choices.

The final chapter approaches this problem from a different perspective by considering to what extent decision makers are capable of making consistent choices in a laboratory context. We implement a simple experiment in order to assess the effect of computational difficulty on a decision maker's propensity to randomize their choices. We find substantial evidence of stochastic choice when subjects have limited time and computational resources available. An implication of our findings is that observed anomalous behavior in laboratory experiments involving choice under risk may be the result of ambiguity attitudes arising from the computational difficulty of the task rather than risk attitudes.

Preface

Chapter 1
A version of this chapter has been submitted for publication as Halevy et al. (2014). This work is a collaboration with Yoram Halevy, University of British Columbia, and Dotan Persitz, Tel Aviv University. All co-authors contributed to all aspects of the project equally. The computer programming required for this work was done by Shervin Mohammadi-Tari.

Chapter 2
This work is a collaboration with Yoram Halevy, University of British Columbia. All co-authors contributed to all aspects of the project equally. The experimental interface utilized in this work was programmed by Shervin Mohammadi-Tari. This work was approved by the UBC Behavioural Research Ethics Board under application ID H14-00510.

Chapter 3
This work is a collaboration with Yoram Halevy, University of British Columbia. All co-authors contributed to all aspects of the project equally. This work was approved by the UBC Behavioural Research Ethics Board under application ID H14-02260.

Table of Contents

Abstract
Preface
Table of Contents
List of Tables
List of Figures
Acknowledgements
Dedication

1 Parametric Recoverability of Preferences
  1.1 Introduction
  1.2 Non-Parametric Recoverability
  1.3 Parametric Recoverability
  1.4 Application to Choice under Risk
  1.5 Short Discussions

2 The Predictive Power of Parametric Recovery Methods
  2.1 Introduction
  2.2 Parametric Recovery Methods
  2.3 Experimental Design and Procedures
  2.4 Results
  2.5 Issues and Extensions

3 Computational Difficulty and Uncertainty
  3.1 Motivation
  3.2 Experiment
  3.3 Results
  3.4 Interpretation

Bibliography

Appendices

A Appendix for Chapter 1
  A.1 Non-Parametric Recovery and Non-Convex Preferences
  A.2 Proof of Proposition 13
  A.3 Proof of Theorem 16
  A.4 The Code

B Appendix for Chapter 2
  B.1 Experiment Instructions
  B.2 Subject Consistency
  B.3 Recovered Parameters

C Appendix for Chapter 3
  C.1 Experiment Instructions

List of Tables

1.1 The median recovered parameters
1.2 Comparing consistent and inconsistent subjects
1.3 Evaluating restriction to expected utility using misspecification and bootstrapping
1.4 Choice of utility index and evaluation of expected utility restriction
2.1 Unrefined Results - Aggregate
2.2 Individual Results - All Subjects (n=207)
2.3 Aggregate Results - Consistent Subjects (n=103)
2.4 Individual Results - Consistent Subjects (n=85)
2.5 Recovered Parameters - Median
3.1 Mean Switches (Number of Subjects in parentheses)
3.2 Mean Switches - Sets 2 and 3 only
3.3 Switching Statistics - EASY versus HARD Questions
3.4 Hedging and Ambiguity
3.5 Lotteries
3.6 Hedging and Ambiguity

List of Figures

1.1 Measuring misspecification with budget adjustments
1.2 The removal of direct inconsistencies removes all indirect inconsistencies
1.3 Decomposition
1.4 Typical non-expected utility indifference curves induced by Gul (1991) Disappointment Aversion function
1.5 Cumulative distribution of misspecification for CRRA/CARA functional forms for mean/sum-of-squares aggregators
1.6 Modified budget sets
1.7 Non-convex preferences and a distance-based loss-function
2.1 The Money Metric Index
2.2 Constructing Pairwise Choices
2.3 Budget Lines
2.4 Parameter Refinement
A.1 Violations of Fact 5
B.1 Pairwise Choices
B.2 Pairwise Choices - Example
B.3 Pairwise Choices - Overlapping Points
B.4 Pairwise Choices - Confirmation Screen
B.5 Pairwise Choices - Confirmed Choice
B.6 Budget Lines
B.7 Budget Lines - Relationship to Pairwise Choice
B.8 Budget Lines - Examples
B.9 Budget Lines - Special Cases
Acknowledgements

I would like to thank my Supervisor, Yoram Halevy, for his trust, support and guidance, and without whom this would not have been possible; the members of my Supervisory Committee, Erwin Diewert and Hiro Kasahara, for their thoughtful comments and sincere interest in my work; and Dotan Persitz for his absolutely crucial role in the completion of this thesis.

I would like to acknowledge financial support from the following sources: SSHRC in the form of a Doctoral Fellowship as well as indirectly through grants issued to Yoram Halevy and Ryan Oprea, Queen's University in the form of the E.G. Bauman Scholarship, and the Government of Ontario in the form of an Ontario Graduate Scholarship. Additional funding was provided by Queen's University and the University of British Columbia.

I would also like to acknowledge the Department of Economics at Langara College and its members, particularly Gray Giovannetti and Scott McLean, for their support and flexibility throughout this process.

Additionally, I would like to thank the many professional colleagues who served as mentors, collaborators, and friends over the years. This includes, but is not limited to, Doug Allen, Robin Boadway, David Freeman, Ryan Oprea, Henry Siu, and Yaniv Yedid-Levi.

Finally, I would like to thank my friends and family, especially Debbie Rootman and Efraim Gavrilovich, Dave Bryan, Alex Farber, Paul Grunberg, Kelly Walker, and Gaëlle Simard-Duplain.

Dedication

This manuscript is dedicated to Peter Kennedy, the best teacher I ever had and the standard to which I aspire.

Chapter 1

Parametric Recoverability of Preferences

1.1 Introduction

This paper is a contribution to the applicability of revealed preference theory to the domain of recovering stable preferences from individual choices.
The need for such an application emerges from the recent availability of relatively large data sets composed of individual choices made directly from linear budget sets.[1] These rich data sets allow researchers to recover approximate individual stable utility functions and report the magnitude and distribution of behavioral characteristics in the subject population.

[1] Notable references are Andreoni and Miller (2002); Fisman et al. (2007); Choi et al. (2007); Ahn et al. (2011); Andreoni and Sprenger (2012).

Given a data set constructed from a generic consumer choice problem, which satisfies the Generalized Axiom of Revealed Preference (henceforth GARP), Afriat (1967) suggests a nicely behaved piecewise linear utility function that satisfies the restrictions imposed by the revealed preference relation. This method requires recovering twice the number of parameters as there are observations (Diewert, 1973), and therefore the behavioral implications of such functional forms may be difficult to interpret and apply to economic problems. Varian (1982) builds on this work to construct non-parametric bounds that partially identify the utility function, assuming that preferences are convex.

In many cases, however, researchers assume simple functional forms with few parameters that lend themselves naturally to behavioral interpretation. The drawback of this approach is that simple functional forms are often too structured to capture every nuance of individual decision making, and thus preferences recovered in this way are almost always misspecified. That is, the ranking implied by the recovered preferences may be inconsistent with the ranking information implied by the decision maker's choices (summarized through the revealed preference relation). Following this line of reasoning, given a parametric utility function, one should seek a measure to quantify the extent of misspecification and use this measure as
a criterion for selecting the element of the functional family which minimizes the misspecification. This measure should apply continuously to inconsistent choice data, and inform the extent of misspecification by all possible utility functions.

Our proposed measure of misspecification used in recovering preferences relies on insights gained from the literature that quantifies internal inconsistencies inherent in a data set. The Varian (1990) Inefficiency Index is a popular measure of the decision maker's inconsistency. It is calculated by aggregating the minimal budget adjustments required to remove cyclic revealed preference relations that cause the data set to fail GARP. We provide the following novel theoretical characterization of the Varian Inefficiency Index: for every continuous and locally non-satiated utility function we calculate the Money Metric Index. This index is an aggregation, taken over all observations, of the minimal budget adjustments required to remove inconsistencies between the ranking information induced by the utility function and the revealed preference information contained in the choices.[2] We prove that the Varian Inefficiency Index equals the infimum of the money metric indices, taken over all continuous and locally non-satiated utility functions. Hence, the Varian Inefficiency Index lends itself naturally as a benchmark for minimizing misspecification between the data set and all possible utility functions.

Our proposed procedure of recovering approximate preferences within a restricted parametric family generalizes the principle we introduced in characterizing the Varian Inefficiency Index, by calculating the infimum of the Money Metric Index over the restricted subset. If a data set satisfies GARP, the measure we propose quantifies the extent of misspecification that arises solely from considering a specific family of utility functions, rather than all utility functions.
If the data set does not satisfy GARP, the measure can be decomposed into the Varian Inefficiency Index and a misspecification index, which is the difference between the Money Metric Index and the Varian Inefficiency Index. Since for a given data set the Varian Inefficiency Index is constant (zero if GARP is satisfied), the measure can be used to recover parametric preferences within some parametric family by minimizing the misspecification.

Furthermore, the procedure can be used to evaluate the increase in misspecification implied by restricting the set of parameters, and to choose among functional forms. For example, consider some parametric form of non-expected utility that includes expected utility as a special case. Given a data set of choices under risk, one can recover the values of the parameters that minimize misspecification, and evaluate the additional misspecification implied by restricting to expected utility.

To illustrate, we apply our method to recover preferences from data on choice under risk collected by Choi et al. (2007). We recover parameters for the disappointment aversion functional form of Gul (1991)[3] using both the widely used Euclidean-distance-based Non-Linear Least Squares (NLLS) and the proposed approach. We identify several important qualitative differences in the recovered parameters. In several cases, the recovered parameters are contradictory with respect to whether subjects are elation loving or disappointment averse, and as such the behavioral conclusions of our analysis may depend critically on the chosen recovery method.

[2] The latter corresponds to the relations between the observed choice (the observation) and all feasible alternatives.
Moreover, quantitative differences in the distribution of parameter values in the subject population suggest that the preferences recovered by minimizing inconsistency with revealed preference information put higher weight on first-order risk aversion and lower weight on second-order risk aversion (Segal and Spivak, 1990) than previously found using distance-based approaches. We calculate the additional misspecification implied by restricting to expected utility, and find that the choices of between one third and one half of the subjects may be reasonably approximated by expected utility.

Our proposed method of recovering preferences using revealed preference information is fundamentally different from the traditional approach that relies on the distance between observed and predicted choices. While the latter compares only two points and utilizes auxiliary assumptions (e.g. "closer is better") to select among utility functions (each providing different predictions), we point out that choosing an alternative from a menu informs the observer that it is revealed preferred to every other feasible alternative. Independently of the misspecification criterion, a desirable recovery method should incorporate as much revealed preference information as is available in the data. The proposal made in the current work is one possible candidate that follows this methodology. While possessing nice theoretical and computational properties, other methods that follow a similar principle may exist. Studying such potential proposals is an important avenue for future research.

This work continues the line of thought taken by Varian (1990). There, Varian suggests the money metric as a "natural measure of how close the observed consumer choices come to maximizing a particular utility function" (page 133) and then recommends its usage as a criterion for recovering preferences.
He argues that measuring differences in utility space has a more natural economic interpretation than measuring distances between bundles in commodity space. We augment Varian's intuition by providing theoretical and practical substance for the usage of the money metric as a measure of misspecification. First, we demonstrate that the money metric utilizes more of the preference information encoded in the observed choices to recover preferences than distance-based methods. Second, we relate the budget adjustments implied by the money metric to the Varian Inefficiency Index. Third, we prove that the money metric measure can be constructed observation-by-observation while maintaining most revealed preference information contained in choices.[4] Finally, since we show that the goodness of fit can be decomposed into an inconsistency index and a misspecification index, we introduce several novel applications, including evaluating parametric restrictions and model selection.

[3] Within the considered setup of two states of the world it is observationally equivalent to models of Rank-Dependent Utility (Quiggin, 1982).

In the next section we reintroduce familiar definitions and results from the literature that applied revealed preference theory to non-parametric recovery of preferences given consistent and inconsistent data sets of choices from budget sets. The discussion regarding the shortcomings of the non-parametric approach in the second section serves as a motivation for the next section. Our main analytical results and the suggested recovery method are presented in the third section. In the fourth section we apply the proposed method to recover preferences from data on choice under risk collected by Choi et al. (2007). We conclude with four brief discussions on related theoretical, practical and interpretational issues.

1.2 Non-Parametric Recoverability

1.2.1 Preliminaries

Consider a decision maker (DM) who chooses bundles x^i ∈ ℜ^K_+ (i = 1, . . .
, n) out of budget menus {x : p^i x ≤ 1, p^i ∈ ℜ^K_{++}}. Let D = {(p^i, x^i)}_{i=1}^n be a finite data set, where x^i is the chosen bundle at prices p^i. The preference information incorporated in the observed choices is summarized by the following binary relations.

Definition 1. An observed bundle x^i ∈ ℜ^K_+ is
1. directly revealed preferred to a bundle x ∈ ℜ^K_+, denoted x^i R^0_D x, if p^i x^i ≥ p^i x.
2. strictly directly revealed preferred to a bundle x ∈ ℜ^K_+, denoted x^i P^0_D x, if p^i x^i > p^i x.
3. revealed preferred to a bundle x ∈ ℜ^K_+, denoted x^i R_D x, if there exists a sequence of observed bundles (x^j, x^k, . . . , x^m) such that x^i R^0_D x^j, x^j R^0_D x^k, . . . , x^m R^0_D x.
4. strictly revealed preferred to a bundle x ∈ ℜ^K_+, denoted x^i P_D x, if there exists a sequence of observed bundles (x^j, x^k, . . . , x^m) such that x^i R^0_D x^j, x^j R^0_D x^k, . . . , x^m R^0_D x, and at least one of these relations is strict.

[4] On the other hand, the computation of the Varian Inefficiency Index is NP-hard since the required budget adjustments are interdependent.

The data is said to be consistent if it satisfies the Generalized Axiom of Revealed Preference.

Definition 2. Data set D satisfies the Generalized Axiom of Revealed Preference (GARP) if for every pair of observed bundles, x^i R_D x^j implies not x^j P^0_D x^i.

The following definition relates the revealed preference information implied by observed choices to the ranking induced by utility maximization.

Definition 3. A utility function u : ℜ^K_+ → ℜ rationalizes data set D if, for every observed bundle x^i ∈ ℜ^K_+, u(x^i) ≥ u(x) for all x such that x^i R^0_D x. We say that D is rationalizable if such a u(·) exists.

Rationalizability does not imply uniqueness. There could be different utility functions (not related by a monotonic transformation) that rationalize the same data set. Afriat's celebrated theorem provides tight conditions for the rationalizability of a data set.

Theorem (Afriat, 1967). The following conditions are equivalent:
1. There exists a non-satiated utility function that rationalizes the data.
2. The data satisfies GARP.
3. There exists a non-satiated, continuous, concave, monotone utility function that rationalizes the data.

Proof. See Afriat (1967); Diewert (1973); Varian (1982).

1.2.2 Shortcomings

Simplicity

The traditional problem of recoverability is to find a utility function that rationalizes the data. Indeed, Afriat's proof of the Theorem is constructive: he shows that if a data set D of size n satisfies GARP, then U(x) = min_i {U^i + λ^i p^i (x − x^i)}, where the U^i and λ^i > 0 are 2n real numbers that satisfy a set of n^2 inequalities, U^i ≤ U^j + λ^j p^j (x^i − x^j), rationalizes D. It is important to note that although Afriat's utility function does not rely on any parametric assumptions, it is difficult to learn directly from it about behavioral characteristics of the decision maker, which are typically summarized by a few parameters (e.g. risk aversion, ambiguity aversion). Moreover, this utility function that rationalizes the data is generically non-unique. Hence, if one can find a "simpler" (parametric) utility function that rationalizes the data set, it will have equal standing in representing the ranking information implied by the data set. If one accepts that "simple" may be superior, then one should consider paying a price in terms of misspecification. We pursue this line of reasoning by considering the minimal misspecification implied by certain parametric specifications.

Convexity Assumption

Varian (1982) suggests a non-parametric recovery method that partially identifies the subject's preferences by constructing upper and lower bounds on her indifference curves. However, this method imposes the restriction of convexity on the preferences that may be recovered. In Appendix A.1 we demonstrate that if the data set is generated by a DM who correctly maximizes a non-convex preference relation, the ranking implied by Varian's non-parametric bounds may be inconsistent with the underlying preferences of the DM.
The reason is that while Afriat's Theorem states that if a data set is rationalizable there exists a concave utility function that rationalizes it, these convexified preferences rank unobserved bundles differently than a utility function that represents non-convex preferences and rationalizes the same data set. We find that, in view of the evolving literature on the importance of non-convex preferences in domains like risk, ambiguity and other-regarding preferences, this exclusion seems unwarranted in many contexts. The parametric approach to recoverability permits the observer to identify non-convex preferences within a given functional family.

1.2.3 Inconsistent Data Sets

Afriat (1973, 1987) and Houtman (1995) use methods similar to those used in Afriat (1967) to non-parametrically recover an approximate utility function, in the sense that the existence of an underlying preference relation is maintained by allowing the DM to not exactly maximize that relation. This approach suffers from the same shortcomings of Afriat (1967) discussed above. The non-parametric approach of Varian (1982) has been extended and developed in Blundell et al. (2003, 2008) and Cherchye et al. (2009); however, to the best of our knowledge, it has not been expanded to include treatment of inconsistent data sets, and doing so will probably entail some behavioral assumptions regarding the nature of the inconsistencies. The parametric approach developed in the current paper not only extends naturally to inconsistent data sets, but also permits an insightful decomposition of the goodness of fit into measures of inconsistency and misspecification.

The following definition is a generalization of Definition 1. Similar concepts have been introduced into the literature on consistency (Afriat, 1972, 1987; Varian, 1990, 1993) in order to measure how close a DM is to satisfying GARP.[5]

Definition 4. Let D be a finite data set and let v ∈ [0,1]^n.[6] An observed bundle x^i ∈ ℜ^K_+ is
1. v-directly revealed preferred to a bundle x ∈ ℜ^K_+, denoted x^i R^0_{D,v} x, if v_i p^i x^i ≥ p^i x.
2. v-strictly directly revealed preferred to a bundle x ∈ ℜ^K_+, denoted x^i P^0_{D,v} x, if v_i p^i x^i > p^i x.
3. v-revealed preferred to a bundle x ∈ ℜ^K_+, denoted x^i R_{D,v} x, if there exists a sequence of observed bundles (x^j, x^k, . . . , x^m) such that x^i R^0_{D,v} x^j, x^j R^0_{D,v} x^k, . . . , x^m R^0_{D,v} x.
4. v-strictly revealed preferred to a bundle x ∈ ℜ^K_+, denoted x^i P_{D,v} x, if there exists a sequence of observed bundles (x^j, x^k, . . . , x^m) such that x^i R^0_{D,v} x^j, x^j R^0_{D,v} x^k, . . . , x^m R^0_{D,v} x, and at least one of these relations is strict.

Similarly, consider the following generalization of GARP (Varian, 1990):

Definition 5. Let v ∈ [0,1]^n. D satisfies the Generalized Axiom of Revealed Preference Given v (GARP_v) if for every pair of observed bundles, x^i R_{D,v} x^j implies not x^j P^0_{D,v} x^i.

The vector v is used to generate the adjusted relation R_{D,v}, which is acyclic although R_D may contain cycles. Obviously, there are usually many vectors v such that D satisfies GARP_v. The following are two useful and trivial properties of GARP_v:

Fact 6. Every D satisfies GARP_0.[7]

Fact 7. If v, v′ ∈ [0,1]^n, v ≥ v′, and D satisfies GARP_v, then D satisfies GARP_{v′}.

Varian (1990) proposed an inefficiency index that measures the minimal adjustments of the budget sets which remove the cycles implied by choices.[8] While Varian suggests aggregating the adjustments using the sum of squares, we define this index with respect to an arbitrary aggregator function.

[5] A different but related concept of inconsistency is presented in Echenique et al. (2011). The main difference is that in most of the literature a single adjustment is enough to "break" a cycle, while in their work all the relevant budget lines must be adjusted.
[6] Throughout the paper we use bold fonts (as v or 1) to denote vectors of scalars in ℜ^n. For v, v′ ∈ ℜ^n: v = v′ if ∀i: v_i = v′_i; v ≧ v′ if ∀i: v_i ≥ v′_i; v ≥ v′ if v ≧ v′ and v ≠ v′; and v > v′ if ∀i: v_i > v′_i.
We continue to use regular fonts to denote vectors of prices and goods.
[7] P^0_{D,0} is the empty relation.
[8] The Afriat (1972, 1973) Critical Cost Efficiency Index employs a uniform adjustment for all budgets.

Definition 8. f : [0,1]^n → [0,M], where M is finite, is an Aggregator Function if f(1) = 0, f(0) = M, and f(·) is continuous and weakly decreasing.[9]

For a given aggregator function, this index is a measure of the decision maker's inconsistency.

Definition 9. Let f : [0,1]^n → [0,M] be an aggregator function. Varian's Inefficiency Index is[10]

    I_V(D, f) = inf { f(v) : v ∈ [0,1]^n and D satisfies GARP_v }.

Fact 10. I_V(D, f) always exists.[11]

1.3 Parametric Recoverability

This section proposes a loss-function that measures the inconsistency between the ranking information encoded in choices made within a data set and a given utility function. For a data set that satisfies GARP, this will constitute a measure of the misspecification in representing the data set by the utility function.

Consider, for example, a data set of a single observation D = {(p^1, x^1)} and two candidate utility functions u and u′ (the two utility functions represent the parametric restriction), as depicted in Figure 1.1. The data set includes only a single observation and hence is trivially consistent. However, both utility functions fail to rationalize the data, since for both utility functions there exist feasible bundles that are preferred to x^1 according to the respective utility function. Consider the unobserved bundles in the lightly shaded region of Figure 1.1. These are bundles to which x^1 is directly revealed preferred and yet are ranked higher than x^1 by the utility function u. In other words, u is misspecified since for these bundles the ranking induced by u is inconsistent with the ranking implied by choices.
Yet, if we look at the union of the light and dark shaded regions, it is easy to see that all inconsistencies with the revealed preference information implied by u are also implied by u′. In this sense, we say that the utility function u dominates u′ and that u′ is more misspecified than u.[12]

[Figure 1.1: Measuring misspecification with budget adjustments. Axes x1 and x2; indifference curves of u and u′ through x^1; adjusted expenditure levels I_u and I_{u′}.]

Our proposed loss-function seeks the minimal adjustment to the expenditure levels such that all inconsistencies between the revealed preference information and the ranking information are removed. In Figure 1.1, I_u and I_{u′} are the highest expenditure levels (keeping the prices constant) such that there is no affordable bundle that is ranked strictly higher than x^1 by the utility functions u and u′, respectively.

[9] An aggregator function f is weakly decreasing if for every v, v′ ∈ [0,1]^n: v ≥ v′ implies f(v) ≤ f(v′), and v > v′ implies f(v) < f(v′). One may wish to restrict the set of potential aggregator functions to include only separable functions that satisfy the cancellation axiom. All our examples belong to this restricted set (and assume an additive structure). The theoretical result does not require the richness of possible aggregator functions. It remains an interesting theoretical exercise to axiomatically characterize possible aggregator functions.
[10] Consider a data set of two points D = {(p^1, x^1); (p^2, x^2)} such that p^1 x^2 = p^1 x^1 but p^2 x^1 < p^2 x^2. D is inconsistent with GARP, i.e. GARP_1 (since x^1 R_D x^2 and x^2 P^0_D x^1), but consider the series v^l = (1 − 1/l, 1) where l ∈ N_{>0}. It is easy to verify that for every l ∈ N_{>0}, D satisfies GARP_{v^l}.
[11] f(·) is bounded and, by Fact 6, the set {v ∈ [0,1]^n : D satisfies GARP_v} is non-empty.
Since I_{u′} < I_u < p^1 x^1, it is evident that although both utility functions are misspecified, the misspecification implied by u is smaller than that implied by u′ relative to the data set.^13 The following subsection introduces the theoretical foundations for this approach.

^12 Following this example of a single data point, it might be tempting to conclude that as the preferences become less convex (for the same prediction), the misspecification diminishes. This intuition is misleading: in larger data sets the variability in prices may be high enough that less convex preferences result in more misspecification than more convex ones.
^13 Obviously, this measure is not unique. For example, an alternative measure could use the area contained in the intersection of the upper contour set through x^1 and the budget line. We defer the discussion of this alternative measure to Section 1.5.1.

1.3.1 v-Rationalizability and the Money Metric Index

Afriat (1967) showed that D satisfies GARP if and only if there exists a non-satiated utility function that rationalizes the data. However, if we consider a specific utility function, it will generically not rationalize the data (even if choices are consistent). Next we define the following generalization of rationalizability:

Definition 11. Let v ∈ [0,1]^n. A utility function u(x) v-rationalizes D if, for every observed bundle x^i ∈ ℜ^K_+, u(x^i) ≥ u(x) for all x such that x^i R^0_{D,v} x.

That is, the intersection between the set of bundles strictly preferred to an observed bundle x^i according to u, and the set of bundles to which x^i is revealed preferred when the budget constraint is adjusted by v_i, is empty. Notice that 1-rationalizability reduces to Definition 3.

To illustrate, consider Figure 1.1, where x^1 is chosen but is not optimal according to utility function u.
For every v_1 such that 0 ≤ v_1 p^1 x^1 ≤ I_u, there is no x that satisfies v_1 p^1 x^1 ≥ p^1 x and is strictly preferred to x^1 according to u. In this case we say that u v-rationalizes x^1 (here v is a one-dimensional vector equal to v_1). We define the minimal adjustment (the supremum of v_1 in this case) as the basis for our measure of misspecification. In Figure 1.1, the minimal adjustment required to v-rationalize x^1 by utility function u is given by I_u / (p^1 x^1). Naturally, we would expect utility functions that represent the decision maker's preferences with less misspecification to require smaller budget adjustments in order to v-rationalize the observed choices. This is evident in Figure 1.1, where I_{u′} < I_u captures the intuition that u is less misspecified than u′.

Below we show that the minimal adjustment to the budget set for every observation is given by the value of the Money Metric Utility Function (Samuelson, 1974) at the observation:

Definition 12. The normalized money metric vector for a utility function u(·), v*(D,u), is such that

    v*_i(D,u) = m(x^i, p^i, u) / (p^i x^i),  where  m(x^i, p^i, u) = min { p^i y : y ∈ ℜ^K_+, u(y) ≥ u(x^i) }.

The Money Metric Index for a utility function u(·) is f(v*(D,u)).

The money metric vector, and the money metric utility function upon which it is based, measure, for a given utility function, the minimal expenditure required to achieve at least the same level of utility as the observed choices.^14

^14 We include (D,u) in the definition to emphasize that the optimal budget set adjustments depend on both the observed choices and the specific utility function.

Proposition 13. Let D = {(p^i, x^i)}_{i=1}^n and let u(·) be a continuous and locally non-satiated utility function. Then:

1. u(·) v*(D,u)-rationalizes D.
2. v*(D,u) = 1 if and only if u(·) rationalizes D.
3. Let v ∈ [0,1]^n. u(·) v-rationalizes D if and only if v ≤ v*(D,u).

[Figure 1.2: The removal of direct inconsistencies removes all indirect inconsistencies (panels (a) and (b))]

Proof.
Is immediate and is provided in Appendix A.2 for completeness.

Proposition 13 establishes that f(v*(D,u)) may be viewed as a function that measures the loss incurred by using a specific utility function to describe a data set. Part 3 shows that v*(D,u) measures the minimal adjustments to the budget sets required to v-rationalize D by u, that is, to remove inconsistencies between the revealed preference information contained in D and the ranking information induced by u.

Part 3 also implies that each coordinate of v*(D,u) is calculated independently of the other observations in the data set. This is a crucial feature of the procedure and deserves some discussion. One might intuitively believe that such an independent calculation uses only the directly revealed preference information and may fail to rationalize the data based on the indirectly revealed preference information. However, since R_D is the transitive closure of R^0_D, a utility function is consistent with the directly revealed preference information if and only if it is consistent with all the indirectly revealed preference information. In other words, if the utility function is inconsistent with some indirectly revealed preference information, it must be inconsistent with some directly revealed preference information as well.

Figure 1.2 demonstrates this point. The data set includes two observations, where x^1 is directly revealed preferred to x^2. The utility function u(·) ranks x^1 above x^2 but fails to rationalize the data, since u(y) > u(x^1) although x^1 is strictly indirectly revealed preferred to y (which is feasible when x^2 is chosen). First, note that if this is the case, it must be that u(y) > u(x^2). That is, u(·) does not rationalize the directly revealed preference information.
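Because each coordinate of the money metric vector is an independent expenditure minimization, v*(D,u) is simple to compute for any given utility function. The following is a rough numerical sketch (not the thesis's analytical algorithm): m(x^i, p^i, u) is approximated by brute-force grid search, and the Cobb-Douglas utility in the usage example is purely illustrative.

```python
import itertools
import numpy as np

def money_metric(x_i, p_i, u, grid_max=10.0, steps=101):
    """Approximate m(x^i, p^i, u) = min{ p^i . y : u(y) >= u(x^i) } by grid search."""
    target = u(x_i)
    axis = np.linspace(0.0, grid_max, steps)
    best = np.inf
    for y in itertools.product(axis, repeat=len(x_i)):
        if u(np.asarray(y)) >= target:               # y reaches the observed utility level
            best = min(best, float(np.dot(p_i, y)))  # cost of y at the observed prices
    return best

def mm_vector(prices, bundles, u):
    """Normalized money metric vector of Definition 12: v*_i = m(x^i, p^i, u) / p^i.x^i."""
    return [money_metric(np.asarray(x), np.asarray(p), u) / float(np.dot(p, x))
            for p, x in zip(prices, bundles)]
```

For a choice that is optimal under u, the minimal expenditure equals the actual expenditure and v*_i = 1; a suboptimal choice yields v*_i < 1, the fraction of the budget that would have sufficed.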
Second, as is evident from Figure 1.2a, D will be v*-rationalized by adjusting only observation 2's budget set to remove inconsistencies between the utility ranking and the directly revealed preference information. More generally, the v*-adjustments can be calculated observation by observation: for each observation, the minimal adjustment is independent of the adjustments required for the other observations.^15 Moreover, Figure 1.2b^16 demonstrates that v*(D,u) retains most of the indirectly revealed preference information that is consistent with the ranking encoded in the utility function under consideration, since R_{D,v*(D,u)} is just the transitive closure of R^0_{D,v*(D,u)}.

Part 2 of Proposition 13 is merely a restatement of the familiar definition of rationalizability using the money metric as a criterion. It shows that a non-satiated and continuous utility function u(·) rationalizes the observed choices if and only if, for all observations, there exists no affordable bundle that achieves a strictly higher level of utility than the observed choice itself. In this case we say that the utility function is correctly specified.

Recall that given an aggregator function f(·), f(v*(D,u)) measures the inconsistency between a data set D and a specific preference relation represented by the utility function u. Let U^c be the set of all continuous and locally non-satiated utility functions. Given a set of utility functions U ⊆ U^c, the Money Metric Index measures the inconsistency between U and the data set D.

Definition 14. For a data set D and an aggregator function f(·), let U ⊆ U^c. The Money Metric Index of U is

    I_M(D, f, U) = inf_{u ∈ U} f(v*(D,u))

The following observation follows directly from the definition of I_M(D, f, U).

Fact 15. For every U′ ⊆ U: I_M(D, f, U) ≤ I_M(D, f, U′).

In particular, it implies that for every U ⊆ U^c: I_M(D, f, U^c) ≤ I_M(D, f, U).
That is, the value of the Money Metric Index calculated over all continuous and locally non-satiated utility functions is a lower bound on I_M(D, f, U) for every subset of utility functions.

^15 An additional implication of this property is that, given m data sets D_i of n_i observations and a utility function u(·), if u v*(D_i,u)-rationalizes D_i for every i, then u v*(∪_{i=1}^m D_i, u)-rationalizes ∪_{i=1}^m D_i, where v*(∪_{i=1}^m D_i, u) = (v*(D_1,u)^T, ..., v*(D_m,u)^T)^T. Moreover, if f(·) is additively separable (as are all the aggregators mentioned in this paper), then f(v*(∪_{i=1}^m D_i, u)) = ∑_{i=1}^m f(v*(D_i,u)).
^16 The shaded area represents those bundles that are directly and indirectly v*(D,u)-dominated by x^1.

1.3.2 Decomposing the Money Metric Index

Thus far we have been primarily concerned with GARP-consistent data sets that can be rationalized by some utility function. For such data sets we argued that I_M(D, f, U) is a natural measure of the misspecification induced by the choice to recover the utility function of the DM using the parametric family U. By Afriat's Theorem, data sets that do not satisfy GARP cannot be rationalized by any utility function. Were we to restrict our analysis to consistent data sets only, its scope would be somewhat limited.^17

The method we propose to construct v*(D,u) does not depend on the consistency of the data set D. Therefore, even if a decision maker does not satisfy GARP, we can recover preferences (within the parametric family U) that approximate the consistent revealed preference information encoded in the choices. The difficulty is that I_M(D, f, U) includes both the inconsistency with respect to GARP and the misspecification implied by the chosen parametric family.
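Recovering parameters via Definition 14 then amounts to minimizing f(v*(D, u_θ)) over the parameter θ. The sketch below illustrates this with a hypothetical one-parameter Cobb-Douglas family (not the CRRA/CARA families used in the application), whose expenditure function e(p, u) = u (p_1/a)^a (p_2/(1−a))^{1−a} gives the money metric in closed form; the function names and the grid search are my own.

```python
import numpy as np

def mmi_cobb_douglas(prices, bundles, a, f=lambda v: float(np.mean(1.0 - v))):
    """Money Metric Index f(v*(D, u_a)) for u_a(y) = y1^a * y2^(1-a), mean aggregator."""
    p = np.asarray(prices, dtype=float)
    x = np.asarray(bundles, dtype=float)
    u = x[:, 0] ** a * x[:, 1] ** (1.0 - a)
    # Cobb-Douglas expenditure function: m(x^i, p^i, u_a) = u_a(x^i) * (p1/a)^a * (p2/(1-a))^(1-a)
    m = u * (p[:, 0] / a) ** a * (p[:, 1] / (1.0 - a)) ** (1.0 - a)
    v_star = m / np.sum(p * x, axis=1)     # normalized money metric vector
    return f(v_star)

def recover_a(prices, bundles):
    """Pick the parameter that minimizes the Money Metric Index over a grid."""
    grid = np.linspace(0.01, 0.99, 99)
    return float(min(grid, key=lambda a: mmi_cobb_douglas(prices, bundles, a)))
```

Data generated by the family itself yields an index of (essentially) zero at the true parameter; a GARP-inconsistent data set instead yields a strictly positive minimized index, which motivates the decomposition developed next.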
In this section we study how we can decompose our measure into these two components. Our strategy in developing the decomposition is to employ Varian's (1990) Inefficiency Index as the measure of inconsistency, which is independent of the parametric family under consideration. We prove that the Money Metric Index calculated over all locally non-satiated and continuous utility functions, I_M(D, f, U^c), coincides with Varian's Inefficiency Index. It follows that I_M(D, f, U) − I_M(D, f, U^c) is a measure of misspecification. Note that Varian's Inefficiency Index is independent of any preference ranking and, as defined, is just a measure of the inconsistency incorporated in the data set. On the other hand, recall that for a family of utility functions U, the Money Metric Index measures the inconsistency between U and the data set. The following theorem establishes that Varian's Inefficiency Index can be viewed as a measure of the inconsistency between the set of all continuous and locally non-satiated utility functions and the data set.

^17 Andreoni and Miller (2002), one of the first experimental papers to utilize the revealed preference approach with moderate price variation, finds that a great majority of subjects satisfy GARP. However, in several recent experimental studies that employ considerable price variation (Choi et al., 2007; Ahn et al., 2011; Choi et al., 2011), about 75 percent of subjects did not satisfy GARP. Most of them can be shown to be very nearly consistent with GARP according to various measures of consistency, such as Afriat's (1972) Critical Cost Efficiency Index, Varian's (1990) Inefficiency Index, and the Houtman and Maks (1985) Index.

Theorem 16. For every finite data set D = {(p^i, x^i)}_{i=1}^n and aggregator function f : [0,1]^n → [0,M]:

    I_V(D, f) = I_M(D, f, U^c)

where U^c is the set of continuous and locally non-satiated utility functions.

Proof.
See Appendix A.3.

The proof first establishes that I_V(D, f) ≤ I_M(D, f, U^c): if I_V(D, f) > I_M(D, f, U^c), there exists a utility function u(·) such that I_M(D, f, U^c) ≤ f(v*(D,u)) < I_V(D, f) and D satisfies GARP_{v*(D,u)}, in contradiction to the definition of I_V(D, f). For the other direction, we show that if D satisfies GARP_v then I_M(D, f, U^c) ≤ f(v). Moreover, we show that there exists a vector of adjustments v such that f(v) = I_V(D, f) and, for every 0 ≤ λ < 1, D satisfies GARP_{λv}; we therefore conclude that I_M(D, f, U^c) ≤ I_V(D, f).

Theorem 16 enables us to decompose the Money Metric Index into a familiar measure of inconsistency (Varian's Inefficiency Index) and a natural measure of misspecification that quantifies the cost of restricting preferences to a subset of utility functions (possibly through a parametric form). By the monotonicity of I_M (Fact 15), for every U ⊆ U^c:

    I_V(D, f) = I_M(D, f, U^c) ≤ I_M(D, f, U)

Therefore, we can write I_M(D, f, U) as the sum of I_V(D, f) and I_M(D, f, U) − I_M(D, f, U^c). The former measures the cost associated with inconsistent choices; it is independent of any parametric restriction and depends only on the DM. The latter measures the cost, incurred by the researcher who tries to recover the DM's preferences, of restricting those preferences to a specific parametric form. This decomposition has the advantage that the two measures are comparable (same units) and are constructed to maintain the revealed preference information encoded in the choices. As such, I_M(D, f, U) − I_M(D, f, U^c) serves as a natural measure of misspecification that is rooted in economic theory. Two reasons lead us to believe that such a decomposition is essential for any method of recovering the preferences of a DM who is inconsistent, although we are not aware of its existence elsewhere in the literature.
First, since for a given data set the inconsistency index is constant (zero if GARP is satisfied), we can be certain that I_M(D, f, U) can be used to recover parametric preferences within some parametric family U by minimizing the misspecification. Second, only when the decomposition exists can one truly evaluate the cost of restricting preferences to some parametric family against the cost incurred by the inconsistency in the choices.

Figure 1.3 demonstrates the decomposition graphically. Consider a data set of size 2, D = {(p^1, x^1), (p^2, x^2)}, where p^i x^i = 1. The data set is inconsistent with GARP since x^i R_D x^j and x^j P^0_D x^i for i, j ∈ {1,2}, i ≠ j. It is easy to see that for any anonymous aggregator the Varian Index will be I_V(D, f) = f(1, v_2). Hence the dashed line (together with the original budget line from which x^1 was chosen) represents graphically the minimal adjustments required for D to satisfy GARP_v.

[Figure 1.3: Decomposition]

Now consider, for example, the singleton set of utility functions that contains the monotonic and continuous function u. We would like to find v*(D,u). Since x^1 is rationalizable by this utility function, v*_1(D,u) = 1. v*_2(D,u) is the minimal expenditure required to achieve utility level u(x^2) under prices p^2, represented graphically by the dotted line. I_M(D, f, {u}) = f(1, v*_2(D,u)), and since v*_2(D,u) is smaller than v_2, it follows that I_M(D, f, {u}) is weakly greater than I_V(D, f). The difference between the original budget line from which x^2 was chosen and the dashed line (at expenditure level v_2 p^2 x^2) represents graphically the inconsistency implied by D, while the difference between the dashed line and the dotted line (at level v*_2 p^2 x^2) represents the misspecification implied by u. Their sum is the goodness of fit measured by the Money Metric Index.
If one considers an alternative utility function u′ such that x^1 is not rationalizable by u′ (but suppose v*_2(D,u′) = v*_2(D,u)), this would not affect the Varian Index but would imply a higher Money Metric Index than u; therefore u′ would be more misspecified than u.

It is crucial to note that since, for a given data set, the inconsistency index is constant, the goodness-of-fit measure can be used to recover parametric preferences within some parametric family. The same idea can be applied to hypothesis testing and model selection. Consider two parametric families U and U′. A researcher calculates I_M(D, f, U′) and I_M(D, f, U). As argued before, both incorporate the same inconsistency measure, I_V(D, f); hence the data set D may be better approximated by U or by U′ depending on the relative magnitude of the Money Metric Index. Moreover, an important implication of Fact 15 is that if we impose an additional parametric restriction on preferences (and hence reduce the set of possible utility functions we consider), the misspecification will necessarily (weakly) increase. That is, if U′ is a subset of U generated by some parametric restriction, then

    [I_M(D, f, U′) − I_M(D, f, U)] / [I_M(D, f, U) − I_V(D, f)]

is a measure of the relative marginal misspecification implied by the restriction of U to U′. We will tend to accept the restriction if this ratio is low. This methodology resembles statistical hypothesis testing, although the current study does not incorporate any error structure. Inclusion of such structure may provide an interesting avenue for future research, but is not pursued here.^18

1.4 Application to Choice under Risk

The goal of this section is to demonstrate the empirical applicability of the Money Metric Index as a criterion for recovering preferences.
First, we compare this method with a recovery method that utilizes a loss-function based on the Euclidean distance between observed and predicted choices in the commodity space, in particular Non-linear Least Squares (NLLS). Important qualitative differences arise, including a varied emphasis on first-order versus second-order risk aversion. Additionally, we demonstrate how the suggested method can be used to recover approximate preferences for decision makers who are not strictly rational (in the GARP sense), and we assess the degree to which these recovered preferences encode the revealed preference information contained in the choices. Finally, we illustrate how this method can be applied to evaluate nested parametric restrictions, as when we compare models of disappointment aversion with expected utility, as well as non-nested restrictions, as when we compare various functional forms, e.g. CRRA versus CARA.

We apply the parametric recoverability method developed in this study, and NLLS, to a data set of portfolio choice problems collected by Choi et al. (2007). In their experiment, subjects were asked to choose the optimal portfolio using a combination of Arrow securities from linear budget sets with varying prices. We focus our analysis only on the treatment where the two states are equally probable. For each subject, the authors collected 50 observations and proceeded to test these choices for rationality (i.e. GARP) as well as to estimate a parametric utility function in order to determine the magnitude and distribution of risk attitudes in the population. Choi et al. (2007) estimate the Disappointment Aversion functional form introduced by Gul (1991) (for equally probable states):^19

    u(x_{i1}, x_{i2}) = γ w(max{x_{i1}, x_{i2}}) + (1 − γ) w(min{x_{i1}, x_{i2}})        (1.1)

where

    γ = 1/(2 + β),  β > −1,  w(z) ∈ { z^{1−ρ}/(1−ρ), −e^{−Az} },  ρ ≥ 0, A ≥ 0

[Figure 1.4: Typical non-expected-utility indifference curves induced by Gul's (1991) Disappointment Aversion function. Panel (a): disappointment aversion, β > 0; panel (b): elation loving, −1 < β < 0.]

^18 See related discussions in Afriat (1972) and in Varian (1985).
^19 A reader who is not familiar with Gul's (1991) model may find the following summary helpful. Let p = (p_1, x_1; ...; p_n, x_n) be a lottery such that x_1 ≤ ··· ≤ x_n. Assuming (for simplicity) that ce(p) ∉ supp(p), the support of p can be partitioned into elation and disappointment sets: there exists a unique j such that for all i < j, (x_i, 1) ≺ p, and for all i ≥ j, (x_i, 1) ⪰ p. Gul's elation/disappointment decomposition is then given by r = (x_1, r_1; ···; x_{j−1}, r_{j−1}), q = (x_j, q_j; ···; x_n, q_n) and α = ∑_{i=j}^n p_i, such that r_i = p_i/(1−α) and q_i = p_i/α. Note that p = αq + (1−α)r. Then

    u_DA(p) = γ(α) E(v, q) + (1 − γ(α)) E(v, r)

and there exists −1 < β < ∞ such that

    γ(α) = α / (1 + (1−α)β)

where v(·) is a utility index and E(v, µ) is the expectation of the function v with respect to the measure µ. If β = 0, disappointment aversion reduces to expected utility; if β > 0, the DM is disappointment averse (γ(α) < α for all 0 < α < 1); and if β < 0, the DM is elation seeking (γ(α) > α for all 0 < α < 1). Gul (1991) shows that the DM is averse to mean-preserving spreads if and only if β ≥ 0 and v is concave. That is, if v is concave then, by Yaari (1969), preferences are convex if and only if the DM is weakly disappointment averse. For binary lotteries: let (x_1, p; x_2, 1−p) be a lottery with x_1 ≤ x_2. The elation component is x_2, the disappointment component is x_1, and α = 1−p (in our case α = 0.5). Therefore

    u_DA(x_1, p; x_2, 1−p) = γ(1−p) v(x_2) + (1 − γ(1−p)) v(x_1)

and since γ(0) = 0, γ(1) = 1 and γ(·) is increasing, γ(·) can be viewed as a weighting function, and DA is a special case of Rank Dependent Utility (Quiggin, 1982).

The parameter γ is the weight placed on the better outcome.
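For two equally probable states, equation (1.1) with γ = 1/(2 + β) is straightforward to implement. The sketch below is an illustration, not the authors' estimation code; the CRRA log branch at ρ = 1 is the standard limiting case, added here for robustness.

```python
import math

def w_crra(z, rho):
    # CRRA index w(z) = z^(1-rho) / (1-rho); log(z) is the standard rho = 1 limit
    return math.log(z) if abs(rho - 1.0) < 1e-12 else z ** (1.0 - rho) / (1.0 - rho)

def w_cara(z, A):
    # CARA index w(z) = -exp(-A z)
    return -math.exp(-A * z)

def u_da(x1, x2, beta, w):
    """Equation (1.1): gamma * w(better outcome) + (1 - gamma) * w(worse), gamma = 1/(2+beta)."""
    gamma = 1.0 / (2.0 + beta)
    return gamma * w(max(x1, x2)) + (1.0 - gamma) * w(min(x1, x2))
```

With β = 0 the weights are (0.5, 0.5) and the function reduces to expected utility; β > 0 shifts weight to the worse outcome, producing the kinked indifference curves of Figure 1.4a.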
For β > 0, the better outcome is under-weighted relative to the objective probability (of 0.5) and the decision maker is disappointment averse. For β < 0, the better outcome is over-weighted relative to the objective probability (of 0.5) and the decision maker is elation seeking. In the knife-edge case, when β = 0, (1.1) reduces to expected utility. β has an important economic implication: if β > 0 (respectively β = 0) the decision maker exhibits first-order (second-order) risk aversion (Segal and Spivak, 1990). That is, the risk premium for small fair gambles is proportional to the standard deviation (variance) of the gamble.^20 First-order risk aversion can account for important empirical regularities that expected utility (with its implied second-order risk aversion) cannot, such as behavior in portfolio choice problems (Segal and Spivak, 1990), the calibration of risk aversion in the small and in the large, and the disentangling of intertemporal substitution from risk aversion (see Epstein, 1992 for a survey). Figure 1.4 illustrates characteristic indifference curves for disappointment averse and elation seeking (locally non-convex) subjects, respectively.
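The empirical exercise that follows aggregates the budget adjustments 1 − v_i with two functions, the mean and the (root) sum of squares. Both satisfy Definition 8 with M = 1; a direct sketch (function names are my own):

```python
import math

def agg_mean(v):
    """Mean aggregator: average wasted income 1 - v_i; equals 0 at v = (1,...,1), 1 at v = (0,...,0)."""
    return sum(1.0 - vi for vi in v) / len(v)

def agg_ssq(v):
    """Root-mean-squared adjustment; also 0 at v = (1,...,1) and 1 at v = (0,...,0)."""
    return math.sqrt(sum((1.0 - vi) ** 2 for vi in v) / len(v))
```

The sum-of-squares aggregator penalizes a few large adjustments more heavily than many small ones, which is one reason the two aggregators can rank utility functions differently.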
Additionally, w(x) is a standard utility-for-wealth function and is represented here by either the CRRA or CARA functional form.

We recover parameters using two different methods: Non-Linear Least Squares (NLLS) based on Euclidean distance, and the Money Metric Index developed here. To calculate the Varian Inefficiency Index, I_V(D, f), and the Money Metric Index, I_M(D, f, U),^21 we use both the mean and sum-of-squares aggregators:

    f(v) ∈ { (1/n) ∑_{i=1}^n (1 − v_i),  sqrt( (1/n) ∑_{i=1}^n (1 − v_i)^2 ) }

For both methods we use an analytical optimization algorithm that allows us to instantaneously recover individual parameters from observed choices for each subject.^22

^20 −1 < β < 0 implies local risk-seeking behavior.
^21 Computing the Varian Index is a hard computational problem (see the discussion in Section 1.5.3), hence we implemented an algorithm that over-estimates the real Varian Index (the details of the implementation are in Appendix A.4). The implication of this overestimation is that in most of the results that follow, the decomposition of the Money Metric Index overestimates the irrationality component and underestimates the misspecification component. An unavoidable consequence of this computational bias is that in some cases the misspecification component will be negative. That said, while the extent of misspecification with respect to the approximate preferences may be underestimated, the recovered parameters are independent of the calculation of the Varian Index, i.e. minimizing the Money Metric Index is sufficient and exact. Moreover, in those cases the Money Metric Index is a better approximation to the real Varian Inefficiency Index than the computed value.
^22 For the CRRA functional form we require the restriction ρ < 1 for all subjects that exhibit corner choices. For ρ ≥ 1 both assets are essential, hence utility is infinitely negative at the corners. This is not a problem for the CARA functional form.

                          NLLS (SSQ)       Money Metric (SSQ)        Money Metric (MEAN)
    w(·)                  β      ρ/A       β      ρ/A     I_M        β       ρ/A     I_M
    CRRA   GARP (12)      0.006  1.279     0.413  0.732   0.029      0.249   0.791   0.016
           All (47)       0.171  0.580     0.333  0.356   0.050      0.268   0.387   0.028
    CARA   GARP (12)     -0.07   0.047     0.452  0.022   0.028      0.16    0.024   0.012
           All (47)       0.077  0.028     0.383  0.018   0.060      0.2385  0.019   0.028

Table 1.1: The median recovered parameters

1.4.1 Qualitative Comparison of Methods

In this section we compare differences in recovered parameters according to the choice of recovery method (NLLS vs. Money Metric), specification (CRRA vs. CARA), and aggregator (mean vs. sum-of-squares). Summary statistics for the recovered parameters are reported in Table 1.1.^23,24 Additionally, we report the goodness of fit, expressed as the Money Metric Index. The first and third rows report the statistics for only those subjects who satisfy GARP (12 out of 47), and the second and fourth rows report the statistics for the entire sample.

The summary statistics suggest substantial qualitative and quantitative differences between recovery methods.^25 Since the Money Metric Index does not include any stochastic component, these differences cannot be tested for statistical significance, yet we may still interpret them in terms of economic significance. In other words, numerical differences in the recovered parameters are suggestive of important qualitative differences in behavior. Consider, for example, the higher median value of β reported for all subjects (as well as for the restricted sample of consistent subjects): NLLS results in β ≈ 0, implying that the decision makers are roughly expected utility maximizers on average, i.e. exhibit only second-order risk aversion, whereas the Money Metric Index suggests the opposite, i.e. decision makers are disappointment averse on average and exhibit first-order risk aversion. Note that the smaller curvature of the utility function, measured by the lower median value of ρ for the Money Metric Index, indicates lesser emphasis on second-order risk aversion.^26

^23 Note that the recovered parameters for NLLS may differ from those reported in Choi et al. (2007) for several reasons: we allow for elation loving (−1 < β < 0); we permit boundary observations (x_i = 0); we use the Euclidean norm (instead of the geometric mean); we use multiple initial points (including random ones) in the optimization routine (instead of a single predetermined point). We were able to replicate the results reported by Choi et al. (2007).
^24 Table 1.1 reports medians since the recovered parameter values of a handful of subjects are extreme and distort the average statistics.
^25 The code and disaggregated data are available for download from the online Appendix.
^26 We find important qualitative differences at the individual level as well. For all combinations of loss function and functional form we find some subjects that are disappointment averse according to the Money Metric Index (β > 0) yet elation loving according to NLLS (β < 0), or vice versa. For CRRA and the mean aggregator, we find 8 subjects for which the Money Metric Index reports β > 0 and NLLS reports β < 0, and none for which the opposite is true. The incidence and the subjects affected vary according to the functional form and aggregator selected.

    Subject    I_V      β        ρ        I_M
    320        0       -0.698    1.025    0.083
    206        0.011    0.044    1.793    0.023

Table 1.2: Comparing consistent and inconsistent subjects

1.4.2 Recovering Preferences for Inconsistent Subjects

In Section 1.3.2 we proved the decomposition of the Money Metric Index into the Varian Inefficiency Index, which serves as a measure of inconsistency, and a remainder, which is a measure of misspecification. As such, we recover parameters that come closest to approximating preferences for those subjects who fail GARP.^27 We exclude only those subjects with a value of the Varian Index exceeding 10%.

To illustrate, consider Table 1.2, which compares the recovered parameters using the Money Metric Index, for the mean aggregator and the CRRA functional form, for two subjects taken from Choi et al. (2007). Subject 320's choices are consistent with GARP while subject 206's are inconsistent. In spite of the fact that 320 is consistent, the parametric preferences considered do not accurately encode the ranking implied by her choices, as they require 8.3% wasted income on average. On the other hand, the revealed preference information implied by 206's choices is well captured by the parametric family, since it implies inefficiency of only 2.3%, in spite of the fact that her choices are not strictly consistent (I_V = 0.0105 > 0, with 116 GARP violations between pairs of observations). Additionally, the decomposed misspecification for Subject 206 amounts to only 1.2% (I_M − I_V) with respect to her approximate preferences. In other words, although 320 is consistent with GARP, the choices of 206 are better approximated using the specified functional form. As such, the Money Metric Index can be applied uniformly to all data sets, and the appropriateness of a given functional form can be evaluated ex post.

Using the decomposition of the Money Metric Index into the Varian Index (a measure of consistency) and a residual that measures misspecification, we can calculate the misspecification for each subject (recall that these are underestimations). Figure 1.5 presents the distribution of misspecification in the Choi et al. (2007) sample for various functional forms, controlling for the two aggregators we study.

^27 Approximate preferences are defined by the set Ũ = {u ∈ U^c : I_V(D, f) = I_M(D, f, {u})}, where D, f, and U^c are defined as above. In general, this set is not a singleton: neither the vector of budget adjustments v required by the calculation of the Varian Inefficiency Index, nor the utility function that rationalizes a given revealed preference relation R_{D,v} for a particular vector of adjustments, is unique.

[Figure 1.5: Cumulative distribution of misspecification for CRRA/CARA functional forms, for mean/sum-of-squares aggregators]

Due to the underestimation of the misspecification and the lack of information about the properties of this bias, all that can be learned with certainty is that the percentage of subjects with misspecification exceeding the 5% threshold is considerably higher when the sum-of-squares aggregator function is used rather than the mean aggregator. In this case, at least 25% of subjects exceed the 5% threshold. Results are similar when CARA is used instead of CRRA.

1.4.3 Evaluating a Restriction to Expected Utility

The expected utility model is a nested alternative of the disappointment aversion model, satisfying the restriction β = 0. We propose two methods for evaluating whether or not this restriction is justified: one based on the additional misspecification implied by the restriction, and the other utilizing the bootstrap method. The former has the advantage of being based on Theorem 16, but suffers from the fact that the computed Varian Inefficiency Index is an upper bound on the real index (and hence the misspecification under the more general model of disappointment aversion is downward biased).
The latter can be viewed as a test of the sensitivity of the recovered parameters to extreme observations, and it is independent of the overestimation of the inconsistency (through the Varian Inefficiency Index), but it is not directly derived from the theoretical considerations explored in the current study.

Misspecification Test

Utilizing a specific functional form, we recover parameters under the restriction β = 0 and calculate the additional misspecification implied by this restriction. As proposed in Section 1.3.2, given the choice of functional form (CRRA or CARA) and aggregator (mean or sum-of-squares), we use the ratio

    [I_M(D, f, EU) − I_M(D, f, DA)] / [I_M(D, f, DA) − I_V(D, f)]

where DA stands for the (unrestricted) disappointment aversion model, EU stands for the expected utility model, and f is the chosen aggregator. We allow up to 10% additional misspecification. That is, if the restriction to expected utility implies a proportional increase in the misspecification of more than 10%, then we tend to reject the expected utility specification. Note that since I_V(D, f) is an overestimate of the Varian Inefficiency Index, the calculated ratio is also an overestimate of the real ratio, meaning that the test is actually stricter and will tend to reject expected utility for inconsistent subjects for whom I_V is overestimated.^28 In contrast, the calculation of the Money Metric Index is exact.

The left-hand side of Table 1.3 reports the percentage of subjects with additional misspecification below the 10% threshold. The number of subjects included in each scenario is in brackets.
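The threshold rule just described can be written as a small helper. This is a sketch (the function name and interface are my own); the index values are assumed to be precomputed, and a non-positive denominator corresponds to the excluded subjects for whom the overestimated I_V exceeds I_M(D, f, DA).

```python
def accept_eu_restriction(i_m_eu, i_m_da, i_v, threshold=0.10):
    """Relative marginal misspecification test: accept the EU restriction when
    (I_M(EU) - I_M(DA)) / (I_M(DA) - I_V) <= threshold."""
    denominator = i_m_da - i_v              # misspecification under the unrestricted DA model
    if denominator <= 0.0:                  # overestimated I_V: subject excluded from the test
        return None
    return (i_m_eu - i_m_da) / denominator <= threshold
```

For instance, a subject with I_M(EU) = 0.052, I_M(DA) = 0.050 and I_V = 0.020 has a ratio of about 0.067 and is classified as consistent with expected utility at the 10% threshold.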
We exclude subjects for three reasons: subjects with a Varian Inefficiency Index of more than 10% are too inconsistent to permit any reasonable recovery; a Money Metric Index of more than 10% implies that the disappointment aversion specification does not capture the subject's behavior; and for some subjects the Varian Inefficiency Index is overestimated to the extent that it exceeds the value of the Money Metric Index. The results across scenarios are qualitatively similar, with the number of subjects consistent with the restriction to expected utility maximization ranging between 13 and 22 (out of 47), depending on the combination of functional form and loss function being used. On the other hand, there is some variation in the specific subjects that are excluded due to high values of the Varian Inefficiency or Money Metric Indices, as well as in which subjects are rejected as expected utility maximizers.

[28] For certain subjects it is the case that I_V(D, f) > I_M(D, f, DA), and hence the ratio above is negative. In these cases we exclude the subject from the analysis. The incidence of this problem varies with the loss function and the choice of functional form.

            misspecification*           bootstrapping**
            MEAN        SSQ             MEAN        SSQ
CRRA        34.1% (41)  40.0% (35)      26.7% (45)  29.7% (37)
CARA        56.4% (39)  40.6% (32)      40.1% (42)  35.3% (34)

*Percentage of subjects for which the additional misspecification implied by the expected utility restriction is less than 10%.
**Percentage of subjects for which β = 0 is included in the 95% range of recovered parameters.
(Number of subjects included in each sample in brackets.)

Table 1.3: Evaluating restriction to expected utility using misspecification and bootstrapping
While some of this variation is inherent in the somewhat arbitrary choice of loss function, we show below how the Money Metric Index can be used to select amongst functional forms.

Bootstrapping

As an alternative to the procedure above, we use a bootstrapping technique to determine the sensitivity of the recovered parameters to the inclusion of all 50 observations. In other words, this procedure provides a sense of how sensitive the recovered parameters are to particular observations by quantifying the variation in recovered parameters that occurs as the composition of the data set varies. The exact procedure is described in detail in Appendix A.4. This more standard method enables us to check the robustness of the conclusions presented in the left hand side of Table 1.3.

The bootstrapping procedure may be applied to evaluate a parametric restriction. If the interval of recovered β generated by 95% of samples includes β = 0, then we may conclude that the expected utility model is a reasonable approximation of individual choices. In the right hand side of Table 1.3 we present the fraction of subjects for which β = 0 falls within the 95% range.[29],[30] As before, there is some variation in the identity and number of subjects for which this is the case, depending on the choice of functional form and loss function. For the combinations tested we find that between 11 and 17 subjects may be represented by expected utility according to this procedure.

[29] As above, we exclude subjects with a Varian Inefficiency Index or Money Metric Index exceeding 10%.

[30] The 95% range is constructed exactly as in the standard statistical bootstrapping procedure, by excluding the bottom and top 2.5% of recovered parameters across all samples.

Although the percentages are somewhat lower, we find general agreement between these two procedures for evaluating the expected utility restriction.
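Given a list of β values recovered from the bootstrap samples, the 95%-range check can be sketched as follows (the function name is our own; the trimming convention follows the standard procedure of dropping the bottom and top 2.5% of draws):

```python
def beta_in_95_range(recovered_betas, beta0=0.0):
    """Return True if beta0 lies within the central 95% of the
    recovered parameters, i.e. after dropping the bottom and top
    2.5% of the bootstrap draws."""
    xs = sorted(recovered_betas)
    n = len(xs)
    cut = int(0.025 * n)                 # draws trimmed from each tail
    lo, hi = xs[cut], xs[n - 1 - cut]
    return lo <= beta0 <= hi
```

If β = 0 survives this check, the expected utility restriction is not rejected for that subject.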
For example, with respect to the CRRA functional form and mean aggregator, of the 14 subjects with additional misspecification below the 10% threshold, 11 satisfy the restriction under the bootstrapping procedure. Additionally, there is one subject that satisfies the bootstrapping criterion for expected utility but is not under the threshold for misspecification. A similar pattern is present for various combinations of functional form and loss function. It is important to note that the lower proportion of expected utility under bootstrapping runs in the opposite direction of the bias introduced by the overestimation of I_V.

1.4.4 Comparison of Non-nested Alternatives

The Money Metric Index also allows one to evaluate non-nested alternatives, as is the case when comparing functional forms, for example CRRA versus CARA. We can calculate the extent of misspecification implied by each functional form and select the one which best represents a decision maker's preferences on a subject-by-subject basis. For the two loss functions used, we find that most subjects are better represented by the CRRA functional form. The percentage of subjects for which the Money Metric Index is lower using CRRA rather than CARA is reported in Table 1.4. The number of subjects included in our calculations is in brackets, again excluding subjects with a Varian Index or Money Metric Index exceeding 10% for both functional forms.

In Section 1.4.3 we evaluated the expected utility model under the restriction of a particular functional form for utility, CRRA or CARA, for all subjects. In contrast, we can evaluate the Money Metric Index for each subject including both functional forms for both the restricted (expected utility) and unrestricted (disappointment aversion) models. Hence, it may be the case that, for a single subject, the functional form that minimizes misspecification differs across the two models, and, of course, the functional form that minimizes misspecification may also differ across subjects. Table 1.4 reports the results from this more flexible version of the misspecification test above. We conclude that the choices of close to half of all subjects may be reasonably approximated by the expected utility model when we allow the utility function to be either CRRA or CARA.

                                                    aggregator
                                                    MEAN        SSQ
CRRA (vs CARA)*                                     68.9% (45)  80.0% (37)
Expected Utility (allowing CRRA or CARA)**          46.3% (41)  48.6% (37)

*Percentage of subjects for which misspecification is lower with CRRA than CARA.
**Percentage of subjects for which the additional misspecification implied by the expected utility restriction is less than 10%, including both CRRA and CARA.
(Number of subjects included in each sample in brackets.)

Table 1.4: Choice of utility index and evaluation of expected utility restriction

1.5 Short Discussions

1.5.1 Comments on alternative loss-functions

Area-based Parametric Recoverability

Figure 1.1 suggests an obvious alternative to the money metric as a foundation for measuring misspecification: a measure based on the area of intersection between the upper contour set corresponding to a specific utility function and the set of alternatives that are revealed worse than the observed choice. This measure is related to the Minimal Swaps Index, a measure of inconsistency proposed recently by Apesteguia and Ballester (2012) for the case of a finite number of alternatives. To generalize their method to infinite alternative sets, as studied in the current paper, in light of Theorem 16 one needs to calculate an index based on the area above for the entire set of continuous and non-satiating utility functions.
When the set of utility functions is restricted to a parametric family, the Minimal Swaps Index could then measure the inconsistency, while the remainder would represent the misspecification. While the current study demonstrates how to achieve this goal with respect to the Money Metric Index and the corresponding Varian Inefficiency Index, it is not entirely clear how to measure inconsistency directly using areas.

One can define a measure of inconsistency based on the area of intersection between the revealed preferred set and the budget set corresponding to an observed choice. Define the revealed preferred set as only those bundles which are either revealed preferred, or monotonically dominate a bundle that is revealed preferred, to a given bundle. Hence, as illustrated in Figure 1.6, violations of consistency are removed by modifying budget sets so as to eliminate the area of overlap between the budget set and those bundles which are revealed preferred. We can then use this measure to decompose an area index into separate measures of inconsistency and misspecification, just as we did with the Money Metric Index.

Figure 1.6: Modified budget sets

Nevertheless, an area index is not ideal. First, there does not exist an elegant theoretical analog to Afriat's (1987) Theorem with respect to the modified budget sets in Figure 1.6, as there does for the specific type of budget set adjustments utilized in calculating the Money Metric Index (see footnote 64 in Appendix A.3). Second, computing the inconsistency index suggested above would not be any easier than computing the Varian Inefficiency Index, a problem which is NP-hard. Third, it is a simple exercise to show that choices with modified budget sets as in Figure 1.6 can be easily rationalized by non-convex preferences and, in fact, any recovery procedure based on an area index would be biased towards these types of non-convexities.
Put another way, with the area loss function as a criterion, any convex preferences which rationalize the modified data set can be improved upon by similar non-convex preferences. Lastly, the simple area index lacks the intuitive interpretation that the Money Metric Index enjoys. All of these difficulties are surmountable, and we think they are worth pursuing in future work. Ultimately, since the Money Metric Index does not appear to suffer from the same issues, we currently believe it dominates the proposed area loss function both as a measure of misspecification and as a method for recovering preferences.

When Closer is NOT Better

As noted above, a recovery method that employs a loss function based only on the distance between observed and predicted choices (such as NLLS) fails to account for all the ranking information encoded in the choices, since it compares only the distance between predictions and choices and does not incorporate all the other bundles that were feasible but not chosen. Moreover, if the "true" (unobserved) preferences are not convex, the ranking information induced by a utility function that generates a prediction closer to the observed bundle may be more inconsistent with the "true" ranking of bundles. In other words, the intuition that a closer prediction represents less misspecification relies crucially on the assumption of convex preferences, which is not part of revealed preference theory.

Figure 1.7: Non-convex preferences and a distance-based loss-function

Figure 1.7 demonstrates this argument. Consider a choice of x0 generated by the non-convex preferences depicted in the figure. These preferences imply that, had the DM faced the menu {x′, x′′}, she would have chosen x′′. Since x′ is closer to x0 than x′′, every recovery method based on the distance between observed and predicted choices would assign a lower loss to preferences with predicted choice at x′ than to preferences with predicted choice at x′′.
This would imply that x′ is preferred to x′′, contrary to the "true" preferences that generated the data.

1.5.2 Random Utility Maximization

A Random Utility Maximization (henceforth, RUM) model is a probability space over a set of utility functions. A data set is rationalizable if there exists a RUM model such that, for every choice problem, the expected frequency of every feasible alternative generated by the RUM model equals the observed frequency of that alternative.[31]

[31] See McFadden (2005), Kitamura and Stoye (2011), and Stoye and Hoderlein (2012) for revealed stochastic preference characterizations and tests for rationalizability. Gul and Pesendorfer (2006) consider the case where the objects of choice are lotteries and provide necessary and sufficient conditions for random choice behavior to represent maximization of some RUM model on lotteries.

RUM models are usually studied in the context of population-level data, where for each problem only the distribution of choices is observed. In such a framework, the population is assumed to be heterogeneous and individuals are assumed to hold deterministic preferences (McFadden, 2005). However, some authors interpret RUM models as describing a homogeneous population of individuals with stochastic preferences or, equivalently, an individual with stochastic preferences who encounters the same choice problem repeatedly (Gul and Pesendorfer, 2006).

The application of RUM to individual-level data seems conceptually attractive. Previously, such application was challenged by experimental results that exhibited monotonicity[32] violations. Recently, extensions were introduced to account for monotonicity violations due to the attraction effect (Gul et al. 2012; Natenzon 2013).
However, the finding that subjects exhibit stochastic behavior more frequently when facing "difficult" decision problems (Rubinstein 2002a; Agranov and Ortoleva 2014) poses an additional challenge to applying RUM models to individual-level data analysis, since it implies deliberate randomization rather than random preferences.

1.5.3 The Computation of the Varian Inefficiency Index

Afriat (1972, 1973, 1987) and Varian (1990) discuss non-uniform adjustments of the budget lines such that the inconsistencies in the data are removed. Varian (1990) argues that, given an aggregator function, an optimal vector of adjustments can be found. Moreover, the value of this vector can be interpreted as the inconsistency level of a given data set. The problem of finding this exact value is equivalent to the minimum cost feedback arc set problem.[33] Karp (1972) shows that the minimum cost feedback arc set problem is NP-hard, and therefore finding the exact Varian Inefficiency Index is also NP-hard, as suggested in Varian (1990).

Three algorithms to compute a polynomial-time approximation have been suggested in the economics literature. The first algorithm (Tsur (1989) and Algorithm 1 in Alcantud et al. (2010)) reports the vector v such that v_j is the minimal adjustment required to exclude all x^i with x^i R x^j from the budget set of observation j. In the second algorithm (Algorithm 2 in Alcantud et al. (2010)), v_j is the minimal adjustment required to exclude one x^i with x^i R x^j from the budget set of observation j. If the data satisfies GARP_v, v is reported; otherwise another point is removed for each observation j, and so on until GARP_v is satisfied.

[32] A random choice rule is monotonic if the probability of an existing alternative being chosen cannot increase when a new alternative is introduced into the choice set.
Monotonicity is a common property of all RUM models.

[33] Given a directed and weighted graph, find the "cheapest" subset of arcs such that its removal turns the graph into an acyclic graph.

The third algorithm (Varian (1993) and Algorithm 3 in Alcantud et al. (2010)) calculates the minimal adjustment to one of the budget sets such that one violation of GARP is removed. This minimal value is substituted into v and GARP_v is checked. If the data satisfies GARP_v, v is reported; otherwise another point is removed and the procedure is repeated until the data satisfies GARP_v.

Alcantud et al. (2010) show that Algorithms 2 and 3 are better approximations than Algorithm 1 and that they do not dominate each other. Moreover, Alcantud et al. (2010) show that D satisfies GARP_v for the v found by Algorithms 2 and 3. This implies that these approximations overestimate the actual Varian Inefficiency Index. We do not know of any measure of the quality of this approximation. Also, note that none of these algorithms uses the chosen aggregator function as part of its iterative mechanism. We believe that incorporating the computer science literature on the minimum cost feedback arc set problem, and using the chosen aggregator, may considerably improve the quality of the approximation.

1.5.4 From Inefficiency to Consideration Sets

In the consistency literature, Afriat (1972) and Varian (1990, 1993) view the extent of the adjustment of the budget line as the amount of income wasted by a decision maker relative to a fully consistent one (hence the term "Inefficiency Index"). A related interpretation, mentioned by Houtman (1995), holds that the DM overestimates prices and hence does not consider all feasible alternatives.
An alternative interpretation (due to Manzini and Mariotti, 2007, 2012; Apesteguia and Ballester, 2012; Masatlioglu et al., 2012; Cherepanov et al., 2012) views the adjusted budget set as a consideration set, which includes only the alternatives from the original budget menu that the DM compares to the chosen alternative. By construction, those bundles not included in the consideration set are irrelevant for revealed preference purposes. Another line of interpretation for inconsistent choice data is measurement error (Varian, 1985; Tsur, 1989). These errors could be the result of various circumstances such as a (literally) trembling hand, indivisibility, omitted variables, etc.

All of the above interpretations take literally the existence of underlying "welfare" preferences that generate the data (Bernheim and Rangel, 2009). In addition, there exist other plausible data generating processes that result in approximately (and even exactly) consistent choices (Simon, 1976; Rubinstein and Salant, 2012). We do not find a clear reason to favor one interpretation over another, and would rather remain agnostic about the nature of the adjustments required to measure inconsistency.

More importantly, this paper studies the problem of recoverability of preferences, not consistency. That is, we take the data set as the primitive and the utility function as an approximation. As such, the adjustments serve as a measurement tool (a "ruler") for quantifying the extent of misspecification. We view the current work as contributing to the measurement of misspecification and the recovery of approximate preferences, rather than to the literature that explains how inconsistency arises.

Chapter 2

The Predictive Power of Parametric Recovery Methods

2.1 Introduction

In Chapter 1 we introduce a novel approach for recovering structural parameters from individual choice data from convex budget sets: the Money Metric Index (henceforth referred to as the MMI).
The MMI is based on the well-known money metric utility function and was originally recommended by Varian (1990) as an "economic" alternative to standard statistical techniques for estimating preferences. While we demonstrate the normative appeal of this method by illuminating its foundation in revealed preference theory, we do not provide any empirical evidence that the MMI is a superior method for recovering parameters. Nevertheless, we show that the MMI recovers different parameter values than statistical methods. Hence, the purpose of this chapter is to evaluate the usefulness of the MMI as an empirical technique relative to traditional statistical methods.

The practice of recovering preferences from choice data, whether from the laboratory or the market, has a long history in economics spanning all sub-fields and disciplines. At its most basic, this process involves extracting the ranking information encoded in the data in order to construct a preference relation that is as close as possible to the decision maker's true preferences, which are, of course, unobservable. We claim that the revealed preference relation contains as much information about the decision maker's true preferences as can be inferred from their choices. Moreover, if a decision maker's observed choices satisfy finite nonemptiness and choice coherence (Kreps, 2013), then these choices induce a preference relation that is both complete and transitive, and it can be constructed using the revealed preference relation implied by the observed choices.

In contrast, statistical methods for recovering preferences rely only on a comparison between observed and predicted choices and seek to minimize the distance between these quantities. This amounts to finding the preference relation which induces choices from observed choice sets that most closely resemble the actual choices of the decision maker. Of course, this is exactly what we want, except that
this method places no explicit restrictions on out-of-sample comparisons, since much of the revealed preference information is discarded.

To illustrate the difference, consider a decision maker with the following choice correspondence: a ∈ c({a,b,c}). Suppose we wish to compare two preference relations, ≻1 and ≻2, such that b ≻1 a ≻1 c and b ≻2 c ≻2 a, in terms of their consistency with the observed choice. The choice correspondence induced by either preference relation contradicts our observation, i.e. a ∉ c_i({a,b,c}) for i = 1, 2, so there is no way to select between them on this basis alone. On the other hand, for the choice set {a,c} we have a ∈ c_1({a,c}) and a ∉ c_2({a,c}). If the decision maker satisfies finite nonemptiness and choice coherence (Kreps, 2013), then the choice induced by ≻1 is consistent with the decision maker's revealed preferences whereas the choice induced by ≻2 is not. Hence, if we utilize this additional revealed preference information, we may be able to select among preference relations where previously we could not.

If we extend this logic to the domain of choice from convex budget sets and apply it to the task of recovering parameters for a pre-specified structural model, then the MMI provides a tractable method for utilizing the decision maker's revealed preference information.[34] As above, even with respect to a parametric analysis of the data, the MMI utilizes more ranking information than does a method that compares only observed and predicted choices. Hence, we would expect parameters recovered using the MMI to perform no worse than the statistical method in predicting choices out of sample.

Consider Figure 2.1, where we have only a single observation, xR, chosen from a budget set, and we wish to determine which of two utility functions, u and u′, is a more accurate representation of the decision maker's underlying preferences induced by their choice.
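Before turning to the continuous setting of Figure 2.1, the finite example above can be made concrete. The sketch below is illustrative only (the function names are ours); it shows that the two orders tie on the observed data but are distinguished by the revealed ranking a R c:

```python
def predicted_choice(order, menu):
    """Choice induced by a strict preference order, given as a
    best-to-worst list of alternatives."""
    return min(menu, key=order.index)

def prediction_errors(order, data):
    """Number of menus on which the induced choice contradicts the
    observed one; data is a list of (menu, chosen) pairs."""
    return sum(predicted_choice(order, menu) != chosen
               for menu, chosen in data)

# The example from the text: a is chosen from {a, b, c}.
data = [(["a", "b", "c"], "a")]
order1 = ["b", "a", "c"]   # b >1 a >1 c
order2 = ["b", "c", "a"]   # b >2 c >2 a
# Both orders mispredict the observed choice once, so comparing
# predictions alone cannot separate them; only order1 agrees with
# the revealed ranking of a over c on the unobserved menu {a, c}.
```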
It is trivial to recover a utility function that rationalizes a single observation as in Figure 2.1; in practice, however, data sets are much larger and therefore more difficult to rationalize with simple functional forms for utility. As such, our example restricts attention to two candidates, both of which are clearly mis-specified. Hence, the goal of recovering preferences is to find the utility function that minimizes this mis-specification.

The statistical method, in this case Non-Linear Least Squares (henceforth referred to as NLLS), compares the implied optimal choice induced by each utility function and selects the one that is closer. On the other hand, the MMI attempts to quantify the extent to which the rankings implied by each utility function are consistent with the ranking information revealed by the subject's choice (strictly speaking, this is the area of overlap between the upper contour set of the indifference curve and the budget set).

[34] In Section 1.5.1, we discuss a method that relates the revealed preference information more accurately than does the MMI; however, the MMI has several practical advantages that make it the preferable choice in application.

Figure 2.1: The Money Metric Index

The MMI quantifies this area in a tractable way by calculating the budget set adjustment required to remove these inconsistencies, as shown in the figure. This procedure incorporates many more comparisons than does the statistical method, and so may be more useful for evaluating the accuracy of a given utility function as well as for predicting choices from unobserved choice sets.

To illustrate, suppose that the same decision maker is later given a choice between only two points, xR and xS, and we wish to select the utility function, u or u′, that predicts their choice correctly.
In Figure 2.1, the two recovery methods disagree as to which function is more accurate, and they make different predictions with respect to the choice between these two options. On the other hand, from our initial observation we know that xR is revealed preferred to xS, and the MMI selects the utility function that is consistent with this ranking information encoded in the decision maker's choice. In contrast, NLLS selects the utility function that predicts an optimal choice closer to the observed choice, and appears not to use the additional ranking information implied by revealed preference at all. It is in this sense that we believe the MMI utilizes more ranking information than does NLLS, and for this reason we expect it to be a more accurate predictor of choice.

We test our hypothesis by collecting data in a laboratory setting using a graphical interface and a unique chained experimental design. In the first of two parts, subjects are presented with a sequence of decision problems in which they are asked to choose a combination of two contingent assets from a convex budget set. This part of the experiment is inspired by the design used by Choi et al. (2007).[35] These choices are then used to instantaneously recover parameters of a specified utility function using both the MMI and NLLS methods. From these, we construct a sequence of pairwise choices such that one of the options in each pair is consistent with the recovered parameters from the MMI and the other with those of NLLS. Then, in the second part of the experiment, subjects are presented with these pairwise choices, and their responses identify which of the two sets of parameters more accurately predicts their choices.

We find that the MMI more accurately predicts subject choices both in the aggregate and at the individual level, although in some cases this determination is not statistically significant at the 5% level.
At the aggregate level, approximately 55% of pairwise choices are more accurately predicted by the MMI, and at the individual level 52 out of 97 subjects are more accurately predicted by the MMI. These results persist, and in some cases are strengthened, when the data is refined to exclude subjects with inconsistent (i.e. non-rational) or erratic behavior. A detailed analysis of the data is provided below and, in all cases where the MMI and NLLS recover substantially different parameters, the MMI proves to be the more accurate method.

Additionally, and as a follow-up to the findings in Chapter 1, we focus our analysis on identifying evidence of first-order risk aversion (FORA) in subject choices. In the first part of the experiment, where subjects make choices from convex budget sets, we over-sample budget lines with moderate price ratios, i.e. close to the odds ratio, in order to better identify possible "kinks" in indifference curves. We identify substantial deviations from expected utility with respect to the recovered parameters, both in terms of the number of subjects and the magnitude of the parameter that corresponds to non-expected utility.[36] This is true for both recovery methods described below; however, quantitative differences do exist between methods. In the second part of the experiment, we emphasize pairwise comparisons involving portfolios with low variability in order to identify local risk seeking behavior in the neighborhood of certainty. For many subjects we find evidence of local risk seeking behavior, which corresponds to a high degree of risk aversion over low stakes, or first-order risk aversion.

[35] We discuss the similarities and differences in more detail below.

[36] The recovery methods employed in this paper do not utilize a stochastic framework and, as such, a discussion of statistical significance is not meaningful.

The paper is organized as follows. Section 2.2 introduces the Money Metric
Index as a recovery method and reviews Non-Linear Least Squares. In Section 2.3, we describe the experiment in detail, including the manner in which budget sets are selected and the logic underlying the algorithm for generating the pairwise choices. Our results, both at the aggregate and individual level, are presented in Section 2.4, including several refinements based on an assessment of subject consistency in decision making. Section 2.5 provides a discussion of several issues that arise from analysis of the extremely rich data set generated by the experiment.

2.2 Parametric Recovery Methods

We compare two parametric methods for recovering preferences, i.e. structural parameters, from individual choice data: the Money Metric Index (henceforth MMI) and Non-Linear Least Squares (henceforth NLLS). The former is a novel method, suggested first by Varian (1990) and developed in Chapter 1, whereas the latter is well known and frequently used.[37] Both are examples of extremum estimators (Amemiya, 1985), yet they differ with respect to the manner in which the criterion for non-linear optimization is defined. The NLLS criterion is typically defined as the aggregate "distance" between observed and predicted choices (or a function thereof, depending on the context), whereas the MMI criterion is defined as the aggregate "distance" between actual and predicted expenditures. While both measures reflect the difference between observed and implied optimal choices, the latter utilizes more of the revealed preference information (encoded in the subjects' choices) in order to minimize the inconsistency between these rankings and those implied by given preferences. Chapter 1 outlines the properties of the MMI and discusses substantial normative differences that exist as compared to NLLS.
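As a rough numerical companion to this comparison, the two criteria can be sketched for two goods with income normalized to 1. This is an assumption-laden illustration (grid search and bisection in place of a proper solver; function names are our own), not the routine used in the experiment:

```python
import math

def best_affordable(u, p, grid=2000):
    """Predicted choice: maximize u on the budget line
    p[0]*x1 + p[1]*x2 = 1 (income normalized to 1), by grid search."""
    best, best_u = None, -math.inf
    for i in range(grid + 1):
        x1 = (i / grid) / p[0]
        x2 = (1.0 - p[0] * x1) / p[1]
        if u(x1, x2) > best_u:
            best, best_u = (x1, x2), u(x1, x2)
    return best

def money_metric(u, x, p, grid=2000, xmax=50.0):
    """m(x, p, u): cheapest bundle at prices p weakly preferred to x
    under u, found by tracing the u(y) = u(x) level set numerically."""
    target = u(x[0], x[1])
    best = p[0] * x[0] + p[1] * x[1]     # the observed bundle qualifies
    for i in range(1, grid + 1):
        y1 = xmax * i / grid
        if u(y1, xmax) < target:
            continue                      # level set out of reach here
        lo, hi = 0.0, xmax
        for _ in range(60):               # bisect for the boundary y2
            mid = 0.5 * (lo + hi)
            if u(y1, mid) >= target:
                hi = mid
            else:
                lo = mid
        best = min(best, p[0] * y1 + p[1] * hi)
    return best

def nlls_criterion(data, u):
    """Sum of squared distances between observed and predicted bundles."""
    total = 0.0
    for p, x in data:
        xh = best_affordable(u, p)
        total += (x[0] - xh[0]) ** 2 + (x[1] - xh[1]) ** 2
    return total

def mmi_criterion(data, u):
    """Sum of squared gaps between actual expenditure (= 1) and the
    money-metric expenditure of each observation."""
    return sum((1.0 - money_metric(u, x, p)) ** 2 for p, x in data)
```

Minimizing `nlls_criterion` penalizes only the geometric distance to the predicted bundle, while minimizing `mmi_criterion` penalizes the budget adjustment needed to reconcile the utility function with the revealed rankings.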
We will briefly recap some of these findings below, as well as illustrate the difference between the NLLS and MMI criteria.

The decision making environment used in this paper is defined as follows. Consider a decision maker (DM) who chooses bundles x^i ∈ ℜ^2_+ (i ∈ {1, …, 22}) from budget menus {x : p^i x ≤ 1, p^i ∈ ℜ^2_++}. In the case of budget menus, let D = {(p^i, x^i)}_{i=1}^{22} be a finite data set, where x^i is the chosen bundle at normalized prices p^i.[38] For pairwise choices, let C = {(A^j, x^j)}_{j=1}^{9} be a finite data set, where x^j ∈ ℜ^2_+, j ∈ {1, …, 9}, is the chosen bundle from choice set A^j, which contains only two elements. Additionally, define a utility function u : ℜ^2_+ → ℜ. Hence, for each DM, given their choices D, we are looking for the utility function, within a pre-selected parametric family of utility functions, which "best" represents the DM's preferences. As with all recovery/estimation methods, "best" is defined by the particular criterion used by the non-linear optimization routine for recovering parameters.

[37] See Choi et al. (2007), Fisman et al. (2007), and Andreoni and Sprenger (2012) for recent applications of this method in the context of choices from linear budget sets in the lab.

[38] In accordance with convention and without loss of generality, we assume income is equal to 1 and prices in the data are normalized to reflect this.

2.2.1 The Money Metric Index

The Money Metric Index minimizes the inconsistency between the ranking information encoded in a DM's observed choices and that implied by a given utility function by comparing actual expenditures, in this case normalized to 1, with the optimal expenditures implied by the solution to the consumer's Expenditure Minimization Problem. This procedure requires computation of the money metric utility function, defined below, for all observations in a given data set.

Definition 17.
The money metric utility function for a utility function u(·), prices p, and bundle x is given by:

m(x, p, u) = min { p·y : y ∈ ℜ^K_+, u(y) ≥ u(x) }.

For a given observation, (p^i, x^i), the difference between this value and actual expenditure can be interpreted as the loss incurred by the DM were they to make their decision according to the choice correspondence induced by the given utility function rather than according to their own preferences.[39] Hence, the recovered parameters are chosen so as to minimize the aggregate difference within a parametric family of utility functions. We define the criterion as follows:

f_MMI(D, u(·;θ)) = Σ_{i=1}^{n} ( 1 − m(x^i, p^i, u) )²    (2.1)

where θ ∈ Θ are the parameters that index each utility function within the parametric family.[40] The recovered parameters are the solution to a non-linear optimization routine, defined below:

θ̂_MMI = argmin_{θ∈Θ} f_MMI(D, u(·;θ))

[39] Of course, this would not make sense literally, yet for a researcher wishing to represent a DM's preferences using a utility function this is a useful interpretation.

[40] By definition, m(x^i, p^i, u) ≤ 1.

2.2.2 Non-Linear Least Squares

Non-Linear Least Squares minimizes the aggregate distance between observed choices (or a function thereof) and a non-linear function of model parameters and prices. Given a utility function u(·), specific parameters θ ∈ Θ, and prices p, the implied optimal choice of a subject is defined by x̂ = argmax_x {u(x;θ) : px ≤ 1}. Hence, x̂ is the solution to the consumer's Utility Maximization Problem (UMP). We define the NLLS criterion, f_NLLS, as the sum of squared distances between the observed choices, {x^i}_{i=1}^{n}, and the predicted choices, {x̂^i}_{i=1}^{n}, or

f_NLLS(D, u(·;θ)) = Σ_{i=1}^{n} ||x^i − x̂^i||²

The recovered parameters for NLLS are then given by[41]

θ̂_NLLS = argmin_{θ∈Θ} f_NLLS(D, u(·;θ))

2.2.3 Disappointment Aversion

As in Choi et al.
(2007), we will adopt the Disappointment Aversion Model (DA) of Gul (1991), as it is observationally equivalent to Rank-Dependent Utility (RDU) with only two states and nests Expected Utility (EU) as a special case. The structure and properties of this functional form are described in detail in Section 1.4; however, for ease of exposition, we provide the functional form below for reference.

u(x^i_1, x^i_2) = γ w(max{x^i_1, x^i_2}) + (1 − γ) w(min{x^i_1, x^i_2})

where

γ = 1/(2 + β), β > −1;  w(z) = z^{1−ρ}/(1−ρ), ρ ≥ 0

In order to facilitate a comparison, we recover parameters using both MMI and NLLS and assess the predictive power of the DA model for both in the context of pairwise choice.

41 The statistical properties of NLLS are well known; however, for consistency, we assume that there is no stochastic component to the data generating process. Some readers may be uncomfortable with this extreme assumption and (correctly) point out that the recovery method can no longer rightfully be called NLLS, nor is statistical inference valid. Nevertheless, we feel that most readers will still associate this method with NLLS, and so we maintain this assumption for consistency with the Money Metric Index.

2.3 Experimental Design and Procedures

For the experiment we recruited 207 subjects using the ORSEE system (Greiner, 2004), which is operated by the Vancouver School of Economics (VSE) at the University of British Columbia. Subjects participate voluntarily and are primarily undergraduate students representing many disciplines within the university. The experiments were conducted over several sessions in October 2014 and February 2015 at the Experimental Lab at the Vancouver School of Economics (ELVSE). Each experimental session lasted approximately 45 minutes.

The experiment is composed of two sequential parts. In the first part, subjects select portfolios of contingent assets from a series of 22 convex (linear) budget sets with differing price ratios and/or relative wealth levels. These choices are used
These choices are usedto instantaneously recover individual preferences using the two recovery methodsdiscussed above, NLLS and MMI.From the two sets of recovered parameters we construct a sequence of 9 pair-wise choices from which subjects must choose during Part 2 of the experiment.Each round involves a choice between two portfolios: a risky one, where outcomesdiffer across states, and a safe one, where the subject obtains a certain payoff re-gardless of the state. For each choice, the subject’s decision corresponds to one(and only one) of the sets of recovered parameters. As such, the subject choicesdetermine which set of recovered parameters, from either the MMI or NLLS, is abetter predictor of their choices and hence more accurately resembles the subject’strue preferences.Figure 2.2 illustrates the general method for constructing the pairwise choices.Given two utility functions, each corresponding to a different set of parameters,we select a risky portfolio, xR. Then we select a safe portfolio, xS, such that itlies in between the certainty equivalents of the risky portfolio implied by the twoutility functions. Hence, a subject’s choice from the set {xR,xS} identifies whichof the two utility functions more accurately corresponds to their preferences. Thisprocess is repeated in order to create a series of 9 pairwise choices and is describedin more detail below.Before subjects were allowed to begin, the instructions were read aloud as sub-jects followed along by viewing a dialog box on-screen (reproduced in AppendixB.1). Throughout, subjects were encouraged to ask questions in order to clarifytheir understanding of the task and graphical interface. In total, each subject made31 choices across the two parts of the experiment. After both rounds were com-plete, one of these rounds was selected randomly to be paid according to the sub-ject’s choice. 
For whichever round was selected, subjects were asked to flip a fair coin in order to determine for which state they would be paid. The choices were made over quantities of tokens, which are converted at a 2-to-1 ratio, i.e. 2 tokens = $1. Subjects were paid privately upon completion of the experiment. Subject earnings averaged about $19.53, in addition to a fixed ($10) fee for showing up to the experiment on time.

Figure 2.2: Constructing Pairwise Choices

2.3.1 Part 1: Convex Budget Sets

In this part of the experiment subjects choose bundles of contingent assets from convex budget sets. Each bundle, x^i = (x^i_1, x^i_2), consists of quantities of tokens such that subjects receive x^i_1 tokens if state 1 occurs and x^i_2 tokens if state 2 occurs, with each state equally likely to occur. Bundles are selected from a linear budget set, defined by normalized prices, p^i, and displayed graphically via a computer interface. The manner in which budget sets are chosen is described below, and each subject is faced with the same series of budget sets in the same order.

The interface is a two-dimensional graph that ranges from 0 to 100 tokens on each axis. The resolution of the budget lines is 0.2 tokens, i.e. subjects may adjust their choices in increments of 0.2 tokens with respect to the x-axis. Additionally, bundles are represented up to one decimal place, i.e. 0.1 tokens. Screenshots of the graphical interface are included in the Instructions in Appendix B.1.

Subjects choose a particular allocation by left-clicking on their desired choice on the budget line, and are asked to confirm their choice before moving on to the next round. At the beginning of each round the bundle is repositioned where the budget line intersects the y-axis; however, once the cursor re-enters the graph the bundle responds immediately to the position of the cursor.

Figure 2.3: Budget Lines
Subjects are restricted to choose only those points which lie on the boundary of the budget set, to eliminate potential violations of monotonicity that may arise.

There are two special cases which are treated differently by the interface. When subjects choose a point that is close to the certainty line, a dialog box appears that asks them if they meant to choose the allocation where the value in both accounts is equal, guaranteeing themselves a sure payoff, or if they prefer to stick with the point they chose. Similarly, when subjects choose a point that is close to either axis, a dialog box appears that asks them if they meant to choose a corner choice, i.e. where the payoff in one state is zero, or if they prefer to stick with the point they chose. This is done because it can be challenging to select a particular point with precision using the mouse, due to the mechanical aspects of the interface, and these points have specific qualitative significance.

The budget sets, and associated prices, are specifically chosen to address two issues. First, we require a sufficient amount of overlap between budget sets so that revealed preference tests for rationality, i.e. GARP, will have sufficient power. Second, we would like to emphasize moderate price ratios, i.e. those that are relatively close to the odds ratio, in order to identify the role of First-order Risk Aversion (represented by β) in subject preferences.

We construct the set of 22 budget lines as follows: First, we divide the 22 choices into 2 sets of 11 budget lines. Each set is composed of the same 11 price ratios, yet each set of prices corresponds to a different relative wealth level. Figure 2.3 shows the set of 11 budget lines for the highest relative wealth level. The other set (not shown) is simply a parallel translation of each line toward the origin. Hence, for each price ratio, we have two observations, each at a different
wealth level.

We assess the power of the Generalized Axiom of Revealed Preference (GARP) with respect to this set of prices using a Bronars test (Bronars, 1987). For the Bronars test, we construct 1000 artificial data sets by selecting points from each of the 22 budget lines uniformly at random. Then we calculate the Critical Cost Efficiency Index (CCEI), introduced by Afriat (1972), for each of the artificial data sets.42 In the case of the 22 budget lines proposed, we find that not a single data set passes GARP, 0.4% have CCEI scores exceeding 0.95, and 1.2% have CCEI scores above 0.9. Hence, it is extremely unlikely that a subject could pass GARP, or receive a high CCEI score, by chance alone. We interpret these findings as confirmation that the budget lines we have selected provide for an extremely powerful test of rationality.43

Finally, as illustrated in Figure 2.3, we attempt to isolate the effect of First-order Risk Aversion by emphasizing moderate price ratios, i.e. those close to the odds ratio. We choose 5 of the 11 price ratios to have relatively moderate slopes, whereas the other 6 are more extreme. The goal of this design is to utilize small variation around the odds ratio to identify local risk attitudes around certainty, so-called First-order Risk Aversion, which is represented by the parameter β in our model. All subjects face the same 22 budget lines in the same order; however, the order of the budget lines themselves is random.44

Upon completion of the tasks in Part 1, the subject's choices are used to recover structural parameters for the DA functional form using both methods introduced above, NLLS and MMI. This process occurs instantaneously while subjects prepare for Part 2 of the experiment, and subjects are unaware of this process running in the background. We use these two sets of parameters to construct

42 The CCEI is a special case of the Varian Efficiency Index, which is defined in Section 1.2.3.
The CCEI is formally defined as follows, and can be interpreted as the minimal adjustment required to all budget sets in order to remove preference cycles from a subject's revealed preference relation:

I_A(D) = max { v ∈ [0, 1] : D satisfies GARP_v }

where D satisfies the Generalized Axiom of Revealed Preference given v (GARP_v) if, for every pair of observed bundles, p^i x^j ≤ v p^i x^i implies not p^j x^i < v p^j x^j.

43 Selecting budget lines in this way is a combination of approaches already existing in the literature, and in our view, an improvement. We take after Andreoni and Miller (2002) by structuring the budget lines in groups with identical price ratios at various wealth levels, and after Choi et al. (2007) by choosing budget lines with sufficient overlap in order to guarantee a powerful GARP test. The combination results in a highly structured set of budget lines that can be tailored to the question at hand without sacrificing power.

44 It is not clear how possible order effects would manifest in this environment, nor how they could be identified in the first place. Hence, we deliberately, though cautiously, remain agnostic and present the budget lines in the same order for all subjects.

a sequence of 9 pairwise choice problems to be used in Part 2 of the experiment. The algorithm for creating the pairwise choices is described in detail below.

2.3.2 Part 2: Pairwise Choices

In Part 2, subjects are faced with a sequence of 9 pairwise choices. In each round, subjects must choose one of two portfolios, each represented as a point on a graphical interface: one risky option, where payoffs differ across states, and one safe option, where the payoff is the same. As in Part 1, the interface is a two-dimensional graph that ranges from 0 to 100 tokens on each axis.
Subjects choose a particular portfolio by left-clicking on their desired choice on the graph or by selecting a corresponding radio button on the right-hand side of the interface.

The pairwise choice problems are constructed in order to identify which set of recovered parameters (from Part 1) more accurately predicts the subject's choice in each pairwise choice round.45 We employ the following general procedure: Given a risky portfolio, we calculate the certainty equivalent for both sets of parameters. In virtually all cases these differ, so we select the safe portfolio as the mid-point between the two certainty equivalents. In this way, if we consider each set of parameters as representing different preferences, then one set of preferences implies a preference for the risky portfolio and the other for the safe one.

45 In general, it is not the case that one set of parameters corresponds to the risky choice and the other to the safe choice for all 9 rounds. Reversals are common, as relative risk attitudes often vary locally. Of course, this is not the case when one set of parameters implies globally more risk aversion than the other.

To illustrate formally, consider two arbitrary sets of parameters, θ_1 ≡ {β_1, ρ_1} and θ_2 ≡ {β_2, ρ_2}. We assume that each set of parameters (and associated utility function) represents a decision maker i's preferences, i.e. θ_i ⇔ ≿_i. Given an arbitrary portfolio, x_R, we calculate the certainty equivalent for each set of parameters, CE_i(x_R) = u^{−1}(u(x_R; θ_i)). Without loss of generality, assume CE_1 > CE_2, and we select the safe portfolio, x_S, such that x_S ∈ (CE_2, CE_1). Under this construction we necessarily have x_R ≿_1 x_S and x_S ≿_2 x_R; hence the choice of the subject identifies which of the two sets of parameters is a more accurate representation of the subject's "true" preferences.

In addition to determining which recovery method is a better predictor of individual choices, we would also like to identify the extent of First-order Risk Aversion in subject choices. We designate 6 out of the 9 pairwise choices to this task and refer to these as low-variability portfolios. Hence, we would like to present subjects with risky portfolios as close as possible to the certainty line, yet it is often the case that the difference between certainty equivalents for these portfolios is quite small. As a (partial) solution, we require that the chosen risky portfolio implies a sufficiently large difference in certainty equivalents. We specify this difference to be at least 1 token.46 Nevertheless, for some pairs of sets of parameters it may not be possible to find a portfolio with a sufficiently large difference in certainty equivalents. This occurs most often when the sets of parameters are very similar, but can occur in other cases as well due to the substitutability of the parameters with respect to risk attitude. In these situations we relax the requirement of a 1-token difference incrementally until a reasonable portfolio is found.

Additionally, in some cases we use a slightly different procedure to differentiate between the two sets of parameters. For example, when β_i < 0, it is the case that CE_i(x) > E[x] for some portfolios, i.e. the certainty equivalent exceeds the expected value of a risky portfolio.47 This may occur for one or both sets of recovered parameters, but in either case we apply a similar methodology. In the case where β_i > 0 > β_j, we can select a risky portfolio extremely close to the certainty line such that CE_i(x_R) < E[x_R] < CE_j(x_R) and choose the safe portfolio, x_S, such that x_S = E[x_R]. On the other hand, when β_i, β_j < 0 we have CE_i, CE_j > E[x] for those portfolios close to the certainty line.
We ignore these portfolios and search for those where CE_i(x_R) < E[x_R] < CE_j(x_R), which may be quite far from the certainty line. As before, we select x_S such that x_S = E[x_R]. In both cases, we are not concerned about choosing a risky portfolio with a sufficient difference in certainty equivalents. Instead, we take advantage of the important qualitative difference in local risk attitude in order to distinguish between sets of parameters.

Of course, we are also interested in subjects' preferences towards portfolios with a high variance between outcomes, which we refer to as high-variability portfolios. In the remaining 3 out of 9 rounds, subjects are presented with a risky portfolio that is close to, but not actually on, the axis. As before, subjects must determine whether they prefer this risky portfolio to a safe (or certain) portfolio that is chosen in the manner described above. We try to select the risky portfolio so that it is as close to the axis as possible, yet also 1) implies a difference in certainty equivalents of at least 1 token, and 2) has a minimum payoff of 2 tokens in the worst outcome.48

46 It is often, if not always, the case that the risky portfolio we choose is not the one with the largest difference in certainty equivalents. While choosing points that imply large differences is attractive in terms of drawing inference from observed choices, most often these portfolios are far from the certainty line and hence do little to inform us about the influence of First-order Risk Aversion.

47 Since risk attitude depends on both β and ρ, it is possible to have β < 0 and have the associated utility function exhibit risk aversion with respect to a risky portfolio. However, β < 0 is sufficient for a utility function to display, at least, local risk-seeking behavior with respect to portfolios with small variance.

48 We avoid offering corner choices as they can be difficult to rationalize with the CRRA functional form.
Since the choices presented to subjects in Part 2 are constructed using parameters recovered with this functional form, we are not sure what sense it would make to include corner choices in our analysis.

Of course, the procedure described above is highly idealized, and in practice there are many problems and limitations. For the most part the algorithm is programmed to make reasonable adjustments when these problems arise. For example, in some cases, given the recovered parameters, it is difficult to find any risky portfolio that implies a difference in certainty equivalents that is larger than 1 token. This circumstance arises in two cases: 1) the two sets of recovered parameters are (virtually) numerically identical, or 2) since ρ and β play complementary roles in determining a subject's risk attitude, two different sets of parameter values may imply very similar risk attitudes (at least locally) and certainty equivalents. In the latter scenario, the algorithm iteratively lowers the threshold for the difference between certainty equivalents until a risky portfolio can be chosen. In the former scenario there is little (i.e. nothing) that can be done to separate the two methods, yet we still provide these subjects with a set of choices to complete the experiment.

2.3.3 Discussion of Experimental Design

In pilot versions of this experiment there was some concern that subjects viewed the two parts of the experiment as conceptually different tasks. In order to address this concern we altered the instructions so as to introduce subjects to the more familiar pairwise choice problems first (Part 2). Once we were confident that all subjects understood the nature of this task, we introduced them to the linear budget sets of Part 1 and framed these problems as conceptually similar to the pairwise comparisons, differing only with respect to the quantity of alternatives available.
As mentioned before, the instructions can be found in Appendix B.1, and Figure B.7 illustrates the relationship between the pairwise choice and linear budget set problems as explained to the subjects.

An additional difficulty with two-stage experiments, such as this one, is that if subjects are aware that parts of the experiment are connected, then they may be able to "game" the experiment. In this case we do not think this is a significant concern. First, even if subjects are aware that the two parts are related, it is difficult for them to imagine how; i.e. it is extremely unlikely they would guess that we are comparing recovery methods. Moreover, even if they understood the purpose of the experiment, it is virtually impossible that they could guess the utility function for which parameters are recovered, the methods for recovering these parameters (recall that the MMI is new and virtually unknown), and the algorithm for selecting the pairwise choices.

Nevertheless, it is possible, in principle, to make selections in Part 1 that increase the value of the safe option in Part 2 to its maximum for each round, i.e. the expected value of the risky option. For example, a subject could pick all corner solutions in Part 1 according to an expected value criterion. Yet this would require an extremely detailed understanding of the algorithm used to select these options, and this algorithm is very complicated, with many contingent loops that are not at all obvious. For these reasons, we are confident that the experimental design is sound and that our results are valid.

2.4 Results

By construction, for each pairwise choice, one of the options is implied to be preferred by the parameters recovered using the MMI and the other by NLLS. As such, we report our results, both at the individual and aggregate level, as the number of pairwise choices correctly predicted by the MMI.
Consequently, the number of pairwise choices predicted correctly by NLLS is easily inferred from this number and knowledge of the total number of pairwise choices considered.

Our analysis proceeds as follows: First, we consider the data without any refinements, i.e. including all choices and all subjects. Then, we refine the data set in order to eliminate subjects who make inconsistent choices in Part 1 of the experiment and/or for whom the difference in recovered parameters between methods is negligible. The remaining subjects are then analyzed in order to determine which method is the more accurate predictor of subject choices. Moreover, we also partition the data into analyses of low-variability and high-variability portfolios separately in order to identify local risk attitudes (for example, First-order Risk Aversion).

2.4.1 Unrefined Results

Consider the data as a single, large data set with 1863 observations. The results are reported in Table 2.1. The MMI is a better predictor of subject choices overall, as well as for the low-variability and high-variability portfolios separately. These results are statistically significant at the 1% level against a null hypothesis of predicting subject choices randomly, e.g. by flipping a coin. Hence, the p-value can be interpreted as the likelihood that the MMI correctly predicts 1012 or more of the 1863 choices by chance alone.49 A similar interpretation applies to all other cases.

In Table 2.2 we separate the data by subject and treat each subject as a single data point. We report the number of correct predictions by the MMI out of all 9

49 In this case, the test distribution is derived from the sum of 207 independent binomial random variables with 9 trials and the same success probabilities.
                   # of Observations   Correct Predictions by MMI (%)   p-value
All                1863                1012 (54.3%)                     0.0001
Low-variability    1242                669 (53.9%)                      0.0035
High-variability   621                 343 (55.2%)                      0.0051

Table 2.1: Unrefined Results - Aggregate

(a) All Choices
X ≥ 7: 64    X ≤ 2: 42    Pr(X ≥ 7): 0.09     p-value: 0.0204
X ≥ 8: 43    X ≤ 1: 24    Pr(X ≥ 8): 0.02     p-value: 0.0136
X ≥ 9: 28    X ≤ 0: 16    Pr(X ≥ 9): 0.002    p-value: 0.0481

(b) Low-variability Choices
X ≥ 5: 77    X ≤ 1: 59    Pr(X ≥ 5): 0.11     p-value: 0.0723
X ≥ 6: 53    X ≤ 0: 39    Pr(X ≥ 6): 0.016    p-value: 0.0875

(c) High-variability Choices
X ≥ 3: 80    X ≤ 0: 61    Pr(X ≥ 3): 0.125    p-value: 0.0646

Table 2.2: Individual Results - All Subjects (n=207)

choices (as well as for the 6 low-variability choices and 3 high-variability choices separately). With only 9 choices per subject it can be difficult to declare one of the two methods as definitively better for moderate values, i.e. three to six correct choices. Hence, we only consider those subjects with values of 0, 1, 2, 7, 8, or 9, and exclude the remainder as indeterminate (more on this below). As above, the null hypothesis corresponds to making predictions randomly, where for each choice both options are equally likely to be chosen. Hence, under the null, the probability of predicting 9 choices correctly by chance alone is 0.2%, for 8 choices or more it is 2%, and for 7 choices or more it is 9%. These cumulative probabilities are listed in the third column of Table 2.2. Similar logic is applied to the analysis of low-variability and high-variability choices as well.50

In all cases, the number of subjects for which the MMI is the better predictor is greater; however, we cannot reject the null hypothesis at the 5% level in many cases.
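The Pr(X ≥ k) column of Table 2.2 follows from a Binomial(n, 1/2) null (under random prediction, each pairwise choice is matched with probability one half). As an illustrative check, not part of the original analysis, these cumulative probabilities can be reproduced in a few lines:

```python
from math import comb

def upper_tail(n, k):
    """Pr(X >= k) for X ~ Binomial(n, 1/2): k or more correct
    predictions out of n when each prediction is a coin flip."""
    return sum(comb(n, j) for j in range(k, n + 1)) / 2 ** n

print(round(upper_tail(9, 7), 3))  # all choices: 0.09
print(round(upper_tail(9, 8), 3))  # 0.02
print(round(upper_tail(9, 9), 3))  # 0.002
print(round(upper_tail(6, 5), 3))  # low-variability: 0.109
print(round(upper_tail(3, 3), 3))  # high-variability: 0.125
```

The subject-level p-values in Table 2.2 aggregate these tails across subjects, so they require the distribution of counts of subjects rather than a single binomial tail.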
The null corresponds to a one-sided p-value that can be interpreted as the probability that the MMI is the better predictor Y times or more out of Z by chance alone. While these results are encouraging for the MMI, the high p-values temper our confidence in them.

2.4.2 Data Set Refinements

In this section we introduce two refinements intended to remove some of the "noise" in our data set and facilitate a more precise comparison between methods. The first refinement removes subjects who behave too irrationally in Part 1 of the experiment to justify a utility representation in the first place, whereas the second refinement removes subjects for which there is little or no difference between recovered parameters across recovery methods. The remaining subjects are approximately rational and imply a sufficient difference between methods to admit a comparison.

Rationality Refinement

For each subject, we test for consistency with the Generalized Axiom of Revealed Preference (henceforth referred to as GARP) and compute the Critical Cost Efficiency Index (CCEI). The latter is a measure of the extent of a subject's violations of GARP and may be a desirable relaxation of the strict rationality constraints imposed by GARP.51 Out of 207 subjects, 194 of them have CCEI values greater than 0.90 and 92 of them pass GARP.

50 In all cases, a multinomial likelihood ratio test confirms that this empirical distribution is significantly different from a null hypothesis of random choice, with a p-value of approximately zero.

51 We also compute the Varian Efficiency Index (Section 1.2.3) and the Houtman-Maks Index (Houtman and Maks, 1985) for each subject. The former is similar to the CCEI except that it calculates the minimal budget set adjustments required for each set individually in order to remove cycles from the preference relation. This is an NP-hard computational problem; hence the values we report are approximations. The latter calculates the minimum number of choices that must be removed from the choice set in order to remove cycles from the preference relation. For all subjects, the values of these indices are reported in Appendix B.2.

Figure 2.4: Parameter Refinement

Parameter Refinement

In some cases the two methods recover similar parameters, hence a comparison between them is relatively meaningless. We would like to remove these subjects from our analysis as well, yet sometimes it can be difficult to determine when exactly the recovered parameters imply substantially different choices. Substitutability between the two parameters, β and ρ, with respect to the subject's risk attitude can leave the parameters unidentified in finite samples. Hence, it is possible for two numerically different sets of parameters to imply roughly the same behavior (at least locally). Consequently, rather than comparing the numerical values of the recovered parameters, we use the MMI as a measure of mis-specification, evaluated at both sets of recovered parameters, and exclude those subjects where the difference between these measures is near zero. Strictly speaking, we compute the root mean squared error associated with Equation 2.1 and retain only those subjects for which the difference between methods is greater than 0.01, or 1% of expenditure on average.

Figure 2.4 illustrates the differences in the sets of recovered parameters, partitioned into those which remain after this refinement (left panel) as well as those that are removed (right panel). The excluded parameters suggest a pattern which is consistent with β being unidentified in small samples, i.e. we sometimes observe large differences in β yet very small differences in ρ, representing similar preferences. For the sets of parameters that remain, there are often, but not always, different recovered values for both parameters. Combining this refinement with the rationality refinement discussed above, 103 of the original 207 subjects remain.
                   # of Observations   Correct Predictions by MMI (%)   p-value
All                927                 532 (57.3%)                      <0.0001
Low-variability    618                 349 (56.5%)                      0.0007
High-variability   309                 183 (59.2%)                      0.0007

Table 2.3: Aggregate Results - Consistent Subjects (n=103)

(a) All Choices
X ≥ 7: 36    X ≤ 2: 14    Pr(X ≥ 7): 0.09     p-value: 0.0013
X ≥ 8: 24    X ≤ 1: 8     Pr(X ≥ 8): 0.02     p-value: 0.0035
X ≥ 9: 16    X ≤ 0: 5     Pr(X ≥ 9): 0.002    p-value: 0.0133

(b) Low-variability Choices
X ≥ 5: 41    X ≤ 1: 24    Pr(X ≥ 5): 0.11     p-value: 0.0232
X ≥ 6: 24    X ≤ 0: 17    Pr(X ≥ 6): 0.016    p-value: 0.1744

(c) High-variability Choices
X ≥ 3: 43    X ≤ 0: 26    Pr(X ≥ 3): 0.125    p-value: 0.0266

Table 2.4: Individual Results - Consistent Subjects (n=85)

2.4.3 Refined Results

The aggregate results for the refined sample of 103 subjects are reported in Table 2.3. As above, in all cases the MMI is a better predictor of subject choices, and the results are statistically significant at the 1% level.

The individual-level results are displayed in Table 2.4. As was the case when all subjects were included, the MMI remains a better predictor in all cases for the refined subject pool, yet in many cases the p-values are even smaller here (in spite of the smaller sample size). A notable exception is with respect to the low-variability choices, where the results are not statistically significant for the case where we compare zero correct predictions versus six.

                  MMI                NLLS
                  β        ρ         β        ρ
All               0.4130   0.3854    0.3365   0.6191
AEI > 0.90        0.4375   0.3876    0.3268   0.6154
GARP              0.4242   0.3061    0.3268   0.4774
Choi et al.       0.333    0.356     0.171    0.580

Table 2.5: Recovered Parameters - Median

2.4.4 Recovered Parameters: Risk Attitude

Table 2.5 reports the median values of the recovered parameters for both recovery methods, broken down according to the various consistency criteria introduced above.52 In aggregate, we observe noticeable differences between the recovery methods, yet in both cases the median value for β is greater than zero.53 This suggests that many of the subjects' choices deviate from expected utility, i.e.
β = 0, and that First-order Risk Aversion may play a prominent role in decision making. Additionally, the oversampling of budget lines with moderate price ratios, i.e. those close to the odds ratio (one in this case), provides confidence in these results, as it aids in the identification of β.

Additional evidence of First-order Risk Aversion is observed in subject decisions among low-variability choices in Part 2 of the experiment. As mentioned above, these choices often (though not always) involve portfolios with very small differences between the high and low payoffs. Hence subjects who choose the safe portfolio, or certain option, may demonstrate a substantial degree of risk aversion. Considering all 207 subjects, 166 of them choose the safe option in at least one of the 6 low-variability choices, and 51 of these choose the safe option in all cases.54 In all, we conclude that First-order Risk Aversion is apparent in subject preferences, whether directly in observed choices or indirectly through recovered parameters.

52 The recovered parameters for all subjects are provided in Appendix B.3.

53 The difference in the median values for ρ may be attributed to the mechanics of the MMI with respect to the CRRA functional form. In this case, the MMI may punish selections close to the axis severely; hence the recovery procedure restricts ρ to be less than 1 when subjects make corner choices.

54 For the 21 subjects from the refined subject pool who choose the safe option for all 6 low-variability choices, the MMI predicts all 9 choices correctly for 16 of them. This would occur by chance alone with probability 0.0133.

2.5 Issues and Extensions

The experimental design utilized in this paper is both novel and complicated, as is the data set collected through its use. As such, many interesting challenges (and opportunities) arise from both the implementation of the experiment and the analysis of the subsequent data.
The discussion that follows illustrates some of these interesting issues, which may prove to be fruitful areas of future research.

2.5.1 Subject Consistency

In Part 1 of the experiment, as in Choi et al. (2007), subjects demonstrate a remarkable level of consistency given the degree of complexity involved in the task. This is especially striking given the tremendous power of the GARP test against the null of random choice in both cases. Somehow, subjects are finding a way, a rule of thumb perhaps, to help them negotiate this complex task. Nevertheless, the specific decision rule used is often not obvious through observation, nor may it be represented by the highly structured utility functions commonly used in the literature (and in this paper).

The differences between our study and Choi et al. (2007) may provide some insight. In our study, subjects are faced with the same set of only 22 budget lines, selected in a highly structured way (described above), whereas in Choi et al. (2007) each subject faces a different set of 50 budget lines chosen randomly; yet our budget sets provide a similarly powerful test of GARP against the null of naive random choice.[55] Even so, a larger percentage of our subjects, 86% (178 out of 207), approximately pass GARP, i.e. have CCEI scores greater than 0.95. This compares to 66% (31 out of 47) in Choi et al. (2007).

These results suggest a fundamental difference between the notions of power and complexity in this context. In particular, the manner in which budget lines are selected by Choi et al. (2007) leads to considerably more variation in price ratios and wealth levels than is present in our experiment. This makes the implementation of a general rule of thumb perhaps more challenging than in our environment, where the budget lines are presented in a more ordered way. Nevertheless, since complexity is difficult to quantify and rules of thumb can be difficult to identify, this argument is merely conjecture.
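The GARP tests referred to above can be made concrete. The following is a minimal sketch of a GARP check for choices from linear budget sets, in the spirit of Varian (1982), using Warshall (1962) for the transitive closure; the function name and data layout are our own, not the paper's implementation:

```python
import numpy as np

def satisfies_garp(prices, choices, tol=1e-9):
    """GARP test for choices from linear budget sets.

    prices[t] and choices[t] are the price vector and chosen bundle in
    observation t. Bundle i is directly revealed (weakly) preferred to
    bundle j if j was affordable at i's prices and expenditure.
    """
    p = np.asarray(prices, dtype=float)
    x = np.asarray(choices, dtype=float)
    exp = p @ x.T                     # exp[i, j] = cost of bundle j at prices i
    own = np.diag(exp)                # own[i] = expenditure on the chosen bundle
    R = exp <= own[:, None] + tol     # direct weak revealed preference
    P = exp < own[:, None] - tol      # direct strict revealed preference
    n = len(own)
    for k in range(n):                # Warshall: transitive closure of R
        R |= R[:, k:k + 1] & R[k:k + 1, :]
    # GARP: x_i R x_j must never coincide with x_j P x_i
    return not np.any(R & P.T)

# The two-observation data set from Appendix A.1 is consistent with GARP:
satisfies_garp([[0.450, 0.425], [0.250, 0.400]],
               [[0.124, 2.222], [3.850, 0.094]])   # True
```

A consistency index such as the CCEI can then be obtained by scaling down the `own` expenditures until the violations disappear; that refinement is omitted here.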
More work, with respect to the role of heuristics as a mechanism for negotiating complex decisions, is required.[56]

[55] Though budget lines are chosen randomly (and independently from each other), Choi et al. (2007) impose a clever restriction on each budget line: at least one of the intercepts must be greater than 50. This restriction greatly enhances the power of the GARP test when compared to naive random choice.
[56] See Chapter 3 for some evidence of the role of complexity in individual decision making.

2.5.2 Context Dependence

In the experiment, subjects are asked to make a series of decisions with uncertain outcomes, yet the specific structure of the problems varies from Part 1 to Part 2. This design facilitates analyzing each part in isolation (as we have above) as well as the relationship (if any) between the two. In the latter case, we might ask whether subjects employ similar decision rules in both contexts, or whether their approach to decision making is context dependent.

On the one hand, if either of the utility functions recovered in Part 1 perfectly predicts the subject's choices in Part 2, then we cannot reject that a consistent decision rule underlies the subject's choices across contexts. Alternatively, we could utilize non-parametric tools in order to determine whether the choices in Part 2 of the experiment are consistent with the choices from Part 1, à la Varian (1982). On the other hand, these methods may reveal inconsistencies between a subject's choices across the two environments. In this case, we might wonder to what extent the nature of the problem influences the subject's preferences, as revealed through context-varying decision rules and/or behavioral (i.e. risk) attitudes.

Whereas anecdotal evidence of such variability in decision making is not hard to find, a more rigorous approach is required if we hope to identify which aspects of the decision-making context are most salient with respect to framing the decision maker's attitudes.
In the present context it is not obvious why subjects may view convex and pairwise choice sets differently, and what influence, if any, this may have on their decision making. One possibility is that subjects experience less self-determination with respect to pairwise choice sets, since they are constrained to select among only two options and may prefer an intermediate portfolio. Of course, many other possibilities exist as well, hence this remains an interesting question for future research.

2.5.3 First-Order versus Second-Order Risk Aversion

Evidence of first-order risk aversion can often be identified in subject behavior with respect to the low-variability choices in Part 2 of the experiment. If we consider those subjects who make consistent choices in Part 1 of the experiment, then in many cases (88 out of 194, or 45.4% of subjects) one of the two methods predicts all 6 low-variability choices correctly. On the other hand, when we consider both low-variability and high-variability portfolios, it is less frequent that any one method makes correct predictions for all 9 choices. In fact, of the 194 consistent subjects, one of the recovery methods is the better predictor for both the low-variability and the high-variability choices for only 44 subjects (or 22.7% of subjects). Hence, often no one set of recovered parameters can represent a subject's attitude towards both high-variability and low-variability choices.

This is an interesting finding, as the disappointment aversion model appears quite flexible in this regard ex ante. Perhaps this is still true in principle, but empirically a two-parameter functional form may be too inflexible to account for this observed variance in risk attitudes. On the other hand, this may be an artifact of the CRRA functional form, which can imply extreme behavior close to the axis and hence may have a large impact on the values of the recovered parameters.
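The two-parameter family discussed here (β for disappointment aversion, ρ for the CRRA index) can be written down directly. The following is a sketch for the two-state, equal-probability case; the weight (1 + β)/(2 + β) on the worse outcome follows Gul (1991), and the payoffs are invented for illustration:

```python
from math import log

def crra(x, rho):
    """CRRA utility index v(x) = x^(1-rho) / (1-rho); log when rho == 1."""
    return log(x) if rho == 1 else x ** (1 - rho) / (1 - rho)

def da_utility(x1, x2, beta, rho):
    """Gul (1991) disappointment aversion over two equally likely payoffs,
    with a CRRA index: the worse payoff receives weight (1+beta)/(2+beta)."""
    hi, lo = max(x1, x2), min(x1, x2)
    return (crra(hi, rho) + (1 + beta) * crra(lo, rho)) / (2 + beta)

# beta > 0: disappointment averse (kinked at certainty, hence first-order
# risk aversion); beta = 0: expected utility; -1 < beta < 0: elation seeking.
safe, risky = (10.0, 10.0), (12.0, 8.5)
da_utility(*safe, beta=0.4, rho=0.4) > da_utility(*risky, beta=0.4, rho=0.4)  # True
da_utility(*safe, beta=0.0, rho=0.4) > da_utility(*risky, beta=0.0, rho=0.4)  # False
```

With β = 0.4, the certain payoff is preferred even though the risky portfolio has the higher expected value, while at β = 0 (expected utility) the ranking reverses: this is the first-order risk aversion that the safe choices in Part 2 suggest.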
More investigation is required to determine whether more flexible functional forms are needed to account for both the variance and the complementary nature of these differing types of risk attitudes.

Chapter 3

Computational Difficulty and Stochastic Choice: An Experiment

3.1 Motivation

Stochastic choice, defined as selecting different options from the same choice set at different times, is commonly observed when decision makers face the same problem repeatedly and the correct decision is unknown (Rubinstein, 2002b). We design and implement a simple experiment in order to determine a possible source of this uncertainty. We find that computational difficulty is sufficient to generate the uncertainty required to induce stochastic choice. Additionally, these findings suggest that observed stochastic choice in repeated pairwise comparisons over objective portfolios (e.g. Agranov and Ortoleva (2014)) may be due to ambiguity attitudes induced by the difficulty of evaluating prospects, rather than risk attitudes or random preferences.[57]

In our experiment, subjects are presented with 3 identical sets of 10 pairwise comparisons between mathematical expressions. For each set of questions, subjects must select which of the two options is greater within a specified time limit. Additionally, we vary the time limit and a subject's access to a calculator across treatments in order to identify the possible mechanism driving our results.

In this context, stochastic choice is identified as "switching" one's response from Option A to Option B (or vice versa) for the same question from one set to another, i.e. providing different answers for the same question at different times. We record the number of questions for which each subject switches and report aggregate statistics in Section 3.3. Our results can be summarized as follows:

1.
Stochastic choice is observed in all four treatments.

2. Stochastic choice increases with question difficulty.

3. Use of a calculator reduces the incidence of stochastic choice.

4. An increased time limit reduces the incidence of stochastic choice when subjects have access to a calculator.

[57] Evidence for stochastic choice in experimental studies involving repeated pairwise comparisons over objective lotteries is plentiful. References include Camerer (1989); Hey and Orme (1994); Starmer and Sugden (1993); Tversky (1969).

Our approach to eliciting stochastic choice in the lab differs from previous studies. In Agranov and Ortoleva (2014), as well as Dwenger et al. (2014), subjects make comparisons between objects for which their preferences are subjective, i.e. unknown to the experimenter and (possibly) varying across subjects. In contrast, in our study, the subjects' preferences between the mathematical expressions are both known to the experimenter and invariant across subjects (since which one is greater is a matter of fact). Consequently, the problems faced by subjects in our study are not in the domain of random utility models. Nevertheless, we report similar degrees of stochastic choice as in these studies.

One possibility is that all of these results can be explained by the same mechanism: complexity (or computational difficulty). While there exists only one right answer for each question in our study, the correct answer may not be apparent to the subject, as it may be too difficult to evaluate given the time and computational resources available. Hence, subjects may be uncertain about which option is best, and stochastic choice (or "hedging") may be their response to this uncertainty. Similarly, subjects in previous studies may indeed have standard preferences over the objects of choice they are faced with, but may have difficulty evaluating various options due to the inherent complexity associated with the problem.
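As a concrete illustration, the switching measure described above is straightforward to compute; the answer sheets below are fabricated for the example:

```python
def count_switches(responses):
    """responses: a list of three answer sheets, each a list of 'A'/'B'
    answers to the same 10 questions. A question counts as a switch if
    the subject did not give the same answer in all three sets."""
    return sum(len(set(answers)) > 1 for answers in zip(*responses))

set1 = list("AABBABABBA")
set2 = list("AABBABABAA")   # differs from set1 on question 9
set3 = list("AABBBBABAA")   # differs from set2 on question 5
count_switches([set1, set2, set3])   # 2
```

As in the paper, the pattern of switching is not distinguished (A-B-A counts the same as A-B-B), and the maximum is 10, one per distinct question.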
This is almost certainly true when comparing lotteries with many states, as in Agranov and Ortoleva (2014), or colleges that differ on several dimensions, as in Dwenger et al. (2014). We conclude that it is at least possible that observed stochastic choice in previous studies can be explained by complexity alone, and that specialized choice theories may be unnecessary to explain this phenomenon.

The paper is organized as follows. In Section 3.2, we describe the experiment in detail. Our results are summarized in Section 3.3. And in Section 3.4, we relate our findings to a simple theoretical representation of choice under uncertainty (i.e. ambiguity) and discuss the implications for previous studies regarding choice under risk.

3.2 Experiment

3.2.1 Design

The experiment we employ to test our hypothesis is extremely simple. Subjects are faced with a series of questions in which they must compare two mathematical expressions and select the one that is larger within a specified time limit. At the conclusion of the experiment, one question is randomly selected (with equal probability) for payment. If the selected question was answered correctly, then subjects receive a nominal monetary reward, i.e. $5, and nothing otherwise. Additionally, the questions are classified into two types: EASY or HARD. EASY questions involve a single, simple arithmetic operation, e.g. 3+4, whereas HARD questions involve multiple, more complex calculations, e.g. √2116 − 137. Subjects face an equal number of EASY and HARD questions, and each question is repeated multiple times within each session. Subjects are informed of this in advance, hence they are aware that questions will be repeated.

3.2.2 Procedures

Experimental trials were conducted in the Experimental Lab at the Vancouver School of Economics (ELVSE). In total, 119 subjects took part across various sessions that took place in November 2014.[58] Subjects were recruited using the ORSEE system (Greiner, 2004) administered by the Vancouver School of Economics.
All subjects were registered undergraduate students representing various disciplines at the University of British Columbia. All subjects were paid a nominal fee of $10 for showing up to the experimental session on time, as well as for their performance. All payments were made privately and immediately following each experimental session.

In all sessions, subjects were asked to complete three identical sets of 10 questions, for a total of 30 questions in all. In all cases, one (and only one) of the questions was randomly selected for payment. Subjects were paid $5 if their response to this question was correct and nothing otherwise. Subjects were aware that questions would be repeated. Rather than using a computer interface, each set of questions was distributed to the subjects on paper, and responses were made in pen directly on the question sheet by circling their choice.

Subjects were given a time limit to complete each set of questions, and all completed question sheets were collected after time expired. Then the next set of questions was distributed, and subjects were given the same amount of time to complete this set. This procedure was repeated until all three sets of questions were complete. Subjects were asked to indicate their choice by drawing a circle around the expression they believed was greater. The experiment instructions and question sheet are provided in Appendix C.1.

[58] We discard three subjects from our analysis; two for improperly entering their choices and the other for not completing all three sets of choices.

We implemented a 2x2 factorial design with treatment variables time limit and calculator use. The time limits tested were 2 minutes and 5 minutes per set, and in some treatments subjects were provided with calculators, whereas in others they were not allowed access to electronic aids.

Table 3.1: Mean Switches (Number of Subjects in parentheses)

                       Calculator
                       No          Yes
  Time Limit
    2 min.             2.04 (25)   1.74 (31)
    5 min.             1.83 (30)   0.50 (30)
For reference, we label the 4 treatments as follows: 2MIN_NOCALC, 2MIN_CALC, 5MIN_NOCALC, and 5MIN_CALC, where the treatment name identifies the state of both variables.

3.3 Results

For all treatments we report the number of questions (out of 10) for which a subject switches their choice between sets. We do not distinguish, however, between different patterns of switching. Hypothetically, if for Question 6 a subject chooses Option A in Sets 1 and 3, and Option B in Set 2, this is not distinguished from a subject who chooses Option A in Set 1, and Option B in Sets 2 and 3. In both cases, we simply report that the subject switched their choice (at least once) on Question 6. Hence, the maximum number of "switches" per subject is 10 (as there are 10 distinct questions).

Table 3.1 reports the results for all 4 treatments. On inspection, we observe the expected pattern of behavior: 1) in all treatments we observe a statistically significant (at the 1% level) quantity of switching on average, and 2) the extent of switching appears to decrease as the time limit is increased or when subjects are given a calculator. These effects are statistically significant (at the 1% level) with respect to the difference in means between 2MIN_CALC versus 5MIN_CALC and 5MIN_NOCALC versus 5MIN_CALC, respectively. Hence, we get a time effect when subjects have a calculator, and a calculator effect when subjects have enough time. On the other hand, we do not find a statistically significant difference between 2MIN_NOCALC versus 5MIN_NOCALC or 2MIN_NOCALC versus 2MIN_CALC.

Table 3.2: Mean Switches - Sets 2 and 3 only

                                Calculator
                                No     Yes
  Time Limit 2 min.             1      0.58
  % of Subjects who switch      64%    35.5%

Inspection of subject responses suggests an explanation for the high degree of switching observed in the 2MIN_CALC treatment. Even with a calculator, two minutes is an extremely short time to complete a single set of questions.
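The difference-in-means comparisons reported above are standard two-sample t-tests. The following sketches Welch's unequal-variance statistic; the switch counts below are hypothetical, not the experimental data:

```python
from math import sqrt
from statistics import mean, variance

def welch_t(a, b):
    """Welch's two-sample t statistic and degrees of freedom, the kind of
    difference-in-means test used to compare treatment cells."""
    na, nb = len(a), len(b)
    va, vb = variance(a), variance(b)          # sample variances
    se2 = va / na + vb / nb
    t = (mean(a) - mean(b)) / sqrt(se2)
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

# Hypothetical switch counts for two treatment cells:
no_calc = [2, 3, 1, 4, 2, 2, 3, 1]
calc    = [0, 1, 0, 2, 1, 0, 1, 0]
t, df = welch_t(no_calc, calc)   # t > 0: fewer switches with a calculator
```

The p-value is then obtained from the t distribution with `df` degrees of freedom (e.g. via `scipy.stats`), which is omitted here to keep the sketch self-contained.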
In fact, the mean number of correct responses increases from 8.4 to 9.3 from Set 1 to Set 2, whereas the increase between Sets 2 and 3 is only 0.2 (9.3 to 9.5). Hence, we assume that the high frequency of switching observed between Sets 1 and 2 may be due to subjects correcting their choices as the questions were repeated, and not due to any specific randomization.

Based on this assessment, we restrict our attention to only Sets 2 and 3 and compare the mean number of switches between 2MIN_NOCALC and 2MIN_CALC. The results are reported in Table 3.2. The p-value for the difference-in-means t-test is approximately 0.06, which suggests that subjects switch less if they are given enough time to correct their mistakes. Additionally, excluding Set 1, the percentage of subjects who switch at least once falls from 84% to 64% for 2MIN_NOCALC and from 74.2% to 35.5% for 2MIN_CALC. On the other hand, a similar argument does not exist for the difference between 2MIN_NOCALC and 5MIN_NOCALC, as the questions may be too difficult to estimate even with increased time.[59]

If we compare subject behavior across questions, we find that almost all switching occurs with respect to the HARD questions. This is true for all treatments, and these results, among others, are summarized in Table 3.3. Additionally, for all treatments excluding 5MIN_CALC, a large proportion of subjects display switching behavior, and among these subjects it is common to switch more than once, i.e. with respect to more than one question.

We summarize our findings below:

1. Stochastic choice, i.e. switching, is observed in all four treatments.

2. Switching behavior is more common with respect to HARD questions than EASY ones.

3. Providing subjects with a calculator decreases the frequency of switching.

[59] It may also be that the incentives are not sufficient to encourage students to exert more effort in order to accurately estimate the expressions, even with a longer time limit.
4. Increasing the time limit decreases the frequency of switching when subjects have access to a calculator.

Table 3.3: Switching Statistics - EASY versus HARD Questions

                                                          Conditional on Switching
                             % of subjects that switched   mean # of   median # of   % with 2 or
  Treatment      Difficulty  at least once (# of subjects)  switches    switches     more switches
  2MIN_NOCALC    EASY        20% (5)                        1.4         1            20%
                 HARD        84% (21)                       2.4         2            66.7%
  2MIN_CALC      EASY        3.2% (1)                       1           1            100%
                 HARD        74.2% (23)                     2.35        2            82.6%
  5MIN_NOCALC    EASY        13.3% (4)                      1.25        1            25%
                 HARD        80% (24)                       2.28        2            66.7%
  5MIN_CALC      EASY        6.7% (2)                       1.5         1.5          50%
                 HARD        23.3% (7)                      1.71        1            28.6%

3.4 Interpretation

One way to interpret the results in Section 3.3 is through the lens of decision making under uncertainty. The computational difficulty of the HARD questions creates uncertainty in the minds of the subjects over which expression is greater, and hence the problem can be thought of as a simple lottery over two outcomes where the probability of each is unknown. A preference for hedging, and observed stochastic choice, may then be understood as a response to this ambiguity, as if a subject were ambiguity averse, for instance. This interpretation is similar to Fudenberg et al. (2014), who suggest an equivalence between certain preferences over risk that induce stochastic choice and ambiguity aversion.

Table 3.4: Hedging and Ambiguity

                                          States
                                   s1                  s2
  Acts
    f1                             a                   0
    f2                             0                   a
    (1/2)◦f1 + (1/2)◦f2            (1/2)◦a + (1/2)◦0   (1/2)◦0 + (1/2)◦a

To illustrate, consider a formal representation of a simplified version of our experiment in Table 3.4. Here, we illustrate the payoffs for two different acts associated with a single question repeated two times. Assume, as in the experiment, that only one of the repetitions is randomly selected for payment with equal probability. Hence, the act f1 represents choosing Option 1 in both repetitions and the act f2 represents choosing Option 2 in both repetitions. The states, s1 and s2, correspond to which option is greater; if this option is chosen by the DM, it results in the monetary payoff a.

When the computational tasks are EASY, the state is known to the decision maker and hedging is dominated by the act that pays off in the correct state. On the other hand, when computational tasks are HARD, the correct state may be unknown to the DM, and they can perfectly hedge against this uncertainty by choosing an equal mixture between the acts. A DM who is ambiguity averse may prefer this mixture over either of the two "pure" acts, hence uncertainty with respect to which option is greater may explain observed switching between options.

This issue of computational difficulty may be relevant in more complex decision-making environments as well. For example, consider a decision maker with expected utility preferences that can be represented by a square-root utility for wealth function, who must choose between two lotteries with equally probable outcomes determined by flipping a fair coin. The lotteries are described in Table 3.5.

Table 3.5: Lotteries

                  States
                  Heads   Tails
  Lotteries
    L1            21      17
    L2            27      13

With the aid of a calculator, it is easy to show that L2 ≻ L1 since (1/2)√27 + (1/2)√13 > (1/2)√21 + (1/2)√17. On the other hand, without a calculator subjects may be unsure which of the two expressions is larger, and hence uncertain about which of the two lotteries they prefer. This situation is described formally in Table 3.6, where a > c and d > b.[60] That is, when subjects are uncertain as to which lottery corresponds to a higher level of expected utility, State 1 (State 2) corresponds to the situation where L1 (L2) is preferred.

If subjects are asked to choose between these lotteries twice, then, if they are ambiguity averse, they may prefer to "hedge" against this uncertainty by mixing between the two acts with probability p.[61] In this case, observed stochastic choice is the result of the subject's attitude toward ambiguity and not non-standard attitudes toward risk.
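The comparison behind Table 3.5 is easy to verify with a calculator, which is precisely the point; a few lines confirm it:

```python
from math import sqrt

def expected_utility(lottery):
    """Expected utility of a 50/50 lottery under square-root utility."""
    heads, tails = lottery
    return 0.5 * sqrt(heads) + 0.5 * sqrt(tails)

L1 = (21, 17)
L2 = (27, 13)
expected_utility(L2) > expected_utility(L1)   # True: L2 is preferred
```

The margin is small (roughly 4.40 versus 4.35), which illustrates why the ranking may be genuinely uncertain to a subject working without computational aids.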
Of course, we are not suggesting that subjects literally compute the utility associated with a particular alternative, but rather that the computational difficulty of this task is similar to the complexity of evaluating lotteries according to one's preferences. Nevertheless, it remains a challenge to identify the source of uncertainty from observation alone.

Table 3.6: Hedging and Ambiguity

                                      States
                               s1                  s2
  Acts
    f1                         a                   b
    f2                         c                   d
    p◦f1 + (1−p)◦f2            p◦a + (1−p)◦c       p◦b + (1−p)◦d

[60] It is not necessarily the case that a = d and b = c, as the computational difficulty of the problem may create uncertainty with respect to by how much a certain lottery is preferred as well.
[61] Unlike the situation described in Table 3.4, subjects may not be able to perfectly hedge against the uncertainty by mixing with equal probability. Hence, if we repeat the problem only two times, then we may not observe switching in spite of subjects being ambiguity averse.

Alternatively, as suggested by Cerreia-Vioglio et al. (2014), it may be uncertainty over one's true utility function that explains stochastic choice. For example, if a subject is uncertain with respect to which utility function, U1 or U2, represents their true preferences, then we can think of State 1 as the case where U1(L1) > U1(L2) and State 2 as the case where U2(L2) > U2(L1). This scenario is theoretically equivalent to that described above and in Table 3.6, yet implies a different source of uncertainty.

In principle it should be possible to design an experiment that separates these phenomena. For example, in an experiment where subjects make repeated choices over lotteries, if we observe the incidence of stochastic choice decrease when subjects are given access to calculators, then this suggests that computational difficulty may be the source of the uncertainty.
On the other hand, if subjects are uncertain about their preferences, as described above, then we should see no change.

This issue is related to a larger question with respect to the interpretation of laboratory data. A criticism of laboratory experiments in economics is that the behavior we observe in the lab may be due to the procedural difficulty of the tasks involved rather than evidence of anomalous decision making. These criticisms are discussed in detail in Friedman (1998) and Cason and Plott (2014). In the latter, the authors show that subjects often do not understand the BDM mechanism and, based on this, argue that we cannot infer anything about subject behavior from experiments that utilize this mechanism for eliciting preferences. Our results imply that similar caution should be exercised when laboratory tasks are sufficiently complex that subjects may be uncertain about the relation between actions and payoffs. This does not imply, however, that experimental economics is a lost cause, but rather that researchers should be careful to provide subjects with sufficient computational resources and exercise caution when attributing observed behavior to specific anomalies.

Bibliography

Afriat, S. N. (1967). The Construction of Utility Functions from Expenditure Data. International Economic Review, 8(1):67–77.

Afriat, S. N. (1972). Efficiency estimates of production functions. International Economic Review, 13(3):568–598.

Afriat, S. N. (1973). On a system of inequalities in demand analysis: An extension of the classical method. International Economic Review, 14(2):460–472.

Afriat, S. N. (1987). Logic of Choice and Economic Theory. Oxford: Clarendon Press.

Agranov, M. and Ortoleva, P. (2014). Stochastic Choice and Hedging. Working Paper.

Ahn, D., Choi, S., Kariv, S., and Gale, D. (2011). Estimating Ambiguity Aversion in a Portfolio Choice Experiment. Working Paper.

Alcantud, J. C. R., Matos, D. L., and Palmero, C. R. (2010). Goodness-of-fit in optimizing a consumer model.
Mathematical and Computer Modelling, 52:1088–1094.

Amemiya, T. (1985). Advanced Econometrics. Harvard University Press.

Andreoni, J. and Miller, J. (2002). Giving According to GARP: An Experimental Test of the Consistency of Preferences for Altruism. Econometrica, 70(2):737–753.

Andreoni, J. and Sprenger, C. (2012). Estimating Time Preferences from Convex Budgets. American Economic Review, 102(7):3333–3356.

Apesteguia, J. and Ballester, M. A. (2012). A Measure of Rationality and Welfare. Working Paper.

Bernheim, B. D. and Rangel, A. (2009). Beyond revealed preference: Choice-theoretic foundations for behavioral welfare economics. Quarterly Journal of Economics, pages 51–104.

Blundell, R. W., Browning, M., and Crawford, I. A. (2003). Nonparametric Engel Curves and Revealed Preference. Econometrica, 71(1):205–240.

Blundell, R. W., Browning, M., and Crawford, I. A. (2008). Best Nonparametric Bounds on Demand Responses. Econometrica, 76(6):1227–1262.

Bronars, S. (1987). The Power of Nonparametric Tests of Preference Maximization. Econometrica, 55:693–698.

Camerer, C. F. (1989). An Experimental Test of Several Generalized Expected Utility Theories. Journal of Risk and Uncertainty, 2:61–104.

Cason, T. and Plott, C. (2014). Misconceptions and Game Form Recognition: Challenges to Theories of Revealed Preference and Framing. Journal of Political Economy, forthcoming.

Cerreia-Vioglio, S., Dillenberger, D., and Ortoleva, P. (2014). Cautious Expected Utility and the Certainty Effect. Econometrica, forthcoming.

Cherchye, L., Crawford, I., Rock, B. D., and Vermeulen, F. (2009). The revealed preference approach to demand. In Slottje, D. J., editor, Quantifying Consumer Preferences (Contributions to Economic Analysis, Volume 288), pages 247–279. Emerald Group Publishing Limited.

Cherepanov, V., Feddersen, T., and Sandroni, A. (2012). Rationalization. Working Paper.

Choi, S., Fisman, R., Gale, D., and Kariv, S. (2007). Consistency and Heterogeneity of Individual Behavior under Uncertainty.
American Economic Review, 97(5):1921–1938.

Choi, S., Kariv, S., Muller, W., and Silverman, D. (2011). Who is (more) rational? Working Paper.

Dean, M. and Martin, D. (2011). Testing for Rationality with Consumption Data: Demographics and Heterogeneity. Working Paper.

Diewert, W. E. (1973). Afriat and Revealed Preference Theory. Review of Economic Studies, 40(3):419–425.

Dwenger, N., Kubler, D., and Weizsacker, G. (2014). Flipping a Coin: Theory and Evidence. Working Paper.

Echenique, F., Lee, S., and Shum, M. (2011). The Money Pump as a Measure of Revealed Preference Violations. Journal of Political Economy, 119(6):1201–1223.

Epstein, L. G. (1992). Behavior under risk: Recent developments in theory and applications. In Econometric Society.

Fisman, R., Kariv, S., and Markovits, D. (2007). Individual Preferences for Giving. American Economic Review, 97(5):1858–1876.

Friedman, D. (1998). Monty Hall's Three Doors: Construction and Deconstruction of a Choice Anomaly. American Economic Review, 88(4):933–946.

Fudenberg, D., Iijima, R., and Strzalecki, T. (2014). Stochastic Choice and Revealed Preference Theory. Working Paper.

Greiner, B. (2004). An Online Recruitment System for Economic Experiments. Forschung und Wissenschaftliches Rechnen, 63:79–93.

Gul, F. (1991). A Theory of Disappointment Aversion. Econometrica, 59(3):667–686.

Gul, F., Natenzon, P., and Pesendorfer, W. (2012). Random choice as behavioral optimization. Working Paper.

Gul, F. and Pesendorfer, W. (2006). Random expected utility. Econometrica, 74:121–146.

Halevy, Y., Persitz, D., and Zrill, L. (2014). Parametric Recoverability of Preferences. Working Paper.

Hey, J. D. and Orme, C. (1994). Investigating Generalizations of Expected Utility Theory Using Experimental Data. Econometrica, 62(6):1291–1326.

Houtman, M. (1995). Nonparametric consumer and producer analysis. Dissertation.

Houtman, M. and Maks, J. (1985). Determining all Maximal Data Subsets Consistent with Revealed Preference.
Kwantitatieve Methoden, 19:89–104.

Houtman, M. and Maks, J. (1987). The existence of homothetic utility functions generating Dutch consumer data. Mimeo.

Karp, R. M. (1972). Reducibility among combinatorial problems. In Miller, R. E. and Thatcher, J. W., editors, Complexity of Computer Computations, pages 167–174. New York: Plenum.

Kitamura, Y. and Stoye, J. (2011). Nonparametric analysis and random utility models. Working Paper.

Kreps, D. M. (2013). Microeconomic Foundations I: Choice and Competitive Markets. Princeton University Press.

Manzini, P. and Mariotti, M. (2007). Sequentially Rationalizable Choice. American Economic Review, 97(5):1824–1839.

Manzini, P. and Mariotti, M. (2012). Categorize then Choose: Boundedly Rational Choice and Welfare. Journal of the European Economic Association.

Masatlioglu, Y., Nakajima, D., and Ozbay, E. (2012). Revealed Attention. American Economic Review, forthcoming.

McFadden, D. L. (2005). Revealed stochastic preference: a synthesis. Economic Theory, 26:245–264.

Natenzon, P. (2013). Random choice and learning. Working Paper.

Quiggin, J. (1982). A Theory of Anticipated Utility. Journal of Economic Behavior and Organization, 3:323–343.

Rubinstein, A. (2002a). Irrational diversification in multiple decision problems. European Economic Review, 46:1369–1378.

Rubinstein, A. (2002b). Irrational Diversification in Multiple Decision Problems. European Economic Review, 46:1369–1378.

Rubinstein, A. and Salant, Y. (2012). Eliciting Welfare Preferences from Behavioral Datasets. Review of Economic Studies, 79(1):375–387.

Samuelson, P. A. (1974). Complementarity: An Essay on the 40th Anniversary of the Hicks-Allen Revolution in Demand Theory. Journal of Economic Literature, 12(4):1255–1289.

Segal, U. and Spivak, A. (1990). First Order versus Second Order Risk Aversion. Journal of Economic Theory, 51(1):111–125.

Simon, H. A. (1976). From substantive to procedural rationality. In 25 Years of Economic Theory, pages 65–86. Springer US.

Starmer, C.
and Sugden, R. (1993). Testing for Juxtaposition and Event-Splitting Effects. Journal of Risk and Uncertainty, 6:235–254.

Stoye, J. and Hoderlein, S. (2012). Testing stochastic rationality and predicting stochastic demand: The case of two goods. Working Paper.

Tsur, Y. (1989). On testing for revealed preference conditions. Economics Letters, 31:359–362.

Tversky, A. (1969). Intransitivity of Preferences. Psychological Review, 76(1):31–48.

Varian, H. R. (1982). The Nonparametric Approach to Demand Analysis. Econometrica, 50(4):945–973.

Varian, H. R. (1985). Non-parametric analysis of optimizing behavior with measurement error. Journal of Econometrics, 30:445–458.

Varian, H. R. (1990). Goodness-of-fit in Optimizing Models. Journal of Econometrics, 46:125–140.

Varian, H. R. (1993). Goodness-of-fit for Revealed Preference Tests. Working Paper.

Warshall, S. (1962). A Theorem on Boolean Matrices. Journal of the Association for Computing Machinery, 9:11–12.

Yaari, M. E. (1969). Some remarks on measures of risk aversion and on their uses. Journal of Economic Theory, 1:315–329.

Appendix A

Appendix for Chapter 1

A.1 Non-Parametric Recovery and Non-Convex Preferences

Assume D satisfies GARP. The following definitions follow Varian (1982).

Definition 18. Pu(x) ≡ {x′ : u(x′) > u(x)} is the strictly upper contour set of a bundle x ∈ ℜK+ given a utility function u(x).

Next, consider the set of prices at which an unobserved bundle, x, is chosen and the augmented data set continues to be consistent with GARP.

Definition 19. Suppose x ∈ ℜK+ is an unobserved bundle, then

  S(x) = {p | {(p,x)} ∪ D satisfies GARP and px = 1}

For every unobserved bundle x, Varian (1982) employs S(x) to construct lower and upper bounds on the upper and lower contour sets through x.

Definition 20. For every unobserved bundle x ∈ ℜK+:

1. The revealed worse set is RW(x) ≡ {x′ | ∀p ∈ S(x), x P_{D∪{(p,x)}} x′}. The not revealed worse set, denoted by NRW(x), is the complement of RW(x).

2.
The revealed preferred set is RP(x) ≡ {x′ | ∀p ∈ S(x′), x′ P_{D∪{(p,x′)}} x}.

In Fact 5, Varian (1982) (page 953) states: “let u(x) be any utility function that rationalizes the data. Then for all (unobserved bundles – HPZ) x, RP(x) ⊂ P_u(x) ⊂ NRW(x)”. Thus, given a data set that satisfies GARP and a utility function that rationalizes these data, every indifference curve through a given unobserved bundle must be bounded between the revealed worse set and the revealed preferred set of this bundle.

Suppose a DM has to decide how to allocate a wealth of 1 between consumption in two mutually exclusive, exhaustive and equally probable states of the world. The allocation is attained by holding a portfolio of Arrow securities with unit prices p = (p1, p2). Figure A.1 presents a data set D of two observations. Portfolio x^1 = (0.124, 2.222) is chosen when prices are p^1 = (0.450, 0.425), and portfolio x^2 = (3.850, 0.094) is chosen when prices are p^2 = (0.250, 0.400). Notice that since p^2 < p^1, every portfolio that is feasible under p^1 is also feasible when prices are p^2; therefore x^2 R^0_D x^1. Now consider two unobserved portfolios A = (0.390, 1.806) and B = (1.390, 1.390). Portfolio A is feasible under both price vectors, but portfolio B is feasible only under p^2. The revealed preferred set of A and the revealed worse set of B are drawn in panels A.1a and A.1b, respectively.

[Figure A.1: Violations of Fact 5. Panel (a): violation of the revealed preferred set RP(A); panel (b): violation of the revealed worse set RW(B).]

Now consider the following utility function over a portfolio x = (x1, x2):

u(x1, x2) = √(max{x1, x2}) + (1/4)√(min{x1, x2})    (A.1)

which represents the preferences of an elation-seeking DM (Gul, 1991) with β = −0.75 and a CRRA utility index with ρ = 0.5 over Arrow securities. Therefore, the DM’s preferences are not convex and u(·) is not quasi-concave (let alone concave).
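The claims in this example can be verified numerically. The following is our own illustrative Python sketch (it is not part of the thesis code); the prices, portfolios and the utility function (A.1) are taken from the text, and the budget-line grid check is an assumption we add for illustration:

```python
import math

def u(x1, x2):
    # Equation (A.1): elation-seeking DA utility with beta = -0.75, CRRA rho = 0.5.
    return math.sqrt(max(x1, x2)) + 0.25 * math.sqrt(min(x1, x2))

data = [((0.450, 0.425), (0.124, 2.222)),   # (p^1, x^1)
        ((0.250, 0.400), (3.850, 0.094))]   # (p^2, x^2)

# Each chosen portfolio should (approximately) maximize u over its budget line p.x = 1.
N = 20000
for (p1, p2), (c1, c2) in data:
    best = max(u(k / N / p1, (1 - k / N) / p2) for k in range(N + 1))
    assert u(c1, c2) >= best - 1e-3

# x^2 R^0_D x^1: x^1 was affordable when x^2 was chosen, and u agrees with that ranking.
assert 0.250 * 0.124 + 0.400 * 2.222 < 1.0
assert u(3.850, 0.094) > u(0.124, 2.222)

# B lies in RP(A), yet u ranks A strictly above B -- the violation of Fact 5.
A, B = (0.390, 1.806), (1.390, 1.390)
assert u(*A) > u(*B)
```

The last assertion is the point of the example: the ranking induced by the (non-convex) rationalizing utility function reverses the ranking implied by the revealed preferred set.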
The indifference curves drawn in Figure A.1 through x^1 and x^2 demonstrate that this utility function rationalizes the data.

Recall that Fact 5 in Varian (1982) states that for any unobserved bundle x, if u rationalizes the data then RP(x) ⊂ P_u(x) ⊂ NRW(x). However, Figure A.1a clearly demonstrates that while B ∈ RP(A), it is not true that B ∈ P_u(A). Similarly, Figure A.1b shows that while A ∈ P_u(B), it is not true that A ∈ NRW(B). That is, the ranking of unobserved portfolios implied by the revealed preferred and revealed worse sets is inconsistent with the ranking of portfolios induced by a utility function that rationalizes the data. In other words, the utility function’s indifference curves do not abide by Varian’s (1982) non-parametric bounds.

Figure A.1 suggests the source of the above inconsistency with Varian’s Fact 5: when the DM is elation seeking, her preferences are non-convex and the utility function is not concave. The failure of the nonparametric bounds can be traced back to the construction of the revealed preferred and revealed worse sets. Since, by Afriat’s Theorem, if the data satisfies GARP there exists a concave utility function that rationalizes it, S(x) (Definition 19) is non-empty for every x. However, there may exist a utility function that rationalizes the data for which there is no price vector p that supports x as an optimal choice. Therefore, even if x′ is such that x P_{D∪{(p,x)}} x′ for every p ∈ S(x), it does not imply that a utility function that never chooses x will rank x above x′. In Figure A.1a, B P_{D∪{(p,B)}} A for every p ∈ S(B); however, the utility function never chooses B and therefore can rank B below A.[62]

A.2 Proof of Proposition 13

Let x ∈ ℜ^K and δ > 0. B_δ(x) = {y ∈ ℜ^K : ‖y − x‖ < δ}.

Definition 21. A utility function u : ℜ^K → ℜ is

1. locally non-satiated if ∀x ∈ ℜ^K and ∀ε > 0, ∃y ∈ B_ε(x) such that u(y) > u(x).

2. continuous if ∀x ∈ ℜ^K and ∀ε > 0 there exists δ > 0 such that y ∈ B_δ(x) implies u(y) ∈ B_ε(u(x)).

Lemma.
If u(·) is a locally non-satiated utility function that rationalizes D = {(p^i, x^i)}^n_{i=1}, then x^i P^0_D x implies u(x^i) > u(x).

Proof. Suppose x^i P^0_D x (p^i x^i > p^i x). Then by the definition of the revealed preference relations (Definition 1), x^i R^0_D x. Since u(·) rationalizes D, x^i R^0_D x implies u(x^i) ≥ u(x). Suppose that u(x^i) = u(x). Since p^i x^i > p^i x, ∃ε > 0 such that ∀y ∈ B_ε(x): p^i x^i > p^i y. By local non-satiation, ∃y′ ∈ B_ε(x) such that u(y′) > u(x) = u(x^i). Thus, y′ is a bundle such that p^i x^i > p^i y′ and u(y′) > u(x^i), in contradiction to u(·) rationalizing D. Therefore, u(x^i) > u(x).

For what follows, let D = {(p^i, x^i)}^n_{i=1} and let u(·) be a continuous and locally non-satiated utility function.

[62] Definitions 19 and 20 can be trivially extended to include observed bundles, and then a similar argument can be constructed for the observed portfolio x^1 in Figure A.1a. Note that the violation of the revealed worse set demonstrated in Figure A.1b cannot occur for an observed bundle, since there exists a price vector p that supports the bundle as an optimal choice.

Part 1: u(·) v*(D,u)-rationalizes D.

Proof. Suppose that for some observation (p^i, x^i) ∈ D there exists a bundle x such that x^i R^0_{D,v*(D,u)} x and u(x^i) < u(x). By the definition of the revealed preference relations induced by adjusted data sets (Definition 4.1), v*_i(D,u) p^i x^i ≥ p^i x. By the normalized money metric definition (Definition 12), m(x^i, p^i, u) ≥ p^i x. Since m(x^i, p^i, u) is the minimal expenditure required to achieve a utility level of at least u(x^i), the case where the inequality is strict contradicts Definition 12. If m(x^i, p^i, u) = p^i x and u(x^i) < u(x), by continuity of u(·) there exists γ > 0 such that u(x^i) < u((1−γ)x). However, since p^i(1−γ)x < p^i x = m(x^i, p^i, u), we reach a contradiction to Definition 12.

Part 2: v*(D,u) = 1 if and only if u(·) rationalizes D.

Proof. First, let us show that if u(·) rationalizes D then v*(D,u) = 1. Suppose that for some observation (p^i, x^i) ∈ D, v*_i(D,u) < 1, that is: m(x^i, p^i, u) < p^i x^i.
By Definition 12, there exists a bundle x such that p^i x < p^i x^i and u(x) ≥ u(x^i). However, since by Definition 1.2, x^i P^0_D x, and since u(·) is a locally non-satiated utility function that rationalizes D, the lemma proven above implies, in contradiction, that u(x^i) > u(x). Thus, v*_i(D,u) = 1 for all i. Therefore v*(D,u) = 1.

Next, let us show that if v*(D,u) = 1 then u(·) rationalizes D. By Definition 12, v*(D,u) = 1 implies m(x^i, p^i, u) = p^i x^i for every (p^i, x^i) ∈ D. Suppose that u(·) does not rationalize the data. That is, for some observation (p^i, x^i), there exists a bundle x such that u(x) > u(x^i) and x^i R^0_D x. By continuity of u(·) there exists γ > 0 such that u((1−γ)x) > u(x^i). However, since p^i(1−γ)x < p^i x^i = m(x^i, p^i, u), we reach a contradiction to Definition 12.

Part 3: Let v ∈ [0,1]^n. u(·) v-rationalizes D if and only if v ≤ v*(D,u).

Proof. First, let us show that if u(·) v-rationalizes D then v ≤ v*(D,u). Suppose that v is such that u(·) v-rationalizes D and for observation i, v_i > v*_i(D,u). By Definition 11, u(x^i) ≥ u(x) for all x such that x^i R^0_{D,v} x, or equivalently v_i p^i x^i ≥ p^i x. By Definition 12 and since v_i > v*_i(D,u), we get that v_i p^i x^i > m(x^i, p^i, u) = p^i x^{i*}, where x^{i*} ∈ argmin_{y ∈ ℜ^K_+ : u(y) ≥ u(x^i)} p^i y. It follows that ∃ε > 0 such that ∀y ∈ B_ε(x^{i*}): v_i p^i x^i > p^i y. By local non-satiation, ∃y′ ∈ B_ε(x^{i*}) such that u(y′) > u(x^{i*}) ≥ u(x^i). Thus, y′ is a bundle such that v_i p^i x^i > p^i y′ and u(y′) > u(x^i), contradicting that u(·) v-rationalizes D.

Next, let us show that if v ≤ v*(D,u) then u(·) v-rationalizes D. By Part 1, u(·) v*(D,u)-rationalizes D. That is, for every observation (p^i, x^i) ∈ D, v*_i(D,u) p^i x^i ≥ p^i x implies u(x^i) ≥ u(x). Since v ≤ v*(D,u), for every observation (p^i, x^i) ∈ D, v*_i(D,u) p^i x^i ≥ v_i p^i x^i. Therefore, for every observation (p^i, x^i) ∈ D, v_i p^i x^i ≥ p^i x implies u(x^i) ≥ u(x). Hence, u(·) v-rationalizes D.

A.3 Proof of Theorem 16

We use the following notations throughout the proof:

• Let v ∈ [0,1]^n and δ > 0.
B̄_δ(v) = {v′ ∈ [0,1]^n : ‖v′ − v‖ < δ}.

• E_v = {v ∈ [0,1]^n : f(v) = I_V(D, f)}.

• ∀ε < M − I_V(D, f): E_{v+ε} = {v ∈ [0,1]^n : f(v) = I_V(D, f) + ε}.

• E_G = {v ∈ [0,1]^n : ∀r > 0, ∃v′ ∈ B̄_r(v), D satisfies GARP_{v′}}.

• Ê = E_v ∩ E_G.

Lemma 22. E_v is non-empty, bounded and closed.

Proof. First, by Fact 10, I_V(D, f) always exists. Second, by Definition 8, f(·) is continuous and bounded. By the Intermediate Value Theorem, for every value of I_V(D, f) there exists a vector v such that f(v) = I_V(D, f), concluding that E_v is non-empty. Third, E_v ⊆ [0,1]^n and therefore it is bounded. Finally, since f(·) is continuous it induces a continuous ordering on [0,1]^n. Therefore, for every I_V(D, f), the upper contour set and the lower contour set are closed and their intersection, E_v, is closed as well.

Lemma 23. Ê is non-empty.

Proof. Assume I_V(D, f) < M. Suppose that Ê is empty, that is, v ∈ E_v ⇒ v ∉ E_G (due to Lemma 22, this condition is not vacuous). Thus, ∀v ∈ E_v, ∃r > 0, ∀v′ ∈ B̄_r(v), D violates GARP_{v′}.

Let r(v) = sup{r ∈ (0, √n] : ∀v′ ∈ B̄_r(v), D violates GARP_{v′}}. r(v) is uniformly continuous on E_v since ∀v, v′ ∈ E_v, if ‖v − v′‖ < ε then by the triangle inequality |r(v) − r(v′)| < ε.[63] Let r̄ = min_{v ∈ E_v} r(v). r̄ exists since r(v) is continuous on E_v and E_v is bounded and closed (by Lemma 22). In addition, r̄ > 0 since ∀v ∈ E_v: r(v) > 0. Then, ∀v ∈ E_v, ∀r < r̄, ∀v′ ∈ B̄_r(v), D violates GARP_{v′}. Thus, we have established that for every I_V(D, f) < M, if Ê is empty there exists a hypercylinder H of radius r̄ > 0 around E_v such that if v′ is an interior point of H then D violates GARP_{v′}.

[63] The distance between v and v′ is at most ε, the distance between v and some w such that D satisfies GARP_w is r(v), and by the triangle inequality the distance between v′ and w, which serves as a bound on r(v′), is between r(v) − ε and r(v) + ε.

The next step is to show that there exists 0 < ε < M − I_V(D, f) such that E_{v+ε} is contained in H (by Lemma 22, E_{v+ε} is non-empty).
Suppose that for every 0 < ε < M − I_V(D, f) there exists v′_ε ∈ E_{v+ε} such that v′_ε ∉ H. Then v′_ε, as ε → 0, is an infinite bounded sequence in [0,1]^n and therefore it has a convergent subsequence. Denote the limit of this subsequence by v̂. Since v̂ is not an interior point of H, it must be that f(v̂) ≠ I_V(D, f). However, by construction, lim_{ε→0} f(v′_ε) = I_V(D, f), suggesting that f(·) is not continuous. Thus, there exists ε̄ such that E_{v+ε̄} ⊂ H. Moreover, since f(·) is continuous, ∀ε ∈ [0, ε̄): E_{v+ε} ⊂ H.

That is, there exists ε̄ > 0 such that for every v′ ∈ [0,1]^n that satisfies I_V(D, f) ≤ f(v′) < I_V(D, f) + ε̄ < M, D violates GARP_{v′}. Since I_V(D, f) is an infimum, there is no v ∈ [0,1]^n such that f(v) < I_V(D, f) and D satisfies GARP_v. Thus, there exists I_V(D, f) < m < M such that for every v ∈ [0,1]^n: f(v) < m and D violates GARP_v. That contradicts the maximality of I_V(D, f) as an infimum. Therefore, we have shown that if I_V(D, f) < M then Ê is non-empty.

Finally, suppose I_V(D, f) = M. By Definition 8, 0 ∈ E_v. By Fact 6, 0 ∈ E_G. Thus, also if I_V(D, f) = M, Ê is non-empty.

Lemma 24. Let v ∈ [0,1]^n. If ṽ ∈ B̄_δ(v) and D satisfies GARP_ṽ, there exists v̂ ∈ B̄_δ(v) where v̂ ≤ v and D satisfies GARP_v̂.

Proof. If ṽ ≤ v then the lemma is trivial. If v ≤ ṽ then by Fact 7 D satisfies GARP_v. By the same fact, D satisfies GARP_v̂ for every v̂ ∈ B̄_δ(v) where v̂ ≤ v. Otherwise, define v̂ such that ∀i ∈ {1,...,n}: v̂_i = min{v_i, ṽ_i}. By construction, v̂ ≤ v and v̂ ≤ ṽ. Since ∀i ∈ {1,...,n}: |v̂_i − v_i| ≤ |ṽ_i − v_i|, then v̂ ∈ B_δ(v). In addition, since v, ṽ ∈ [0,1]^n, then v̂ ∈ [0,1]^n. Therefore, v̂ ∈ B̄_δ(v). Finally, since v̂ ≤ ṽ and D satisfies GARP_ṽ, by Fact 7 D satisfies GARP_v̂. Thus, we have constructed v̂ ∈ B̄_δ(v) where v̂ ≤ v and such that D satisfies GARP_v̂.

Lemma 25. Let v′ ∈ Ê. D satisfies GARP_{λv′} for all λ ∈ [0,1).

Proof. By Lemma 23, v′ exists. If v′ = 0 the lemma is trivial by Fact 6. Suppose v′ ≥ 0 and v′ ≠ 0. Denote v′_min = min_{v′_i > 0} v′_i and let δ ∈ (0, v′_min).
Let B̃_δ(v′) = {v : v ≤ v′} ∩ B̄_δ(v′). Let ṽ ∈ B̃_δ(v′) be such that D satisfies GARP_ṽ. By Lemma 24 such a ṽ exists, and by construction ṽ ≠ v′. Note that the choice of δ implies that for every i ∈ {1,...,n}, v′_i > 0 ⟹ ṽ_i > 0. Define λ = {λ_i}^n_{i=1} such that if v′_i = 0 then λ_i = 0 and otherwise λ_i = ṽ_i/v′_i > 0. Then, λ ∈ [0,1]^n \ {0}. Denote λ̄ = min_{λ_i > 0} λ_i. Then, 0 < λ̄ < 1. For every i ∈ {1,...,n} define v̂_i = λ̄ v′_i. First, note that ∀i ∈ {1,...,n}: v̂_i ≤ ṽ_i (if v′_i = 0 then v̂_i ≤ ṽ_i = v′_i = 0; otherwise, v̂_i = λ̄ v′_i ≤ (ṽ_i/v′_i) v′_i = ṽ_i), and by Fact 7, since D satisfies GARP_ṽ, D satisfies GARP_v̂. Second, v̂ = λ̄ v′. Finally, note that ∀i ∈ {1,...,n}: v′_i − δ ≤ ṽ_i ≤ v′_i. Therefore, ∀i ∈ {1,...,n}: 1 − δ/v′_i ≤ λ_i ≤ 1 and 1 − δ/v′_min ≤ λ̄ < 1. Thus, for every ε > 0 there exists λ̄ > 1 − ε such that v̂ = λ̄ v′ and D satisfies GARP_{λ̄v′}. By Fact 7, for every 0 ≤ λ ≤ λ̄, D satisfies GARP_{λv′}. Hence, D satisfies GARP_{λv′} for all λ ∈ [0,1).

Definition. Let v ∈ [0,1]^n. D satisfies v-Cyclical Consistency if

v_r p^r x^r ≥ p^r x^s, v_s p^s x^s ≥ p^s x^t, ..., v_q p^q x^q ≥ p^q x^r
⟹ v_r p^r x^r = p^r x^s, v_s p^s x^s = p^s x^t, ..., v_q p^q x^q = p^q x^r

Lemma 26. Let v ∈ [0,1]^n. D satisfies v-Cyclical Consistency if and only if it satisfies GARP_v.

Proof. Suppose D violates v-Cyclical Consistency. Then there exists a sequence of observations such that v_r p^r x^r ≥ p^r x^s, v_s p^s x^s ≥ p^s x^t, ..., v_q p^q x^q ≥ p^q x^r and v_s p^s x^s > p^s x^t. By Definition 4, x^r R^0_{D,v} x^s, x^s R^0_{D,v} x^t, ..., x^q R^0_{D,v} x^r, and therefore x^t R_{D,v} x^s. However, by the same definition, x^s P^0_{D,v} x^t. Thus, D violates GARP_v. On the other hand, suppose D violates GARP_v. There exists a pair of observations (p^t, x^t) and (p^s, x^s) such that x^t R_{D,v} x^s and x^s P^0_{D,v} x^t. Again, by Definition 4, there exists a subset of observations such that x^t R^0_{D,v} x^u, x^u R^0_{D,v} x^v, ..., x^q R^0_{D,v} x^s, and since x^s P^0_{D,v} x^t implies x^s R^0_{D,v} x^t, there is a subset of observations such that v_t p^t x^t ≥ p^t x^u, v_u p^u x^u ≥ p^u x^v, ..., v_s p^s x^s ≥ p^s x^t. In addition, since x^s P^0_{D,v} x^t, we have v_s p^s x^s > p^s x^t.
However, this combination violates v-Cyclical Consistency.

Lemma 27. I_V(D, f) ≤ I_M(D, f, U^c).

Proof. If I_V(D, f) = 0, the lemma follows from Definitions 8 and 14. Otherwise, suppose that I_V(D, f) > I_M(D, f, U^c). Since I_M(D, f, U^c) = inf_{u ∈ U^c} f(v*(D,u)), there exists u ∈ U^c such that f(v*(D,u)) < I_V(D, f). By Proposition 13.1, u(·) v*(D,u)-rationalizes D. By Theorem 6.3.I in Afriat (1987) (p. 179),[64] u(·) v*(D,u)-rationalizes D if and only if D satisfies v*(D,u)-Cyclical Consistency, which is equivalent, by Lemma 26, to D satisfying GARP_{v*(D,u)}. However, since D satisfies GARP_{v*(D,u)} and f(v*(D,u)) < I_V(D, f), I_V(D, f) cannot be the infimum of f(·) on the set of all v ∈ [0,1]^n such that D satisfies GARP_v.

[64] Afriat (1987) does not provide a proof for this theorem. Afriat (1973) provides a proof for the uniform case (same adjustments for all observations) which can be generalized to this theorem. Houtman (1995) studies general cost functions that include the uniform case, the non-uniform case that we use, and many other cases. He provides a proof for a general form of Theorem 6.3.I in Afriat (1987) that applies here as well. Note that while Houtman (1995) elaborates on the uniform case, all his statements on this case apply also to the non-uniform linear case that is considered here.

Lemma 28. Let v ∈ [0,1]^n be such that D satisfies GARP_v. Then I_M(D, f, U^c) ≤ f(v).

Proof. By Lemma 26, D satisfies GARP_v if and only if D satisfies v-Cyclical Consistency. By Theorem 6.3.I in Afriat (1987) (p. 179), D satisfies v-Cyclical Consistency if and only if there exists a non-satiated continuous utility function u ∈ U^c that v-rationalizes D. By Proposition 13.3, v ≤ v*(D,u). Since f(·) is weakly decreasing, f(v*(D,u)) ≤ f(v). Therefore, by Definition 14, I_M(D, f, U^c) ≤ f(v).

Theorem.
For every finite data set D = {(p^i, x^i)}^n_{i=1} and aggregator function f : [0,1]^n → [0,M]:

I_V(D, f) = I_M(D, f, U^c)

where U^c is the set of continuous and locally non-satiated utility functions.

Proof. Let v* ∈ Ê. By Lemma 23 such a point exists. For every λ ∈ [0,1] denote F_λ = f(λv*). Consider the sequence of intervals [I_V(D, f), F_λ). By Lemma 25, D satisfies GARP_{λv*} for all λ ∈ [0,1). Therefore, by Lemma 28, ∀λ ∈ [0,1): I_M(D, f, U^c) ≤ F_λ. In addition, by Lemma 27, I_V(D, f) ≤ I_M(D, f, U^c). Hence, ∀λ ∈ [0,1): I_M(D, f, U^c) ∈ [I_V(D, f), F_λ). Since lim_{λ→1} F_λ = I_V(D, f), we get I_V(D, f) = I_M(D, f, U^c).

A.4 The Code

Preliminaries

This appendix describes the code designed to implement the indices and estimations mentioned in the paper. The code accommodates the data gathered by the symmetric treatment in Choi et al. (2007) and supplies recovery procedures using the family of disappointment aversion utility functions introduced in Gul (1991) with CRRA or CARA utility indices. We hope that this appendix will ease the process of adapting this software to other data sets and other families of parametric utility functions.

Consistency Tests (HPZ_Subject_Consistency)

To construct the relations mentioned in Definition 1, we first calculate a matrix (REF) such that the cell in the ith row and the jth column stores p^i x^i − p^i x^j. If this difference is non-negative we say that x^i R^0_D x^j (DRP matrix), while if it is strictly positive we say that x^i P^0_D x^j (SDRP matrix).[65] Then we use the Floyd–Warshall algorithm (Warshall, 1962)[66] to construct the revealed preferred relation (RP matrix), which is the transitive closure of the directly revealed preferred relation. Finally, we construct the strictly revealed preferred relation (SRP matrix).

Using these relations we implement three consistency tests for a given data set D: SARP (x^i R_D x^j and x^j R_D x^i implies x^i = x^j), GARP (x^i R_D x^j implies not x^j P^0_D x^i) and WARP (x^i R^0_D x^j implies not x^j P^0_D x^i).
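The construction just described can be sketched compactly. The following Python sketch is ours (it is not the HPZ Matlab code; the function name `garp_violations` and the toy data are illustrative assumptions): it builds the expenditure, DRP and SDRP matrices, takes the transitive closure with the Floyd–Warshall algorithm, and lists the GARP-violating pairs.

```python
def garp_violations(prices, bundles, eps=1e-9):
    """Return the pairs (i, j) with x^i R_D x^j and x^j P^0_D x^i."""
    n = len(bundles)
    # expenditure[i][j] = p^i . x^j
    exp_ = [[sum(p * q for p, q in zip(prices[i], bundles[j])) for j in range(n)]
            for i in range(n)]
    drp = [[exp_[i][i] + eps >= exp_[i][j] for j in range(n)] for i in range(n)]
    sdrp = [[exp_[i][i] > exp_[i][j] + eps for j in range(n)] for i in range(n)]
    rp = [row[:] for row in drp]
    for k in range(n):                     # Floyd-Warshall transitive closure
        for i in range(n):
            for j in range(n):
                rp[i][j] = rp[i][j] or (rp[i][k] and rp[k][j])
    return [(i, j) for i in range(n) for j in range(n) if rp[i][j] and sdrp[j][i]]

# A two-observation violation: each bundle is strictly cheaper at the other's prices.
P, X = [(2, 1), (1, 2)], [(2, 1), (1, 2)]
print(garp_violations(P, X))   # -> [(0, 1), (1, 0)]
```

The `eps` threshold plays the role of the THRESHOLD variable mentioned in footnote 65: equalities in expenditure are treated as weak, not strict, revealed preference.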
For each test we report the number of violations and the number of inconsistent pairs of observations. If GARP is not satisfied we calculate the inconsistency indices described in the next section. If there are no GARP violations, we report that the Afriat index equals 0, the Varian indices equal 0, 0 and 0 (minimum, mean and sum of squares, respectively) and that the Houtman–Maks index equals 50.

Inconsistency Indices

Afriat's Index (HPZ_Afriat_efficiency_index)

The definition of Afriat's Inefficiency Index is[67]

I_A(D) = inf_{λ ∈ [0,1] : D satisfies GARP_λ} (1 − λ)

We use a bisection search to approximate Afriat's index as suggested by Houtman and Maks (1987), Varian (1990) and Houtman (1995). The input is a matrix (expenditure) such that the cell in the ith row and the jth column contains p^i x^j. We initialize the index (AFRIAT) to 1/2 and the bounds to 0 and 1. In each iteration we adjust the matrix by multiplying its main diagonal elements by AFRIAT. The adjusted data is checked for GARP. As in any bisection search, if GARP is satisfied the next examined index is the average of the current index and the upper bound, while the lower bound is changed to the current index. If GARP is not satisfied the next examined index is the average of the current index and the lower bound, while the upper bound is changed to the current index. The number of iterations determines the extent of approximation. We use 30 iterations and therefore we approximate I_A(D) to a level of 2^−30 ≈ 10^−9. To follow the definition we report one minus the result of the algorithm.[68]

[65] We introduce a variable named THRESHOLD (initialized to Matlab's epsilon) to allow for some flexibility in these definitions. x^i is directly revealed preferred over x^j if p^i x^i + THRESHOLD > p^i x^j, while x^i is strictly directly revealed preferred over x^j if p^i x^i > p^i x^j + THRESHOLD.

[66] We use an external graph theory package (matlab_bgl) that implements this algorithm.

[67] See Afriat (1972, 1973).

[68] The procedure reports λ.
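The bisection just described can be sketched as follows. This is our own Python sketch, not the HPZ Matlab routine; `garp_satisfied` and `afriat_index` are illustrative names, and the GARP_λ test simply scales the diagonal of the expenditure matrix by λ:

```python
def garp_satisfied(exp_, lam, eps=1e-9):
    # GARP_lambda: own expenditures (the diagonal) are scaled by lam before the test.
    n = len(exp_)
    drp = [[lam * exp_[i][i] + eps >= exp_[i][j] for j in range(n)] for i in range(n)]
    sdrp = [[lam * exp_[i][i] > exp_[i][j] + eps for j in range(n)] for i in range(n)]
    rp = [row[:] for row in drp]
    for k in range(n):                     # transitive closure
        for i in range(n):
            for j in range(n):
                rp[i][j] = rp[i][j] or (rp[i][k] and rp[k][j])
    return not any(rp[i][j] and sdrp[j][i] for i in range(n) for j in range(n))

def afriat_index(prices, bundles, iters=40):
    n = len(bundles)
    exp_ = [[sum(p * q for p, q in zip(prices[i], bundles[j])) for j in range(n)]
            for i in range(n)]
    lo, hi = 0.0, 1.0                      # bisection on the uniform adjustment
    for _ in range(iters):
        mid = (lo + hi) / 2
        if garp_satisfied(exp_, mid):
            lo = mid                       # GARP_lambda holds: try a larger lambda
        else:
            hi = mid
    return 1.0 - lo                        # the index is one minus the recovered lambda

# Each bundle is strictly cheaper at the other observation's prices; lambda* = 0.8.
print(afriat_index([(2, 1), (1, 2)], [(2, 1), (1, 2)]))  # approx. 0.2
```

The monotonicity used by the bisection is the one noted in the text: shrinking λ only removes revealed-preference relations, so if GARP_λ holds it also holds for any smaller λ.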
Varian's Index (HPZ_varian_efficiency_index)

The definition of Varian's Inefficiency Index is

I_V(D) = inf_{v ∈ [0,1]^n : D satisfies GARP_v} f(v)

Calculating Varian's index is an NP-hard problem and we use Algorithm 3 in Alcantud et al. (2010) to approximate it (from above). The input is a matrix (expenditure) such that the cell in the ith row and the jth column contains p^i x^j. The vector of adjustments (denoted by v and called var in the code) is initialized to 1. The main part of the procedure is embedded in a loop that ends only when the data satisfies GARP_v. If indeed GARP_v is satisfied the procedure is done and the vector of adjustments is the result of the loop. Otherwise, we construct the matrix Pert_mat such that the cell in the ith row and the jth column contains p^j x^i / (v_j p^j x^j) (v_j is the jth element of v) if x^i R_{D,v} x^j and x^j P^0_{D,v} x^i (a GARP_v violation), and zero otherwise. The maximal element of Pert_mat is picked and substituted into the corresponding element in the vector of adjustments (the substitution is by multiplication with the previous value). Finally, when the loop ends, the vector of adjustments is aggregated by three distinct aggregators of the wastes (1 − v_i): maximum (max_i (1 − v_i)), mean ((1/n) Σ^n_{i=1} (1 − v_i)) and average sum of squares ((1/n) Σ^n_{i=1} (1 − v_i)^2), and these three numbers are reported.[69]

Houtman–Maks Index (HPZ_Houtman_Maks_efficiency_index)

The Houtman–Maks efficiency index is defined as the size of the largest subset of observations that satisfies GARP.[70] The input is a matrix in which the cell in the ith row and the jth column contains 1 if x^i P^0_D x^j and 0 otherwise. This matrix is turned into a list of the pairs in the relation P^0_D.
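The Varian adjustment loop described above (Algorithm 3 in Alcantud et al. (2010)) can be sketched as follows. Again this is our own Python sketch rather than the HPZ implementation; `varian_wastes` is an illustrative name, and the per-observation GARP_v test mirrors the earlier construction:

```python
def varian_wastes(prices, bundles, eps=1e-9):
    n = len(bundles)
    exp_ = [[sum(p * q for p, q in zip(prices[i], bundles[j])) for j in range(n)]
            for i in range(n)]
    v = [1.0] * n
    while True:
        drp = [[v[i] * exp_[i][i] + eps >= exp_[i][j] for j in range(n)] for i in range(n)]
        sdrp = [[v[i] * exp_[i][i] > exp_[i][j] + eps for j in range(n)] for i in range(n)]
        rp = [row[:] for row in drp]
        for k in range(n):                 # transitive closure
            for i in range(n):
                for j in range(n):
                    rp[i][j] = rp[i][j] or (rp[i][k] and rp[k][j])
        viol = [(i, j) for i in range(n) for j in range(n) if rp[i][j] and sdrp[j][i]]
        if not viol:
            break
        # Pert_mat entry for a violating pair (i, j): p^j x^i / (v_j p^j x^j);
        # the largest entry (the smallest adjustment) is multiplied into v_j.
        ratio, j_star = max((exp_[j][i] / (v[j] * exp_[j][j]), j) for i, j in viol)
        v[j_star] *= ratio
    wastes = [1.0 - vi for vi in v]
    return max(wastes), sum(wastes) / n, sum(w * w for w in wastes) / n

# The strong two-observation violation settles at v = (0.8, 0.8).
print(varian_wastes([(2, 1), (1, 2)], [(2, 1), (1, 2)]))  # approx. (0.2, 0.2, 0.04)
```

Because the loop always applies the mildest adjustment that removes a current violation, it approximates the Varian infimum from above, as the text notes.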
This list serves as an input to a program that was used in Dean and Martin (2011), which returns an approximation of the minimum number of removals needed for acyclicality.[71] The procedure returns the approximated size of the largest subset of observations that satisfies GARP.

[69] For the max aggregator, the number reported by the package is min_i (v_i).

[70] See Houtman and Maks (1985).

[71] Downloaded from Daniel Martin's personal website on November 5th, 2011.

Nonlinear Least Squares Method (HPZ_NLLS)

The Nonlinear Least Squares estimation procedure finds the parameters that minimize the aggregated distance between bundles predicted by utility maximization and observed bundles.

Input

The input includes the subject ID, the number of observations, the chosen quantities and the given prices.[72] The user defined parameters:

Data parameters (within Choi et al. (2007))

1. The treatment in Choi et al. (2007) (treatment; in the current version the asymmetric treatments are disabled).

2. Correction for corner choices (zeros_flag=1 means no correction, while zeros_flag=2 implements the correction suggested on page 1929 in Choi et al. (2007) with ω = 0.001, using the function HPZ_No_Corners).

Functional family parameters (within Disappointment Aversion)

1. The vNM utility function (when function_flag equals 1 the function is CRRA, while when it equals 2 it is CARA).

2. Restricted forms (beta_zero_flag=true fixes the disappointment aversion parameter to zero to obtain expected utility).

3. Elation loving (beta_flag=1 allows for negative values of the disappointment aversion parameter, while beta_flag=2 restricts the disappointment aversion parameter to be non-negative).

4. Correction for the disappointment aversion parameter in asymmetric treatments (asymmetric_flag=1 follows Gul (1991) while asymmetric_flag=2 follows the implementation in Choi et al. (2007); in the current version these treatments are disabled).

5.
Restricted risk aversion for cases where there are corner choices and the chosen vNM utility function is CRRA (restricted_rho). This parameter is determined within the code.

[72] In the case of Choi et al. (2007), the prices are the reciprocals of the maximum quantities given in the online data sheet.

Estimation procedure parameters (within NLLS)

1. The distance norm (when metric_flag equals 1 the distance is measured by the Euclidean norm, while when it equals 2 it is measured by the geometric mean norm implemented in Choi et al. (2007)).[73]

2. Estimation approach (when numeric_flag equals 1 numeric approximation is used to recover predicted choices, while when it equals 2 we use the analytic first order conditions).

3. The minimal number of repetitions with identical minimal aggregate distance to establish convergence (NLLS_min_counter).

4. Maximal number of repetitions (max_starting_points, currently determined within the code).

5. Maximal estimation time (NLLS_max_time_estimation, measured in minutes; infinity if time is not a constraint).

6. Parallel computing (when parallel_flag=true the matlabpool command is used, otherwise no parallel computing).

7. Output (determines the additional measures reported for the chosen parameters).

[73] The distance norm used in Choi et al. (2007) (implemented in HPZ_ldr_Criterion) is

Σ^n_{i=1} ( ln(x_2^observed / x_1^observed) − ln(x_2^predicted / x_1^predicted) )^2

while the Euclidean distance aggregator (implemented in HPZ_Euclid_Criterion) is

Σ^n_{i=1} √( (x_1^observed − x_1^predicted)^2 + (x_2^observed − x_2^predicted)^2 )

Main Procedure

The procedure begins with correcting corner choices (if required by the user) and generating random initial points subject to the restrictions on the functional family (HPZ_Initial_Points).[74]

[74] The first initial point is (e^{e^{−3}} − 1, e^{−2}) as chosen by Choi et al. (2007), the second is (0,0), while the rest are chosen randomly (using Matlab's rand function, which simulates a standard uniform distribution on the open interval (0,1)).

For each initial point we search for the element of the disappointment aversion functional family (defined by two parameters) that minimizes the distances between the predicted bundles and the observed bundles. For each initial point, the recovered parameters are held in results, while the value is stored in criterion. Since in many cases the procedure encounters local minima, we repeat the estimation procedure as long as the best parameter combination (yielding the minimal aggregate distance) has been recovered fewer than NLLS_min_counter times, provided that the number of estimations did not reach max_starting_points.[75] If the number of repetitions reaches max_starting_points then we report the best estimations, even if recovered fewer than NLLS_min_counter times. We also provide an option for a time constraint, in which the procedure ends when the time limit expires, as long as at least 5 estimations took place.

The Parameters Recovery Routine

Given an initial point, the recovery is executed using fminsearchbnd,[76] which is a version of fminsearch (the unconstrained non-linear optimization routine of Matlab) that allows for simple bounds on the parameters. When one of the parameters is fixed to zero, the optimization is uni-dimensional and the objective function is implemented in HPZ_Criterion_Extreme_Param. In case no parameter is fixed, the optimization is bi-dimensional and the objective function is implemented in HPZ_Criterion. Given parameter(s), these functions calculate the aggregate difference between the predicted bundles and the observed bundles. The optimal choices can be calculated numerically or analytically.
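Under expected utility (β = 0) with a CRRA index, the first analytic demand case given below reduces to x = m_x / (1 + p^{1/ρ}/p), and the NLLS logic can then be sketched end to end. The following is our own illustrative Python sketch (brute-force grid search stands in for fminsearchbnd; the function names are ours, and the log-ratio criterion follows footnote 73):

```python
import math

def eu_crra_demand(px, py, wealth, rho):
    # Expected-utility (beta = 0) CRRA demand for two equally likely states:
    # x = m_x / (1 + p^(1/rho)/p) with p = px/py; at beta = 0 both branches coincide.
    p = px / py
    x = (wealth / px) / (1.0 + p ** (1.0 / rho) / p)
    y = (wealth - px * x) / py
    return x, y

def log_ratio_criterion(observed, predicted):
    # Choi et al. (2007) distance: sum of squared differences of log quantity ratios.
    return sum((math.log(yo / xo) - math.log(yp / xp)) ** 2
               for (xo, yo), (xp, yp) in zip(observed, predicted))

# Simulate choices with rho = 0.5, then recover rho by brute-force NLLS.
true_rho, prices = 0.5, [(0.3, 0.7), (0.5, 0.5), (0.8, 0.2)]
data = [eu_crra_demand(px, py, 1.0, true_rho) for px, py in prices]

grid = [0.05 * k for k in range(1, 60)]       # candidate rho values
best_rho = min(grid, key=lambda r: log_ratio_criterion(
    data, [eu_crra_demand(px, py, 1.0, r) for px, py in prices]))
# best_rho is (approximately) 0.5
```

With noiseless simulated data the criterion is zero at the true parameter, so the grid minimizer lands on the grid point closest to ρ = 0.5; with real data the repeated-restart machinery described in the text guards against local minima.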
The analytical procedure relies on first-order conditions and is very efficient, while the numerical procedure can be easily adapted to various families of functions.

[75] A considerable part of HPZ_NLLS is dedicated to the implementation of this ad-hoc mechanism, specifically to the adjustments needed when a new best is recovered. equal_fval_counter counts the number of estimations that recovered the minimal value and optimal_parameter_matrix stores the results of those estimations.

[76] Released by John D'Errico in July 2006.

Numeric Approach (HPZ_Choices) The numerical calculation is carried out by fmincon (the constrained non-linear optimization routine of Matlab). In every call this function calculates the optimal choice for all observations. Since these calculations are independent, our code implements them using the parallel computing toolbox of Matlab. This function minimizes the objective function implemented in HPZ_Utility_Helper subject to a linear budget constraint while keeping the quantities non-negative. HPZ_Utility_Helper uses HPZ_Utility, CRRA and CARA to calculate the utility level given the parameters of the disappointment aversion utility function, the treatment and the chosen vNM utility function.[77]

[77] As fmincon is an iterative process, a starting point is required. We perform fmincon twice using two different starting points which are chosen on two different sides of the intersection between the budget line and the 45 degree line, close to the corners (not including the corners). Then the optimal choice among those two is the one with the higher utility level.

Analytic Approach (HPZ_Choices_Analytical) The utility function is

u(x,y) = γ w(max{x,y}) + (1−γ) w(min{x,y})

where γ = 1/(2+β) for −1 < β < ∞, and w(x) = x^{1−ρ}/(1−ρ) for CRRA or w(x) = −e^{−ax} for CARA. Denote p = p_x/p_y = m_y/m_x; m_y = M/p_y; m_x = M/p_x (if m_x = m_y we denote both by m). We first elaborate on the CRRA analysis. The marginal rate of substitution is

MRS_xy = (1/(1+β)) (y/x)^ρ       if x > y
MRS_xy ∈ [1/(1+β), 1+β]          if x = y
MRS_xy = (1+β) (y/x)^ρ           if x < y

The utility maximization problem can be broken into the following cases based on the values of β and ρ (we refer only to identifiable cases):

1. ρ > 0, β ≥ 0:

(x,y)^d =
( m_x / (1 + [p(1+β)]^{1/ρ}/p) , m_y / (1 + p/[p(1+β)]^{1/ρ}) )       if p < 1/(1+β)
( m_y/(p+1) , m_y/(p+1) )                                             if 1/(1+β) ≤ p ≤ 1+β
( m_x / (1 + (1/p)(p/(1+β))^{1/ρ}) , m_y / (1 + p/(p/(1+β))^{1/ρ}) )  if 1+β < p

2. ρ > 0, −1 < β < 0:

(x,y)^d =
( m_x / (1 + [p(1+β)]^{1/ρ}/p) , m_y / (1 + p/[p(1+β)]^{1/ρ}) )       if p < 1
{ ( m/(1+(1+β)^{1/ρ}) , m/(1+(1+β)^{−1/ρ}) ) , ( m/(1+(1+β)^{−1/ρ}) , m/(1+(1+β)^{1/ρ}) ) }   if p = 1
( m_x / (1 + (1/p)(p/(1+β))^{1/ρ}) , m_y / (1 + p/(p/(1+β))^{1/ρ}) )  if 1 < p

3. ρ > 0, β = −1:

(x,y)^d =
(m_x, 0)           if p < 1
{(m, 0), (0, m)}   if p = 1
(0, m_y)           if 1 < p

4. ρ = 0, β ≥ 0:

(x,y)^d =
(m_x, 0)                            if p < 1/(1+β)
{(x,y) : x ≥ y, px + y = m_y}       if p = 1/(1+β)
( m_y/(p+1) , m_y/(p+1) )           if 1/(1+β) < p < 1+β
{(x,y) : x ≤ y, px + y = m_y}       if p = 1+β
(0, m_y)                            if 1+β < p

Next we consider the CARA analysis. The marginal rate of substitution is

MRS_xy = (1/(1+β)) e^{−a(x−y)}   if x > y
MRS_xy ∈ [1/(1+β), 1+β]          if x = y
MRS_xy = (1+β) e^{−a(x−y)}       if x < y

The utility maximization problem can be broken down into the following cases based on the values of β and a (we refer only to identifiable cases):

1. a > 0, β ≥ 0:

(x,y)^d =
(m_x, 0)                                                                  if p < (1/(1+β)) e^{−a m_x}
( (1/(p+1))[m_y − (1/a) ln(p(1+β))] , (1/(p+1))[m_y + (p/a) ln(p(1+β))] )   if (1/(1+β)) e^{−a m_x} ≤ p < 1/(1+β)
( m_y/(p+1) , m_y/(p+1) )                                                 if 1/(1+β) ≤ p ≤ 1+β
( (1/(p+1))[m_y − (1/a) ln(p/(1+β))] , (1/(p+1))[m_y + (p/a) ln(p/(1+β))] )  if 1+β < p ≤ (1+β) e^{a m_y}
(0, m_y)                                                                  if (1+β) e^{a m_y} < p

2. a > 0, −1 < β < 0:

(x,y)^d =
(m_x, 0)                                                                  if p < (1/(1+β)) e^{−a m_x}
( (1/(p+1))[m_y − (1/a) ln(p(1+β))] , (1/(p+1))[m_y + (p/a) ln(p(1+β))] )   if (1/(1+β)) e^{−a m_x} ≤ p < 1
{ ( (1/2)[m_y − (1/a) ln(1+β)] , (1/2)[m_y + (1/a) ln(1+β)] ) , ( (1/2)[m_y + (1/a) ln(1+β)] , (1/2)[m_y − (1/a) ln(1+β)] ) }   if p = 1
( (1/(p+1))[m_y − (1/a) ln(p/(1+β))] , (1/(p+1))[m_y + (p/a) ln(p/(1+β))] )  if 1 < p ≤ (1+β) e^{a m_y}
(0, m_y)                                                                  if (1+β) e^{a m_y} < p

3.
a > 0, β = −1:

(x,y)^d =
(m_x, 0)           if p < 1
{(m, 0), (0, m)}   if p = 1
(0, m_y)           if 1 < p

Money Metric Method (HPZ_MME_Estimation)

The Money Metric recovery procedure finds the parameters that minimize the aggregated adjustments needed to remove all inconsistencies between the utility function ranking and the revealed preference information.

Input

The input includes the subject ID, the number of observations, the chosen quantities and the given prices. The user defined parameters are:

Data parameters (within Choi et al. (2007))

1. The treatment in Choi et al. (2007) (treatment; in the current version the asymmetric treatments are disabled).

2. Correction for corner choices (zeros_flag, see above for details).

Functional family parameters (within Disappointment Aversion)

1. The vNM utility function (function_flag, see above for details).

2. Restricted forms (beta_zero_flag, see above for details).

3. Elation loving (beta_flag, see above for details).

4. Restricted risk aversion for cases where there are corner choices and the chosen utility index is CRRA (restricted_rho). This parameter is determined by the code.

Estimation procedure parameters (within MME)

1. The wastes aggregation function (when aggregation equals 1 it is the maximum function, when it equals 2 it is the average, and when it is 3 it is the average sum of squares).

2. Estimation approach (numeric_flag, see above for details).

3. The minimal number of repetitions with identical minimal aggregate distance to establish convergence (MME_min_counter).

4. Maximal number of repetitions (max_starting_points, see above for details).

5. Maximal estimation time (MME_max_time_estimation, see above for details).

6. Parallel computing (parallel_flag, see above for details).

7.
Output (see above for details).

Main Procedure

The procedure begins by correcting corner choices (if required by the user) and generating random initial points subject to the restrictions on the functional family (HPZ_Initial_Points).

For each initial point we search for the element of the disappointment aversion functional family (defined by two parameters) that minimizes the aggregated adjustments needed to remove all inconsistencies between the ranking induced by the utility function and the revealed preference information. For each initial point, the recovered parameters are held in results, while the criterion value is stored in criterion. Here, too, we repeat the estimation procedure as long as the best parameter combination (the one yielding the minimal value of the objective) has been recovered fewer than MME_min_counter times, provided that the number of estimations has not reached max_starting_points (Footnote 75 is relevant here as well). If the number of repetitions reaches max_starting_points, we report the best estimates, even if they were recovered fewer than MME_min_counter times. We also provide an option for a time constraint, in which case the procedure ends when the time limit expires, as long as at least 3 estimations took place.

The Parameters Recovery Routine

Given an initial point, the recovery is executed using fminsearchbnd (see details above). When one of the parameters is fixed to zero, the optimization is one-dimensional and the objective function is implemented in HPZ_MME_Helper_Extreme_Param. When no parameter is fixed, the optimization is two-dimensional and the objective function is implemented in HPZ_MME_Helper. Given the parameter(s), these functions calculate the aggregate adjustments needed to remove all inconsistencies between the ranking induced by the utility function and the revealed preference information. Both functions use HPZ_MME_Criterion, which calculates the three aggregates (see footnote 78).
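Given candidate parameters, the waste criterion just described can be sketched in a few lines. The following Python fragment is an illustrative reimplementation, not the package's MATLAB code: for concreteness it hard-codes the analytical CRRA disappointment-aversion expenditure function for ρ ∈ (0, 1) and β ≥ 0 (the three branches of case 1 in the Analytical Approach), and all function names are ours. Each observation is a tuple (p_x, p_y, x_0, y_0) with the chosen bundle on the budget line p_x·x + p_y·y = actual expenditure.

```python
def crra_da_value(x, y, beta, rho):
    # Transformed utility index u~ = max(x,y)^(1-rho) + (1+beta)*min(x,y)^(1-rho),
    # i.e. (2+beta)(1-rho) times the DA utility of the 50-50 gamble over x and y.
    hi, lo = max(x, y), min(x, y)
    return hi ** (1 - rho) + (1 + beta) * lo ** (1 - rho)

def crra_da_expenditure(px, py, x0, y0, beta, rho):
    # Minimal expenditure needed at prices (px, py) to reach the indifference
    # curve through (x0, y0); p = px/py is the relative price. Only the case
    # rho in (0, 1), beta >= 0 is sketched here.
    u0 = crra_da_value(x0, y0, beta, rho)
    p = px / py
    if p < 1 / (1 + beta):  # cheapest bundle has x > y
        x = (u0 / (1 + (1 + beta) ** (1 / rho) * p ** ((1 - rho) / rho))) ** (1 / (1 - rho))
        y = x * ((1 + beta) * p) ** (1 / rho)
    elif p <= 1 + beta:     # cheapest bundle lies on the 45-degree line
        x = y = (u0 / (2 + beta)) ** (1 / (1 - rho))
    else:                   # cheapest bundle has x < y
        x = (u0 / ((1 + beta) + (p / (1 + beta)) ** ((1 - rho) / rho))) ** (1 / (1 - rho))
        y = x * (p / (1 + beta)) ** (1 / rho)
    return px * x + py * y

def mme_criterion(observations, beta, rho, aggregation=2):
    # The waste of observation i is 1 - v*_i, where v*_i is the share of the
    # actual expenditure that would have sufficed to reach the chosen bundle's
    # utility level. Aggregation: 1 = maximum, 2 = mean, 3 = sum of squares.
    wastes = [1 - crra_da_expenditure(px, py, x0, y0, beta, rho) / (px * x0 + py * y0)
              for (px, py, x0, y0) in observations]
    if aggregation == 1:
        return max(wastes)
    if aggregation == 2:
        return sum(wastes) / len(wastes)
    return sum(w ** 2 for w in wastes)
```

For a bundle that is exactly optimal at the given prices the waste is zero, so minimizing this aggregate over (β, ρ), as the package does with fminsearchbnd from each random initial point, recovers the parameters whose implied rankings require the smallest adjustments.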
The optimal parameters can be calculated numerically or analytically.

[Footnote 78: the maximum (max_i (1 − v*_i)), the mean ((1/n) Σ_{i=1}^{n} (1 − v*_i)), and the sum of squares (Σ_{i=1}^{n} (1 − v*_i)²).]

Numeric Approach (HPZ_MME)

The inputs for the numerical calculation are the observations and the relevant family of utility functions (parameters and utility index). For every observation, the level of utility is directly calculated (by HPZ_Utility). If it is a corner choice, HPZ_Grid_Search_MME implements a bisection search (parallel movements of the budget line) to recover the waste incurred by this choice (using the numeric HPZ_choices). If the observation is an interior choice, the fmincon procedure with a nonlinear constraint (HPZ_Utility_Constraint) minimizes the expenditure subject to achieving at least the same utility level as calculated for the observation. We perform fmincon twice using two different starting points, chosen on the two sides of the budget line close to the corners. If both instances of fmincon are satisfactory (exitflag = 1), then the lower of the two expenditures is reported; otherwise, a grid search is invoked. Proposition 1 guarantees that the optimization can be implemented observation-by-observation; therefore, our code implements parallel computing over observations.

Analytical Approach (HPZ_MME_Analytical)

Let (x_0, y_0) be the chosen bundle. For CRRA, denote

ũ_0 = (2+β)(1−ρ) u(x_0, y_0) = (max{x_0, y_0})^{1−ρ} + (1+β)(min{x_0, y_0})^{1−ρ}.

Then, by equating the MRS along the indifference curve of ũ_0 to the price ratio, we find the minimal expenditure required to achieve ũ_0. The different cases are based on the values of β and ρ (we refer only to identifiable cases):

1. ρ > 0, β ≥ 0:

e(p_x, p_y, (x_0, y_0)) =
  p_x [ũ_0 / (1 + (1+β)^{1/ρ} p^{(1−ρ)/ρ})]^{1/(1−ρ)} + p_y [ũ_0 / (1 + (1+β)^{1/ρ} p^{(1−ρ)/ρ})]^{1/(1−ρ)} ((1+β)p)^{1/ρ}        if p < 1/(1+β)
  (p_x + p_y) [ũ_0 / (2+β)]^{1/(1−ρ)}                                                                                            if 1/(1+β) ≤ p ≤ 1+β
  p_x [ũ_0 / ((1+β) + (p/(1+β))^{(1−ρ)/ρ})]^{1/(1−ρ)} + p_y [ũ_0 / ((1+β) + (p/(1+β))^{(1−ρ)/ρ})]^{1/(1−ρ)} (p/(1+β))^{1/ρ}      if 1+β < p

2.
ρ > 0, −1 < β < 0:

e(p_x, p_y, (x_0, y_0)) =
  p_x [ũ_0 / (1 + (1+β)^{1/ρ} p^{(1−ρ)/ρ})]^{1/(1−ρ)} + p_y [ũ_0 / (1 + (1+β)^{1/ρ} p^{(1−ρ)/ρ})]^{1/(1−ρ)} ((1+β)p)^{1/ρ}        if p < 1
  p_x [ũ_0 / (1 + (1+β)^{1/ρ})]^{1/(1−ρ)} + p_y [ũ_0 / (1 + (1+β)^{1/ρ})]^{1/(1−ρ)} (1+β)^{1/ρ}                                  if p = 1
  p_x [ũ_0 / ((1+β) + (p/(1+β))^{(1−ρ)/ρ})]^{1/(1−ρ)} + p_y [ũ_0 / ((1+β) + (p/(1+β))^{(1−ρ)/ρ})]^{1/(1−ρ)} (p/(1+β))^{1/ρ}      if p > 1

3. ρ = 0, β ≥ 0:

e(p_x, p_y, (x_0, y_0)) =
  p_x ũ_0                       if p ≤ 1/(1+β)
  ((p_x + p_y)/(2+β)) ũ_0       if 1/(1+β) ≤ p ≤ 1+β
  p_y ũ_0                       if 1+β ≤ p

For CARA, denote

ũ_0 = (2+β) u(x_0, y_0) = −e^{−a max{x_0,y_0}} − (1+β) e^{−a min{x_0,y_0}}.

The indifference curve of ũ_0 intersects the axis if there exists a positive x such that ũ_0 = −e^{−ax} − (1+β). This is possible only if −ũ_0 > 1+β or, alternatively, only if e^{−a max{x_0,y_0}} / (1 − e^{−a min{x_0,y_0}}) > 1+β. The different cases are based on the values of β and a (we refer only to identifiable cases):

1. β ≥ 0 and −ũ_0 > 1+β:

e(p_x, p_y, (x_0, y_0)) =
  −(p_x/a) ln(−(ũ_0 + (1+β)))                                                  if p < −(ũ_0 + (1+β))/(1+β)
  p_x [(1/a) ln(−(p+1)/(p ũ_0))] + p_y [(1/a) ln(−(1+p)(1+β)/ũ_0)]             if −(ũ_0 + (1+β))/(1+β) ≤ p < 1/(1+β)
  (p_x + p_y) [(1/a) ln(−(2+β)/ũ_0)]                                           if 1/(1+β) ≤ p ≤ 1+β
  p_x [(1/a) ln(−(1+β)(1+p)/(ũ_0 p))] + p_y [(1/a) ln(−(1+p)/ũ_0)]             if 1+β < p ≤ −(1+β)/(ũ_0 + (1+β))
  −(p_y/a) ln(−(ũ_0 + (1+β)))                                                  if −(1+β)/(ũ_0 + (1+β)) < p

2. β ≥ 0 and −ũ_0 ≤ 1+β:

e(p_x, p_y, (x_0, y_0)) =
  p_x [(1/a) ln(−(p+1)/(p ũ_0))] + p_y [(1/a) ln(−(1+p)(1+β)/ũ_0)]             if p < 1/(1+β)
  (p_x + p_y) [(1/a) ln(−(2+β)/ũ_0)]                                           if 1/(1+β) ≤ p ≤ 1+β
  p_x [(1/a) ln(−(1+β)(1+p)/(ũ_0 p))] + p_y [(1/a) ln(−(1+p)/ũ_0)]             if 1+β < p

3. −1 < β < 0 and −ũ_0 > 1+β and −(ũ_0 + (1+β))/(1+β) < 1:

e(p_x, p_y, (x_0, y_0)) =
  −(p_x/a) ln(−(ũ_0 + (1+β)))                                                  if p < −(ũ_0 + (1+β))/(1+β)
  p_x [(1/a) ln(−(p+1)/(p ũ_0))] + p_y [(1/a) ln(−(1+p)(1+β)/ũ_0)]             if −(ũ_0 + (1+β))/(1+β) ≤ p ≤ 1
  p_x [(1/a) ln(−(1+β)(1+p)/(ũ_0 p))] + p_y [(1/a) ln(−(1+p)/ũ_0)]             if 1 < p ≤ −(1+β)/(ũ_0 + (1+β))
  −(p_y/a) ln(−(ũ_0 + (1+β)))                                                  if −(1+β)/(ũ_0 + (1+β)) < p

4. −1 < β < 0 and −ũ_0 > 1+β and −(ũ_0 + (1+β))/(1+β) ≥ 1:

e(p_x, p_y, (x_0, y_0)) =
  −(p_x/a) ln(−(ũ_0 + (1+β)))    if p ≤ 1
  −(p_y/a) ln(−(ũ_0 + (1+β)))    if p > 1

5. −1 < β < 0 and −ũ_0 ≤ 1+β:

e(p_x, p_y, (x_0, y_0)) =
  p_x [(1/a) ln(−(p+1)/(p ũ_0))] + p_y [(1/a) ln(−(1+p)(1+β)/ũ_0)]             if p ≤ 1
  p_x [(1/a) ln(−(1+β)(1+p)/(ũ_0 p))] + p_y [(1/a) ln(−(1+p)/ũ_0)]             if 1 < p

6. β = −1:

e(p_x, p_y, (x_0, y_0)) =
  p_x max{x_0, y_0}    if p ≤ 1
  p_y max{x_0, y_0}    if 1 < p
7. a = 0, β ≥ 0 (here ũ_0 = max{x_0, y_0} + (1+β) min{x_0, y_0}):

e(p_x, p_y, (x_0, y_0)) =
  p_x ũ_0                       if p ≤ 1/(1+β)
  ((p_x + p_y)/(2+β)) ũ_0       if 1/(1+β) ≤ p ≤ 1+β
  p_y ũ_0                       if 1+β ≤ p

User Interface (HPZ_Interface)

To use this MATLAB package, follow these steps:
• Set the MATLAB path to the location that stores the HPZ_PRU_Software folder (using the Set Path option with Add Folder with Subfolders).
• After the path is set, run the code by entering the HPZ_Interface command in the MATLAB command window. The data set we currently use as input is that of Choi et al. (2007) (Data_CFGK_2007.csv).
• The Action Selection window: the user is required to choose one of three actions: consistency analysis (Consistency Tests and Inconsistency Indices), the Nonlinear Least Squares recovery method, or the Money Metric recovery method.
• The Subjects Selection window: the user is required to select the subjects to analyze (one or multiple subjects can be chosen). If the Consistency Analysis action was chosen in the Action Selection window, the next window is the result window (described below). Otherwise, the next window is the Functional Form Settings window.
• The Functional Form Settings window: the user is required to decide on four issues: the functional form of the utility indices (CRRA or CARA), the computational approach (numerical or analytical), the disappointment aversion parameter boundary (β = 0, β ≥ 0, or β > −1), and the adjustment for boundary solutions.
• The Optimization Settings window differs slightly between the Nonlinear Least Squares method and the Money Metric method. For both methods, the user is first required to specify the number of times the recovered parameters should be recovered before the process is terminated and the parameters are reported. Increasing the number of convergence points improves reliability but reduces efficiency. Note that when the user selects the number of convergence points there is no time limit on the recovery process.
Then, the user is required to specify the allocated time (in minutes). Defining a time limit overrides the choice of the number of convergence points. The parameter estimation process (i.e., the fminsearchbnd procedure) stops when the time is over and reports the results that have been computed during the allocated time. Third, the user can choose to use parallel processing (more important for the numerical approach). If parallel processing is selected, the code will kill other processes running on the machine and use all computing power to run the software. For the Money Metric method, the aggregator of the wastes vector is required (either the maximum, the mean, or the average sum of squares). For the Nonlinear Least Squares method, the distance metric should be selected as either the Euclidean metric or the metric used in Choi et al. (2007).
• The Output File Format window: the user is required to customize the information in the output file. The basic format of the output file includes the recovered parameters and the value of the optimized aggregated criterion. There are five different options for evaluating the resulting parameters, including the one with which the optimization was carried out: the three possible Money Metric methods and the two possible Nonlinear Least Squares methods.
• The Output File Notification window: when the routine is done, a window appears on the screen with the path to the output file.

Bootstrapping Package for the Money Metric Method

The bootstrapping module has been developed to provide the distribution of the recovered parameters and to compute confidence intervals for the recovered disappointment aversion parameter (β) using the Money Metric method, with β bounded below by −1. For bootstrapping we used 1000 draws with replacement on all observations of each individual subject.
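A minimal sketch of this resampling scheme follows, assuming the recovery routine is supplied as a function. It is illustrative Python, not the package's MATLAB code: recover_beta stands in for the full Money Metric estimation of β on a resample, and the function name and interface are ours.

```python
import random

def bootstrap_beta_ci(observations, recover_beta, n_draws=1000, seed=0):
    # Draw n_draws resamples with replacement (each the same size as the
    # original data), re-run the recovery on each, and sort the betas.
    rng = random.Random(seed)
    betas = sorted(
        recover_beta([observations[rng.randrange(len(observations))]
                      for _ in observations])
        for _ in range(n_draws)
    )
    # With 1000 sorted draws, the 25th and 975th values (1-based) bound the
    # 95% interval and the 50th and 950th bound the 90% interval.
    lo95, hi95 = betas[n_draws // 40 - 1], betas[n_draws - n_draws // 40 - 1]
    lo90, hi90 = betas[n_draws // 20 - 1], betas[n_draws - n_draws // 20 - 1]
    return {"95%": (lo95, hi95), "90%": (lo90, hi90),
            "mean": sum(betas) / n_draws}
```

The percentile positions generalize the 25th/975th convention to other values of n_draws; the package itself fixes n_draws at 1000.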
In order to report confidence intervals on the recovered β, we sort all 1000 results for all draws and then report the 25th and 975th as the lower and upper bounds of the 95% confidence interval for β, and the 50th and 950th as the lower and upper bounds of the 90% confidence interval for β (we also report the mean and standard deviation of the parameter).

User Interface (HPZ_Bootstrapping_Module)

To make this module work, the user has to follow these steps:
• Set the MATLAB path to the location that stores the HPZ_PRU_Software folder and the HPZ_Bootstrapping_Package folder (using the Set Path option with Add Folder with Subfolders).
• After the path is set, run the code by entering the HPZ_Bootstrapping_Module command in the MATLAB command window.
• The bootstrapping window includes the following choices: the functional form of the vNM numbers (CRRA or CARA), the computational approach (numerical is currently disabled), and the MME aggregation functional (either the mean or the average sum of squares).
• The Subjects Selection window: the user is required to select the subjects to analyze (one or multiple subjects can be chosen).
• The Output File Notification window: when the routine is done, a window appears on the screen with the path to the output file.

Appendix B

Appendix for Chapter 2

B.1 Experiment Instructions

Welcome

Welcome to the experiment. Please silence your cell phone and put it away for the duration of the experiment. Additionally, please avoid any discussions with other participants. At any time, if you have any questions please raise your hand and an experiment coordinator will approach you.

Please note: If you want to review the instructions at any point during the experiment, simply click on this window (the instructions window). To return to the experiment, please click on the experiment icon on the taskbar.

Study Procedures

This is an experiment in individual decision making.
The study has two parts and the second part will begin immediately following completion of the first part. Before Part 1, the instructions will be read aloud by the experiment coordinator and you will be given an opportunity to practice. The practice time will allow you to familiarize yourself with the experimental interface and ask any questions you may have. We describe the parts of the experiment in reverse order, beginning with Part 2 now.

Part 2

You will be presented with 9 independent decision problems that share a common form. In each round you will be given a choice between a pair of allocations of tokens between two accounts, labeled x and y. Each choice will involve choosing a point on a two-dimensional graph that represents the values in the two accounts. The x-account is represented by the x-axis and the y-account is represented by the y-axis.

For all rounds, in Option 1 the amount allocated to the x-account and y-account will differ, and in Option 2 the amount allocated to each account will be the same. For both options, the values allocated to each account will be displayed beside the point corresponding to each option on the graph, as well as in the dialog box labeled "Options" on the right-hand side of the screen. Figure B.1 illustrates some examples of the types of choices you may face.

Figure B.1: Pairwise Choices

For the round that is selected for payment, your payment is determined by the number of tokens allocated to each account. At the end of the experiment, you will toss a fair coin to randomly select one of the two accounts, x or y. For each participant, each account is equally likely to be chosen. That is, there is a 50% chance account x will be selected and a 50% chance account y will be chosen.
You will only receive the amount of tokens you allocated to the account that was chosen. The round for which you will be paid will be selected randomly at the conclusion of the experiment and each round is equally likely to be chosen. Remember that tokens are valued at the following conversion rate: 2 tokens = $1.

Please Note: Only one round (from both parts combined) will be selected for payment and your payment will be determined only after completion of both parts.

Each round begins with the computer selecting a pair of allocations. For example, as illustrated in Figure B.2, Option 1, if selected, implies a 50% chance of winning 32.0 tokens and a 50% chance of winning 58.0 tokens, whereas Option 2, if selected, implies winning 43.0 tokens for sure.

Figure B.2: Pairwise Choices - Example

In some cases, the two options will be so close to each other that it will be difficult to distinguish between them graphically. In this case, you may refer to the "Options" box on the right-hand side of the screen where the values associated with each option are listed. Additionally, it may be difficult to select your preferred option by clicking on the graph itself, so instead you may use the radio buttons in the "Options" box to make your selection. Figure B.3 provides an example of this situation.

Figure B.3: Pairwise Choices - Overlapping Points

In all rounds, you may select a particular allocation in either of two ways: 1) You may use the mouse to move the pointer on the computer screen to the option that you desire, and when you are ready to make your decision, simply left-click near that option, or 2) You may select your preferred option using the radio buttons on the right-hand side of the screen, and when you are ready to make your decision, simply left-click on the radio button that corresponds to your choice. In either case, a dialog box, illustrated in Figure B.4, will ask you to confirm your decision by clicking "OK".
Figure B.4: Pairwise Choices - Confirmation Screen

If you wish to revise your choice, simply click "Cancel" instead. After you click "OK", your choice will be highlighted in green and the screen will darken, as illustrated in Figure B.5, indicating that your choice is confirmed. You may proceed to the next round by clicking on the ">>" button located in the lower right-hand corner of the screen in the box labeled "Controls". Please note that you will be given an opportunity to review and edit your choices upon completion of Part 2 of the experiment.

Figure B.5: Pairwise Choices - Confirmed Choice

Next, you will be asked to make an allocation in another independent decision problem. This process will be repeated until all 9 rounds are completed. At the end of the last round, you must click the "Finish" button, located in the lower right-hand corner of the screen in the box labeled "Controls", and you will be given an opportunity to review your choices. You may use the navigation buttons to move between choices or the "Jump to" feature in the "Edit Panel" to navigate to a specific round. If you are content with your choices, you may exit the review by clicking on the "Finish" button. At this stage you may no longer go back to review and/or edit your choices. Instead, click "OK" to complete the experiment.

Part 1

In Part 1, you will be presented with 22 independent decision problems that are very similar to those in Part 2. However, rather than selecting an allocation from among only two options, now you will have many options to choose from. In each round your available options will be illustrated by a straight line on the graph and you will make your choice by selecting a point on this line. As in Part 2, your payoff in the round that is selected for payment is determined by the number of tokens allocated to each account. Examples of different lines you may face are illustrated in Figure B.6.
Figure B.6: Budget Lines

Figure B.7 illustrates the differences and similarities between the problems in Part 1 and Part 2. In Part 2, you are offered the choice between only two options, A and B. On the other hand, if we were to draw a straight line between these options and allow one to choose any point on this line, then this would increase the number of available choices. Notice, however, that the two original options are still available, as well as many more. Hence, the problems in Part 1 are conceptually the same as in Part 2, but with many more possible allocations.

Figure B.7: Budget Lines - Relationship to Pairwise Choice

The following two examples further illustrate the nature of the problem. If, in a particular round, you were to select an allocation where the amount in one of the accounts is zero, for example if you allocate all tokens to account x and $0 to account y (or vice versa), then in the event that this round is chosen for payment there is a 50% chance you will receive nothing at all, and a 50% chance you will receive the highest possible payment available in that round. In contrast, if you were to select an allocation where the amounts in accounts x and y are equal, then in the event that this round is chosen for payment you will receive this amount regardless of which account is chosen by the coin toss.

Each round begins with the computer selecting a line. As in Part 2, the lines selected for you in different rounds are independent of each other. For example, as illustrated in Figure B.8, choice A represents an allocation in which you allocate approximately 12.8 tokens to the x-account and 40.5 tokens to the y-account. Another possible allocation is choice B, in which you allocate 30.4 tokens to the x-account and 18.4 tokens to the y-account.

Figure B.8: Budget Lines - Examples

To choose an allocation, use the mouse to move the pointer on the computer screen to the allocation that you desire. On the right-hand side of the program dialog window you will be able to see the exact allocation where the pointer is located. Please note that, in each choice, you may only choose an allocation which lies on the line provided. Additionally, if you select an allocation that is close to the x-axis or the y-axis, you will be asked if you would like to select an allocation on the boundary or if you intended for your choice to be as originally selected. Similarly, if you select an allocation that is close to the middle (roughly the same amounts in each account), you will be asked if you would like to select an allocation where the amounts in each account are exactly equal or if you intended for your choice to be as originally selected. The dialog boxes associated with these scenarios are illustrated in Figure B.9.

Figure B.9: Budget Lines - Special Cases

The controls to confirm your choices and navigate between rounds are identical to those described above for Part 2. Once you have finished with all 22 rounds, you will be given an opportunity to review your choices. You may conclude your review by clicking on the finish button in the "Edit Panel" at any time. Once complete, please click on the instructions window in order to move on to Part 2.

Please remember that there are no "right" or "wrong" choices. Your preferences may be different from other participants, and as a result your choices can be different.
Please note that as in all experiments in Economics, the procedures are described fully and all payments are real.

Compensation

After completing both parts of the experiment you will be informed of your payment via an on-screen dialog box. Payments are determined as follows:

The computer will randomly select one decision round from both parts (combined) to carry out. The round selected depends solely on chance and it is equally likely that any particular round will be chosen. The payment dialog box will inform you of which round was randomly chosen as well as your choice in that round. At this point please raise your hand and an experiment coordinator will provide you with a fair coin, e.g. a quarter. To determine your final payoff, please flip the coin. If it lands heads, you will be paid according to the amount of tokens in the x-account, and if it lands tails, you will be paid according to the amount of tokens in the y-account. For both parts of the experiment, tokens are valued at the following conversion rate: 2 tokens = $1.

You will receive your payment, along with the $10 show-up bonus, privately before you leave the lab. You will be asked to sign a receipt acknowledging receipt of your payment, after which time you may leave.

B.2 Subject Consistency

Subject ID  GARP Violations  CCEI    VEI     HM (out of 22)
201         0                1       1       22
202         0                1       1       22
203         7                0.9654  0.9354  20
204         0                1       1       22
205         0                1       1       22
206         0                1       1       22
207         8                0.9839  0.9561  21
Subject ID  GARP Violations  CCEI    VEI     HM (out of 22)
209         0                1       1       22
210         7                0.9761  0.9599  19
301         2                0.9926  0.9850  21
302         0                1       1       22
303         0                1       1       22
305         0                1       1       22
306         2                0.9843  0.9439  21
307         42               0.8822  0.7648  20
308         0                1       1       22
309         0                1       1       22
310         0                1       1       22
401         0                1       1       22
402         0                1       1       22
403         3                0.9998  0.9966  21
404         5                0.9834  0.9500  21
406         23               0.9642  0.7624  19
407         2                0.9895  0.9814  21
408         52               0.8154  0.7054  17
409         0                1       1       22
410         13               0.9873  0.9392  19
501         7                0.9930  0.7562  21
502         2                0.9901  0.9870  21
503         0                1       1       22
504         0                1       1       22
506         0                1       1       22
507         0                1       1       22
509         0                1       1       22
510         1                0.9999  0.9994  22
601         0                1       1       22
602         0                1       1       22
603         0                1       1       22
604         0                1       1       22
605         0                1       1       22
606         2                0.9954  0.9649  21
607         3                0.9999  0.9871  22
608         4                0.9866  0.8719  21
609         0                1       1       22
701         3                0.9999  0.9763  22
702         10               0.9869  0.9461  20
703         0                1       1       22
704         3                0.9999  0.9520  22
706         0                1       1       22
707         2                0.9863  0.9753  21
708         5                0.9449  0.9072  21
801         3                0.9849  0.9198  21
802         0                1       1       22
803         2                0.9944  0.9925  21
804         6                0.9887  0.8940  21
806         53               0.8160  0.6523  17
807         0                1       1       22
808         0                1       1       22
810         0                1       1       22
901         2                0.9720  0.9507  21
902         4                0.9999  0.8781  22
903         0                1       1       22
904         0                1       1       22
905         8                0.9630  0.8207  21
906         9                0.9999  0.7727  22
907         0                1       1       22
908         0                1       1       22
910         5                0.9885  0.9805  20
1001        0                1       1       22
1002        20               0.9282  0.7860  19
1003        0                1       1       22
1004        6                0.9962  0.9887  21
1006        10               0.9760  0.6905  21
1007        0                1       1       22
1008        4                0.9865  0.8000  21
1009        0                1       1       22
1010        23               0.9477  0.8044  20
1101        0                1       1       22
1103        0                1       1       22
1104        115              0.8212  0.4557  18
1105        53               0.9043  0.8432  18
Subject ID  GARP Violations  CCEI    VEI     HM (out of 22)
1201        0                1       1       22
1202        7                0.9495  0.8050  20
1203        13               0.9279  0.9110  20
1204        0                1       1       22
1206        17               0.9061  0.8236  20
1207        71               0.8923  0.7183  18
1208        19               0.9903  0.9538  20
1209        0                1       1       22
1210        4                0.9999  0.8863  22
1301        0                1       1       22
1302        0                1       1       22
1303        4                0.9980  0.8881  20
1304        4                0.9976  0.9526  21
1306        2                0.9885  0.9694  21
1307        6                0.9904  0.9731  20
1309        1                0.9999  0.9981  22

B.3 Recovered Parameters

Subject ID  β_MMI      ρ_MMI      β_NLLS     ρ_NLLS
201         -0.221713  0.697781   0.01415    0.635386
202         6.530845   1.764761   3.028411   1.878731
203         0.24786    1.69243    3.028411   1.878731
204         1.01549    0.625649   1.991065   0.358434
205         0.84631    0.002921   0.788646   0.002009
206         6.144029   1.546505   3.028411   1.878731
207         1.660199   1.905051   3.028411   1.878731
209         -0.02608   0.673757   -0.576872  1.240839
210         0.954469   0.936551   -0.999971  64.043162
301         -0.157565  0.441778   -0.988396  2.852691
302         0.00136    0.000029   0          0
303         -0.258801  0.809647   -0.379997  0.97219
305         0.20769    0.78875    0.052542   0.951558
306         0.421215   0.852282   0.052439   2.743157
307         0.221633   0.488218   0.167997   1.065545
308         0.505396   0.10439    0.668228   0.005277
309         1.04736    0.00101    0.476202   0.005842
310         2.411816   0.308843   3.028411   1.878731
401         0.465768   0.00054    0.527432   0.003918
402         -0.135056  0.398503   0.052513   0.248435
403         -0.003326  0.715614   -0.024457  0.740907
404         -0.001492  0.022619   -0.823642  0.253094
406         0.482449   0.715387   -0.135767  5.125403
407         1.241464   0.091052   1.04075    0.0052
408         -0.046256  0.569245   -0.110486  1.276714
409         0.075757   0.993728   0.182427   1.027514
410         1.031443   2.612045   3.028411   1.878731
501         1.01233    0.162266   0.431488   0.529112
502         1.484122   0.713529   2.137142   0.380603
503         1.046092   0.001009   0.597509   0.007544
504         0.560875   0.302463   0.052562   0.555741
506         0.465627   0.000539   0.663781   0.003376
507         0.049163   0.03758    0.286253   0
509         1.044234   173.617271 3.028411   1.878731
510         1.211409   0.385657   0.138958   1.325956
601         6.530845   1.764761   3.028411   1.878731
602         0.88801    0.00138    0.769469   0
603         6.144029   1.546505   3.028411   1.878731
604         -0.061446  0.795014   -0.000004  0.73637
Subject ID  β_MMI      ρ_MMI      β_NLLS     ρ_NLLS
605         0.884043   0.001513   0.680129   0.003319
606         0.52462    1.622188   0.049981   4.865283
607         0.620074   0.725669   0.723367   11.474313
608         0.186021   0.24178    0.052558   0.256218
609         0.466004   0.000542   0.445994   0.032494
701         0.08967    3.225314   0.509613   3.387376
702         0.677699   2.936895   3.028411   1.878731
703         1.852937   0.566052   3.028411   1.878731
704         0.550355   0.873397   -0.99999   39.514291
706         0.252886   0.282372   -0.097213  0.59904
707         0.051124   0.34272    0.052458   0.372588
708         0.598098   0.146235   -0.029567  0.281133
801         0.988324   0.276886   0.422727   0.515485
802         2.999022   0.001958   3.028411   1.878731
803         0.390006   1.91247    3.028411   1.878731
804         1.511407   0.352793   3.028411   1.878731
806         0.995838   0.385417   3.028411   1.878731
807         0.06344    0.128955   0.411514   0.016138
808         0.0492     0.160092   0.993431   0.015293
810         0.194546   0.001422   0.286253   0
901         -0.201087  0.900982   -0.146894  0.862363
902         0.472117   0.216159   0.052373   0.422738
903         0.105069   0.232653   0.463169   0.001417
904         -0.024833  0.60536    0.027228   0.609379
905         -0.023016  0.761021   0.979883   0.374895
906         1.82981    0.070242   1.02725    0.035203
907         0.242329   1.164847   0.334301   1.119345
908         0.075089   0.389577   -0.333406  0.9075
910         0.335641   2.602216   0.215924   2.290542
1001        0.465132   0.000539   0.05265    0.038936
1002        -0.058357  0.340751   -0.478253  0.512074
1003        2.599346   0.091492   2.185049   0.23228
1004        -0.164134  2.445022   0.052563   2.776343
1006        0.453316   0.164052   0.449011   0.017889
1007        0.464843   0.256055   0.052543   0.47469
1008        0.854468   0.099106   0.363505   0.121577
1009        0.287048   0.147744   0.464252   0.001419
1010        1.269406   0.156918   1.969464   0.213779
1101        -0.08757   1.211851   -0.005626  1.359472
1103        0.937699   1.408285   3.028411   1.878731
1104        0.377095   0.247368   -0.989126  4.261503
1105        0.141643   0.921531   1.697913   0.284095
1201        -0.007936  0.693634   0.015615   0.63481
1202        -0.301817  0.800734   -0.999935  6.398134
1203        -0.16055   0.919311   -0.027781  1.11715
1204        0.005204   0.627573   0.000005   0.713537
1206        0.121235   1.699603   0.45465    6.3011
1207        0.220553   0.701825   -0.713119  4.888681
1208        0.017533   3.117975   0.466995   4.621009
Subject ID  β_MMI      ρ_MMI      β_NLLS     ρ_NLLS
1209        2.245767   0.311644   3.028411   1.878731
1210        0.765334   0.276049   0.466872   0.559876
1301        0.609025   0.394456   0.41749    0.397128
1302        -0.552111  1.04357    -0.997415  4.364271
1303        0.46407    0.216978   0.998859   0.032075
1304        1.735207   0.233454   0.466319   6.17848
1306        0.826506   0.952441   0.052479   3.741967
1307        -0.062699  1.07373    -0.896761  4.078081
1309        0.053247   0.563759   0.020796   0.630314

Appendix C

Appendix for Chapter 3

C.1 Experiment Instructions

Welcome

Welcome to the experiment. Please silence your cell phone and put it away for the duration of the experiment. Additionally, please avoid any discussions with other participants. At any time, if you have any questions please raise your hand and an experiment coordinator will approach you.

Study Procedures

In this study, you will be asked a series of questions. For each, you will have a choice between two mathematical expressions and you will be asked to identify which of the two expressions is greater. Please indicate your choice by drawing a circle around it. You will be asked to answer a total of 30 questions. The questions will be presented to you in sets of 10 and you will be asked to complete three identical sets of questions. Hence each question will be repeated 3 times, once per set. For each set you will be given a time limit to answer all 10 questions. Once the time limit has been reached, your answer sheet will be collected and you will be given the next set of questions. Please make sure to write your name on all three sets of questions.

Compensation

After you have answered all 30 questions, one (and only one) will be randomly selected for payment. If you answered this question correctly, you will be paid $5; otherwise you will be paid nothing. Each question is equally likely to be chosen. You will be paid your earnings, along with a $10 show-up fee, privately before you leave.

Please note that as in all experiments in Economics, the procedures are described fully and all payments are real.
      A                     B
1.    7 + 4                 3 + 6
2.    11 − 3                12 − 5
3.    4 × 2                 7 × 1
4.    12 ÷ 3                10 ÷ 2
5.    9 + 6                 12 + 5
6.    √2116 − 137           √1444 − 777
7.    5 × 4^3.5 − 112       2.45^4 − 4.5^3
8.    3.1^2.8 + √521        4.9^2.5 − 627/242
9.    4.2^1.8 + √17         2.2^3.7 − 3123/721
10.   4367/342 + 3.3^3      843/76 + 2.6^4
