@prefix vivo: . @prefix edm: . @prefix ns0: . @prefix dcterms: . @prefix skos: . vivo:departmentOrSchool "Business, Sauder School of"@en ; edm:dataProvider "DSpace"@en ; ns0:degreeCampus "UBCV"@en ; dcterms:creator "Matsumura, Ella Mae"@en ; dcterms:issued "2010-06-13T16:29:02Z"@en, "1984"@en ; vivo:relatedDegree "Doctor of Philosophy - PhD"@en ; ns0:degreeGrantor "University of British Columbia"@en ; dcterms:description """Agency theory has been used to examine the problem of stewardship of an agent who makes decisions on behalf of a principal who cannot observe the agent's actual effort. Effort is assumed to be personally costly to expend. Therefore, if an agent acts in his or her own interests, there may be a "moral hazard" problem, in which the agent exerts less effort than agreed upon. This dissertation examines this agency problem when the agent's effort is multidimensional, such as when the agent controls several production processes or manages several divisions of a firm. The optimal compensation schemes derived suggest that the widely advocated salary-plus-commission scheme may not be optimal. Furthermore, the information from all tasks should generally be combined in a nonlinear fashion rather than used separately in compensating a manager of several divisions, even if the monetary outcomes are statistically independent. In situations where effort is best interpreted as time, effort can be viewed as being additive. The analysis in this special case shows that the nature of the outcome distribution, including the effect of effort on the mean of the distribution, is critical in determining whether it is optimal for the principal to induce the agent to diversify effort across tasks. These new results and the already existing agency theory results are applied to the sales force management problem, in which the firm wishes to motivate a salesperson to optimally allocate time spent selling the firm's various products. The agency model is also expanded to allow for the agent's observation of the first outcome (which is influenced by the agent's first effort) before choosing the second effort level. The optimal compensation schemes both in the absence of and the presence of a moral hazard problem are derived. The behavior of the second effort strategy is also examined. It is shown that the behavior of the agent's second effort strategy depends on the interaction between wealth and information effects of the first outcome. Results similar to those in the multidimensional effort case are obtained for the question of optimality of diversification of effort when effort is additive."""@en ; edm:aggregatedCHO "https://circle.library.ubc.ca/rest/handle/2429/25642?expand=metadata"@en ; skos:note "MULTIFACETED ASPECTS OF AGENCY RELATIONSHIPS by ELLA MAE MATSUMURA A.B., The University of California, Berkeley, 1974 M.Sc, The University of British Columbia, 1976 A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY in THE FACULTY OF GRADUATE STUDIES Faculty of Commerce and Business Administration We accept this thesis as conforming to the required standard THE UNIVERSITY OF BRITISH COLUMBIA September 1984 © Ella Mae Matsumura, 1984 In p r e s e n t i n g t h i s t h e s i s i n p a r t i a l f u l f i l m e n t of the requirements f o r an advanced degree at the U n i v e r s i t y o f B r i t i s h Columbia, I agree t h a t the L i b r a r y s h a l l make i t f r e e l y a v a i l a b l e f o r r e f e r e n c e and study. 
I f u r t h e r agree t h a t p e r m i s s i o n f o r e x t e n s i v e copying o f t h i s t h e s i s f o r s c h o l a r l y purposes may be granted by the head o f my department or by h i s or her r e p r e s e n t a t i v e s . I t i s understood t h a t copying or p u b l i c a t i o n of t h i s t h e s i s f o r f i n a n c i a l gain s h a l l not be allowed without my w r i t t e n p e r m i s s i o n . Department of Co mme-ree. ^ &us/'ri£SS Aden,nisir~cut/or. The U n i v e r s i t y of B r i t i s h Columbia 1956 Main Mall Vancouver, Canada V6T 1Y3 E-6 (3/81) i i ABSTRACT Agency theory has been used to examine the problem of stewardship of an agent who makes decisions on behalf of a principal who cannot observe the agent's actual effort. Effort i s assumed to be personally costly to expend. Therefore, i f an agent acts in his or her own interests, there may be a \"moral hazard\" problem, in which the agent exerts less effort than agreed upon. This dissertation examines this agency problem when the agent's effort i s multidimensional, such as when the agent controls several produc-tion processes or manages several divisions of a firm. The optimal compen-sation schemes derived suggest that the widely advocated salary-plus-commis-sion scheme may not be optimal. Furthermore, the information from a l l tasks should generally be combined in a nonlinear fashion rather than used sepa-rately i n compensating a manager of several divisions, even i f the monetary outcomes are s t a t i s t i c a l l y independent. In situations where effort i s best interpreted as time, effort can be viewed as being additive. The analysis in this special case shows that the nature of the outcome distribution, including the effect of effort on the mean of the distribution, i s c r i t i c a l in determining whether i t i s optimal for the principal to induce the agent to diversify effort across tasks. These new results and the already existing agency theory results are applied to the sales force management problem, in which the firm wishes to motivate a salesperson to optimally allocate time spent selling the firm's various products. The agency model is also expanded to allow for the agent's observation of the f i r s t outcome (which is influenced by the agent's f i r s t effort) before choosing the second effort level. The optimal compensation schemes i i i both in the absence of and the presence of a moral hazard problem are derived. The behavior of the second effort strategy i s also examined. It is shown that the behavior of the agent's second effort strategy depends on the interaction between wealth and information effects of the f i r s t outcome. Results similar to those in the multidimensional effort case are obtained for the question of optimality of diversification of effort when effort i s additive. iv TABLE OF CONTENTS Page ABSTRACT i i LIST OF TABLES v i ACKNOWLEDGEMENTS v i i CHAPTER 1. INTRODUCTION 1 CHAPTER 2. NOTATION AND FORMULATION 5 Chapter 2 Footnotes 8 CHAPTER 3. ALLOCATION OF EFFORT 9 3.1 Firs t Best 10 3.2 Second Best 11 3.3 Value of Additional Information 15 3.4 Additive Separability of the Sharing Rule 22 3.5 Additive Effort 25 3.6 Application to Sales Force Management 36 3.7 Summary and Discussion 48 Chapter 3 Footnotes . 54 CHAPTER 4. ONE-PERIOD SEQUENTIAL CHOICE 55 4.1 First Best 60 4.2 Second Best 63 4.3 Additive Separability of the Sharing Rule 74 4.4 Additive Effort 76 4.5 Summary and Discussion 84 CHAPTER 5. 
SUGGESTED FURTHER RESEARCH 88 5.1 Theoretical Agency Extensions 88 5.2 Application to Variance Investigation 89 BIBLIOGRAPHY 91 V APPENDIX 1. ONE-PARAMETER EXPONENTIAL FAMILY OF DISTRIBUTIONS 94 APPENDIX 2. NORMAL DISTRIBUTION CALCULATIONS 102 APPENDIX 3. CHAPTER 3 PROOFS 108 APPENDIX 4. CHAPTER 4 PROOFS 132 v i LIST OF TABLES Page I. Examples in One-product Case 40 II. One-parameter Exponential Family Q 94 v i i ACKNOWLEDGEMENTS I would like to thank Professor Jerry Feltham amd Professor John Butterworth for the many hours they spent discussing ideas and results with me. Their comments often pointed me in new directions or helped me to look at problems from new perspectives. I am grateful to Professor Tracy Lewis for serving on my dissertation committee. I would also like to thank my husband, Kam-Wah Tsui, for his technical assistance while I was writing this dissertation, and for his unfailing encouragement and moral support during the entire time I was in the Ph.D. program. Finally, I gratefuly acknowl-edge the financial support of the University of British Columbia and of the Social Sciences and Humanities Research Council of Canada. 1 CHAPTER 1 INTRODUCTION Managerial accounting has traditionally been associated with the valua-tion of inventories for external reporting and with information provision for internal decision making and control. Broadly speaking, the internal decision making relates to the planning of operations and the control of decentralized organizations. Variance analysis, budgeting, cost-volume-profit analysis, and the development of performance evaluation measures are typical components of the planning and control processes. There are a number of different approaches to gaining a better under-standing of the role of the accounting system in the control of decentral-ized operations. Since an accounting system is an information system, any research on the value of information, the demand for information, or the roles or uses of information has potential implications for accounting research. The body of research which examines such information issues has come to be known as information economics. Information economics uses for-mal economic models in order to study the demand for information for deci-sion making and performance evaluation purposes. In particular, information economics attempts to find economic explanations for why certain phenomena are observed (e.g., Demski and Feltham, 1978), and to uncover insights about behavior thought to be nonoptimal (e.g., Zimmerman, 1979) or behavior thought to be optimal (e.g., Baiman and Demski, 1980b). Much of the early information economics literature focused on essen-t i a l l y single-person decision situations (e.g., Demski and Feltham, 1976, and Feltham, 1977a), where information serves only a decision-facilitating purpose. That i s , the decision maker uses information about the uncertain state of nature to revise his or her beliefs about the decision environment. Thus, the demand for this type of information might be called decision-mak-2 ing demand. The recent information economics literature has incorporated agency theory in explicitly modeling the multiperson nature of accounting problems (e.g., Baiman and Demski, 1980a, 1980b, Gjesdal, 1981, and Holmstrom, 1977). In multiperson situations, information can play a deci- sion-influencing role. 
For example, i f a manager's actions affect actual production costs, and the manager is evaluated and possibly compensated on the basis of the costs, then the manager's actions w i l l be influenced by the existence of the information system which reports the costs. The demand for this type of information might be called performance evaluation demand, or stewardship demand. This dissertation uses the agency framework to examine some of the issues in the development of performance evaluation measures for motiva-tional purposes. The basic agency model provides a means of studying situa-tions in which one individual (the principal) delegates the selection of actions to another individual (the agent). Within the context of the firm, the principal might be the employer or superior and the agent might be the employee or subordinate. The agency theory literature (e.g., Harris and Raviv, 1979, Holmstrom, 1979) uses the expected u t i l i t y model to represent the preferences of the principal and the agent, and generally assumes that the agent's action (effort) and a random state of nature determine the mone-tary outcome. The sharing rule (contract or compensation scheme) offered by the principal to the agent specifies how much is paid to the agent for each possible value of some performance measure or measures. The performance measure is often taken to be the monetary outcome, or the monetary outcome and an imperfect signal about the agent's effort. The compensation can be based only on what is jointly observable to the principal and the agent, and the compensation must be adequate enough to induce the agent to work for the 3 principal. Alternative employment opportunities for the agent are thus exp l i c i t l y considered. The principal w i l l generally find i t prohibitively costly to continu-ously monitor the agent to determine what action (effort) the agent chooses. Therefore, i f the agent has d i s u t i l i t y for effort and acts in his or her own self-interest, the potential for a moral hazard (incentive) problem exists because of the principal's in a b i l i t y to observe the agent's actions. If the principal pays the agent a fixed wage, the agent has no economic incentive to perform the agreed level of effort, since a low outcome can be blamed on a bad state of nature rather than on shirking by the agent. At the other extreme, i f the principal rents capital or rents the firm to the agent for a fixed fee so that the agent gets the outcome less a fixed fee, the shirking problem can be avoided entirely. The shortcoming of this type of contract is that i t imposes a nonoptimal amount of risk on the agent. That i s , the principal and the agent could be made better off in an expected u t i l i t y sense by using some other contract. Agency theory provides a framework in which i t is possible to find com-pensation schemes which efficiently motivate the agent to choose the desired actions. The idea is to create incentives through an employment contract which imposes some risk on the agent in order to provide incentives for the agent to expend some agreed level of effort. The consequences of the exis-tence of nonmonetary returns or costs, such as effort, can thus be analyzed. This is important for the analysis of performance evaluation and managerial control systems, where incentive effects play a c r i t i c a l role. The choice of variables on which compensation is to be based can be formally derived, with implications for the design of information systems. 
Furthermore, the analysis clearly demonstrates how the information obtained can be incorporated for motivational purposes.

Most of the existing agency theory research (see Baiman (1982) for a comprehensive survey) has a rather narrow definition of effort, in that effort is assumed to be single-dimensional. However, people are often faced with several similar tasks which must be performed within one time period. Examples include a salesperson selling several products for a firm, an auditor allocating time to different tasks in an audit assignment, a manager controlling several production processes, or a manager overseeing several divisions of a company. The problem of motivating the optimal allocation of effort within one period is not only interesting in its own right, but also has possible implications for multiperiod problems, where effort is allocated across periods. Multiperiod problems are of interest because the eventual goal is to be able to analyze and understand the issues involved when there are current and long-term consequences of decisions, as there are in many accounting settings.

Chapter 2 of this dissertation contains the notation used in the remainder of the paper and a formulation of the agency problem with allocation of effort. Chapter 3 describes theoretical results and an application in the allocation setting, and Chapter 4 describes results in the one-period sequential choice setting. In this scenario, after each effort level is exerted, an associated outcome is observed by the agent before the next effort level is exerted. The agent is compensated only at the end of the sequence of outcomes. The one-period sequential choice case is an intermediate step between the allocation of effort case, in which both the efforts are exerted before the outcomes are known, and the multiperiod case, in which the first outcome is observed and the first compensation is paid before the second effort is exerted. Chapter 5 concludes the dissertation with an outline of proposed future research. All technical calculations and proofs appear in the appendices.

CHAPTER 2
NOTATION AND FORMULATION

In order to state the agency problem with allocation of effort, the following notation will be used:

R = the set of all real numbers,
R₊ = the set of all nonnegative real numbers,
X = the set of possible monetary outcomes,
x ∈ X ⊂ R is the monetary outcome,
x̲ = (x₁, ..., xₙ) is a disaggregation of the monetary outcome x, i.e., x = Σ_{i=1}^{n} x_i,
w ∈ R^k is a k-dimensional vector-valued performance measure, e.g., w = x̲ with k = n,
s(.), a real-valued function, is a sharing rule over the arguments indicated, with s(.) ∈ [s₀, s̄],
a_i = effort expended on task i, i = 1, ..., d,
a̲ = (a₁, ..., a_d) ∈ A ⊂ R₊^d,
f(x, w | a̲) is the joint density of x and w conditional on a̲, and is understood to be f(x | a̲) if w is a function of x; g(.), h(.), and φ(.) will also be used to denote probability distributions,
U(.): R → R is the agent's utility function over money, where U′ > 0 and U″ ≤ 0,
V(.): R₊^d → R is the agent's disutility function over effort, where ∂V/∂a_i > 0 and ∂²V/∂a_i² > 0,
ū = the agent's minimum acceptable utility level,
W(.): R → R is the principal's utility function over money, where W′ > 0 and W″ ≤ 0,
argmax{.} = the set of arguments maximizing the expression in braces.
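To keep this notation straight in what follows, here is a minimal computational sketch that bundles the model primitives into one object. The particular functional forms chosen (square-root utility for the agent, a risk-neutral principal, quadratic disutility of total effort) are illustrative assumptions for concreteness, not the specification used in the dissertation.

```python
from dataclasses import dataclass
from typing import Callable, Tuple
import math

@dataclass
class AgencyProblem:
    """Primitives of the allocation-of-effort agency problem (Chapter 2 notation)."""
    U: Callable[[float], float]              # agent's utility over money, U' > 0, U'' <= 0
    V: Callable[[Tuple[float, ...]], float]  # agent's disutility over the effort vector a
    W: Callable[[float], float]              # principal's utility over money
    u_bar: float                             # agent's minimum acceptable utility level
    s_bounds: Tuple[float, float]            # sharing rule bounds [s_0, s_bar]

# Illustrative instance; every functional form and number below is an assumption.
example = AgencyProblem(
    U=lambda s: 2.0 * math.sqrt(s),
    V=lambda a: 0.5 * sum(a) ** 2,           # additive effort: disutility of total time
    W=lambda y: y,                           # risk-neutral principal
    u_bar=10.0,
    s_bounds=(0.0, 1_000.0),
)

# The agent's objective for a sharing rule s(w) and effort a is E[U(s(w)) | a] - V(a);
# the participation constraint (2.2) requires this to be at least u_bar.
```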
In order to avoid side-betting issues, it will be assumed that the principal and the agent have identical beliefs about the conditional probability distribution over the outcome and performance measure, given effort a̲. As in much of the agency literature, the agent's utility function is assumed to be of the form U(s) − V(a̲). In most of the agency literature, n = d = 1. The principal's problem is

Maximize_{s(.), a̲}  ∫∫ W(x − s(w)) f(x, w | a̲) dw dx                      (2.1)

subject to  ∫∫ [U(s(w)) − V(a̲)] f(x, w | a̲) dw dx ≥ ū,                     (2.2)

            a̲ ∈ argmax { ∫∫ [U(s(w)) − V(a̲)] f(x, w | a̲) dw dx }.          (2.3)

It will be assumed that (2.3) can be replaced with the conditions

∂/∂a_i ∫∫ [U(s(w)) − V(a̲)] f(x, w | a̲) dw dx = 0,   i = 1, ..., d.         (2.4)

Furthermore, sufficient regularity to allow differentiation inside the integral is assumed. This permits the replacement of (2.4) with

∫∫ U(s(w)) f_{a_i}(x, w | a̲) dw dx = V_{a_i}(a̲),   i = 1, ..., d,          (2.5)

with subscripts a_i denoting partial differentiation with respect to a_i. The principal's problem is solved by means of a generalized Lagrangian technique. A Hamiltonian (Lagrangian) is formed by attaching a multiplier λ to (2.2) and multipliers μ_i to each constraint in (2.5). It will be assumed that the supports of x and w do not vary as a̲ varies, and that the partial derivatives of f with respect to each a_i exist and are nondegenerate. The dimension d is often taken to be equal to n, and the marginal cumulative distribution functions are assumed to satisfy first order stochastic dominance. That is, if F_i(x_i | a_i) is the marginal cumulative distribution function of x_i, then ∂F_i(x_i | a_i)/∂a_i < 0, i = 1, ..., n. In a framework where x_i = h_i(a_i, θ), where θ represents state uncertainty, if x_i is increasing in a_i (i.e., ∂h_i/∂a_i > 0 for all θ), then ∂F_i(x_i | a_i)/∂a_i < 0. Finally, the sharing rule is assumed to be measurable and bounded. For the most part, interior solutions will be examined.

Some of the results will make use of two special classes of functions. The first is the HARA (hyperbolic absolute risk aversion) class of utility functions, whose risk aversion functions are of the form

−U″(x)/U′(x) = 1/(Cx + D).                                                  (2.6)

The C = 1 case corresponds to U(x) = ln(x + D), the C = 0 case corresponds to U(x) = −exp[−x/D], and the other cases correspond to power utility functions.

The other class of interest is the one-parameter exponential family of distributions. This class includes the exponential, gamma (with the shape parameter fixed), normal (with constant variance), and Poisson distributions. The following representation differs slightly from the usual one for a one-parameter exponential family (see, e.g., DeGroot, 1970).

Definition: A probability density function f(x | a) with respect to the measure r(.) will be said to belong to the one-parameter exponential family Q if it can be written as

f(x | a) = exp[z(a)x − B(z(a))] h(x),                                       (2.7)

where r(.) is the Lebesgue measure when the random variable x is absolutely continuous, and r(.) is some counting measure when x is discrete.

The representation in (2.7) has the advantage that closed-form expressions can be obtained for E(x | a) and Var(x | a). In particular, E(x | a) = B′(z(a)) and Var(x | a) = B″(z(a)) (Peng, 1975). Table II in Appendix 1 details the representations of some familiar distributions. The remainder of Appendix 1 consists of calculations which are useful in the proofs of the results in Chapters 3 and 4.
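As a quick numerical sanity check on the representation (2.7), the sketch below instantiates the Poisson member of Q with mean M(a) = ka, for which z(a) = ln(ka), B(z) = e^z, and h(x) = 1/x!, and confirms the identities E(x|a) = B′(z(a)) and Var(x|a) = B″(z(a)). The values of k and a are arbitrary assumptions.

```python
import numpy as np

# Poisson member of the one-parameter exponential family Q:
#   f(x|a) = exp[z(a) x - B(z(a))] h(x), with z(a) = ln(k a), B(z) = exp(z), h(x) = 1/x!,
# so that E(x|a) = B'(z(a)) = k a and Var(x|a) = B''(z(a)) = k a.
k, a = 2.0, 1.5                      # illustrative productivity and effort (assumptions)
z = np.log(k * a)
B = lambda t: np.exp(t)

xs = np.arange(0, 200)               # truncation point is large enough to be negligible
log_h = -np.array([np.sum(np.log(np.arange(1, x + 1))) for x in xs])   # log(1/x!)
pmf = np.exp(z * xs - B(z) + log_h)

mean = np.sum(xs * pmf)
var = np.sum((xs - mean) ** 2 * pmf)
print(mean, k * a)                   # both approximately 3.0  (= B'(z(a)))
print(var, k * a)                    # both approximately 3.0  (= B''(z(a)))
```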
8 CHAPTER 2 FOOTNOTES If the sharing rule is unbounded, an optimal solution may not exist (Mirrlees, 1974; Holmstrom, 1977, 1979). Furthermore, the agent's wealth places bounds on the possible sharing rules. Gjesdal (1981) has shown that such a u t i l i t y function for the agent ensures that nonrandomized payment schedules are Pareto optimal. His result refers to ex post (after effort selection by the agent) randomi-zation only. Fellingham, Kwon, and Newman (1983) have shown that ex ante randomization of payment schedules is optimal under certain condi-tions. It w i l l be assumed in what follows that these conditions are not satisfied, and hence the focus is on pure (nonrandomized) payment schedules. That i s , the focus w i l l be on the first-order conditions, which apply to interior solutions. 9 CHAPTER 3 ALLOCATION OF EFFORT As stated earlier, the agency theory framework explicitly recognizes alternative employment opportunities for the agent, d i s u t i l i t y for effort, risk aversion of the agent, and the possibility of the principal obtaining information about the agent's effort, a l l for situations in which the agent has one task to perform. However, in many situations, job effort is multi-dimensional; the agent must allocate effort to several different, but possi-bly related tasks. In spite of the variety of situations in which multidi-mensional job effort occurs, l i t t l e attention has been devoted to character-izing optimal compensation schemes for these situations. S t i g l i t z (1975) considered multidimensional job effort under linear incentive schemes, and Weinberg (1975) sought an incentive compatible scheme for the problem of sales force management in multiproduct firms. Radner and Rothschild (1975) examined the properties of three heuristic strategies an agent might employ when faced with the problem of allocating effort. More recently, Gjesdal (1982) allowed for multidimensional effort and focused on the value of information. The focus of this chapter is the characterization of optimal incentive schemes for the agency problem with allocation of effort across several tasks. The issues of separability of the optimal sharing rule across tasks and the value of additional information are examined, and the results sug-gest that certain compensation schemes that are widely advocated may not be optimal. In particular, commission schemes and linear sharing rules are shown not to be optimal, in general. The special case of additive effort is discussed, and the results are applied to the problem of sales force manage-ment . 10 3.1 FIRST BEST Suppose that in addition to observing the aggregated or disaggregated outcome (i.e., w = x or w = x), the principal can observe the agent's effort. These cases may be called complete contractual information cases, since the principal can observe the agent's choice of effort. These \" f i r s t best\" situations are interesting as benchmarks for comparison with \"second best\" situations, those in which there is less than complete contractual information. The characterizations of the optimal sharing rule for these f i r s t best cases are obtained by solving the problem given by (2.1) and (2.2). As in the single-dimensional effort case, i f one individual is risk neutral and the other is risk averse, then the risk neutral individual bears a l l the risk. 
Thus, if the agent is risk neutral (U(s) = s) and the principal is risk averse, then Pareto optimal sharing rules are s(x) = x − k and s(x̲) = x − k, where k is a fixed fee paid to the principal. Conversely, if the principal is risk neutral and the agent is not, the principal bears the risk, receiving x − c, while the agent receives a constant wage c. In the event that both the principal and the agent are risk neutral, the a_i's are chosen so that the agent's marginal disutility for effort equals the marginal increase in the expected outcome (i.e., so that ∂E(x|a̲)/∂a_i = ∂V/∂a_i, i = 1, ..., d), and the sharing rule can be taken as s(.) = ū + V(a̲*), with the principal receiving E(x|a̲*) − ū − V(a̲*). If both the agent and the principal are risk averse, then they each bear part of the risk, as indicated in Proposition 3.1.1 below.

Proposition 3.1.1. If both the agent and the principal are risk averse and they have homogeneous beliefs, then s(x̲) varies only with x in the first best case.

Because the optimal sharing rule depends only on x, s(x̲) is the same for all x̲ that provide the same total x. The sharing rule s(x̲) therefore varies with x only for risk-sharing purposes; the makeup of x is unimportant. Moreover, it is easily seen that s(x) is increasing in each x_i, regardless of the properties of the conditional distribution function on x̲. This is in contrast to the second best solution.

3.2 SECOND BEST

Suppose now that the principal cannot observe the agent's effort, and hence must present the agent with a sharing rule which induces the desired choice of effort. Since the focus in most of what follows is on motivational, rather than risk-sharing issues, it will be assumed that, unless otherwise stated, the principal is risk neutral and the agent is risk averse. As remarked above, if there were no moral hazard problem, the principal would then bear all the risk. Whatever risk is imposed on the agent in the second best case is thus imposed not for risk-sharing purposes, but rather for motivational purposes.

Letting f_{a_i} denote ∂f/∂a_i, the optimal sharing rule, given that only x is observed, is characterized by

1/U′(s(x)) = λ + Σ_{i=1}^{d} μ_i f_{a_i}(x|a̲*)/f(x|a̲*)                     (3.2.1)

for almost every x such that s(x) ∈ [s₀, s̄]. For all other x, s(x) = s₀ if the left hand side of (3.2.1) is greater than the right hand side, and s(x) = s̄ if the opposite is true.

For example, suppose that n = 2, U(s) = ln s, x₁ and x₂ are independent, x₁ ~ N(a₁, σ₁²), and x₂ ~ N(a₂, σ₂²). Then x ~ N(a₁ + a₂, σ₁² + σ₂²) and 1/U′(s) = s. The interior portion of the optimal sharing rule is thus (see Table II in Appendix 1 for the normal density)

s(x) = λ + (μ₁ + μ₂)(x − a₁* − a₂*)/(σ₁² + σ₂²)
     = [λ − (μ₁ + μ₂)(a₁* + a₂*)/(σ₁² + σ₂²)] + [(μ₁ + μ₂)/(σ₁² + σ₂²)] x,

which can be interpreted as a compensation scheme consisting of a fixed portion plus a commission. If the agent's utility function is U(s) = 1 − e^{−s}, then the interior portion of the optimal sharing rule is

s(x) = ln[λ + (μ₁ + μ₂)(x − a₁* − a₂*)/(σ₁² + σ₂²)].

In general, if the principal is risk neutral and the agent's utility function is in the HARA class, with risk aversion function given by −U″(s)/U′(s) = 1/(Cs + D), then the interior portion of the optimal sharing rule is

s(x) = (1/C)[(λ + Σ_{i=1}^{d} μ_i f_{a_i}(x|a̲)/f(x|a̲))^C − D],   if C ≠ 0,
                                                                            (3.2.2)
s(x) = D ln(λ + Σ_{i=1}^{d} μ_i f_{a_i}(x|a̲)/f(x|a̲)),            if C = 0.

(C = 1 corresponds to U(s) = ln(s + D), C = 0 corresponds to U(s) = −e^{−s/D}, and the other cases correspond to power utility functions.)
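A small numerical sketch makes the fixed-portion-plus-commission reading of the log-utility example explicit. The Lagrange multipliers λ, μ₁, μ₂ and the induced efforts would in practice come from solving the full program; the numbers below are placeholder assumptions chosen only to show the affine shape of the rule.

```python
# Interior sharing rule from (3.2.1) when U(s) = ln s and x = x1 + x2 with
# independent normal components x_i ~ N(a_i, sigma_i^2):
#     s(x) = lam + (mu1 + mu2) * (x - a1 - a2) / (sigma1**2 + sigma2**2),
# i.e. a fixed salary plus a constant commission rate on the total outcome x.
# lam, mu, a, sigma are placeholder values, not solved-for multipliers.

def sharing_rule(x, lam=5.0, mu=(1.0, 2.0), a=(3.0, 4.0), sigma=(1.0, 1.5)):
    commission_rate = sum(mu) / (sigma[0] ** 2 + sigma[1] ** 2)
    salary = lam - commission_rate * sum(a)
    return salary + commission_rate * x

for x in (5.0, 7.0, 9.0):
    print(x, round(sharing_rule(x), 3))
# The payment is affine in x: each extra unit of total outcome raises pay by
# (mu1 + mu2) / (sigma1^2 + sigma2^2), regardless of how the outcome splits
# across the two tasks.
```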
As in the single-task setting, the f i r s t best solution is achievable with a fixed fee going to the principal when the agent is risk neutral. This can be deduced by interpreting effort to be a vector rather than a scalar in the single-task setting proofs (e.g., Shavell, 1979). Thus, even though less than complete contractual information Is available, the princi-pal and the agent can obtain the same expected u t i l i t i e s as they could in 13 the complete contractual information case. This is because in effect, the risk-neutral agent rents the firm from the principal for a fixed fee. When U(.) is s t r i c t l y concave, equation (3.2.1) implies that a neces-2 sary and sufficient condition for s(x) to be nondecreasing in x is d f a ^ \"l -Rt f(x|a«) * >- ° < 3 ' 2 ' 3 ) for a l l x corresponding to interior solutions, where the y^'s are the Lagrangian multipliers associated with the optimal solution (a ,s(x)). When a is one-dimensional, (3.2.3) reduces to a f a(x|a*) -5* forr^n i >- °> < 3 - 2 - 4 > since y is positive (Holmstrom, 1979). If (3.2.4) is true for a l l a* e A, then f(x|a) has the monotone likelihood ratio property in x (Lehmann, 1959, p. 111). Many distributions, including a l l those in the one-parameter expo-nential family, have the monotone likelihood ratio property; this property is a stronger ordering on distributions than is first-order stochastic domi-nance (Lehmann, 1959, pp. 73-74). If a_ is multidimensional and the Lagrangian multipliers y^ are a l l nonnegative, then a sufficient condition for s(x) to be nondecreasing in x i s f a (xja*) IT1 f(x|a*) 1 ^ °' f o r a 1 1 x« i = 1 . - - - » d -When a is single-dimensional, the first-order stochastic dominance property means that as a increases, the distribution f(x|a) shifts to the right. It is this property that accounts for the monotonicity of the opti-mal sharing rule. When a_ is multidimensional, the problem of determining an ordering over the effort vectors arises. Condition (3.2.3) states that the directional derivative of log f(x[a) in the direction of y = (y^,...,u^) at 14 a* be nonnegatlve i n order for the optimal sharing rule to be monotonic. Thus, u provides a d i r e c t i o n i n which to measure the s h i f t i n g of f(x|a_). Because of the c r i t i c a l role that the m u l t i p l i e r s play i n the shar-ing r u l e , i t i s of i n t e r e s t to try to determine whether they are s t r i c t l y p o s i t i v e . A p a r t i a l answer i s provided i n Proposition 3.2.1 below, for the s i t u a t i o n i n which the vector x_ = ( x ^ , X 2 ) i s observed. Proposition 3.2.1. Suppose x^ = (x^,X2), f(xj.a) = g(x^|a^)h(x2|a2), and F a ( x i l a i ) < ^» i =l»2> with s t r i c t i n e q u a l i t y for some x^ values. Suppose further that the agent's expected u t i l i t y i s s t r i c t l y concave i n a_. Then at least one of and must be p o s i t i v e . It can be shown that i f the agent's u t i l i t y for wealth i s U(s) = 2^s, V(a) i s s t r i c t l y convex i n a, and g( •) and h( •) are exponential d i s t r i b u -tions with means a^ and a2» respectively, then the agent's expected u t i l i t y i s s t r i c t l y concave i n a. In this case, i f the p r i n c i p a l i s r i s k neutral, then both and are p o s i t i v e . Proposition 3.5.9 i n section 3.5 provides other conditions under which both and are p o s i t i v e . A f i n a l remark on the c h a r a c t e r i s t i c s of the optimal sharing rule can be made at t h i s point. 
Suppose f(xjj0 i s of the form given i n Proposition d i 3.2.1, i . e . , f(x|a) = II g (x |a ), where the superscripts on g( •) are i=l 1 merely i n d i c e s . The optimal sharing rules i s characterized as i n (3.2.1), with x replaced by _x. In the s p e c i a l case under consideration, this reduces 1 d to u t ( s ( x ) ) = x + 2 \\Za (x ± |a*)/g(x ± |a*) . I f , further, each gi( •) belongs to the one-parameter exponential family Q described by (2.7), then each g* ( »)/g*( *) i s a constant m u l t i p l i e d by (x^ - Hj^a^)), where Mj^a^ i s a i the mean of x^ given a^. The means M^(a^) can be thought of as standards or norms, so that the optimal sharing rule i s a function of deviations from standards ( c f . Christensen, 1982). This i s consistent with managerial 15 accounting's focus on variances (deviations from standards) as an aid in performance evaluation. 3.3 VALUE OF ADDITIONAL INFORMATION A question which naturally arises at this point i s : Would the princi-pal be better off knowing each rather than only x? That i s , w i l l the principal always be s t r i c t l y better off with disaggregated or finer informa-tion? More generally, under what conditions w i l l the principal or the agent be made s t r i c t l y better off by information in addition to the aggregate out-come, x? Intuitively, the more (imperfect) information the principal has about the agent's effort, the more effici e n t l y the agent can be motivated to exert effort. Consequently, the principal's expected u t i l i t y should increase in most situations where additional information is available. A number of peo-ple have addressed this problem. Holmstrom (1979), for example, showed that i f the additional information is of value (that i s , i f i t s optimal use w i l l lead to a Pareto superior pair of expected u t i l i t i e s for the principal and the agent), then the additional information must be informative in the sense that i t contains information about the agent's effort that is not contained in the output. The converse was also shown to be true. Gjesdal (1981, 1982) examined the relationship between Blackwell (1953) informativeness and the value of information. In order to define Blackwell informativeness, let ft be a set of possi-ble performance measures to. In this section, u> is assumed to include the output, x. An information system n is a function from ft to some signal space Y. Let y denote an arbitrary element in Y and let A, the set of a l l possible actions, be f i n i t e . Information system n is Blackwell more infor- mative than another system y : -»• Z i f and only i f P (z|a) = 16 / P(z|y)dP (y|a) for each action a in A and each signal z in Z. It should Y 11 be noted that although n is said to be Blackwell more informative than y , n may actually be only equally informative as y i s . Amershi (1982, Appendix 1) has generalized the definition for the case where A is i n f i n i t e . Amershi (1982) re-examined the value of additional Information problem, and corrected and generalized the results of Holmstrom (1979, 1982) and Gjesdal (1981, 1982). Amershi (1982) showed that a risk neutral principal weakly prefers an information system that is Blackwell more informative than another. That i s , the principal's and the agent's expected u t i l i t i e s are at least as high with the Blackwell more informative system than with the other. 
A risk averse principal requires that the Blackwell more informative system also provide a specific form of information about the output (see Proposition 3.3.1 below). These results differ from the single-person deci-sion maker case, where risk attitudes are immaterial. Intuitively, a risk neutral principal is concerned only with the incentive properties of contracts, whereas a risk averse principal is concerned with both the incen-tive and risk-sharing properties of contracts. The risk-sharing aspect accounts for the conditions on the output in Proposition 3.3.1 below. More specifically, Proposition 3.3.1 says that information system n is at least as preferred as information system y i f n is Blackwell more infor-mative than y with respect to the effort, a, and (i) there is no risk-shar-ing involved, or ( i i ) y says nothing more about the output x than n does, or ( i i i ) the signal provided by n is enough to determine the output. Proposition 3.3.1 (Amershi (1982, Theorem 3.1)). Let an information system n : ft -»• Y be more informative in the Blackwell sense than the system y : ft -*• Z with respect to the family of measures P^ = {p(o>|a) : aeA}. Sup-pose also, at least one of the following conditions hold: (i) The principal is risk neutral, ( i i ) The output variable and the information system y are 17 conditionally independent given n. ( I i i ) The output can be expressed as x = h(n(u))) for some measurable function h : Y * R. Then the principal weakly prefers n over y. In this proposition and in the other propositions in this section, to is a vector of performance measures that includes the output, x. Although the effort variable, a, is taken to be single dimensional, the proof holds for finite-dimensional effort vectors as well. Proposition 3.3.1 identifies conditions under which information system n is at least as preferred to information system y. It is of interest to identify conditions under which n is s t r i c t l y preferred to y. Amershi's (1982) s t r i c t preference results rely on the concept of suf-ficient s t a t i s t i c s . Using the notation above, a st a t i s t i c T : ft + K is suf- ficient for the family of measures P^ = {P(to|a) : aeAJ i f and only If there exists a nonnegative function h : ft + R + and functions g( *|a) : K •*• R such that f(to|a) = h( to)g(T( to) |a) for a l l weft and aeA, where f( •) is a density i f the random variable is continuous, or a mass function i f the random variable is discrete. A sufficient s t a t i s t i c may be viewed as an information system. A minimal sufficient s t a t i s t i c is a s u f f i -cient s t a t i s t i c T : ft -*• L that is a function of every other sufficient sta-t i s t i c . An, agency sufficient s t a t i s t i c (Amershi, 1982) \"F is equal to a suf-ficient s t a t i s t i c T on ft i f the principal is risk neutral, or (X,T) i f the principal is risk averse. *P is called a minimal agency sufficient s t a t i s t i c i f the sufficient s t a t i s t i c T is minimal. Finally, a contract (s*,a*) is called a best agency contract i f there is no other contract based on any information system on ft that is s t r i c t l y preferred to i t . The proposition below uses the concept of agency sufficient statistics to characterize s t r i c t preferences for information systems. Essentially, 18 the principal w i l l s t r i c t l y prefer an agency sufficient s t a t i s t i c n over another system y which does not generate a best contract. 
This is because a best contract must be a function of the minimal agency sufficient s t a t i s t i c , which extracts a l l relevant information from cu about a (Amershi (1982, Cor-ollary 3.3)). Proposition 3.3.2 below provides a situation in which the information system y cannot generate a best contract. Proposition 3.3.2 (Amershi (1982, Proposition 3.4)). Suppose a best con-tract exists and at each best agency contract, where 3 is an information system which leads to a best agency contract. The principal s t r i c t l y prefers an agency sufficient s t a t i s t i c n over a system y If -sg- log f(oj|a1^) is not a function of Y i f the principal is risk neutral (or not a function of (x, Y) i f the principal is risk averse). Here (s*,a*) is the optimal contract based on Y' Proposition 3.3.2 holds for the multidimensional effort case, with cations of Amershi's (1982) corollaries to his Proposition 3.4. Their proofs are immediate. Corollary 3.3.3 deals with the situation in which an addi-tional signal z would be of positive value given an information system which reports the outcome x and another signal y. As in Proposition 3.3.2, a con-dition is provided which implies that Y(x,y,z) = (x,y) cannot generate a best contract. Since n(x,y,z) = (x,y,z) is t r i v i a l l y a sufficient s t a t i s -t i c , Corollary 3.3.3 follows directly from Proposition 3.3.2. Corollary 3.3.4 provides a situation in which a sufficient s t a t i s t i c is s t r i c t l y pre-ferred to a nonsufficient s t a t i s t i c . W'(x-s*(S(w))) U'(s*(f3(u>>> 19 Corollary 3.3.3 (Gjesdal (1982, Proposition 1)). Let ft = {u> = (x,y,z) : x,y,z are from some spaces }. Let n be the information system that reports (x,y,z), and let Y be the information system that reports (x,y). Assume that for 8 = n and 8 = Y, W'(x-s*(B((D))) n U'(s*(g(u))) = X + i = \\ \\ TT± ^ f Ha* g ) (3.3.1) holds at the contracts (n, s*, a*p and (Y, s*, a*). Then the signal z has marginal value given (x,y) (that i s , the principal s t r i c t l y prefers n over Y) i f n 3 Z p log f(o)|a*) (3.3.2) i=l 1 ^ i ~ is not a function of (x,y). Corollary 3.3.4 (Holmstrom (1982, Theorem 6)). Suppose the principal is risk neutral, and suppose that for some system y : ft * Z, the expression in (3.3.2) is not a function of y at each a eA. Then the principal s t r i c t l y prefers any sufficient s t a t i s t i c n over y i f equation (3.3.1) holds at any best agency contract generated by information system 8. As Amershi (1982) remarks, these st r i c t preference results do not establish that an agency sufficient s t a t i s t i c is always s t r i c t l y preferred to a nonsufficient s t a t i s t i c . In order for a sufficient s t a t i s t i c to be s t r i c t l y preferred to a nonsufficient s t a t i s t i c , the principal must use Information which is provided by the sufficient s t a t i s t i c but not provided by the nonsufficient s t a t i s t i c . In addition, the principal's risk attitude is a factor, as shown in the proposition below. Part (2) of Proposition 3.3.5 says that sufficiency alone cannot determine st r i c t preference order-ing of information systems i f the principal is risk averse. 20 Proposition 3 . 3 . 5 (Amershi ( 1 9 8 2 , Proposition 3 . 5 ) ) . Let n be the minimal sufficient s t a t i s t i c and x be the output. 
( 1 ) A risk neutral principal s t r i c t l y prefers n over any system y which is not a sufficient s t a t i s t i c i f and only i f every best agency con-tract (s*,a*) is such that s* is a sufficient s t a t i s t i c . ( 2 ) A risk averse principal s t r i c t l y prefers (x,n) over any system y such that (x, y) is not an agency sufficient s t a t i s t i c i f and only i f every best agency contract (s*,a*) Is such that (x,s*) is an agency sufficient s t a t i s t i c . Again, although the effort variable is single-dimensional, the result holds even i f effort is multidimensional. Amershi ( 1 9 8 2 ) next developed the following result. Suppose n(w) Is a 3 3 minimal sufficient s t a t i s t i c . If -sg- log f(co|a*) = log k( n( ui) | a*) is an invertible function of n(w), then a risk neutral principal s t r i c t l y prefers n over any system y that is not a sufficient s t a t i s t i c , and a risk averse principal s t r i c t l y prefers (x,n) over any system (x, y ) which is not an agency sufficient s t a t i s t i c . Unlike Amershi's ( 1 9 8 2 ) previous results, which were easily extended to the multidimensional effort case, the i n v e r t i b i l i t y result above does not lend i t s e l f to the multidimensional effort case. Intuitively, the dimension of a sufficient s t a t i s t i c cannot be less than the dimension of the vector of parameters to be estimated. For example, suppose that x^,...,xn ( n 2 ) are observations from a normal distribution with unknown mean 9 and unknown var-2 2 iance o . Then a sufficient s t a t i s t i c for the vector of parameters (9,0\" ) - 2 - 2 is (x,s ), where x is the sample mean and s is the sample variance. More-over, i t is obvious that more than one observation is needed in order to 2 make inferences about ( 9, a ). Thus, a sufficient s t a t i s t i c in the multidi-mensional effort case w i l l generally be multidimensional. The impossibility 21 of inverting a one-dimensional value to obtain a multidimensional s t a t i s t i c precludes the use of Amershi's i n v e r t i b i l i t y result in the allocation of effort problem. For example, in the allocation of effort problem, x_= (x^,...,xn) is potentially observable, with the distribution of x parameterized by a = n (a^,...,aj). The s t a t i s t i c x = Ex. can only be sufficient for (x,x) i f a 1-1 i s not really multidimensional, i.e., i f there is some known functional relationship among the a^s so that knowledge of one a^ Is sufficient to per-fectly infer the others. A special case of this type of relationship occurs when i t is known that the agent w i l l always choose the a^s to be equal. In the allocation problem, i t is very unlikely that a_ is not really multidimen-sional, and therefore in general, x is not sufficient for (x,3c_), i.e., the minimal sufficient s t a t i s t i c is multidimensional. Continuing with the focus on the value of additional disaggregated information, the principal's weak preference for the additional information is easily established. A multidimensional-effort version of Proposition 3.3.1 shows that the information system reporting x_= (x^,...,xn) is weakly n preferred to the information system reporting only E x., no matter what i=l the principal's or the agent's risk attitudes are. If the principal can observe _x, the interior portion of the optimal sharing rule is characterized by . d 8 a / x l s ) = A + E u, 1 U'(s(x)) ^ / i g ( x | a ) ' To i l l u s t r a t e , suppose again that n=2, and U(s) = ln s, but let jc_ = . 
x̲ = (x₁, x₂) be bivariate normal, x̲ ~ N(a̲, Σ), where Σ is the covariance matrix

Σ = [ σ₁²     ρσ₁σ₂ ]
    [ ρσ₁σ₂   σ₂²   ].

Then the interior portion of the optimal sharing rule is (see Appendix 2 for bivariate normal calculations)

s(x̲) = λ + Σ_{i=1}^{2} μ_i g_{a_i}(x̲|a̲)/g(x̲|a̲)
      = λ + μ₁[(x₁ − a₁)/(σ₁²(1 − ρ²)) − ρ(x₂ − a₂)/(σ₁σ₂(1 − ρ²))] + μ₂[(x₂ − a₂)/(σ₂²(1 − ρ²)) − ρ(x₁ − a₁)/(σ₁σ₂(1 − ρ²))]
      = λ + (x₁ − a₁)[μ₁/(σ₁²(1 − ρ²)) − ρμ₂/(σ₁σ₂(1 − ρ²))] + (x₂ − a₂)[μ₂/(σ₂²(1 − ρ²)) − ρμ₁/(σ₁σ₂(1 − ρ²))].

This compensation scheme may be interpreted as a commission scheme with different commission rates for each task. If σ₁ = σ₂ and μ₁ = μ₂, the commission rates will be the same for both tasks. It should be noted that in general, even if x₁ and x₂ are independent, the optimal commission rates need not be equal across tasks. This is because when x₁ and x₂ are independent (ρ = 0),

s(x̲) = λ − μ₁a₁/σ₁² − μ₂a₂/σ₂² + x₁μ₁/σ₁² + x₂μ₂/σ₂².

In this case, the commission rate for task i depends only on the variance of x_i and the multiplier μ_i. Since the sharing rule depends on each x_i, the signal x̲, obtained in addition to x, is valuable (unless μ₁/σ₁² = μ₂/σ₂²). This can be deduced formally from Proposition 3.3.2.

3.4 ADDITIVE SEPARABILITY OF THE SHARING RULE

Once the possibility of observing each x_i is introduced, the question of whether or not to reward the agent for each outcome separately arises. For example, should a manager of two divisions that are geographically dispersed be rewarded for the performance of each separately? Analytically, the question is whether the optimal sharing rule is additively separable in the x_i's. This question will be addressed for the HARA class of utility functions.

Let V(x̲) = λ + Σ_{i=1}^{d} μ_i g_{a_i}(x̲|a̲)/g(x̲|a̲). As before, if the agent's utility function is in the HARA class, with −U″(s)/U′(s) = 1/(Cs + D), then the interior portion of the optimal sharing rule is given by

s(x̲) = (1/C)[(V(x̲))^C − D],   if C ≠ 0,
                                                                            (3.4.1)
s(x̲) = D ln(V(x̲)),            if C = 0,

for almost every x̲ such that s(x̲) ∈ [s₀, s̄]. If the principal is risk averse, with utility function in the HARA class and with identical cautiousness C (see (2.6)), then the interior portion of the optimal sharing rule is

s(x̲) = [(V(x̲))^C (Cx + D₁) − D₂] / [C(1 + (V(x̲))^C)],   if C ≠ 0,
                                                                            (3.4.2)
s(x̲) = [D₁D₂ ln V(x̲) + D₂x] / (D₁ + D₂),                 if C = 0,

where D₁ corresponds to the principal, and D₂ corresponds to the agent.

Equation (3.4.1) implies that if the principal is risk neutral and the agent's utility function is in the HARA class, then a necessary condition for the optimal sharing rule to be additively separable is that C = 1, i.e., that the agent have a log utility function. Given that U(s) = ln s, a strong form of independence of the outcomes, x₁, ..., xₙ, is a sufficient condition for the optimal sharing rule to be additively separable. More specifically, let g^i(x_i|a_i) be the density of outcome x_i given effort a_i, and let g(x̲|a̲) be the joint density of x̲ given a̲. Then

g(x₁, ..., xₙ | a₁, ..., aₙ) = Π_{i=1}^{n} g^i(x_i|a_i)                     (3.4.3)

is a sufficient condition for additive separability of the optimal sharing rule, given that U(s) = ln s. In this case, s(x̲) = λ + Σ_{i=1}^{n} s^i(x_i), where s^i(x_i) = μ_i g^i_{a_i}(x_i|a_i)/g^i(x_i|a_i). The example in Section 3.3 shows that, given U(s) = ln s, independence is a sufficient but not a necessary condition for separability of the sharing rule.
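The separable form obtained under (3.4.3) and log utility is easy to illustrate numerically: with independent normal outcomes, each task contributes its own deviation-from-standard term μ_i(x_i − a_i)/σ_i². The sketch below uses arbitrary placeholder values for the multipliers and distribution parameters; it is an illustration of the decomposition, not a solved contract.

```python
import numpy as np

# With U(s) = ln s and independent outcomes (3.4.3), the interior rule is
#   s(x1, x2) = lam + s1(x1) + s2(x2),  with  si(xi) = mu_i * d/da_i log g_i(x_i|a_i).
# For x_i ~ N(a_i, sigma_i^2) the per-task piece is mu_i * (x_i - a_i) / sigma_i**2,
# i.e. a task-specific commission on the deviation of x_i from its standard a_i.
# All numeric values below are illustrative assumptions.

lam = 6.0
mu = np.array([1.0, 2.0])
a = np.array([3.0, 4.0])
sigma = np.array([1.0, 1.5])

def per_task_piece(x):
    x = np.asarray(x, dtype=float)
    return mu * (x - a) / sigma ** 2

def s(x):
    return lam + per_task_piece(x).sum()

print(per_task_piece([3.5, 3.0]))   # approx [0.5, -0.889]: task-by-task reward/penalty
print(s([3.5, 3.0]))                # approx 5.611: additively separable total payment
```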
One might conjecture that there are other com-mon distributions of dependent random variables which, when U(s) = ln s, yield a separable sharing rule. However, no other common joint distribu-tions which seem appropriate (see, e.g., Johnson and Kotz, 1972), seem to lead to such a result. In general, then, the optimal sharing rule w i l l not be additively separable. It is interesting to note that (3.4.3) is not sufficient to yield a separable sharing rule i f U(s) ln s. Furthermore, (3.4.3) is not s u f f i -cient to yield a separable sharing rule i f both the principal and the agent are risk averse, with HARA-class u t i l i t y functions and identical cautious-ness C. This is easily seen from equations (3.4.2). Hence, even i f the principal and agent have identical log u t i l i t y functions, a separable shar-ing rule is not optimal. These results differ from those in the cooperative setting, in which a weighted sum of the principal's and the agent's expected u t i l i t i e s is maxi-mized (no Nash constraint is necessary). In the cooperative case, i f beliefs are identical and the principal and agent are s t r i c t l y risk averse, then the optimal sharing rule is linear for a l l weights i f and only i f the individuals have HARA-class u t i l i t i e s with identical cautiousness (Amershi and Butterworth, 1981). Thus, the moral hazard problem partially accounts for the generally nonlinear form of the optimal sharing rules. One additively separable compensation scheme which is commonly used is the commission scheme. This scheme has the further restriction that 25 s^(x^) = c^x^ + b^ , a l i n e a r function of x^. As above, a necessary condi-t i o n for a commission scheme ( l i n e a r sharing rule) to be optimal i s that the p r i n c i p a l be r i s k neutral and that the agent have a log u t i l i t y function. Given U(s) = In s, whether or not the optimal sharing rule i s l i n e a r depends on the conditional d i s t r i b u t i o n of the outcomes given e f f o r t . 3.5 ADDITIVE EFFORT This section examines the s p e c i a l case where e f f o r t i s additive, as when e f f o r t represents time spent on d i f f e r e n t tasks, and where there i s no i n t r i n s i c d i s u t i l i t y for any p a r t i c u l a r task. In t h i s case, the d i s u t i l i t y function for e f f o r t expended on d tasks can be written as V(a^+..,+a^). This case necessitates only minor changes i n the analysis; p a r t i a l deriva-tives of V(.) with respect to a± are replaced by V ' ( . ) . The assumption that the p r i n c i p a l i s r i s k neutral and the agent i s r i s k averse w i l l be main-tained i n th i s s e c t i o n . Suppose that there i s one outcome x^ associated with each a^, and that d the mean of each x^ i s k i m i ( a i ) , so that E(x) = Z k^m^a.^), where k^ ^ > 0 and m|( •) > 0. In the f i r s t best case, the f i r s t order conditions require that the agent receive a constant wage and that 8E(x|a)/9 a i s k ^ j l a j ) = XV'(.), for a l l i . (3.5.1) The simplest case i s that of constant marginal pr o d u c t i v i t y , where m i ( a i ) = ai» ^ o r further, k^ = k for a l l i , then (3.5.1) i n d i -cates that k/ X = V ( Za i), (3.5.2) and hence any mix of e f f o r t s s a t i s f y i n g (3.5.2) i s equally acceptable to both the p r i n c i p a l and the agent. The k^'s may be thought of as measures of e f f i c i e n c y of e f f o r t (Shavell, 1979). If a l l the k^'s are unequal, then a boundary s o l u t i o n r e s u l t s . 
In particular, all but one of the a_i's are zero. The problem is thus essentially one of choosing on which task of many to expend effort. Suppose there are two tasks, with k₁ > k₂. In this situation, the optimal solution is to devote effort exclusively to task one. These results are summarized as Lemma 3A.2 in Appendix 3, where the proofs can also be found.

Comparison of two one-dimensional effort situations with k₁ > k₂ shows why the principal is better off with a₁ > 0 and a₂ = 0 than with a₁ = 0 and a₂ > 0. Since k₁ > k₂, there is a higher return per unit of effort from task one than from task two. Furthermore, it is worthwhile for the principal to induce more effort for task one than for task two (see Proposition 3A.3 and its proof in Appendix 3). The combined productivity gains (recall that E(x_i) = k_i a_i) outweigh the required increased fixed wage compensation to the agent, who would receive the same expected utility for either task. The principal's situation can be depicted graphically by plotting the expected returns k₁a₁ and k₂a₂ against effort, with the induced effort levels a₁* and a₂* and the fixed payment s* marked; the figure from the original thesis is not reproduced here.

For general m_i(a_i), (3.5.1) implies that k_i m_i′(a_i) = k_j m_j′(a_j), i, j = 1, ..., d. The marginal impacts of the a_i's on the expected outcomes are balanced, and hence the solution will generally be interior. If the mean functions are identical, then the optimal efforts will be equal.

Although the agent's utility for wealth is not important in determining the principal's choice of the a_i's in the first best case, it is important in the second best case. Assuming an interior solution, the first order conditions in the second best case require that

∂EU(s(x̲))/∂a_i = ∂EU(s(x̲))/∂a_j,   i, j = 1, ..., d.                      (3.5.3)

Since the agent's effort is not observable in this case, the principal must induce the agent to exert the optimal amount of effort at one or more tasks. The principal may find it optimal to devote resources to preventing shirking at only one task even if multiple tasks are available. It is possible that the principal could, by imposing less risk, motivate the agent more efficiently if the agent were induced to devote effort to only one task. Since the risk-averse agent must be compensated for bearing risk, the principal may be better off imposing risk related to just one task.

The propositions in the remainder of this section describe situations in which a boundary solution or an interior solution will be optimal, and characterize interior solutions. Before stating the propositions, a simple example will be used to introduce the issues. Suppose there are two independent and identical tasks, whose outcomes are represented by X₁ and X₂. Suppose further that the agent's action space is {(2a*,0), (0,2a*), (a*,a*), (a*,0), (0,a*), (0,0)}, where an effort level of 0 represents the minimal effort the agent will exert.
Suppose that the probabilities of X₁ given a are:

  x₁            a = 2a*    a = a*    a = 0
  1.10          11/12      1/2       1/12
  -.10          1/12       1/2       11/12
  E(X₁|a)       1.00       0.50      0.00
  Var(X₁|a)     0.11       0.36      0.11

The joint outcomes occur with the following probabilities:

  Reward   (X₁, X₂)      a=(2a*,0)   a=(0,2a*)   a=(a*,a*)   a=(a*,0)   a=(0,a*)   a=(0,0)
  s₁       (1.1, 1.1)    11/144      11/144      1/4         1/24       1/24       1/144
  s₂       (1.1, -.1)    121/144     1/144       1/4         11/24      1/24       11/144
  s₃       (-.1, 1.1)    1/144       121/144     1/4         1/24       11/24      11/144
  s₄       (-.1, -.1)    11/144      11/144      1/4         11/24      11/24      121/144
  E(X₁+X₂|a)             1.00        1.00        1.00        0.50       0.50       0.00
  Var(X₁+X₂|a)           0.22        0.22        0.72        0.47       0.47       0.22

Let s̲ = (s₁, s₂, s₃, s₄). Suppose the principal's problem is:

Maximize E(250X₁ + 250X₂) − E(s̲)
subject to EU(s̲) − V(a₁ + a₂) ≥ ū,
(a₁, a₂) maximizes {EU(s̲) − V(a₁ + a₂)}.

Let U(s_i) = 2√s_i, ū = 10, and a* = 1. The optimal solution is for the principal to induce the agent to exert 2a* at one task, with the reward for that task as follows: s̄ = 148.84 and s₀ = 96.04, where s̄ is paid if the outcome is 1.1, and s₀ is paid otherwise. If the principal desired to induce the agent to exert a* at both of the tasks, the following sharing rule would be optimal: s′₁ = 207.40, s′₂ = s′₃ = 144, s′₄ = 92.16.

Looking at the variance as a measure of risk, we note that the outcome is riskier when a̲ = (a*, a*) than when a̲ = (2a*, 0). However, this risk is not directly of concern to either the principal or the agent, because the principal is risk neutral and the agent is not concerned about the riskiness of the outcomes per se, but rather about the effects on the compensation
Thus, one might conjecture that a boundary solution is optimal in cases where (3.5.4) holds and there are independent and identically distributed outcomes, with the means proportional to effort. It should be pointed out, however, that the additivity of effort would also be critical for this result.

If the X_i's have a Poisson distribution with E(X_i | a_i = a) = ka = Var(X_i | a_i = a), then the variances change as the efforts change. If a* is exerted at each of two independent tasks, then

    E(X1 + X2 | a1 = a*, a2 = a*) = 2ka* = Var(X1 + X2 | a1 = a*, a2 = a*).

If 2a* is exerted at one task, say task one, then

    E(X1 | a1 = 2a*) = 2ka* = E(X1 + X2 | a1 = a*, a2 = a*) = Var(X1 | a1 = 2a*).

Therefore, we might expect that the principal would be indifferent between a boundary solution and an interior one.

Finally, consider the exponential distribution, where E(X_i | a_i = a) = ka and Var(X_i | a_i = a) = k²a². If a* is exerted at each of two independent tasks, then

    E(X1 + X2 | a1 = a*, a2 = a*) = 2ka*   and   Var(X1 + X2 | a1 = a*, a2 = a*) = 2k²a*².

If 2a* is exerted at one task, then

    E(X1 | a1 = 2a*) = 2ka* = E(X1 + X2 | a1 = a*, a2 = a*)   but
    Var(X1 | a1 = 2a*) = 4k²a*² > Var(X1 + X2 | a1 = a*, a2 = a*).

Thus, in this situation, we might conjecture that an interior solution, rather than a boundary solution, would be optimal.

The propositions below substantiate the intuitive arguments above concerning when an interior solution or a boundary solution is optimal, given that the expected outcomes of independent and identical tasks are proportional to effort expended. If the expected outcomes are nonlinear in effort, then the situations become more complicated. Initially, the normal distribution with constant variance but with mean a function of effort will be considered. This case is of particular interest, since it is the only distribution in Q (see (2.7)) whose variance is independent of the agent's effort. The following proposition states conditions under which a boundary solution is optimal.

Proposition 3.5.1. Suppose the principal is risk neutral, U(s) = 2√s, and x1 and x2 are conditionally independent and identically distributed normally with mean ka and constant variance. Suppose further that V(a) = V(Σa_i). Then a boundary solution is optimal.

The proposition below characterizes optimal unique interior solutions.

Proposition 3.5.2. Suppose the principal is risk neutral, the agent is risk averse, and g(x|a) = f(x1|a1) f(x2|a2), i.e., x1 and x2 are conditionally independent and identically distributed. Suppose further that V(a) = V(Σa_i). If a unique interior solution is optimal, then the optimal solution has a1* = a2* and μ1* = μ2*, where μ1 and μ2 are the Lagrangian multipliers described earlier.

This result is independent of the utility function of the risk-averse agent or the distribution of x_i given a_i; the critical element is that the outcomes are conditionally independent and identically distributed. This result does not say that all agency problems in which the principal is risk neutral, the agent is risk averse, and the outcomes are conditionally independent and identically distributed have solutions with a1* = a2* and μ1* = μ2*; this is evident from Proposition 3.5.1. Proposition 3.5.2 indicates that if the optimal solution has the agent allocating nonzero effort to each task, then the efforts should be equal at each task if the tasks present independent and identical expected returns to the principal.
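The three comparisons sketched above can be tabulated mechanically. The script below only restates the mean and variance formulas already used; the values of k, a*, and σ are illustrative.

```python
# Mean and variance of total output when total effort 2a* is concentrated on one task
# versus split across two i.i.d. tasks, for the three distributions discussed above.
# k, a_star, and sigma are illustrative values only.
k, a_star, sigma = 1.0, 1.0, 1.0

dists = {
    "normal, constant variance": lambda a: (k * a, sigma ** 2),
    "Poisson":                   lambda a: (k * a, k * a),
    "exponential":               lambda a: (k * a, (k * a) ** 2),
}

for name, mean_var in dists.items():
    m_conc, v_conc = mean_var(2 * a_star)      # all effort on task one
    m_one, v_one = mean_var(a_star)            # a* on each of two independent tasks
    print(f"{name:26s} concentrated: ({m_conc:.1f}, {v_conc:.1f})   "
          f"split: ({2 * m_one:.1f}, {2 * v_one:.1f})")
# normal:      variance 1.0 < 2.0  -> boundary solution conjectured optimal
# Poisson:     variance 2.0 = 2.0  -> indifference conjectured
# exponential: variance 4.0 > 2.0  -> interior solution conjectured optimal
```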
The following proposition, which applies to the one-parameter exponential family (see (2.7)), describes conditions under which an interior solution is optimal. These conditions are sufficient but not necessary.

Proposition 3.5.3. Suppose the principal is risk neutral, U(s) = 2√s, and g(x|a) = f(x1|a1) f(x2|a2), where f(·|a) belongs to Q and has mean M(a), where M(0) ≥ 0 and M'(a) > 0. Suppose further that V(a) = V(Σa_i). Let a* be the optimal effort in the one-task problem.
(i) If M(a) is concave and

    z'(a*) M'(a*) / [z'(a*/2) M'(a*/2)] < 1/2,   (3.5.5)

then a boundary solution is not optimal.
(ii) If M(a) is strictly concave and

    z'(a*) M'(a*) / [z'(a*/2) M'(a*/2)] < 1/2,   (3.5.6)

then a boundary solution is not optimal.
In both cases, if a unique interior solution is optimal, then a1* = a2* and μ1* = μ2*.

As shown below in Corollary 3.5.4, z'(a)/z'(a/2) is often independent of a, and hence one need not actually solve for the optimal one-task effort.

Corollary 3.5.4. Under the conditions in Proposition 3.5.3, if M(a) = ka and z'(a*)/z'(a*/2) < 1/2, then an interior solution is optimal. In particular,
(i) For the exponential distribution with parameter 1/(ka), an interior solution is optimal (z'(a)/z'(a/2) = 1/4).
(ii) For the gamma distribution with parameters n/(ka) and n, an interior solution is optimal (z'(a)/z'(a/2) = 1/4).
The following cases do not satisfy (3.5.5) but are included for purposes of comparison:
(iii) The Poisson distribution with mean ka has z'(a)/z'(a/2) = 1/2.
(iv) The normal distribution with mean ka and constant variance has z'(a)/z'(a/2) = 1.
The normal distribution should not, of course, satisfy (3.5.5) in view of Proposition 3.5.1.

In each of the cases in Corollary 3.5.4 the expected outcomes increase linearly with the agent's efforts. In case (iii), the variances of the outcomes also increase linearly with the agent's efforts. In case (iv), the variances of the outcomes are unaffected by the efforts. In cases (i) and (ii), the variances of the outcomes increase quadratically with the efforts. A boundary solution is optimal in case (iv), where the rate of increase in the variance is strictly less than the rate of increase in the mean. An interior solution is optimal in cases (i) and (ii), where the rates of increase in the variances are strictly greater than the rates of increase in the means.
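The ratios quoted in Corollary 3.5.4 can be checked symbolically. In the sketch below, z(a) is taken to be the natural parameter of f(·|a) written in exponential-family form, the object whose derivative appears in f_a/f = z'(a)(x − M(a)) in the discussion below; that reading is an assumption, and k, n, and σ are positive constants.

```python
# Symbolic check of the ratios z'(a)/z'(a/2) quoted in Corollary 3.5.4.
# z(a) is assumed to be the natural parameter of the density, as a function of effort.
import sympy as sp

a, k, n, sigma = sp.symbols("a k n sigma", positive=True)

natural_params = {
    "exponential, mean ka":               -1 / (k * a),
    "gamma, parameters n/(ka) and n":     -n / (k * a),
    "Poisson, mean ka":                   sp.log(k * a),
    "normal, mean ka, variance sigma^2":  k * a / sigma ** 2,
}

for name, z in natural_params.items():
    zprime = sp.diff(z, a)
    ratio = sp.simplify(zprime / zprime.subs(a, a / 2))
    print(f"{name:36s} z'(a)/z'(a/2) = {ratio}")
# exponential: 1/4    gamma: 1/4    Poisson: 1/2    normal: 1
```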
The following two propositions characterize optimal interior solutions when the means of the outcomes are linear in effort.

Proposition 3.5.5. Suppose the principal is risk neutral, U(s) = 2√s, and g(x|a) = f(x1|a1) f(x2|a2), where f(·|a) belongs to Q and has mean M(a). Suppose further that V(a) = V(Σa_i). If M(a) = ka and z''(a)/z'(a)² is strictly monotonic, then an optimal interior solution is unique and has a1* = a2* and μ1* = μ2*. The strict monotonicity is satisfied by the exponential and gamma distributions (given that M(a) = ka), but not by the normal or Poisson distributions.

Proposition 3.5.6. Suppose the principal is risk neutral, U(s) = 2√s, and g(x|a) = f(x1|a1) f(x2|a2), where f(·|a) has mean M(a). Suppose further that V(a) = V(Σa_i). If M(a) = ka and I'(a)/I(a)² is strictly monotonic, where I(a) = ∫ (f_a)²/f dx, then an optimal interior solution is unique and has a1* = a2* and μ1* = μ2*.

I(a) is called Fisher's information about a contained in x, and is a useful concept in mathematical statistics (see, e.g., Cox and Hinkley, 1974). The next corollary demonstrates that, in part, the shape of the expected outcome function determines whether the optimal solution will be interior.

Corollary 3.5.7. Under the conditions in Proposition 3.5.3, if M(a) = a^α, then an interior solution is optimal if f(·|a) is
(i) Normal (M(a), σ²) and 0 < α ≤ 1/2, or
(ii) Exponential (1/M(a)) and 0 < α < 1, or
(iii) Poisson (M(a)) and 0 < α < 1.

It is well known that knowing f_a/f is equivalent to knowing the likelihood of a given the observations. For the exponential family Q, f_a/f is given by z'(a)(x − M(a)), where M(a) is the mean of x conditional on a. It is z'(a) and M(a) which play an important role in determining whether a boundary solution or an interior solution is optimal. This might be expected, for the x_i's are not only outcomes, but also signals about the efforts that have been expended. Since f_a/f is sufficient for the likelihood of a given x, z'(a) and M(a) together measure, to a certain degree, the informativeness of x about a.

It is interesting to compare the results for the second best case with those for the first best case. In the first best case, (3.5.1) indicates that if M(a) = ka, then whatever the distribution of x given a, the principal will be indifferent between an interior solution and a boundary one, as long as the total amount of effort expended is the same in both cases. In the second best case, however, Proposition 3.5.1 says that if the distribution is normal with mean ka and constant variance, then a boundary solution is optimal. On the other hand, if the distribution is exponential or gamma with mean ka, then an interior solution is optimal (Corollary 3.5.4).

If, in the first best case, the means are concave in effort, then (3.5.1) indicates that an interior solution is optimal, and the optimal efforts are equal if the mean functions are identical (M'(a_i*) = M'(a_j*) implies that a_i* = a_j*). Corollary 3.5.7 indicates that for a specific second best case with concave means, a similar result concerning the optimality of an interior solution holds.

The results up to this point have assumed that x1 and x2 are conditionally independent and identically distributed. The next two propositions deal with the case of conditionally independent but nonidentically distributed x_i's.

Proposition 3.5.8. Suppose the principal is risk neutral, U(s) = 2√s, and g(x|a) = f(x1|a1) h(x2|a2), where f(·|a1) and h(·|a2) belong to Q and E(x_i|a_i) = k_i a_i. Suppose further that V(a) = V(Σa_i).
(i) If x_i has an exponential distribution with mean k_i a_i, then k1 > k2 implies that a1* > a2* and μ1* > μ2*.
(ii) If x_i has a gamma distribution with mean k_i a_i, then k1 > k2 implies that a1* > a2* and μ1* > μ2*.
(iii) If x_i has a normal distribution with mean k_i a_i and constant variance, then k1 > k2 implies that the optimal solution is a boundary solution, with a1* > 0 and a2* = 0.
(iv) If x_i has a Poisson distribution with mean k_i a_i, then k1 > k2 implies that the optimal solution is a boundary solution, with a1* > 0 and a2* = 0.

The following proposition states that, at least for a specific second best case, the optimal Lagrangian multipliers are positive.
Signing the multipliers is of importance because of their c r i t i c a l role in the determin-ation of the optimal sharing rule. For example, i f the density of x^ given a^ satisfies the monotone likelihood ratio property in x^ for a l l i , then the positivity of the u!s guarantees that the optimal sharing rule is 36 increasing in each x^. It can be shown that under certain conditions in the second best case, not a l l the yjs can be zero, and hence in the situations above where the optimal multipliers y* are equal, they must be positive. Proposition 3.5.9. Suppose the principal is risk neutra 1, U(s) = l/s, and g(xl§.) = f ( x i | ai)h(x 2 1 a 2) , where f(.|a^) and h(.|a 2) belong to Q. Suppose further that V(a) = V(Ea^). If an interior solution is optimal, then y£ > 0 and y* > 0. Note that in Proposition 3.5.9, x^ and x 2 need not be identically dis-tributed, although they are conditionally independent. Furthermore, the result holds for general V(a), as long as 9V/3a^ > 0, i=l,2. 3.6 APPLICATION TO SALES FORCE MANAGEMENT In this section, the previous analysis of multidimensional effort situa-tions is applied to the problem of sales force management. Steinbrink (1978) depicts the c r i t i c a l role of compensation of a sales force as follows: Any discussion with sales executives would bring forth a con-sensus that compensation is the most important element in a program for the management and motivation of a f i e l d sales force. It can also be the most complex. Consider the job of salespeople in the f i e l d . They face direct and aggressive competition daily. Rejection by customers and prospects is a constant negative force. Success in selling demands a high degree of self-discipline, persistence, and enthusi-asm. As a result, salespeople need extraordinary encouragement, incentive and motivation in order to function effectively. . . .A properly designed and implemented compensation plan must be geared to the needs of the company and to the products or services the company se l l s . At the same time, i t must attract good salesmen in the f i r s t place . . . Management of the sales force has been the focus of a great deal of research, much of i t empirical. Steinbrink (1978), in a survey of 380 com-panies across 34 industries, found that most companies favored a combination of salary, commission, and bonus schemes. Typical commissions used were 1) Fixed commissions on a l l sales 2) Different rates by product category 37 3) On sales above a determined goal 4) On product gross margin. These commission schemes are a l l examples of linear sharing rules. Farley (1964), Berger (1975), and Weinberg (1975, 1978) studied the problem of \"jointly optimal\" compensation schemes. They assumed a given compensation system (a commission scheme based on gross margin) and sought to determine i f that system is incentive compatible, meaning that the sales-person w i l l be induced to choose levels of effort which the company desires. In these analyses, the measure of effort is taken to be time spent selling. The total time available is assumed to be fixed and the decisions are how to allocate the total time across several products. Farley (1964) demonstrated that i f a commission system based on gross margin is used, the commission rates should be the same for a l l products in the case where both the firm and the salespeople are income maximizers. 
Weinberg (1975) extended Farley's result to include the choice of discounts on each product as well as the choice of time spent selling each product. Both papers assume that the time spent selling one product does not affect the sales of any other product. Furthermore, sales are considered to be a deterministic function of time, although the conclusions are unaffected by uncertainty because of the assumed risk neutrality of both the firm and the salespeople. Weinberg (1978) maintained the assumption of risk neutrality of the salespeople, and further extended his and Farley's analyses by allow-ing for interdependence of product sales and relaxing the assumption that salespeople maximize income. Even in these situations, an equal gross mar-gin commission system is incentive compatible i f the firm's objective is to maximize expected gross profits. Berger (1975) examined the combined effects of uncertainty and non-neu-tral risk attitudes on the part of the salespeople. He retained Weinberg's 38 and Farley's assumption of constant marginal cost per product, but treated sales of each product as a random variable parameterized by the time spent selling that product. Berger demonstrated that i f a commission scheme is used in this situation, i t may be undesirable for the firm to set equal com-mission rates for a l l products. The agency model allows for many of the important factors in the sales force managment problem and provides a more complete analysis of the problem by determining an incentive scheme which motivates the salesperson to make decisions that are Pareto optimal, rather than taking the compensation scheme as given. The problem w i l l therefore be examined below in an agency framework.^ Interdependence of products, provision of enough net benefits for the salesperson to join and stay with the firm, and also the salesper-son's tradeoff between money and time spent selling are incorporated. In connection with this, the total time spent selling in a given time period wi l l be a choice variable. In order to focus on the motivational rather than risk sharing aspects of the problem, i t w i l l be assumed that the sales-person is risk averse, and the firm is risk neutral and therefore desires to maximize expected profit. The agency theory analysis isolates conditions under which some sort of commission scheme is Pareto optimal, and shows that even when a commission scheme is Pareto optimal, the commissions are gener-ally unequal. In this analysis, effort w i l l be interpreted as \"time spent selling,\" and n w i l l represent the number of products available to be sold. It w i l l be assumed that the salesperson has no intrinsic d i s u t i l i t y for selling any particular product, so that the d i s u t i l i t y function may be taken to be V(Ea^), with V increasing and convex. The remaining notation will be as defined previously, with x^ denoting the difference between sales revenue and variable noncompensation costs for product i . ^ 39 Suppose that the principal (firm) and the agent (salesperson) have identical beliefs. This might be the case when the salesperson is f i r s t hired. Suppose also that the firm and the salesperson are in a f i r s t best situation, either because they are acting cooperatively or because the firm can perfectly observe the times spent selling each product. Recall that in the f i r s t best situation, i t makes no difference whether the principal observes only the total outcome, x, or the vector of outcomes, x_. 
Since the firm is risk neutral and the salesperson is risk averse, the optimal sharing rule is a constant salary c = U'⁻¹(1/λ) for the salesperson, with the firm receiving the remainder, x − c. The firm requires that the salesperson choose sales effort so that

    ∂E(x|a)/∂a_i = ∂E(x|a)/∂a_j,   i,j = 1,...,n.   (3.6.1)

The interpretation in the cooperative setting is that the salesperson happily supplies effort levels a_i satisfying (3.6.1) in return for the salary c, since in doing so, he or she receives the market utility, ū. In the perfect observability setting, the firm pays the salesperson the salary c if effort levels a_i satisfying (3.6.1) are exerted, and pays nothing otherwise.

Observe that in the first best case, the salesperson chooses effort levels according to their effect on mean outcome. Suppose E(x_i) = M_i c_i a_i, where M_i represents the contribution margin (sales revenue minus variable noncompensation costs) per unit of product i, and c_i a_i represents the expected quantity of product i that will be sold if effort a_i is exerted. The analysis in Section 3.5 then applies. If M1 c1 = M2 c2, i.e., if the contributions per unit of time spent selling are equal, the total effort expended is the only concern. If M1 c1 > M2 c2, then the Pareto optimal strategy is for the agent to devote effort only to the first product. Under a more general return structure, the efforts a1 and a2 will be nonzero and unequal. If the mean functions are identical and nonlinear and monotone in a_i, then the optimal efforts are such that a1 = a2.

In practice, a straight salary is seldom used for salespeople because of imperfect observability and imperfect cooperation (moral hazard). In such situations, a second best analysis is appropriate. Consider first the one-product case, in which the interior portion of the sharing rule is characterized by

    1/U'(s(x)) = λ + μ g_a(x|a)/g(x|a),

where x and a are univariate and the subscript a denotes differentiation with respect to a. Examples of specific sharing rules are provided in Table I for two members of the HARA class of utility functions and two members of the one-parameter exponential family Q.

                       Sharing rule given g(x|a) =
    U(s)               (M(a))⁻¹ exp[−x/M(a)], M'(a) > 0                 (2πσ²)^(−1/2) exp[−(x−M(a))²/(2σ²)], M'(a) > 0
    ln s               λ + μ [M'(a*)/M(a*)²](x − M(a*))                 λ + μ [M'(a*)/σ²](x − M(a*))
    power, b > 1       {λ + μ [M'(a*)/M(a*)²](x − M(a*))}^(b/(b−1))     {λ + μ [M'(a*)/σ²](x − M(a*))}^(b/(b−1))

    Table I. Examples in One-Product Case

Observe that when U(s) = ln s, the sharing rules shown (and others corresponding to different members of Q, the one-parameter exponential family) can be interpreted as a salary plus commission on the outcome x, a scheme commonly found in practice. If the agent's utility function is a concave power function, then the resulting sharing rule is a convex power function of a linear form. The compensation schemes which pay a salary plus bonus commissions (e.g., s(x) = m + m1 x if x < x0, s(x) = m + m1 x + m2(x − x0) if x > x0) can be considered as approximations to these sharing rules.

The case where n > 1 is more complicated if the agent's utility function is a power function, since cross terms in the x_i's appear. In order to examine conditions under which it is optimal to use a salary-plus-commission scheme, the agent's utility function will be taken to be U(s) = ln s, since this is the only situation in which a linear scheme can be optimal (see Section 3.4).
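To make the log-utility rows of Table I concrete, the sketch below evaluates them as salary-plus-commission formulas. The numerical values of λ, μ, M(a*), M'(a*), and σ² are placeholders chosen purely for illustration; in the model they would come from solving the principal's problem.

```python
# The ln s rows of Table I as executable formulas.  With U(s) = ln s, 1/U'(s) = s, so the
# interior sharing rule is itself linear in x: a salary plus a commission on the outcome.
def sharing_rule_exponential(x, lam, mu, mean, mean_slope):
    """Outcome exponential with mean M(a); rule evaluated at the induced effort a*."""
    commission = mu * mean_slope / mean ** 2        # mu * M'(a*) / M(a*)^2
    return (lam - commission * mean) + commission * x

def sharing_rule_normal(x, lam, mu, mean, mean_slope, sigma2):
    """Outcome normal with mean M(a) and constant variance sigma2."""
    commission = mu * mean_slope / sigma2           # mu * M'(a*) / sigma^2
    return (lam - commission * mean) + commission * x

# Placeholder parameters: lambda = 20, mu = 5, M(a*) = 10, M'(a*) = 2, sigma^2 = 4.
for x in (5.0, 10.0, 15.0):
    print(x,
          round(sharing_rule_exponential(x, 20, 5, 10, 2), 2),
          round(sharing_rule_normal(x, 20, 5, 10, 2, 4), 2))
# In both columns the pay is a fixed salary plus a constant rate times x; only the
# commission rate differs across distributions.
```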
The examples below employ the normal distribution because of its convenient representation for dependent random variables. For purposes of illustration, it suffices to take n = 2. Suppose then that U(s) = ln s, n = 2, and that x ~ N(θ(a), Σ(a)), where θ(a) = (θ1(a), θ2(a)) and Σ(a) is the covariance matrix

    Σ(a) = [ σ1²(a)               ρ(a) σ1(a) σ2(a) ]
           [ ρ(a) σ1(a) σ2(a)     σ2²(a)           ]

At this level of generality, the optimal sharing rule is quite complicated (see Appendix 2). It is not separable in x1 and x2, and therefore is not a salary plus commission scheme. A correlation coefficient which is constant (independent of a) is not sufficient for the sharing rule to be a salary plus commission scheme, although ρ = 0 does lead to a sharing rule which is additively separable in x1 and x2. Sufficient conditions for a salary plus commission scheme to be optimal are that both the correlation coefficient and the variances be constant, with ρ² ≠ 1. The commissions are determined by ρ, the marginal increases in the means θ_i(a) at a*, the variances σ_i², and the multipliers μ_i.

Three especially interesting results of the example above are:
(1) In the case of the normal distribution with log utility, independence of the products is enough to guarantee additive separability (in x1 and x2) of the optimal sharing rule, but is not enough to guarantee that the optimal sharing rule will be a linear sharing rule. That is, the optimal sharing rule is not a (salary plus) commission scheme, let alone an equal commission rate scheme.
(2) It is not necessary for the products to be independent in order for a salary plus commission scheme to be optimal, or for a separable sharing rule to be optimal.
(3) The optimal commission rates are generally not equal across products.

The agency analysis applied to the sales force management problem indicates that only under very special circumstances is a commission scheme Pareto optimal. In practice, of course, commission schemes are favored because of their simplicity and ease of application, as well as their recognized incentive effects. If commission rates are used with risk averse salespeople who face uncertainty in sales, the rates should most likely not be equal across products, according to the analysis above.

The results in Section 3.5 on the allocation of additive effort with independent outcomes provide some further insights about optimal compensation schemes for salespeople. It should be recalled that most of the results in Section 3.5 were proved only for U(s) = 2√s. Thus, the remarks that follow are restricted by the assumption of that particular utility function for wealth for the agent.

A principle commonly taught in managerial accounting texts is that under certainty, in order to maximize profits given one scarce factor of production, a firm should manufacture the product which returns the highest contribution margin per unit of the scarce factor (see, e.g., Horngren (1982, p. 373)). This principle does not necessarily hold in the agency setting. In the first best case, if the means are linear in effort, then the principle holds. In addition, Proposition 3.5.8 indicates that in the second best case, if expected returns are linear in effort, then all the agent's effort should be put into selling the product with the highest expected return per unit of effort if the underlying distribution is normal with constant variance, or Poisson.
However, if the underlying distribution is exponential, then more effort should be put into selling the product with the higher expected return per unit of effort, but both efforts will be positive unless the expected returns per unit of effort are very different. (See the discussion after Proposition 3.5.8.) For the exponential distribution with E(x_i|a_i) = k_i a_i, the optimal sharing rule is given by

    s(x1, x2) = [λ + (μ1*/(k1 a1*²))(x1 − k1 a1*) + (μ2*/(k2 a2*²))(x2 − k2 a2*)]².

If k1 > k2, then μ1* > μ2* and a1* > a2*. Equation [3] in the proof of Proposition 3.5.8 shows that μ1*/a1*² = μ2*/a2*². Therefore, μ1*/(k1 a1*²) < μ2*/(k2 a2*²). This implies that when k1 is greater than k2 (the expected return per unit of effort is greater for product one than for product two), the agent's compensation per unit of x1 (the return on product one) is less than the compensation per unit of x2.

Continuing with the exponential distribution case, if k1 = k2, then a1* = a2* and μ1* = μ2* (Proposition 3.5.5). The agent's compensation per unit of x1 is equal to the compensation per unit of x2, and the sharing rule can be written as

    s(x1, x2) = [λ + (μ1*/(k1 a1*²))(x1 + x2 − 2 k1 a1*)]².

Thus, the information (x1, x2) has no value in addition to x1 + x2. A similar result holds for more general situations, also. Proposition 3.5.2 says that if the principal is risk neutral, the agent is risk averse, V(a) = V(Σa_i), the x_i's given a_i are independent and identically distributed, and a unique interior solution (a1* > 0, a2* > 0) is optimal, then a1* = a2* and μ1* = μ2*. Under these conditions, the agent's compensation per unit of x1 is equal to the compensation per unit of x2. It is important to note that if x_i given a_i has a normal distribution with mean ka_i and variance σ², and the x_i's given a_i are independent, a boundary solution (e.g., a1 > 0, a2 = 0) is optimal. In this case, the agent would receive no compensation based on x2.

Up to this point, the focus has been on a single agent exerting multiple efforts. A related topic is that of multiple agents, which is pertinent here because a firm will generally have more than one salesperson. Feltham (1977b) examined the use of penalty contracts when all the agents are identical, and Holmstrom (1982) showed that the effectiveness of group penalties will be hampered by limited endowments of the agents, especially as the number of agents becomes large. An important question in the multiple agent problem is whether or not each agent should be rewarded independently of the others' performances. Holmstrom (1982) showed that if the agents' outcomes are correlated with each other through the common uncertainty they face, then basing agent i's share on each agent's outcome helps reduce the uncontrollable randomness in agent i's reward. Holmstrom (1982, p. 335) stated that

    . . . forcing agents to compete with each other is valueless if there is no common underlying uncertainty. In this setting, the benefits from competition itself are nil. What is of value is the information that may be gained from peer performance. Competition among agents is a consequence of attempts to exploit this information.

Only aggregate information about peer performance is used in the optimal sharing rules if the aggregate measure captures all the relevant information about the common uncertainty. Of course, if the agents' outcomes are independent of one another, then the optimal sharing rule for agent i depends only on agent i's outcome.
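Holmstrom's point about the value of peer information can be illustrated with a small simulation. The one-factor structure below (a common shock plus idiosyncratic noise for each salesperson) is an assumption introduced purely for illustration; it is not the model analyzed in this chapter.

```python
# Sketch: with common uncertainty, conditioning on peer outcomes filters noise out of the
# signal about an agent's own effort.  Assumed structure: x_i = k*a_i + theta + eps_i,
# where theta is a common shock and eps_i is idiosyncratic; the code works with x_i net
# of the deterministic term k*a_i.
import numpy as np

rng = np.random.default_rng(0)
n_agents, n_draws = 20, 100_000
var_theta, var_eps = 4.0, 1.0

theta = rng.normal(0.0, np.sqrt(var_theta), size=n_draws)             # common shock
eps = rng.normal(0.0, np.sqrt(var_eps), size=(n_draws, n_agents))     # idiosyncratic noise
x = theta[:, None] + eps

peer_avg = (x.sum(axis=1, keepdims=True) - x) / (n_agents - 1)        # average of the others
print(np.var(x[:, 0]))                   # ~5.0: noise in agent 1's own outcome
print(np.var(x[:, 0] - peer_avg[:, 0]))  # ~1.05: noise once peer performance is used
# With var_theta = 0 (no common uncertainty) the comparison reverses, which is the sense
# in which competition per se is valueless.
```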
One of the traditional principles in performance evaluation within the firm Is the principle that a person should be held responsible only for 45 those factors (e.g., costs or revenues) over which he or she has control. Basing the sharing rule for agent i only on agent i's outcome is clearly consistent with the controllability principle. Basing the sharing rule for agent i on the outcomes of other agents when there is common uncertainty i s , at f i r s t glance, inconsistent with the controllability principle. However, the reason that the compensation for each agent may depend on the outcomes of other agents is that the principal can gain information about the random state, and hence gain information about the efforts expended by each agent. That i s , the principal can gain information about each agent's input (effort), over which the agent has direct control. Thus, there is no con-f l i c t with the controllability principle in this case. The apparent con-f l i c t occurs because the focus of the controllability principle has been transferred from outputs to inputs (cf. Baiman (1982, pp. 197-198)). The last modification to the standard agency analysis for the problem of sales force management relates to noneffort decisions. Frequently, the salesperson must not only make several effort decisions, but also make sev-eral \"risk\" decisions which do not require expenditures of effort. The choices of discounts to offer on each product are examples of such risk decisions. Weinberg (1975, p. 938) identifies the following situations in which an agent might have control over the price: (1) perishable agricultural products; . . . (2) sales involving trade-ins in which the salesman has control over the evaluation of the trade-in, e.g., automobiles; (3) systems selling in which the salesman has a wide range of latitude in specifying the combination of services to be provided, e.g., contractors and consultants; (4) some r e t a i l situations in which the local store manager has control over price of at least some of the items sold in his store; (5) liquidation sales of obsolete product lines or retailer dis-tress sales; and (6) highly competitive markets in which customers are price bargainers . . . . One approach to the problem of incorporating both risk and effort deci-sions was taken by Itami (1979), who examined optimal linear goal-based incentive schemes under uncertainty. In his model, a risk decision is made 46 by the agent before the state of nature is observed. The agent then chooses an effort level based on the risk decision and the observed state of nature, resulting in a deterministic output which is a function of the agent's two decisions and the state of nature. For example, the divisional manager of a large corporation might make investment decisions on projects before the environmental conditions are revealed. The effort expended and the known state then determine the output. The simplest agency theory approach to the problem of incorporating both risk and effort decisions is to assume that both of the agent's deci-sions are made before the state of nature (or any other information) is observed. This approach w i l l now be briefly discussed. The form of the optimal sharing rule is derived rather than assumed. Furthermore, because risk-sharing aspects are important in this setting, both the principal and the agent are assumed to be risk averse. 
As Itami points out, there is a direct and an indirect effect of the agent's effort on his or her u t i l i t y , while there is only an indirect effect from the risk decisions. Up to this point, i t has been assumed that the agent's u t i l i t y is separable in effort and wealth. This assumption leads to a characterization of the optimal sharing rule that is independent of the agent's d i s u t i l i t y for effort, although the indirect effects of effort expended are captured via the terms g /g. The more general u t i l i t y func-Si • J tion U(s(x),a) for the agent leads to a characterization of the optimal sharing rule that captures both the direct and Indirect effects of the agent's effort. When there are no risk decisions, optimality requires that for interior solutions, T T I / / w U (s(x),a) g (xia) W'(x-s(x)) a..sv - ' - B a ^ \\ _ i _ / U s(s(x),a) = X + j y j [ U s(s(x),a) + g(x|a) 1 * 47 where the subscripts on U and g denote differentiation with respect to aj, and the subscripts s denote differentiation with respect to s. The major implication of nonseparability of the u t i l i t y function is that the role of effort is explicit, as is interaction between effort and compensa-tion. It is s t i l l true that i f the agent i s risk neutral, then the f i r s t best solution is achievable by a sharing rule of the form x-k. Because the important distinguishing feature of effort decisions is their twofold effect on the u t i l i t y function, the general form of the u t i l -i t y function is used here. Letting r^ denote the risk decision for task i and r = (r^,...,r ), the principal's problem is''' Maximize / W(x-s(x))g(x|a,r) dx s(x),a,r subject to / U(s(x),a)g(x|a,r) dx > u :A -Tr— / U(s(x) ,a)g(x|a,r)dx = 0, j=l,...,n :u. j / U(s(x) ,a)g(x|a,r)dx = 0, j=l,...,m. : 3 . To the right of the constraints above are their associated multipliers. The interior portion of the optimal sharing rule is characterized by W'(x-s(x)) U a s ( 5 8 a ( 5 8 r . ( 5 It should be noted that there is an Implicit assumpti on that the r.»1 s do not satisfy first-order stochastic dominance, since otherwise the principal and the agent would agree on the choices of the r^'s and there would be no incentive problem with respect to the r^'s. Suppose next, as Weinberg (1975) did, that the gross margin generated by sales of product I is given by x i = P i ( 1 - r i ) Q i \" K i Q i ' w h e r e ^ = nominal selling price per unit of product i , r^ = discount (decimal) on product i , 48 = quantity (units) of product i sold, = variable nonselling cost per unit of product i , and M£ = Pj^l-r^) - = gross margin on product i . Weinberg (1975) sought to determine i f an equal-commission scheme is incen-tive compatible when there both risk and effort decisions. An agency theory analysis suggests that such a scheme is not Pareto optimal. Suppose Q± ~N( 9 i ( a i , r 1 ) , a i ( r i ) ) . Then x± ~ N(P 1(l-r i)-K i ) 8 i ( a 1 ,r±), ( P ^ l - r ^ - K ) 2cr^(r ) ) . Previous analysis indicates that i f both the princi-pal and the agent are risk averse with u t i l i t y functions in the HARA class, then the optimal sharing rule is in general not additively separable in and x 2« If the principal is risk neutral and the agent's u t i l i t y is ln s -V(a), then previous remarks concerning the optimality of a commission scheme in the normal distribution example with no risk decisions apply. 
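For the gross-margin specification quoted from Weinberg (1975) above, the distribution of x_i is simply a rescaling of the quantity distribution. The sketch below only restates that algebra; the price, cost, discount, and quantity moments are hypothetical numbers, and the assumed dependence of Q_i's moments on the discount r_i is illustrative.

```python
# Distribution of product i's gross margin implied by x_i = (P_i*(1 - r_i) - K_i) * Q_i
# with Q_i ~ N(theta_i(a_i, r_i), sigma_i^2(r_i)).  All numerical inputs are hypothetical.
def margin_distribution(P, K, r, q_mean, q_var):
    unit_margin = P * (1 - r) - K
    return unit_margin * q_mean, unit_margin ** 2 * q_var   # mean and variance of x_i

for r in (0.0, 0.05, 0.10):
    # Deeper discounts are assumed, for illustration, to raise expected quantity and its variance.
    q_mean, q_var = 100 * (1 + 2 * r), 400 * (1 + 4 * r)
    print(r, margin_distribution(P=50.0, K=30.0, r=r, q_mean=q_mean, q_var=q_var))
# Both the mean and the variance of the margin move with r, which is why the discount is a
# "risk" decision even though it costs the salesperson no effort.
```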
Demski and Sappington (1983) examined the situation in which i t is desired to motivate an individual to obtain and use information which is personally costly (in a pecuniary or nonpecuniary sense) for the individual to obtain. Their analysis may provide insights for the sales force manage-ment problem when the salesperson has the option or the a b i l i t y to observe private information before making risk decisions. 3.7 SUMMARY AND DISCUSSION This chapter derived optimal incentive schemes when the agent has sev-eral tasks over which to exert effort, and the principal and the agent have homogeneous beliefs about the outcome distribution. In the f i r s t best case, where there is no moral hazard problem, the major issue is risk sharing, and the results are similar in nature to the one-dimensional effort case. If one individual is risk neutral and the other is risk averse, then the risk neutral individual bears a l l the risk, receiving the uncertain outcome less a constant fee. If both individuals are risk averse, then the risk sharing 49 aspect is prominent; even i f the disaggregated information, x = (x^,...,x n), is observed, the sharing rule depends only on the sum of the x^'s. In the analysis of the second best case, where there is a moral hazard problem, the principal was assumed to be risk neutral in order to focus on motivational issues. As in the single-task case, the f i r s t best solution i s achievable when the agent is risk neutral. When the agent Is risk averse, the optimal sharing rule can be as simple as a salary plus commission, or can be more complicated, depending on the distribution of the outcomes and the agent's u t i l i t y function. In general, i t is much more d i f f i c u l t to determine when the sharing rule w i l l be increasing in each outcome, x^, than in the single-dimensional effort and output case. There are two reasons: the sign of each of the Lagrangian multipliers must be determined, and the question of multivariate stochastic dominance must be addressed. Each of these problems can be analyzed only in special cases. The analysis of the value of additional information is also more com-plicated than in the single-dimensional effort and output case. The applic-abi l i t y of the results of Amershi (1982) for the multidimensional effort case was discussed. The use of additional disaggregated information was demonstrated by means of examples. It was shown that in the case where a salary plus commission scheme is optimal, the commissions related to each task w i l l generally be unequal. The next question addressed was whether a manager should receive separ-ate rewards for the outcomes from the different tasks. It was shown that a strong form of independence (see (3.4.1)) is neither necessary nor s u f f i -cient for an optimal sharing rule to be additively separable in the out-comes. If the principal is risk neutral and the agent's u t i l i t y function is in the HARA class, then a necessary condition for additive separability of the optimal sharing is that the agent have a log u t i l i t y function. If the 50 principal and the agent are identically risk averse, with identical log u t i l i t y functions, then the optimal sharing rule is not additively separ-able. The remainder of this chapter focused on situations in which effort is additive, as when effort represents time devoted to different tasks. 
The optimal sharing rules in the f i r s t best and second best situations were examined under various assumptions about the means and distributions of the outcomes. In the additive effort case where there is no intrinsic d i s u t i l -ity for any particular task, i t is of interest to determine whether the principal can most efficiently induce a risk averse agent to allocate a l l effort to one task, or to diversify by allocating effort to each task. This section showed that the nature of the outcome distribution is an important factor in determining whether the optimal solution w i l l be boundary ( a l l effort devoted to one task) or interior (effort spread across tasks). The c r i t i c a l factor, however, appears to be the relationship between effort expended and the mean of the distribution. Conditions under which an opti-mal interior solution is unique were found, and i t was shown that i f an optimal interior solution is unique, then the optimal efforts for both tasks are equal, as are the Lagrangian multipliers u^. The additive effort results were applied to the sales force management problem. As remarked earier, simple commission schemes are rarely Pareto optimal; even when they are optimal, the commissions are generally not equal across products. However, i f the principal is risk neutral, the agent is risk averse, V(a) = V(Ea^), the x^'s given a^ are independent and identi-cally distributed, and a unique interior solution (a'J > 0, a*, > 0) is opti-mal, then the agent's compensation per unit of x^ is equal to the compensa-tion per unit of x 2. The multiple salesperson firm was briefly discussed, 51 as was the addition of risk decisions (not involving effort) by the sales-person. It has long been recognized that dysfunctional behavior on the part of managers can be induced by their focus on short-term personal goals rather than long-term company goals. Moreover, the company may unwittingly pres-sure managers to make decisions which w i l l increase short-term profits at the expense of long-term goals. One of the obvious aspects of a solution is to extend the performance evaluation period from, for example, one year to several years. A brief comparison of the allocation of effort problem and a multiperiod problem follows. Consider the situation in which the agent chooses one action a^ in each of n time periods, resulting in monetary outcomes x^ which are observed by both the principal and the agent at the end of period i . The agent's action in any period and the sharing rule for each period can then depend on the outcomes from previous periods. For ease of exposition, the two-period hor-izon w i l l be considered here. The principal's u t i l i t y for the two-period horizon is now W(x^-s^(x^),x 2~s 2(x^,x 2)), and the agent's u t i l i t y is U(s 1(x 1) , s 2 ( x 1 } x 2 ) ,a 1,a 2(x 1)) . Let g ^ ,x2 ,al ,a 2(x 1)) = h(x 2|a^,x^,a 2(x^))f(x^|a^). The principal's problem is then Maximize / J w ^ - s ^ x ^ ,x 2-s 2(x 1 , x 2 ) ) g ( x L , x 2 , a l . a ^ x ^ ) d X j d x 2 (3.7.1) i ' i v subject to /]b(s 1(x 1),a 1,s 2(x 1,x 2),a 2(x 1))g(x 1,x 2,a 1,a 2(x 1))dx 1dx 2 > u (3.7.2) and a^ and a2(«) maximize the left-hand side of (3.7.2). Let E 2 denote expectation with respect to h(.). The optimal sharing rules are characterized by 52 E 2 \\ f a i < x l l a l > < E 2 D B 1 ) a 1 ( E 2 V a2 + l ^ C ^ ) - f f - j j » f o r almost every xl, 2 8 l and W s 8 a ( 0 h a ( 0 S2 , . a l . . , % a2 U S2 * + h g( .) 
+ Wj(x]_) h ( t) > f o r almost every (x^, x 2), Two special cases are of interest. The f i r s t is the case in which the principal's and agent's u t i l i t i e s are additive over time, with discount fac-tors 3 and ot, respectively. Suppose the principal and the agent agree on the contract at the beginning of the two-period horizon and each individual is committed to the contract for the entire time horizon. Then the princi-pal's expected u t i l i t y is jW(x 1-s 1(x 1))f (x 1 |a 1)dx 1 + 3/JW( x2 - s2^ x1 » x 2 ^ ^ ( x 1 . x 2 t&i ,a 2( x 1))dx 1dx 2, where g(.) = h(x21 a^ ,xl ,a 2(x 1) ) f (x-j^ l a ^ . The agent's expected u t i l i t y is /b(s 1(x 1))f (x L |a 1)dx 1 + cx//U(s2(x1 ,x 2))g(x 1 ,x2 ,a L ,a 2(x 1))dx 1dx 2 - V ^ ) - a/v(a 2(x 1))f(x 1|a 1) dxj. The interior portions of the optimal sharing rules are characterized by woy'i^i)) 1 + f i ( x i l a i > n . U ' C s^x,)) = X + \"l fCxJa,) ( 3 ' 7 * 3 ) and s W ( x 2 ~ s 2 (Xj_ ,x2 ) ) g x (x x ,x2 ,ax ,a 2 (x L ) ) z = A + u, a U'(s 2(x 1,x 2)) i g(x 1,x 2,a 1,a 2(x 1)) g 2(x 1,x 2,a 1,a 2,(x 1)) + ^ X l > g ( x 1 , x 2 , a 1 , a 2 > ( x 1 ) ) • ( 3 * 7 ' 4 ) where the subscripts j on the distributions indicate partial derivatives with respect to a^ 53 Equation (3.7.A) is similar to the characterization for single-period sharing rules in the multidimensional effort problem. Thus, the multidimen-sional effort results described earlier are useful in extending the theory to a certain class of finite-horizon multiperiod problems. Lambert (1981) has examined the model above under the assumption that effort in one period has no effect on the outcome in any other period, and also examined problems which occur when the principal is committed to a two-period contract, but the agent can leave the firm after the f i r s t period. The second special case of interest is the case in which the princi-pal's and the agent's expected u t i l i t i e s depend only on the total return over the entire time horizon. In this case, the principal's u t i l i t y func-tion is W(x^ + %2 ~ s ( x ^ , X 2 ) ) , and the agent's u t i l i t y is ( ^ ( x ^ ^ ) , a l ' a 2 ^ * ^ * T n ^ s structure is also appropriate for the problem of sequential allocation of effort within one time period, where the time period is said to end when the agent receives his or her compensation. The sequential allocation aspect would arise because of the agent's opportunity to observe an outcome affected by the f i r s t effort choice before making any other effort choices. This situation is the focus of the next chapter. 54 CHAPTER 3 FOOTNOTES 1. Although there are technical problems connected with the use of the nor-mal distribution, i t is used here for il l u s t r a t i v e purposes because i t is the only distribution with a convenient representation for dependent random variables. Detailed calculations and results for the normal dis-tribution appear in Appendix 2. 2. A modified version of this result holds when the principal is risk averse. Differentiating the first-order condition characterizing the sharing rule with respect to x shows that , , , 3 f a W\"U\\ sign (s'(x)) = sign ( ^ — ~ 2~^ * 3 f a Thus, V -K- j— > 0 implies that s'(x) > 0, but the converse does not hold. ~~ 3. 
As Holmstrom (1979) points out, i f the production function x is given by x(a, 6), where 0 represents a random state of nature, then 3x/3a > 0 implies that the distribution of x satisfies the first-order stochastic dominance property (provided that changes in a have a nontrivial effect on the distribution). 4. Extending this and the other propositions which depend on the assumption of a square root u t i l i t y function to a more general class of u t i l i t y functions appears to be nontrivial. However, in the discrete-outcome example presented earlier, the result is not confined to only the square root u t i l i t y function. Hence, i t appears likely that this and the other results stated for the square root u t i l i t y function hold for a more gen-eral class of u t i l i t y functions. 5. Lai (1982) also independently applied agency theory to the problem of sales force management. Much of his analysis is for a special normal distribution and the class of power u t i l i t y functions. He did not ana-lyze the additive effort case. 6. Let p be the constant sales price of a product, and c be the constant noncompensation cost per unit of product. Further, let q be the random quantity sold as a result of the agent's effort. One question of inter-est is whether the agent's compensation should be based on, for example, sales (pq) or a \"contribution margin\" (pq-cq). It is easy to see that ! f a(q|a*) the optimal sharing rule is characterized by TTTT—/ w = A + u ——,—• . . . U (s( •)) f(q|a*) That i s , the optimal sharing rule does not depend expli c i t l y on p or c or p-c. 7. The formulation is presented in order to illustrate the structure of the problem. Technical problems with the properties of the functions to be maximized are not addressed. 55 CHAPTER 4 ONE-PERIOD SEQUENTIAL CHOICE In this chapter, the model is expanded to include decisions made at different times. The extension is to sequential decisions within one period, where a period is defined to end at the time of payment to the agent. The one-period sequential case i s an intermediate step between the allocation of effort case, in which both the efforts are exerted before the outcomes are known, and the two-period case, in which the f i r s t outcome i s observed and the f i r s t compensation Is paid before the second effort is exerted. The allocation and sequential situations can be depicted as follows: Allocation of effort: Principal chooses Agent exerts s ( x l > * 2 ) a l » A 2 One period sequential choices: Principal and agent observe xi,x 2> principal pays s(x^,x 2) to the agent. Principal chooses s(x 1,x 2) Agent exerts a l Agent observes x l Two-period sequential choices: Agent exerts a 2 ( 0 Agent observes X 2 ; principal observes x^ and X 2 and pays s(xi,x 2) to the agent Principal chooses s l ( x l ) a n d s 2 ^ x l » x 2 ) Agent exerts a l Principal and agent observe x^; principal pays s 1(x 1) the agent. Agent exerts a 2 ( 0 Principal and agent observe X 2 ; principal pays S 2 ( x ^ , X 2 ) to the agent. In each of the cases above, i f the principal and the agent observe addi-tional valuable post-decision information about the agent's efforts, then the sharing rules w i l l depend on this additional information. 56 A number of situations might be modeled in the one-period sequential framework. 
In the sales force management example, the agent might spend a certain amount of time selling products in one territory and observe the amount of the resultant sales there before beginning work in another t e r r i -tory. If there is correlation between x^ and x 2, then the agent obtains information from x^ which may be useful in the decision about a2« The addi-tional post-decision information that the principal may obtain about the agent's efforts might be comments obtained from personally Interviewing the agent's customers. Another one-period sequential decision setting might involve production decisions by an agent, where a^ i s the number of hours of production until some sales information is obtained. The agent would then choose the number of hours of production for the remainder of the period. In this situation, the additional post-decision information obtained by the principal might be the number of work hours recorded on the agent's time cards. More gener-a l l y , a manager in a decentralized organization w i l l not be monitored daily, but rather w i l l make many decisions during a given time period and w i l l be evaluated only periodically. The one-period sequential model can be thought of as the special case of the f u l l y general two-period model In which the periods are very short, so that the principal's and the agent's expected u t i l i t i e s depend only on their total return for the entire horizon. The one-period model can incor-porate some of the elements of the f u l l y general two-period model while pro-viding a somewhat simplified structure for analysis. For example, in both models, the f i r s t outcome, which is first-stage post-decision information, can be used as pre-decision information for the second effort choice. The agent's precommitment to stay for the entire time horizon is not a major 57 problem In the one-period model, since the agent i s not paid until a l l the required efforts have been exerted. In the f i r s t part of this chapter, the simplified structure in the one-period sequential model i s used to explore the impact of correlation of out-comes in f i r s t best and second best situations. Some comparisons are made to the allocation of effort results. The analysis w i l l focus on aspects which were not addressed in the pre-decision information literature or in Lambert's (1983) analysis of a finite-horizon multiperiod agency problem with independent outcomes. The second part of this chapter develops results for the one-period sequential problem that parallel two sets of results in the allocation of effort problem, namely additive separability of the shar-ing rule and diversification of effort across tasks when effort i s additive. The similarities to and differences from the allocation results are dis-cussed. Before proceeding to the analysis, a brief review of the existing results on pre-decision information w i l l be given and Lambert's (1983) results w i l l be summarized. Unless otherwise stated, the \"sequential effort problem\" w i l l refer to the one-period sequential effort problem. A limited amount of research has been devoted to one-period agency problems with pre-decision information. Baiman (1982, p. 192) comments as follows on the increased complexity with pre-decision information: The role and value of a pre-decision information system i s more complex than that of a post-decision information system. 
Expanding a post-decision Information system to report an addi-tional piece of information w i l l always result in at least a weak Pareto improvement, since the principal and agent can always agree to a payment schedule that ignores the additional informa-tion. However, expanding a pre-decision information system to report an additional piece of information may not result in even a weak Pareto improvement. The agent generally cannot commit himself to ignore the additional information, and therefore the optimal employment contract without the additional pre-decision information is no longer necessarily self-enforcing given the additional information. This i s true whether the additional pre-decision information i s privately reported or publicly reported. 58 Some of the research concerning pre-decision Information focuses on the following question: Given that the agent has private pre-decision informa-tion, what i s the value of public post-decision information systems? Holmstrom (1979) showed that an informativeness criterion (f(x,y,z;a) * g(x,y)h(x,z;a), where z is the pre-decision signal) is necessary for the post-decision information system which reports a public signal, y, in addi-tion to x, the outcome, to provide a Pareto improvement over the information system which reports only x. Christensen (1982) expanded Holmstrom's (1979) model by allowing the agent to communicate to the principal a message m about the private pre-decision signal. The agent i s assumed to select the message that maximizes his or her expected u t i l i t y . In Christensen's model, a generalization of Holmstrom's (1979) informativeness criterion i s neces-sary for the post-decision information system which reports y, in addition to x and m, to provide a Pareto improvement over the information system which reports only x and m. Here, the public post-decision signal i s a sig-nal about the agent's effort and the agent's private pre-decision informa-tion signal. Another direction of the research on pre-decision information has been the value of pre-decision information systems. There are both positive and negative effects of private pre-decision information for the agent. On one hand, the agent has more information before choosing an action, and hence should make \"better\" decisions. On the other hand, more information may reduce the risk the agent faces, and hence reduce the motivation for the agent to exert effort. Christensen (1981) provided an example which shows that the principal may be worse off when the agent has private pre-decision information (with or without communication of^a message), and also provided an example which shows that the principal may be better off when the agent has private pre-decision information and communicates a message to the prin-59 cipal. Christensen's examples illustrate the d i f f i c u l t y in obtaining a gen-eral preference ordering rule over pre-decision information systems. A third direction of research on private pre-decision information has been the value of communication of a message about the private information from the agent to the principal, given the existence of the private pre-de-cision information system. In the accounting context, the focus is on the value of communication of private information in the process of participa-tive budgeting. The major result in this area is that of Baiman and Evans (1983), who provided necessary and sufficient conditions for communication to result In a Pareto improvement. Baiman (1982, p. 
204) summarizes the result as follows: . . . If the agent's private pre-decision information is perfect, then communication has no value. Observing the firm's output in that case allows the principal to infer a l l he needs to know about the agent's private pre-decision information. However, i f the agent's private pre-decision information is imperfect, a necessary and sufficient condition for communication to be s t r i c t l y valuable is for the honest revelation of the agent's private pre-decision information to be s t r i c t l y valuable. That i s , i f any value can be achieved with the information being hon-estly revealed to a l l , then a s t r i c t l y positive part of that value can be achieved by giving the agent sole direct access to the information and letting him communicate in a manner that max-imizes his expected u t i l i t y . Lambert (1983) has examined a special case of the finite-horizon multi-period agency problem. He assumed that both the principal and the agent have u t i l i t y functions (and that the agent has a d i s u t i l i t y function) which are separable across time. He further assumed that the state variables are independently distributed across time, and that effort in one period does not influence the monetary outcome in any other period. Under these condi-tions, Lambert showed that the agent's compensation in a given period w i l l depend on the outcomes in previous periods as well as on the outcome in the present period. He further showed that the incentive problems associated with the agent's effort choices in each period are not eliminated. In the 60 notation of this chapter, the result can be stated as (I) > 0, and ( i i ) ^ ( x ^ > 0 for almost every x 1 (first-stage outcome). The remainder of this chapter analyzes the one-period sequential effort choice problem. The cooperative, or f i r s t best case is f i r s t considered, and the behavior of the agent's second-stage effort choice strategy is char-acterized. The second best case i s then analyzed. The optimal sharing rule is derived and discussed, as i s the behavior of the agent's second-stage choice strategy, with and without independence of the outcomes. It i s then shown that the optimal sharing rule w i l l not be additively separable in the outcomes, even under the conditions which were sufficient for such a result in the effort allocation problem. Finally, the special case of additive effort i s analyzed, and the question of the desirability of diversification of the agent's efforts across tasks i s examined. The result i s related to the information content of the outcome about the agent's effort. 4.1. FIRST BEST In the f i r s t best case, the principal's problem i s : Maximize / / W(x-s(x^ ,x2) ) (x^ ,x2 ,a2( •) ) dx2dx^ s( •) ,a 1,a 2( •) subject to / / {U(s(x 1,x 2)) - V(a t,a 2( •))}•( •)dx 2dx 1 > u, where ,x21 a^ ,a 2( •) ) = f (x^ ^ | a^)g(x 2\\x^ ,a^ ,a 2( •) ) and a 2 ( 0 indicates that the agent's second-stage effort i s in general not a constant, but rather can depend on any information available at the time of choice. Letting X be the multiplier for the agent's expected u t i l i t y constraint and differentiating the Hamiltonian with respect to s( •) for every (x^,x 2) yields W'(x-s( X l,x 2)) U'(s( X l,x 2)) = X 61 for almost every ( x 1 , x 2 ) . This implies that i f one person i s risk neutral and the other i s risk averse, then the risk neutral person w i l l bear the risk (see Appendix 4). 
In this scenario, there are no signals on which the choice of a1 can be based. Whether or not a2 is a function of x1 depends on the risk attitudes of the individuals and the joint distribution φ(x1,x2|a1,a2(·)). If at least one of the individuals is risk neutral and φ(x1,x2|a1,a2(·)) = f(x1|a1)g(x2|a2(·)), then the optimal a2(·) is independent of x1. In this case, the risk neutral person essentially owns the output of the firm, and thus bears all the risk associated with the uncertainty of x1. Furthermore, x1 conveys no information about x2.

If both of the individuals are risk averse, or if φ(·) is the more general f(x1|a1)g(x2|x1,a1,a2(·)), then the optimal a2(·) will generally depend on x1. In the first case, the change from the situation where one individual is risk neutral occurs because each risk averse individual's marginal utility depends on the first outcome, since it determines where on his or her utility curve the individual is; a risk neutral individual's marginal utility, on the other hand, would be the same no matter what the value of x1 is. This first effect of x1 can be termed the "wealth" or "risk aversion" effect. In the second case, if x1 and x2 are dependent, then expectations for x2 may change according to the first outcome, x1. The principal may therefore wish to induce the agent to choose a2(·) as an increasing or decreasing function of x1, depending on the risk attitudes of the principal and the agent, the agent's disutility for effort, and the nature of the correlation between x1 and x2. This second effect of x1 can be termed the "information" effect. The information effect of x1 is made more precise in the proposition below.

Proposition 4.1.1. Suppose that in the first best case, the principal is risk neutral, the agent is risk averse, and φ(·) = f(x1|a1)g(x2|x1,a1,a2(·)). In this case, a2(·) will depend on x1. Let M1(·) denote the mean of x1 given a1, and let M2(x1,a1,a2(·)) denote the conditional mean of x2 with respect to g(·). Let a subscript j on M2 denote partial differentiation of M2 with respect to the j-th argument of M2(x1,a1,a2(·)). Then

a2*′(x1) = −M2,31 / [M2,33 − λ ∂²V(·)/∂a2²].

For example, suppose M2(x1,a1,a2(·)) = x1 a2(·)/a1 and V(·) = (a1 + a2)². Then a2*′(x1) = 1/(2a1λ) > 0. In this case, a2*(x1) increases linearly in x1. The effect of the nature of the correlation between x1 and x2 is captured in the derivatives of M2(·), and the effect of the disutility function is captured in the ∂²V/∂a2² term. Note that a2*(x1) does not depend on the agent's utility function for wealth. This is because the risk averse agent receives a constant wage in the first best case, and hence the agent's utility for the wage is constant. Note further that if M2 depends only on a2(·), then a2* is constant.
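The example can be checked mechanically. The sketch below is illustrative only: it treats the pointwise first-best condition ∂M2/∂a2 = λ ∂V/∂a2 as the condition determining a2(x1) for the stated M2 and V, and uses sympy simply as a symbolic calculator.

```python
# Symbolic check of the Proposition 4.1.1 example (illustrative sketch only).
import sympy as sp

x1, a1, a2, lam = sp.symbols('x1 a1 a2 lam', positive=True)

M2 = x1 * a2 / a1          # conditional mean of x2: M2(x1, a1, a2) = x1*a2/a1
V = (a1 + a2) ** 2         # agent's disutility of effort

# Pointwise first-best condition for a2(x1): dM2/da2 = lam * dV/da2.
a2_star = sp.solve(sp.Eq(sp.diff(M2, a2), lam * sp.diff(V, a2)), a2)[0]

print(sp.simplify(sp.diff(a2_star, x1)))   # 1/(2*a1*lam), matching the text
```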
Proposition 4.1.1 and the discussion preceding it focused on the second effort choice's dependence on x1, the first outcome. The second effort choice, a2(·), might seem to also depend on the first effort choice, a1. However, the agent chooses the effort a1 and the effort strategy a2(·) simultaneously at the beginning of the time horizon. The second effort choice is therefore not viewed as a function of a1, although there is implicit recognition that a1 and a2(·) are chosen jointly and therefore influence one another. However, since the first outcome is unknown at the beginning of the time horizon, the second effort choice can potentially depend on the first outcome.

4.2 SECOND BEST

In this section, the general formulation of the one-period sequential model is first presented. Subsequently, the two extremes of independent outcomes and perfectly correlated outcomes are examined. In the first case, knowledge of x1 reveals no information about x2, whereas in the second case, x1 reveals perfect information about x2. The behavior of the agent's second effort strategy is illustrated in the two extreme cases, and also for the intermediate case of imperfectly correlated outcomes.

As before, in order to focus on motivational issues, it will be assumed that the principal is risk neutral and the agent is risk averse. The principal's problem is:

Maximize over s(·), a1, a2(·):   ∫∫ (x − s(x1,x2)) φ(x1,x2|a1,a2(·)) dx2 dx1

subject to

∫ [∫ U(s(x1,x2)) g(x2|x1,a1,a2(·)) dx2 − V(a1,a2(·))] f(x1|a1) dx1 ≥ ū,

∫∫ U(s(·)) [g_a1 f + g f_a1] dx2 dx1 − ∫ (V_a1 f + V f_a1) dx1 = 0,

{∫ U(s(·)) g_a2(·) dx2 − V_a2(·)} f(x1|a1) = 0   for almost every x1,

where φ(x1,x2|a1,a2(·)) = f(x1|a1) g(x2|x1,a1,a2(·)) and differentiation with respect to a2 is pointwise for each x1. The interior portion of the optimal sharing rule is characterized by

1/U′(s(x)) = λ + μ1 φ_a1/φ + μ2(x1) φ_a2/φ   for almost every (x1,x2),

where λ, μ1, and μ2(x1) are the multipliers for the three constraints above. Here, φ_a1/φ = f_a1/f + g_a1/g and φ_a2/φ = g_a2/g. If a1 does not influence x2, then φ_a1/φ = f_a1/f.

The characterization of the interior portion of the optimal sharing rule in the sequential effort case is similar to that in the allocation of effort case, except that here μ2 and a2* may depend on x1. In general, a2*(·) depends on x1. However, if the agent is risk neutral and φ(x1,x2|a1,a2(·)) = f(x1|a1)g(x2|a2(·)), then a2*(·) does not depend on x1. If the xi's are conditionally correlated, then a2*(·) will depend on x1 even if the agent is risk neutral. These results are direct consequences of the achievability of the first best solution in the second best case if the agent is risk neutral (see Shavell (1979)).
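To fix ideas, a small numerical sketch of this characterization may help. It is illustrative only: it assumes square-root utility U(s) = 2√s (so that 1/U′(s) = √s and the interior payment is the squared bracket), exponential outcome densities whose score with respect to the mean is (x − mean)/mean², and hypothetical values for the multipliers and the induced efforts.

```python
# Illustrative sketch of the interior sharing rule
#   1/U'(s) = lam + mu1 * f_a1/f + mu2(x1) * g_a2/g
# with U(s) = 2*sqrt(s), so s = (bracket)**2 wherever the bracket is positive.
# Exponential densities with means a1 and a2(x1) are assumed; every number
# below is hypothetical and chosen only for illustration.

def score_exp(x, mean):
    """d/d(mean) log f(x|mean) for an exponential density with the given mean."""
    return (x - mean) / mean**2

def payment(x1, x2, a1_star, a2_star, lam, mu1, mu2):
    bracket = (lam
               + mu1 * score_exp(x1, a1_star)
               + mu2(x1) * score_exp(x2, a2_star(x1)))
    return bracket**2 if bracket > 0 else 0.0   # interior portion only

a2_star = lambda x1: 2.0 - 0.1 * x1   # an induced second-stage effort strategy
mu2 = lambda x1: 0.5                  # in general mu2 varies with x1

for x2 in (1.0, 2.0, 3.0):
    print(x2, payment(1.0, x2, a1_star=1.5, a2_star=a2_star,
                      lam=1.0, mu1=0.4, mu2=mu2))
# The payment increases in x2, reflecting ds/dx2 > 0 in the interior region.
```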
The proposition below describes aspects of the second stage problem for a particular utility function for the agent, and for several commonly used distributions for the independent outcomes.

Proposition 4.2.1. Suppose that in the second best case, the principal is risk neutral, and the agent's utility function for wealth is U(s) = 2√s. Suppose also that φ(·) = f(x1|a1)g(x2|a2(·)), where f(·) and g(·) are in Q1, the class consisting of the exponential, gamma, and Poisson distributions represented in Appendix 1. Define a1 and a2 so that the mean of f(x1|a1) is a1 and the mean of g(x2|a2) is a2. Then, assuming that the optimal efforts are nonzero,

(i) if ∂V/∂a2 is positive at a*, then (a) μ2(x1) is positive, and (b) a sufficient condition for the agent's expected second stage net utility to be increasing in x1 is that a2*(·) be a decreasing function of x1;

(ii) if μ2 is positive, then (a) the agent's expected utility for the second stage pecuniary return, E{U(s(x))|x1}, is an increasing function of x1, and (b) the conditions V2 > 0, V22 > 0, V222 ≥ 0, and V122 ≥ 0 are jointly sufficient for a2*(·) to be a decreasing function of x1. Here, a subscript j on V represents partial differentiation with respect to the j-th effort variable.

The condition that ∂V/∂a2 be positive is a standard one, and is nonrestrictive. A number of general forms of disutility functions satisfy the conditions in (ii)(b). The following, for example, satisfy the conditions:

V(a1,a2) = h(a1) + a2^m, where m > 1 and a2 > 0;
V(a1,a2) = a1²a2², where a1 > 0 and a2 > 0; and
V(a1,a2) = h(c1 a1 + c2 a2), where h′ > 0, h″ > 0, h‴ > 0, and the constants c1 and c2 are positive.

If μ2 is zero, so that there is no incentive problem, then a2 does not depend on x1 (see Appendix 4). This is consistent with the first best results with a risk neutral principal, a risk averse agent, and independence of the outcomes. There is neither an incentive problem nor an information effect to induce the dependence of a2 on x1. In general, though, a2 will depend on x1.

Proposition 4.2.1 states that in some particular settings, the optimal second stage effort will decrease as the first outcome increases. Recall that x1 determines a point on the utility curve for the agent before the second stage effort is chosen. Because the agent's marginal utility for wealth is a decreasing function and the agent's marginal disutility for effort is an increasing function, it is more costly for the principal to induce a given level of a2, the higher x1 is. The result that a2 is decreasing in x1 should thus hold for other concave utility functions for wealth, coupled with convex disutility functions.

Proposition 4.2.1 also provides conditions under which the agent's second stage expected utility will increase as the outcome increases. Under the given conditions, E[U(s(x))|x1] is increasing in x1, and −V(a1,a2(x1)) is increasing in x1 because a2 is decreasing in x1. Thus, the agent's expected second stage net utility is increasing in x1.

The independence of x1 and x2 in Proposition 4.2.1 means that there is no information effect of x1. If x1 and x2 are correlated, then the behavior of a2*(·) would depend additionally on the nature of the correlation. In order to examine the information effect of x1, the extreme case of perfect correlation of the outcomes will next be analyzed. When the outcomes are perfectly correlated, a joint density for x1 and x2 does not exist.
Since the lack of a joint density precludes using the previous analysis directly, a modified approach must be taken in order to examine the nature of the sharing rule and the agent's second-stage effort choice when the out-comes are perfectly correlated. Let x^ = x^(0,a^), where 0 i s an uncertain state that influences both the outcomes. It w i l l be assumed that for any fixed a^, x^ can be Inverted to obtain 0 = 0(x^,a^), The principal's and the agent's common beliefs about the outcomes w i l l be expressed as <|>(x^ ,x21 a^ ,a 2( •) ) = f(x^|a^) i f X2 = x 2 ( 9 » a 2 ^ x l ^ a n d 8 = ^ x ^ 3 ^ otherwise, <{>(») = 0. In order to describe the sharing rule, let a^ be the agent's f i r s t -stage effort choice that i s induced by the sharing rule, and let a^(x^) be the agent's second-stage effort strategy that i s induced i f x^ i s observed and i t i s assumed that a^ = a*. Because of the perfect correlation between 67 x^ and x 2, the sharing rule s(x^,x 2) can be viewed as being of the following form: s(x^,x 2) = s(x^) i f x 2 = x 2 (9,a*(x^)) and 9 = 9(x^,a*); otherwise, s( •) i s a penalty wage which is possibly negative. The sharing rule can be viewed as being dichotomous with respect to x 2 and varying continuously only with x^. Alternatively, the sharing rule can be viewed as being a function of the total output, x^ + x 2, subject to the condition that the observed x 2 is in agreement with the observed value of x^ and the inferred value of 9. In either view of the sharing rule, lack of agreement between the observed values of x 2 and x^ i s taken as evidence of shirking; accordingly, a penalty is imposed in such situations. If the pen-alty i s sufficiently severe, the penalty need never be imposed, since the agent w i l l choose to avoid the penalty by choosing a*(x^). Determination of the optimal sharing rule can hence be confined to determination of the opti-mal function s(x^); furthermore, no f i r s t order condition i s required i n order to induce a*,(x^), as long as a^ is properly induced. The principal's problem can therefore be written as follows: Maximize / + x 2( 9 ^ , 3 ^ .a^x^ ) - sCx^ )f (xJa^dXj^ s(x 1),a 1,a 2(x 1) subject to /[U(s( X l)) - V(a 1,a 2(x 1))]f(x 1|a 1)dx 1 > u /[U(s( X l)) - V ( a i , a 2 ( X l ) ) ] f a (x 1|a 1)dx 1 - /Va ( a 1 , a 2 ( x 1 ) ) f ( x 1 | a 1 ) d x 1 = 0. In order to determine the f i r s t order conditions, let X and u be the Lagrangian multipliers for the f i r s t and second constraints, respectively, and form the Hamiltonian H in the usual way. Differentiating H with respect to s(») for every x^ yields W'( X l + x 2 ( 8 ( x 1 , a 1 ) , a 2 ( x 1 ) ) - s ^ ) ) ^ ^ l ' V U'(s( X l)) = X + U f ( X l | a i ) ' 68 wh i ch i s o f t he u s u a l f o r m . D i f f e r e n t i a t i n g H w i t h r e s p e c t t o a-^ y i e l d s Jw' ( 0 f ( • )dx 1 + /W( - ) f a ( O d X j + u/{[U(0 - V ( 0 ] f ( 0 - 2V ( O f ( 0 \" V ( . ) f ( 0 } d x . = 0. a l a l a l a l 1 F i n a l l y , d i f f e r e n t i a t i n g H w i t h r e s p e c t t o a 2 f o r e v e r y x^ y i e l d s 3x ? w (•) -~ f ( •) - xv ( . ) f ( •) - u[v a ( O f ( 0 + v ( O f ( O ] = 0 . 
<«a2 a 2 a 2 a1 a^2 I f the p r i n c i p a l i s r i s k n e u t r a l , a s i s commonly assumed i n o r d e r t o f o c u s on m o t i v a t i o n a l r a t h e r t han r i s k - s h a r i n g i s s u e s , t hen the f i r s t o r d e r c o n d i t i o n s above r e d u c e t o f a ( x L | a i ) U ' ^ ) ) ~ A + v f'xjap ' <4'2-1> / 1PT Ia7 f ( x l l a l ) d x l + / W ( - ) f a i ( x l l a l ) d x l + li/{tU(s(x1)) - V ( a 1 , a 2 ( x 1 ) ) ] f ( x 1 | a 1 ) d x 1 - 2 V a ( a i , a 2 ( X l ) ) f a (x1|a1) - V g & ( a x .a^ x^ ) f (xj_ | a^ jd^ = 0 , ( 4 . 2 . 2 ) and ^ f ( x l l a l > \" Wa2(a1,a2(x1))f(x1|a1) - ^ v a 2 ( a i » a 2 ( x i ) > f a 1 ( x l l a l ) + V a i a 2 ( a l ' a 2 ( x l ) ) f ( x l l a l ) ] = ° * ( 4 - 2 - 3 ) D i v i d i n g ( 4 . 2 . 3 ) by f(x^ |a^ ) and r e a r r a n g i n g y i e l d s a. f ( x , | a . ) 3x 2 a^ l 1 1' *2 S u b s t i t u t i n g ( 4 . 2 . 1 ) i n t o ( 4 . 2 . 4 ) y i e l d s = T7T7-F—vT V ( a . , a , ( x ) ) + uV (a a (x ) ) . ( 4 . 2 . 5 ) 3 a 2 U ' ( s ( x ^ ) ) a 2 1 2 1 l a 2 1 2 1 69 It i s easily seen that i f (4.2.4) i s to hold for almost every X p then a 2( •) must in general vary with x-^ . The \"wealth\" and \"information\" effects of x-j^ described in the f i r s t best analysis can be seen in (4.2.4). The wealth effect of x^ results from the interaction of the agent's marginal u t i l i t y for wealth and marginal d i s u t i l i t y for effort. The information effect of x^ refers to the information that x^ provides about x 2« In the perfect correlation case, the state 9 is Inferred from x^ and a^, and x 2 is hence a deterministic function of a 2 from both the principal's and the agent's perspectives. The information effect of x^ is therefore captured in the 9x 2/9a 2 term in (4.2.5). The behavior of a*, as x^ varies can be determined by differentiating (4.2.4) with respect to x^ to obtain 3 / a l > _3_ ^ 2 J59 P a- 3x. Q f ; 39 ^ 3 a / 3x. 2 v V 92 f (4.2.6) — - (X + u - ^ ) V - uV 3a 2 1 a2 a2 a l a 2 a 2 2 2 When x 2 i s linear in a 2, the 3 x 2/3a 2 term in the denominator is zero. Two special cases of interest are (i) x^ = 9 + a^, where 9 Is purely noise, and ( i i ) x^ = 9a^, where 9 reveals information about the production technology. In case ( i ) , the marginal output per unit of effort i s one, regardless of the value of 9. In case ( i i ) , however, the marginal output per unit of effort i s 9. 2 2 To i l l u s t r a t e the results, suppose that V(«) = a^a,,. For case (i) , 2 2 2 assume that 9 \" N(0,cr). Then x^ \" N(a^,a ) and f& /f = ( X j - a ^ ) a . There-2 2 fore, the numerator of (4.2.6) is u(2a^a 2)/a and the denominator is x l ~ a l 2 x l ~ a l -(X + u — 2 — ) ( 2 a ^ ) - 4a^u. The term (X + u — 2 — ) * s positive by the a a f i r s t order condition (4.2.1), and the effort levels are assumed to be posi-70 tive. Therefore, a*/(x^) < 0, provided that y > 0. This can also be seen by solving for a 2(x^) directly from (4.2.4) to obtain X \"3. a 2 ( X l ) = [2a 2(X + y J^ i) + 4ua 1]\" 1. a In this case, the sign of a*,' i s the same as in the independent outcome sit-uation described in Proposition 4.2.1, where there was no information about x 2 to be gained from x^. The case (i) result here can thus be interpreted as indicating that the wealth effect of x^ dominates any information effect that exists through perfect correlation of the outcomes . For case ( i i ) , assume f i r s t that 0 ~ exp(l). Th n x^ ~ exp(a^) and 2 f f l /f = ( X j - a ^ ) / a ^ . 
Equation (4.2.4) becomes x l ~ a l 2 9 = (X + y — j — ^ 2 a l a 2 + ^ t j a i a 2 * a l Sub stituting 0 = x^/a^ and rearranging results i n x l a*2 \" — x 1-a 1 2a^[ a i(X + y - 4 - ^ ) + 2y] a l Therefore, x r a i a^X + y 2—) + 2y - x 1(y/a 1) a 2 , ( x i > = 7 T t i . a i 2 a l {aL(X + y ^ - ^ ) + 2y}2 a l The numerator of a*,'(x^) reduces to (a^X - y + 2y), which is positive x l \" a l (assuming y > 0) because a^(X + y — j — ) > 0 for x^ J> 0, and for x^ = 0 in « ! particular. Thus, a*/(x^) > 0 in this case. A similar analysis can be done for the normal distribution example used in case ( i ) , with the result that 71 a*,'(x^) > 0. The sign of a*'(x^) would remain the same in cases (i) and ( i i ) for a wide variety of reasonable d i s u t i l i t y functions. For the normal distribution example, the only difference in the expres-9 ^X2 96 sion for ai'(x 1) i s the — (——) — — term. In case ( i ) , i t is zero, and in z L 9 0 a2 case ( i i ) , i t i s 1/a-^ . Although 9 is purely noise in case ( i ) , risk i s imposed on the agent for motivational purposes in order to induce a Pareto optimal choice of a^. The effort strategy a 2(x^) i s primarily determined by the wealth effect of x-^ , leading to a decreasing function of x^ just as in the case when the outcomes were assumed to be independent (see Proposition 4.2.1). In case ( i i ) , where the marginal output per unit of effort is 9, the agent receives perfect information about the production technology that was not relevant in case ( i ) . The information effect of x^ overrides the wealth effect in the case ( i i ) examples above, so that a*, i s now an increas-ing function of x^. To i l l u s t r a t e the second best results, suppose that the principal i s risk neutral and the agent's u t i l i t y for wealth i s 2/s. Suppose further that x^|a^ and x 2|a 2 are Independent, f(») is exponential with mean a^, and g(») i s exponential with mean a 2(x^). Then the interior portion of the 2 optimal sharing rule i s characterized by s(x^,x 2) = P (x), where x 2-a*(x 1) i P(x) = \\ + M ^ - ^ ) + P 2(x 1)( \" { * ). (4.2.7) a* a* ( x x ) P(x) must be s t r i c t l y positive in order to satisfy the f i r s t order condition 1/U' = P(x). In the proof of Proposition 4.2.1, i t i s shown that P 2( X l) = (9V(a*)/9a 2)a* 2( X l)/2 , (4.2.8) which i s positive under the usual assumption that the agent's d i s u t i l i t y function is increasing In the second effort. Furthermore, i t i s easily seen 72 As Lambert (1983) notes, P(x) can be viewed as X(x^) + u 2(x^)( 2 ^ ' that u 2'(x^) < 0 under the assumptions in Proposition 4.2.1, part ( i i ) . Intuitively, the higher the f i r s t outcome i s , the less concerned the risk neutral principal i s about motivating a high choice of a 2- This i s because the higher the outcome x^ i s , the costlier i t becomes to induce a given level of a2« As remarked earlier, the principal induces a strategy a 2(xi) which is decreasing In xj_. At the time of the second effort choice, the f i r s t outcome x^ i s known. x 2-a*,( X l) which is as i t would appear in a one-stage, one-period agency problem, given that x^ i s fixed. Thus, i t is not totally surprising that, as i n the one-stage, one-period problem, 3s/3x2 is s t r i c t l y positive, since P(x) and u2(x^) are s t r i c t l y postive. The behavior of s( •) as x^ varies i s consider-ably more complicated. Substituting (4.2.8) into (4.2.7) and differentia-ting shows that f-= 2P(x) A, + ( f L ! ^ _ a-V ( . ) / 2 ] . 
l a * 2 2 2 Under the assumptions in Proposition 4.2.1, part ( i i ) , the f i r s t and third terms i n the brackets are positive. The condition that x 2 < a*,(x^) is suf-ficient for the sharing rule to be increasing in the f i r s t outcome. How-ever, i t is clearly possible that 3s/3x^ is increasing i n x^ even i f x 2 > a * , ^ ) . An alternative approach to an analysis of the sharing rule i s insight-f u l . Recall that the f i r s t order conditions require that f a (x 1|a*) g a (x 2|a*( X l)) U^i)T = R ( X ) - X + h f U j a * ) + \"2 g O c . l a * ^ ) ) * 73 Taking the conditional expectation of R(x) with respect to g(x 2 | a^x^) ) f a ( , ) 1 results in the expression X + ^ ^ . As in the one-stage, one-period model, i f > 0 annd f( •) satisfies the monotone likelihood ratio property, then (X + ^ ^ ) is increasing i n x^. Thus, the agent faces a sharing rule with similar characterization for each stage, looking only one step ahead. That i s , at each stage, the shading rule i s characterized by the condition that 1/U' = XA + uJh /h. u u a In order to illustrate the behavior of a 2(x^) when x J^a^ and x 2|a 2 are imperfectly correlated, suppose that g(x 2|x^,a 2(x^)) is exponential with mean M2(x^) = x^a 2(x^). Since the exponential distribution i s a one-parame-ter distribution, we may write g(x 2|x^,a 2(x^)) = g(x 2|M 2(x^)), and Proposi-2 2 tion 4.2.1 can be applied. For concreteness, suppose that V(a^,a2) = a ^ a 2 ' 2 2 Then V 2 = 2a^a 2 > 0, V 2 2 = 2a 1 > 0, V 2 2 2 = 0, and V 1 2 2 = h&1 > 0, so that the conditions in Proposition 4.2.1, part (ii)(b) are satisfied. Substitu-ting a 2(x^) = M2(x^)/x^ into the expression for V 2 yields V 2 = 2a 2M 2(x 1)/x^, which is s t i l l positive. Therefore, i f is positive, then M^ Cx^ ) is decreasing in x^. That i s , x^a^x^) is decreasing in xj_. If b is positive, then i t is easily seen that a 2(x^) i s decreasing in x^, as when b is zero (the \"independent\" case). In this situation, as when there is perfect cor-relation with the normal distribution in case ( i ) , the wealth effect of x± is dominant. Recall that case ( i i ) of the perfect correlation analysis assumed that x t = Qa.^, so that x 2 = x 1 a 2 ( x 1 ) / a 1 . This seems similar to the imperfect correlation example in which M2(x^) = x^a 2(x^). However, the signs of a*,'(x^) are opposite in these perfect and imperfect correlation cases. This can be interpreted as follows: in the presence of information related to the production technology, the wealth effect of x^ is dominant i f 74 the correlation i s imperfect; the information effect of x^ i s dominant only i f the correlation i s perfect. If b i s negative, then the behavior of a 2(x^) is potentially much more complex. The condition that M2(x^) is decreasing in x^ is equivalent to the condition that bx^ 1 a 2 ( x 1 ) + x^a^x^) < 0. Since b < 0, the f i r s t term is negative; a 2(xp may thus be of any sign. It could be, for example, that because of the interactions of the wealth and information effects of x^, a 2(x^) is increasing for low values of x^ and decreasing for high values of x r This concludes the analysis of the effect of the information x^ on the agent's second effort strategy. The next two sections examine two aspects which were of interest in the allocation problem, namely additive separabil-it y of the sharing rule, and additive effort. 4.3. ADDITIVE SEPARABILITY OF THE SHARING RULE In this section, the question of whether or not to reward the agent for each outcome separately is examined. 
For example, suppose a salesperson exerts effort selling a product in one territory, observes the resultant sales, and then devotes effort to selling the same product or a different product in another territory. Should the firm compensate the salesperson with a different reward function for each outcome, as if he or she were two separate salespeople? That is, should the sharing rule be additively separable in the outcomes?

It was shown in Section 3.4 that in the effort allocation problem, if the principal is risk neutral and the agent is risk averse with a HARA-class utility for wealth, then jointly sufficient conditions for the optimal sharing rule to be additively separable in x1 and x2 are (i) the agent has a log utility function for wealth and (ii) the outcomes are conditionally independent (see equation (3.3.2)). In the one-period sequential effort problem, the optimal sharing rule will not be additively separable in x1 and x2, even under conditions (i) and (ii) above. This is easily seen from the characterization of the interior portion of the optimal sharing rule:

s(x1,x2) = (1/C)[(R(x))^C − D2]   if C ≠ 0,
s(x1,x2) = D2 ln R(x)             if C = 0,

where the agent's risk aversion function is −U″(s)/U′(s) = 1/(Cs + D2),

R(x) = λ + μ1 [f_a1(x1|a1)/f(x1|a1) + g_a1(x2|x1,a1,a2(·))/g(x2|x1,a1,a2(·))] + μ2(x1) g_a2(·)/g(·),

and differentiation with respect to a2 is pointwise for every x1. Thus, even if U(s) = ln s (i.e., C = 1) and g(x2|x1,a1,a2(x1)) = g(x2|a2(x1)), the optimal sharing rule will not be additively separable in x1 and x2 because of the μ2(x1) g_a2/g term, unless μ2(x1) = k, a constant, and g_a2/g is additively separable in x1 and x2. Lambert (1981, p. 90) has shown in a similar situation that μ2(x1) > 0 for almost every x1. Since μ2(x1) ≠ 0 for almost every x1, and it is unlikely that μ2(x1) = k (which would require that ∂E(x − s(·))/∂a2 = k ∂²E(U(s(·)) − V(·))/∂a2² for almost every x1), the optimal sharing rule will almost certainly not be additively separable in x1 and x2.

A corollary of this result is that if the principal is risk neutral and the agent's utility for wealth is in the HARA class, then the optimal sharing rule will not be linear. Thus, the simple commission schemes often used in practice are not the most efficient way to motivate a risk averse agent when sequential effort decisions are involved.

The presence of the additional decision information, x1, for the agent, which is the only difference between the sequential effort problem and the effort allocation problem, introduces more complexity into the sharing rule in two ways: (i) the multiplier μ2(·) depends on x1, and (ii) because a2(·) depends on x1, the distribution of x2 given a2(·) depends on x1, even if x1|a1 and x2|a2 are statistically independent. The combination of these two features precludes an additively separable sharing rule.

Note that a2(·) depends on x1 even if x1|a1 and x2|a2 are statistically independent. Hence, a2's dependence on x1 is not due to information that x1 provides about the likelihood of x2. Rather, the dependence is due to a wealth effect (x1 influences the agent's position on his or her utility curve before the second effort is chosen) which the principal can use to efficiently motivate the agent.
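The role of the μ2(x1) g_a2/g term can be seen directly. The sketch below is illustrative only: it takes log utility (so the interior payment equals R(x) itself), assumes exponential outcome densities that are conditionally independent given the efforts, and leaves μ2(x1) and a2(x1) as unspecified functions of x1. A nonzero cross-partial ∂²s/∂x1∂x2 confirms that the rule cannot be written as s1(x1) + s2(x2).

```python
# Illustrative check (hypothetical functional forms) that the sequential-effort
# sharing rule is not additively separable in (x1, x2) under log utility.
import sympy as sp

x1, x2, a1, lam, mu1 = sp.symbols('x1 x2 a1 lam mu1', positive=True)
a2 = sp.Function('a2')(x1)     # induced second-stage effort, a function of x1
mu2 = sp.Function('mu2')(x1)   # second-stage multiplier, a function of x1

# Log utility: 1/U'(s) = s, so the interior payment is R(x) itself; the
# exponential scores are (x - mean)/mean**2.
s = lam + mu1 * (x1 - a1) / a1**2 + mu2 * (x2 - a2) / a2**2

print(sp.simplify(sp.diff(s, x1, x2)))
# -> the x1-derivative of mu2(x1)/a2(x1)**2, which is nonzero unless that ratio
#    happens to be constant in x1; additive separability would require zero.
```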
Recall that in the first best case, there is no motivational problem, and therefore the optimal a2 does not depend on x1 if the agent is risk averse, the principal is risk neutral, and x1|a1 and x2|a2 are statistically independent. If, on the other hand, x1|a1 and x2|a2 are dependent, then a2 depends on x1 for the additional reason that x1 provides information about the likelihood of x2. This is true in both the first best and second best cases.

The multiplier μ2(x1) further complicates the sharing rule. Intuitively, it is a measure of the cost to the principal of the motivational problem for a2. The result that μ2(x1) > 0 for all x1 means that no matter what the first period outcome is, the principal will not find it optimal to induce as high an effort level, a2, as he or she could have if there were no motivational problem.

4.4 ADDITIVE EFFORT

In this section, the additive effort situation described in Section 3.5 is examined when sequential choice is allowed. The principal is assumed to be risk neutral and the agent is assumed to be risk averse. The agent is further assumed to have no intrinsic disutility for any particular task, but rather is assumed to have disutility only for the total effort expended. The agent's disutility is thus represented as V(a1 + a2(·)).

In first best situations, if x1|a1 and x2|a2 are independent, the principal is risk neutral, and the agent is risk averse, then the optimal a2(·) does not depend on x1 in the sequential effort case. Therefore, the first best results for the allocation of effort problem still hold for the sequential effort problem. In particular, if the means are linear in effort, that is, the means are given by ka_i, then only the sum of the efforts is of importance to the principal and the agent. If the means are given by k_i a_i, where k_i ≠ k_j for i ≠ j, then all the effort should be put into the task with the largest return per unit of effort. For more general unequal mean functions, the optimal solution will involve nonzero efforts devoted to all tasks. If the mean functions are identical nonlinear strictly increasing functions, then the optimal efforts are equal.

The second best case is quite different because of the dependence of a2(·) on x1. Recall that in the allocation problem, assuming an interior solution, the constraints require that

∂EU(s(x))/∂a1 = ∂EU(s(x))/∂a2,

because each of the marginal expected utilities must equal the marginal disutility from the total effort, V′(a1 + a2). In the sequential effort case, the constraints become

∂EU(s(x))/∂a1 = ∂EV(a1 + a2(·))/∂a1,   (4.4.1)

and

∂E2U(s(x))/∂a2 = V′(a1 + a2(·))   for almost every x1,   (4.4.2)

where E2U(s(x)) = ∫ U(s(·)) g(x2|a2(x1)) dx2. Equation (4.4.1) requires averaging over all possible values of x1 and x2, because a1 is chosen before either outcome is available. Equation (4.4.2), on the other hand, requires averaging only over all possible values of x2, because x2 is the only remaining uncertainty at the time the second effort level is selected.

Corollary 4.4.1 below applies Proposition 4.2.1, which characterizes the behavior of the second stage effort strategy, to the additive effort case. Proposition 4.2.1 assumed that efforts were defined such that they were the means of the outcome distributions.
In this section, efforts are assumed to be additive; assuming that efforts are simultaneously the means of the outcome distributions i s overly restrictive. Therefore, Corollary 4.4.1 allows for a more general situation in which the means of the outcome distributions are functions of the efforts. This accounts for conditions on the second stage mean, M2( •), in order to characterize the behavior of the second stage effort strategy. It should be noted that the definition of effort in turn influences the description of d i s u t i l i t y captured in the dis-u t i l i t y function V( •). Thus, conditions on both V(•) and M 2(») are either implicitly or explicitly required in order to characterize the behavior of the second stage effort strategy. Corollary 4.4 .1. Assume that the conditions in Proposition 4.2.1 hold, except that E(xi_|aj_) = M^a^) > 0, E(x 2|a 2) = M 2(a 2) > 0, and V(a 1,a 2) = V(a 1+a 2), with Mj/ > 0 and M2' > 0. Let ei = M^a^ and e 2 = M 2(a 2). The induced d i s u t i l i t y function is then V*(e 1,e 2) = V(M 1~ 1(e 1) + M2 1 ( e 2 ) ) . If V > 0, V\" > 0, V\"' > 0, and ' < 0, then a sufficient condition for e*(x^) to be decreasing in x-^ i s that 3M 2\" - ^ '''M^ be nonnegative at a*2-ct 8 For example, suppose M^(a^) = a^ , M^a^ = a 2 , and V(a^+a2) = (a +a„) , where 0 < a < 1, 0< 8 0 for 1=1,2. Then 79 a± = = e ^ a a n d a2 = M 2 ^ e 2 ^ = &2^^' ^ ^ ^ ^ Y 4.4.1 shows that e*,(x^) i s decreasing i n x^, since V = 2(a^+a 2) > 0 for a^+a2 * ®> V \" = 2 > 0, V \" » = 0, M2' ' = g ( 3 - l ) a 2 0 \" 2 < 0, and 3M2' » 2 - M2' ' »M 2' = 3 3 2 ( 0 - l ) 2 a 2 2 0 \" 4 - 8 2 ( S - l ) ( B - 2 ) a 2 2 0 \" 4 > 0 f o r g 0, a^ 2 = 0, a 2 1 ^ x l l ^ ^ »^ and a 2 2 ( x ^ ) = 0, then i t i s also optimal for the prin-cipal to induce (2) a ^ > 0, a^ 2 = 0, a 2^(x^^) = 0, and a 2 2(x^^) > 0, or to induce (3) a-j^ = 0, a^ 2 > 0» a 2 ^ ( x ^ 2 ) ^ 0» a n d a 2 2(x^ 2) = 0, or to induce (4) a ^ = 0, a^ 2 > 0, a 2^(x^ 2) = 0, and a 2 2(x^ 2) > 0. That i s , i f one of the four combinations of efforts (1) through (4) is optimal, then the principal i s indifferent among the four combina-tions. This result holds no matter what the risk averse agent's u t i l i t y for wealth i s . Moreover, means that are linear in effort are not required. 2 ( i i ) If xjLj|a£j i s normally distributed with mean ka^j and variance a , then (a) the best effort strategy with a ^ > 0, a^ 2 = 0, a2^(x]_^) > 0, and a 2 2 ( x ^ ) > 0 is Pareto inferior to some effort strategy with a ^ > 0, a^ 2 = 0, a 2^(x^) > 0, and a 2 2 ( x ^ ) = 0, and 81 (b) the best effort strategy with a-^ > 0, a ^ > u» a 2 1 ^ x l l ' x 1 2 ^ ^ »^ a n d a22^ xll» x12^ > 0 is Pareto inferior to some effort strategy with > 0, a-^2 > 0> a21^ xll» x12^ E ^ » and a22(xn.X]^) ^ u * ( i i i ) If x ^ j l a ^ j i s exponentially distributed with mean ka^-j, then (a) the best effort strategy with > 0, a^ 2 = 0, a 2 i ( x n ) > 0, and a22(xn) = 0 i s Pareto inferior to some effort strategy with a l l ^ ^ » a12 = ®* a 2 1 ^ x l l ^ ^ ^ » a n <* a 2 2 ^ x l l ^ ^ u» a n <* (b) the best effort strategy with > 0, a^2 > 0» a 2 1 ^ x l l ' x 1 2 ^ E »^ a n c* a 2 2 ^ x l l ' x 1 2 ^ > 0 i s Pareto inferior to some effort strategy with a ^ > 0, a^ 2 > 0, a2l( xll» x12) > 0» and a22( xll> x12) ^ u* The results in Proposition 4.4.2 can be depicted as follows, where solid lines indicate nonzero effort, and dashed lines indicate zero (no) effort. 
The first line in each pair of lines represents the first task, and the second line in each pair represents the second task.

(i)
  (1)  first task:   ______   ______        a11 > 0    a21(x11) > 0
       second task:  - - -    - - -         a12 = 0    a22(x11) = 0

  (2)  first task:   ______   - - -         a11 > 0    a21(x11) = 0
       second task:  - - -    ______        a12 = 0    a22(x11) > 0

The principal is indifferent between (1) and (2). Alternative (4) is similar to alternative (2), with the tasks renumbered, and alternative (3) is similar to alternative (1).

(ii)(a)
  (A)  first task:   ______   ______        a11 > 0    a21(x11) > 0
       second task:  - - -    ______        a12 = 0    a22(x11) > 0

  (B)  first task:   ______   ______        a11 > 0    a21(x11) > 0
       second task:  - - -    - - -         a12 = 0    a22(x11) = 0

The principal prefers some form of (B) to the best possible form of (A).

(ii)(b)
  (C)  first task:   ______   ______        a11 > 0    a21(x11,x12) > 0
       second task:  ______   ______        a12 > 0    a22(x11,x12) > 0

  (D)  first task:   ______   - - -         a11 > 0    a21(x11,x12) = 0
       second task:  ______   ______        a12 > 0    a22(x11,x12) > 0

The principal prefers some form of (D) to the best possible form of (C).

The results in (ii) say that whether effort is exerted at one or two tasks initially, all effort should be concentrated in only one task at the second stage. Because of the assumed independence, it does not matter which task is chosen. In part (iii) of the proposition, the results in (ii)(a) and (b) are reversed. That is, whether effort is exerted at one or two tasks initially, effort should be split across two tasks at the second stage. It is preferable to induce the agent to diversify effort after receipt of the information x1 when the outcomes are exponentially distributed as described, and it is preferable not to induce the agent to diversify effort when the outcomes are normally distributed as described. In part (i) of the proposition, diversification of effort is not in question. Because the outcomes conditional on the efforts are independent and identically distributed, the principal is indifferent among the four alternatives (1) through (4).

As in Section 3.5, the results in parts (ii) and (iii) are partly explainable in terms of the variances of the total outcomes. For simplicity, consider a comparison between a fixed amount of effort, a, devoted to only one task, or divided across two tasks. Let x1 and x2 denote the outcomes of the two tasks, and let ka1 and ka2 denote their respective means, where ai is the effort devoted to task i. Since the means of each of the individual outcomes are linear in effort, the total effort expended is the only quantity of relevance for the purpose of comparing the means of the total outcomes (x1 if effort is devoted only to one task, and x1 + x2 if effort is devoted to two tasks). For the normal distribution in part (ii) of Proposition 4.4.2, Var(x1|a1=a) = σ², and Var(x1+x2|a1+a2=a) = 2σ². For the exponential distributions in part (iii), Var(x1|a1=a) = k²a², and Var(x1+x2|a1+a2=a) < k²a². For the normal distribution, the variance of the total outcome is smaller when all the effort is devoted to one task, while for the exponential distribution, the variance of the total outcome is smaller when the effort is divided across two tasks. This observation can be related to the information content of the outcomes considered as signals about the agent's effort(s). The quantity I(a) = ∫ [f_a(x|a)]²/f(x|a) dx, called Fisher's information about the parameter a contained in the data (see, for example, Cox and Hinkley, 1974), is used as a measure of information content about a in x. For both the normal and exponential distributions described above, I(a) is the reciprocal of the variance. Thus, for the normal case, there is "more" information about the agent's effort when all effort is devoted to one task than there is when the effort is divided across the tasks. The opposite is true for the exponential distribution.
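The variance and information comparison can be reproduced with a few lines. The sketch below is illustrative only (the values of k, a, and σ² are hypothetical), and it simply uses 1/Var of the total outcome as the information measure, as in the text.

```python
# Illustrative comparison: variance of the total outcome (and 1/Var as the
# information measure) when a fixed total effort a is concentrated in one task
# versus split evenly across two tasks.  All numbers are hypothetical.
k, a, sigma2 = 1.0, 2.0, 0.5

# Normal outcomes: Var(x_i | a_i) = sigma2, independent of effort.
var_norm_concentrated = sigma2          # observe x1 only
var_norm_split        = 2 * sigma2      # observe x1 + x2

# Exponential outcomes with mean k*a_i: Var(x_i | a_i) = (k*a_i)**2.
var_exp_concentrated = (k * a) ** 2
var_exp_split        = 2 * (k * a / 2) ** 2

print("normal      :", 1 / var_norm_concentrated, "vs", 1 / var_norm_split)
print("exponential :", 1 / var_exp_concentrated, "vs", 1 / var_exp_split)
# Normal case: more information when effort is concentrated (2.0 > 1.0).
# Exponential case: more information when effort is split (0.5 > 0.25).
```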
Proposition 4.4.2 does not state what the optimal effort strategies are in each case. The comparisons in parts (ii) and (iii) are between situations with the same information available at the beginning of the second stage. For example, in (ii)(a), the comparison is between two situations in which only x11 is available at the beginning of the second stage. Comparisons of situations with differing information available at the beginning of the second stage are more difficult to make.

4.5 SUMMARY AND DISCUSSION

This chapter examined the problem of sequential effort decisions within one period. The sequential aspect arose because the agent observed an outcome affected by the first effort choice before making the second effort choice, which affected a second outcome. The agent was paid only after both efforts were exerted and both outcomes were observed.

In the first best case, the characterization of the optimal sharing rule in the sequential effort case is similar in spirit to that in the allocation of effort case. That is, if one person is risk neutral and the other is risk averse, then the risk neutral person bears the risk. If both the principal and the agent are risk averse, then the risk is shared, with the sharing rule a function of the sum of the outcomes.

The first best characterization of the optimal efforts is different in the sequential effort case than in the allocation of effort case. The second effort choice may now depend on the first outcome and the first effort choice. If both of the individuals are risk averse, then the optimal second stage effort strategy will depend on x1, the first outcome. The second stage effort strategy will also depend on x1 if the joint density of the two outcomes given the actions is f(x1|a1)g(x2|x1,a1,a2(·)). However, if at least one of the individuals is risk neutral and the joint density of the two outcomes given the actions is f(x1|a1)g(x2|a2(·)), then the optimal second stage effort strategy will be independent of x1.

The second stage effort strategy may depend on x1 because of a "wealth" ("risk aversion") effect, or because of an "information" effect. The wealth effect occurs when both individuals are risk averse, because a risk averse individual's marginal utility varies at different points of the utility curve. The first outcome determines where on the utility curve the individual is, so the individual will want the second stage effort adjusted according to the value of the first outcome. The information effect occurs when the two outcomes are dependent. Depending on the nature of the correlation between the two outcomes, the principal may wish to induce the agent to choose the second stage effort strategy to be an increasing or decreasing function of the first outcome. Proposition 4.1.1 provides a precise expression for the derivative of the second stage effort strategy with respect to the first outcome.

The analysis in the second best case allowed for nonindependence of the outcomes.
As usual, the principal was assumed to be risk neutral and the agent was assumed to be risk averse. The characterization of the optimal sharing rule in the sequential effort case is similar to the characterization in the allocation of effort case, except that the multiplier μ2 and the effort strategy a2 may depend on x1. Although in general a2 will depend on x1, a2 will be independent of x1 if the agent is risk neutral and the joint density of the outcomes is of the form f(x1|a1)g(x2|a2(·)).

Proposition 4.2.1 assumed a square root utility function for the agent and conditionally independent outcomes given the actions. It was shown that the agent's second stage effort strategy will be decreasing in x1. Intuitively, this is because the higher x1 is, the more costly it is for the principal to induce any particular level of a2. The agent's decreasing marginal utility for wealth and increasing marginal disutility for effort account for the increasing costliness of inducing a2. Since these characteristics hold in general, the results in Proposition 4.2.1 should hold for other utility functions.

The case of perfectly correlated outcomes was next examined. A sharing rule which incorporates a penalty wage for the second stage was shown to generally result in a second effort strategy that is decreasing in the first outcome if the state is random noise. If the state reveals information about the production technology, then the second effort strategy may be strictly increasing in the first outcome. The information effect of the first outcome in the perfectly correlated case can therefore override the wealth effect, changing the behavior of the agent's effort strategy. The behavior of a2 is more complex when the outcomes are imperfectly correlated.

It was next shown that the conditions which guaranteed an optimal sharing rule that is additively separable in the outcomes in the allocation of effort case will not guarantee an additively separable sharing rule in the sequential effort case. Thus, the presence of the additional decision information, x1, precludes an additively separable sharing rule.

The first best additive effort results for the allocation of effort problem analyzed in Section 3.5 also hold for the sequential effort case. The second best results in the sequential case differ from those in the allocation case because of the role that the first outcome plays as pre-decision information for the second effort choice. Corollary 4.4.1 provides conditions under which the agent's second stage effort strategy will be decreasing in x1, the first outcome, when effort is additive.
Finally, the boundary versus interior solution results in Section 3.5 were applied to the sequential effort case in order to obtain Pareto comparisons between effort strategies with varying degrees of diversification of effort. The results were related to Fisher's information statistic, a measure of the information content of the outcome about the agent's effort.

The conditional investigation problem is somewhat related to the sequential effort problem. In the conditional investigation problem, the agent exerts effort, and both the principal and the agent observe the outcome x. The principal then has the option of observing y, an additional signal about the agent's effort. The agent's compensation is s(x) or t(x,y), depending on what was jointly observed. Cost variance investigation, a familiar problem in accounting, has been modeled as a conditional investigation problem (see, for example, Baiman and Demski, 1980a,b) in which x is a cost and y is the result of an investigation to try to determine the reason for the cost's deviation from a preset standard. The problem is similar to the sequential effort problem in that decisions are based on an initial outcome. However, after the initial outcome, the principal chooses an act in the conditional investigation problem, and the agent chooses an act in the sequential effort problem. The major focus in the conditional investigation problem has been on the determination of the optimal investigation strategy; such a question is not at all relevant in the sequential effort choice problem. Some additional comments about the conditional investigation problem will be made in the next chapter.

As remarked at the end of Chapter 3, the sequential effort case can be viewed as a special case of the two-period agency problem in which the principal's and the agent's expected utilities depend only on the total return over the entire time horizon. Thus, the sequential effort results have potential applications in such multiperiod situations.

CHAPTER 5
SUGGESTED FURTHER RESEARCH

This chapter concludes the thesis with suggestions for further research. The first section discusses possible extensions to theoretical agency results, and the second section discusses possible applications of the agency theory results to a traditional accounting topic, cost variance investigation.

5.1 THEORETICAL AGENCY EXTENSIONS

A number of generalizations of the results in this thesis are desirable. For example, in the allocation of effort setting with additive effort, it is desirable to obtain results for a more general class of utility functions and for nonindependent distributions of outcomes. A similar remark holds for some of the results in the sequential effort setting. The situation with multiple agents was discussed briefly in Section 3.6, where the agents were salespeople in a firm. The important problem of collusion among agents in order to conceal shirking or the theft of assets has largely been unexplored. Beck (1982), however, has recently taken an incentive contracting approach to the problem of collusion for the purpose of concealing the theft of assets.
As remarked earlier, many accounting and other business issues are best addressed in a multiperiod setting. Lambert (1981, 1983) has analyzed a special case of the multiperiod agency problem in which utilities are additive over time and the outcomes are independent. Chapter 4 of this thesis analyzed a different special case of the multiperiod problem. The analysis allows for nonindependent outcomes, and assumes that the agent is paid only at the end of the time horizon, even though the effort choices and the observations of the outcomes are sequential. The analysis is thus suitable for short-term horizons in which the principal and the agent are concerned only with their total shares at the end of the time horizon. Results for more general multiperiod situations are desirable. These situations are, of course, more difficult to analyze.

5.2 APPLICATION TO VARIANCE INVESTIGATION

A great deal of attention has been focused on strategies for investigating the underlying causes of cost variances or deviations from standards. Most of the analytical research has assumed that investigations reveal the state of a mechanistic production process, and that the investigator can return an "out-of-control" state to an "in-control" state (Kaplan, 1975). Thus, only the correctional purposes of investigations were examined. Correctional benefits occur, for example, when costs are higher for a malfunctioning machine than for a properly functioning machine.

In some situations, the primary focus is on evaluating a manager who has control over a mechanistic process. In such situations, there may be motivational as well as correctional benefits to investigating variances. The manager's actions can be influenced by the possibility of an investigation if a reward or penalty is based on the results of the investigation. The motivational purposes of investigations have recently come to attention in the analytical literature. Baiman and Demski (1980a, 1980b) have explored the motivational aspects of variance analysis procedures in a one-period agency model, with a single-dimensional effort variable. In both of the analyses, the agent is responsible for a production process which generates a monetary outcome determined by the agent's effort and some exogenous randomness. The monetary outcome, owned by the principal, is assumed to be jointly observable, while the agent's effort is not. The principal can, however, conduct a costly investigation in order to obtain a further imperfect signal which is independent of the outcome but informative about the agent's effort. The nature of the investigation strategy was characterized, and the use of the information for motivational purposes was demonstrated. Lambert (1984) extended the analysis by allowing for a nonindependent additional signal about the agent's effort, and showed that the investigation strategy would differ from that obtained by Baiman and Demski.

A number of extensions to the Baiman-Demski analysis are possible. One extension is to allow for multiple effort decisions by the agent. Feltham and Matsumura (1979), for example, suggested three different effort decisions the agent might be responsible for: 1) bringing the system back into control at the beginning of the period after detecting that it is out of control; 2) keeping the process in control during the period given that the process is in control at the beginning of the period; 3) influencing or controlling the operating costs or the outcome during the period.
Their analy-sis did not focus explicitly on the tradeoffs between the efforts expended by the agent. Instead, the focus was on characterizing the optimal investi-gation strategy and sharing rule for an infinite-horizon Markov process. Another extension to the Baiman-Demski analysis i s the extension to multiple periods. One approach would be to extend the analysis to a f i n i t e -horizon model. Another approach would be to extend the analysis to an infinite-horizon model. It has been argued that in finite-horizon multi-period problems involving two players, the factor that overshadows a l l others i s the players' knowledge that they have arrived at the last play. When the players expect that there w i l l always be another \"play\" of the game, the appropriate concept i s the repeated game, in which there are an infinite number of plays of the single game (Rubinstein, 1979) . 91 BIBLIOGRAPHY Amershi, A., \"Valuation of Performance Information Systems in Agencies,\" Faculty of Commerce and Business Administration, University of British Columbia, working paper (1982). and J. Butterworth, \"Pareto and Core Syndicates: Risk-Sharing in Per-spective,\" Faculty of Commerce and Business Administration, University of British Columbia, working paper (1981). Baiman, S., \"Agency Research in Managerial Accounting: A Survey,\" Journal of Accounting Literature (Spring 1982), 154-213. and J. Demski, \"Variance Analysis Procedures as Motivational Devices,\" Management Science (August 1980), 840-848. and , \"Economically Optimal Performance Evaluation and Control Systems,\" Journal of Accounting Research Supplement (1980), 184-220. and Evans, John H. I l l , \"Pre-Decision Information and Participative Management Control Systems,\" Journal of Accounting Research (Autumn 1983), 371-395. Beck, P., \"Internal Control and Irregularities: An Incentive Contracting Model,\" Graduate School of Management, University of California, Los Angeles, working paper (November 1982). Berger, P., \"Optimal Compensation Plans: The Effect of Uncertainty and Attitude Toward Risk on the Salesman Effort Allocation Decision,\" in E. Mazze (ed.), Proceedings of the 1975 Marketing Educator's Confer- ence. American Marketing Association: Chicago (1975), 517-520. Blackwell, D., \"Equivalent Comparison of Experiments,\" Annals of Mathemati- cal Statistics (1953), 265-272. Christensen, John, \"Communication in Agencies,\" Bell Journal of Economics (Autumn 1981), 661-674. , \"The Determination of Performance Standards and Participation,\" Jour- nal of Accounting Research (Autumn 1982, Part II), 589-603. Cox, D. and D. Hinkley, Theoretical Statistics. Chapman and Hall: London (1979) . DeGroot, M., Optimal Statistical Decisions. McGraw-Hill: New York (1970). Demski, J. and G. Feltham, Cost Determination: A Conceptual Approach. Iowa State University Press: Ames (1976). and , \"Economic Incentives in Budgetary Control Systems,\" The Accounting Review (April 1978), 336-359. and D. Sappington, \"Delegated Expertise,\" Graduate School of Business, Stanford University, working paper (1983). 92 Farley, J., \"An Optimal Plan for Salesmen's Compensation,\" Journal of Mar- keting Research (May 1964), 39-43. Fellingham, J., Y. Kwon, and D. Newman, \"Ex Ante Randomization in Agency Models,\" Graduate School of Business, University of Texas at Austin, working paper #82/83-1-12 (February 1983). Feltham, G., \"Cost Aggregation: An Information Economic Analysis,\" Journal of Accounting Research (Spring 1977), 42-70. 
, \"Optimal Incentive Contracts: Penalties, Costly Information and Mul-tipl e Workers,\" Faculty of Commerce and Business Administration, University of British Columbia, working paper #508 (October 1977). and E. Matsumura, \"Cost Variance Investigation: An Agency Theory Per-spective,\" Faculty of Commerce and Business Administration, University of British Columbia, working paper (July 1979). Gjesdal, F., \"Stewardship Accounting: Controlling Informational Externali-ties,\" Ph.D. dissertation, Stanford University (1978). , \"Accounting for Stewardship,\" Journal of Accounting Research (Spring 1981), 208-231. , \"Information and Incentives: The Agency Information Problem,\" Review of Economic Studies (1982), 373-390. Harris, M. and A. Raviv, \"Optimal Incentive Contracts with Imperfect Infor-mation,\" Journal of Economic Theory (1979), 231-259. Holmstrom, B., \"On Incentives and Control in Organizations,\" Ph.D. disserta-tion, Stanford University (1977). , \"Moral Hazard and Observability,\" Bell Journal of Economics (Spring 1979), 74-91. , \"Moral Hazard in Teams,\" Bell Journal of Economics (Spring 1982), 324-340. Horngren, C , Cost Accounting: A Managerial Emphasis (Fifth edition). Prentice Hall: Englewood C l i f f s (1982). Itami, H., \"Analysis of the Optimal Linear Goal-Based Incentive System,\" Department of Commerce, Hitotsubashi University, working paper (March 1979). Johnson, N. and S. Kotz, Continuous Multivariate Distributions. Wiley: New York (1972). Kaplan, R., \"The Significance and Investigation of Cost Variances: Survey and Extensions,\" Journal of Accounting Research (Autumn 1975), 311-337. Lai, R., \"A Theory of Salesforce Compensation Plans,\" Ph.D. dissertation, GSIA, Carnegie-Mellon University (1982). 93 Lambert, R., \"Managerial Incentives In Multiperiod Agency Relationships,\" Ph.D. dissertation, Stanford University (1981). , \"Long-Term Contracts and Moral Hazard,\" Bell Journal of Economics (Autumn 1983), 441-452. , \"Variance Investigation in Agency Settings,\" Kellogg Graduate School of Management, Northwestern University, working paper (1984). Lehmann, E., Testing Statistical Hypotheses. John Wiley & Sons, Inc.: New York (1959). Mirrlees, J., \"Notes on Welfare Economics, Information and Uncertainty,\" in M. Balch, D. McFadden, and S. Wu (eds.), Essays on Economic Behavior Under Uncertainty. North-Holland (1974). Peng, J., \"Simultaneous Estimation of the Parameters of Independent Poisson Distribution,\" Ph.D. dissertation, Technical Report #78, Department of Statistics, Stanford University (1975). Radner, R. and M. Rothschild, \"On the Allocation of Effort,\" Journal of Economic Theory (1975), 358-376. Rubinstein, A., \"Offenses That May Have Been Committed by Accident - an Optimal Policy of Retribution,\" in S. Brams, A. Schotter, and G. Schrodiauer (eds.), Applied Game Theory. Physica-Verlag: Wurzburg (1979). Shavell, S., \"Risk Sharing and Incentives in the Principal and Agent Rela-tionship,\" Bell Journal of Economics (Spring 1979), 55-73. Steinbrink, J., \"How to Pay Your Sales Force,\" Harvard Business Review (July-August 1978), 111-12 2. S t i g l i t z , J., \"Incentives, Risk, and Information: Notes Toward a Theory of Hierarchy,\" Bell Journal of Economics (Autumn 1975), 552-579. Weinberg, C., \"An Optimal Commission Plan for Salesmen's Control Over Price,\" Management Science (April 1975), 937-943. , \"Jointly Optimal Sales Commissions for Nonincome Maximizing Sales-forces ,\" Manajement_S£ience_ (August 1978), 1252-1258. 
Zimmerman, J., \"The Costs and Benefits of Cost Allocations,\" The Accounting Review (July 1979), 504-521. Appendix 1 Table II One-parameter Exponential Family Q f(x|a) = exp[z(a)x - B(z(a))]h(x) Exponential Normal Gamma Poisson Binomial c t I v 1 . -x . 1 -(x-M(a)) 2. X nx n\" 1e\" X x , M / ._(M(a)) x ,n wM(a) X M(a)\"\" X f ( x ' a ) M U T ^ W 1 \"^=eXPl_ 1 r(n) expI-MCa)]^^- ( x ) < - ^ ) d \" - ^ 0 E(x|a) M(a) M(a) M(a) (= n/X) M(a) M(a) Var(x|a) M 2( a) o2 M(a) M(a) [1 - Hll] n n z ( a ) ~ MTaJ \" HTaT In M(a) ln M(a) - In (n-M(a)) a 2 a 2 , , n x , v i r n e B*(z) - - o z - - exp (z) z B(z) -In (-z) — z n In ( ) exp (z) nz - n ln[ ] 2 v z 1 2 n , v ne z 1 _i_ z 1 + e z z r z 1 + e 1 -2 n . . . ne Z 7 ° 7 e* p < z ) ^ 7 Note: The exponential distribution is a special case of the gamma distribution but is listed separately because of its wide use. 9 5 The following calculations for the one-parameter exponential family, Q, given in Table II, w i l l be useful in the proofs of the results in Chapters 3 and 4. _a = d ln f f da ^- [z(a)x - B(z(a))] da z'(a)x-B'(z(a))z ,(a) z'(a)[x-B'(z(a))] z'(a)(x-E(x|a)). f (x|a) / f ( x | a ) dx = / z'(a)(x-E(x|a))f a(x|a)dx z ' ( a ) f / xf(x|a)dx - z'(a)E(x|a) / f (xla)dx da 1 ' ' J a z'(a)^- B'(z(a)), since E(x|a) =B'(z(a)) aa and / f (x|a)dx = 0 = (z'(a)) 2B\"(z(a)). ( A l . l ) (A1.2) f^(x|a) f f (x|a) ) f(x|a)dx da (z'(a*))2 / (x-E(x|a*)) 2f(x|a)dx Note that / (x-E(x|a*)) 2f(x|a)dx = / (x-E(x|a) + E(x|a) - E(x|a*)) 2f(x|a)dx = / (x-E(x|a)) 2f(x|a)dx + (E(x|a) - E(x|a*)) 2 - 2(E(x|a) - E(x|a*)) / (x-E(x|a))f(x|a)dx = Var(x|a) + (E(x|a) - E(x|a*)) 2. 96 Therefore, f!(x|a) H f (x|a) dx = (z'(a*)) 2 ^ (B\"(z(a))) a* = (z'(a*)) 3B\"'(z(a*)). (A1.3) f (x|a) / 3 f(x|a) f (xla)dx aa z'(a*) - i y / (x-E(x|a*))f(x|a)dx da = z'(a*) - \\ (E(x|a) - E(x|a*)) da = z'(a*) B'(z(a)) da = z'(a*) (B\"(z(a))z»(a)) = z'(a*)[B'\"(z(a*))(z ,(a*)) 2 + B\"(z(a*))z»'(a*)]. (A1.4) Second Best, Additive Effort Example (Section 3.5) Suppose the principal is risk neutral, R(a*) - X + ^ ± f i x ^ Let K(s,a_) = / 2 R(a*)4>(x|a) dx . The agent's expected u t i l i t y EU(s*,a) = K(s*,a) - V ( a : + a 2) and the p r i n c i p a l ' s expected u t i l i t y i s G(s*,a) = / (xL + x 2 - R 2(a*)) + ^ 2 ^ \" V \" > = ° • which imply that G + u,K + u»K = G + U.K + P~K a, i. a, a, Hi. a, a- a_ T. a, a„ a„a 1 1 n~2 \"2 *1~2 2°2 where a l l the functions are evaluated at a* = ( a i * , a 2 * ) 2) = 0 , j-1,2. I.e., / 2R(a*) f ( x 1 | a 1 ) f ( x 2 | a 2 ) d x ^ - + a 2) = 0 and / 2R(a*) f ( x ] > | a 1 ) f ( x 2 | a 2 ) d x ^ - V ,(a ]_ + a 2) = 0 98 Thus, 3a, J [ X + E i=l f ( 0 ]f(x 1|a 1) f(x 2|a 2) dx 1dx 2 2 f a ( '> ]f(x 1|a 1) f(x 2|a 2) dx Ldx 2 , which implies that f ' U l — 3a, ^ I ' V d x i =-3a7 1 y: 3 r 32 2 f f(x„ a„) dx„ v 2 2 2 3 a i since — • ! V L - T ^(^1^) d x i = 0 for i * j . a l Therefore, 1^ / - j — d x l = \"2 / dx 2« (A1.6) Let J denote the quantity in (A1.6) G can be written as a l -JL [ E C x J a ^ + E ( x 2 | a 2 ) ] - * / [ X2 + 2X^ ^ + 2 I *l + 2X \"2 — f f a l a2 + 2 ^ [ — . - r - ] 2 ! 2 } ] •f(x ] [ |a 1)f (x 2 |a 2)dx 1dx 2 99 9E(x 1|a 1) f(x1|a1)dx1 . 2 9 /• j a l } f(x1|a1)dx1 f f a, a. fCxJa^ f(x2 |a2)dx1dx2 ] The last term In the sum is 0 when evaluated at a* because x^ and x 2 are conditionally independent and f ^ l a , * ) ^ f(x 2|a 2*) ' f < x 2 l a 2 * ) d x 2 \" ° ' Thus, at a*, S E C x J a p - 2XJ - u2 dx^. 
1 1 9a, f(x1|a1)dx1 + / f(x2|a2)dx2 ]} * 2 \" i l-r f a 1 a 1 ( x l | a l ) d x l 100 and K = K a l a 2 a 2 a l f(x 2|a 2)dx 2 ] = 0 Therefore, (A1.5) can be written as 3E(x 1|a 1) - 2 XJ 2 , a l a* dx + 2 2 i a l f a i a i ( x l ! a l ) d x l 9E(x 2|a 2) 9ar - 2XJ -2 f a2 U2 dx„ f a 2 a 2 ( x 2 l a 2 ) d x 2 (A1.7) For the exponential family, using the results in Table II and equations (A1.2) through (A1.4), (A1.7) can be written as B \" ( 2 ( a 1 * ) ) z ' ( a 1 * ) - ^ [ ( z ' C a ^ ) ) 3 B ' \" ^ ^ * ) ) - 2 ( z ' ( a i * ) ) 3 B , \" ( z ( a 1 * ) ) - 2 z ' ( a 1 * ) z \" ( a 1 * ) B \" ( z ( a 1 * ) ) ] = B\"(z(a 2*)z'(a 2*) - ^ [ ( z ' ( a 2 * ) ) 3 B\"'(z(a 2*)) - 2 ( z ' ( a ? * ) ) 3 B\" ' (z(a 2*) ) - 2z'(a 2*) z\"(a 2*) B\"(z(a 2*))] , or 101 B \" ( Z ( a 1 * ) ) z ' ( a 1 * ) + ^ [ ( z ' ( a i * ) ) 3 B ' \" ( z ( a i * ) ) + 2z' ( a 1 * ) z \" ( a 1 * ) B \" ( z ( a 1 * ) ) ] = B\"(z(a 2*))z'(a 2*) + ^ [ ( z ' ( a 2 * ) ) 3 B\"'(z(a 2*)) + 2z'(a 2*)z\"(a 2*) B\"(z(a 2*))] (A1.8) Equation (A1.6) can be written as (see equation (A1.2)) W L(z ' ( a 1 * ) ) 2 B \" ( z ( a i * ) ) = P2(z'(a 2*)) 2 B\"(z(a 2*)) . (A1.9) 102 Appendix 2 Normal Distribution Calculations This appendix contains calculations for the bivariate normal distribu-tion, the only distribution with a convenient representation for dependent random variables. I 2 X o-^a) p(a) 0^(3)0-2(3) Suppose x ~ N( 6(3) , £(§)), with E = \\ 2 p(s) 0^(3)0-2(3) o\"2(§) where s_= ( . a l t a 2 ) and 6(a) = ( e^a) , 9 2 ( a ) ) T . Then 1 B f (x 1 ,x, |a. ,3,) = — — exp[ =— ], where (A2.1) x l \" 9 l 2 X l \" 9 l X2\" 92 x2\" 92 2 B = i ) Z - 2p(-L_^)(^—±) + (-^-^ • a l °1 a2 °2 Let D denote the argument in the exponentiation in (A2.1). The following quantity plays an important role in the determination of the optimal sharing rule. 9f / a a . 1 9 9 log f = - 5 — [- log 2TT - log a - log o_ f 9ax 6 9a1 1 6 s 1 e 2 - \\ log(l-p 2) + D] (1) (1) (1) \\ °2 p p (1) - — + — + DV ; , °L °2 1-p2 103 where the superscript (i) denotes differentiation with respect to a^. - 1 [ i l l ^ B ^ B ^ ] and ( 1 - P V 1-p* CD x r e i - ^ V ^ r V ^ (i) x r e i x 2 _ e 2 1 o^ 1 2 -9{1)a1-(x1-9.)a51) x,-99 - 2 P [ — i — 1 n-^- i at 2 x.-9 -g-^^-Cx^-g )a\\ l) ~ 2p[4-^1[ 2 ] 1 a. x_-9 -t£ 1> 0-(x 7 - e ) a j 1 ) + n—rjLn— 2 1 • 2 af Case (a): 9(a) =. ( ^ ( a ) , 9 2 ( a ) ) T . f cf, ov, , 2 , , 2 2 „ ., 2. 1 2 1-p (1-p ) 2 ( l - p ) Since f is symmetric, simply replace the (l)'s with (2)'s to get f (2) (2) / 0 N _!?. . _ A - . ^ 2 _ + _ p P \\ P P ( 2 ) B _ 1 B ( 2 ) f °L ^ 1 - P 2 ( 1 - P 2 ) 2 2 ( l - p 2 ) The optimal sharing rule is not separable in x^ and x2 and is not a com-mission scheme. Case (b): p = constant * 0. The optimal sharing rule w i l l clearly not be a commission scheme. Of course, i f p H 0, the optimal sharing rule w i l l be separable in x^ and x2 but w i l l not be a commission (linear) scheme. 104 2 Case (c): p = constant and = constant. a 8 (x -9 ) p ( ) - r • r r t 2 \" -^r (^> (x 2-e 2) + e ^ c ^ - e ^ ) 1-p a 1 2 + 2 • a 2 1-p a 2 1 + 2 J . ° 2 The optimal sharing rule w i l l be a commission scheme with the coefficient of X l e q u a l t 0 fl(l) n f l ( l ) Q ( 2 ) (2) 1 r / 1 2 w A 2 ^ i [ M,( o — — \" ) + U„( o — — ) J . . 2 1 * l v 2 « a.7 P 2 V 2 a, o_ 1-p o^ 1 2 Oj 1 2 The coefficient of x 2 is P 9 ^ 9<2) p0<2> . 2 1 2 a. a ' K 2 V 2 a. 
a 0 1-p a 2 1 2 a 2 1 2 Lemma 2A.1 characterizes some properties of the optimal second best solution for particular u t i l i t y functions when the distribution of the out-comes is bivariate normal. The calculations in the proof w i l l be useful i n the proofs of propositions in Section 3 . 5 . k l a l Lemma 2 A . 1 . Suppose (x^, X 2 ) ~ N ^ ] C A ^ » 2 CT1 p o l °2 where E = ( _ ) . Suppose further that the principal is risk neu-p a i ao J-1 l 0^ t r a l , U(s) = lis, and V(a_) = V(a^ + a.^)' T n e n assuming the interior charac-105 terization of the optimal sharing rule, s(x 1,x_) = (X + Ey.f /f ) 2 , is valid for almost every (x^ ,^2), the following results hold: ( 1 ) a j * > 0 , &2* > 0 , k^ = k 2• and = o\"2 imply that y^ = y2« (2) k^*k 2 implies that the optimal solution is a boundary solution, i.e., a^* = 0 or a 2* = 0 . Proof of Lemma 2 A . 1 . In this case, S . V V W Pk 1(x 2-k 2a 2) — = [ 2 \" dTdZ ]/(1_p > ' 0^ i 1 a 2 k 2(x 2-k 2a 2) P k j C ^ - k ^ ) 2 I \" = [ 2 ~ oTcZ l / d - P ) . 0 1 2 2 2 and s ( x i , x 2 ) = (X+ £ y f /f ) . y ^ ^ pk2 Let = x 5— and ( ^ ( I - P ) o 2(l-p Z) y 2 k 2 ^1^1 C 2 = 2\" — • \"^h6 principal's expected return is a 2(l-p ) a ^ l - p ) EW = E(x^ + x 2 - s(x!,x 2)) = k l 3 l + k 2 a 2 - E( X + L a C 1 + g c 2 ) w 1 k l 3 l \" k l a l * n A k 2 3 2 \" k 2 a 2 * = k l 3 l + k 2 a 2 - E( X + C + o, 1 0 . 2 106 V r V i * Letting k± = * C± and y £ = g i return can be written as x i ~ k i a i , the principal's expected 2 X l k l a l 2 k i a i + k 2 a 2 - (X + A x + A 2)^ - E( g C L)^ - E< a, C2> \" 2 C l C 2 E t < o L 1 >< a, > 1 k i a i + k 2 a 2 \" ( x + A i + A 2 ) 2 \" c i V a r ( y i ) - C 2 Var(y 2) - 2 0 ^ Cov(y 1,y 2) k i a i + k 2 a 2 - (A + Aj_ + A 2 ) 2 - C 2 - C 2 - 2C 1C 2p (A2.2) 3EW 3EW 3a, k i C i k ± - 2(A + AL + A 2) - i - i , 1-1,2, and = k. 2 Ak p k p pk V [ \" 1 , i , j - l , 2 , i * j . a ±(l-p') \" i j If x^ and x 2 are independent, then p = 0 and SEW 3a, = k± - 2Ak 1 2p i/a 2 Letting a = a^ + a 2, the agent's expected u t i l i t y i s EU = 2 E( A + £ p.f /f ) - V(a) 2 a j k a - k.a * p.k. vuk ?p 1 - P 2 a 2 a l 0 2 k 2 a 2 - k 2a 2* P2k2 p 1k 1p (-1 \" P ~2 °i °o 2 1 2 -) ] \" V(a) 3EU 2 k i , v l h ^ j V 3a . 2 v 2 I 1 - p a •) - V'(a), i, j = l , 2 , i * j. 107 2k C = — - V'(a), 1=1,2. (A2.3) a i The f i r s t order conditions require that (A2.3) is zero for 1=1,2. There-fore, k l C l k2 C2 a\\ °2 If k^ = k 2 and = o^, then (A2.4) implies that = C 2, which in turn implies that = (assuming p * + 1). This establishes result (1) of Lemma 2A.1. Note that i f p = 0, then setting (A2.3) equal to zero shows that > 0 and > 0, since V (a) is assumed to be positive. The Hamiltonian is H = EW + X(EU-u) + E \\i 3EU/3a . J - l j J jw. 2 k.C. k.C, •—- = k - 2[ X + E (a.-a *) ] fe± i j = 1 j j CTj c ± - V\"(a) E u i-1,2. (A2.5) j - l J 3H Setting — = 0 for 1=1,2, and letting P denote the quantity in (A2.4) 6 31 yields k t - 2P[ X + P(a x+a 2 - a 1*-a 2*) ] = k 2 - 2P[ X+ P(a 1+a 2 - a ] L*-a 2*) ] . (A2.6) It is impossible to satisfy equation (A2.6) unless k^ = k 2, which estab-lishes result (2) in Lemma 2A.1. Q.E.D. 108 Appendix 3 Chapter 3 Proofs Proof of Proposition 3.1.1. The principal's problem i s Maximize EW(x-s(x)) = / W(x-s(x)f(x|a)dx subject to EU(s(x)) - V(a) = u. The f i r s t order condition for s*(x) requires that - W'(x-s*(x))f(x|a) + X U'(s*(x))f(x|a) = 0, or W'(x-s*(x)) = XU'(s*(x)) . This implies that x - s*(x) = W'-1(X U'(s*(x))) = T(s*(x)), with T'(s*) > 0 since X > 0. 
Therefore, x = T(s*(x)) + s*(x) =Y(s*(x)), with Y'(s*) = T'(s*) + 1 > 0 . Thus, s*(x) = Y - 1 ( x ) . Q.E.D. Lemma 3A.1 below w i l l be used in proving Proposition 3.2.1. n Lemma 3A.1: Suppose f(x|a) = II f (x. |a.) and that the risk-averse agent's i=l expected u t i l i t y is pseudoconcave in a. Suppose further that F (x.|a.) < 0, with st r i c t Inequality for some a i 1 1 3EW x^-values. Then for i=l,. ..,n, i f < 0, -g^— > 0 . Proof of Lemma 3A.1: The f i r s t order conditions are (1) /W(x-s(x))f (x|a*)dx + £ u.* { jb(s|x)f _ ( «)dx - V } = 0, a i j=l 2 a i 3 j a i 3 j i=l,•..,n, and 109 t n / i w f (xia*) W'(r(x)) n a. -'-U'(x-r(x)) j j f(x|a*) f J(x. a.*) « a 3 1 ] = A* + Z u.* — r 3 j - l J f J(x.|a.*) because of the independence assumption. Here, subscripts a^ and aj on f( •) and V( •) denote partial differentiation with respect to a^ or a j , respec-tively; A* and n *, j=l,...,n, are the optimal values of the multipliers in the second best problem, and r(_x) = x-s(x). Suppose some < 0. Without loss of generality, let j=l. Consider the following auxiliary problem: Max J W(x-s x(x))f(x|a*)dx + A* [ / U(s x(x))f(x|a*)dx - V(a*)] S A n + Z \\x* [ j U(s,(x))f (x|a*)dx - V (a*) ] , J=2 3 3 where a*, A* and i ^ * , . . . , ^ * correspond to the optimal solution charac-terized by (1) and ( 2 ) . Let r^(x) = x - s^(x). For x E X 1 + = { x with X j such that f ^ x ^ a j * ) > 0 } , „,/ / f j(x.|a.*) f j(x.|a.*) W'(r(x)) n a. 3 3 n a. 3 1 3 rrrp- r-yr- = A* + Z U.* < A* + Z \\1 * T 3 -U'(x-r(x)) . . 3 -3. , ^ . _ j _v j=l J f J(x.|a.*) j=2 f J(x.|a.*) J J J J W'(r A(x)) U'(x-r A(x)) * 110 W(r(x)) Note that 777-; . . N is decreasing in r(x) for every fixed x. Further, U'(x-r(x)) — — r^(5) * s a n increasing function of x^, since 3r 3r W\"(r x(x)) _ J u ' ( x - r x ( x ) ) + W < •) U\"( •)(!- ^ ) 1 2 L = ° U'Z ^rX W'U'' implies that ^ - = „, .y, + W, D, , > 0. Now W'(r(x)) W'(r x(x)) W(r(x)) 7777 r r r < 7777 7~sT a n d 7777 7~~\\T decreasing in r U'(x-r(x)) U'(x-r^x)) U'(x-r(x)) implies that r(x) > r^(x), for a l l x e X^+. Correspondingly, r(x) < r x(x) on Xj_ = { x with Xj^ such that f^Cx^Ja^) < 0 }. Therefore, /W(r(x))f (x|a*)dx - / w ( r , ( x ) ) f (x|a*)dx a. ^ A cl ^ = t w(r<*>> - W(r x(x)) ] f a^(x|a*)dx + L [ W(r(x)) - W(r (x)) ] f ( »)dx > 0. X l + A a l It remains to show that / W(r,(x))f (x|a*)dx > 0. The left-hand side of A 3. ^ ~~* ~\" the expression can be written as j [ / x ... / x W(r x(x))f 2(x 2|a 2*)...f n(x n|a n*)dx 2...dx n ] f ^ ^ |a1*)dx]L 1 2 n 1 = L T(xn )f ^(x, |a1*)dx1 > 0, as in the one dimensional case, because of X, l a , 1 1 1 I l l stochastic dominance and the fact that 3r . « T'(x,) = f W f ...f ndx 0...dx > 0 . 1 J 3x^ 2 n Q.E.D. Proof of Proposition 3.2.1: Let A = / W(x-s(x))f(x|a)dx and B = / U(s(x),a)f(x|a)dx. Subscripts i and j on A and B w i l l denote partial differentiation with 3H respect to a^ or aj, respectively. The f i r s t order conditions = 0 for n=2 are (1) A : + + UpB^ = 0 and (2) A 2 + P L B 2 1 + = 0, where the functions are evaluated at the optimal a* and with the optimal s*(x). In matrix notation, A + B y = 0, A l B l l B12 \" l 0 where A = ( ), B = ( ), y = ( ), and 0 ( ). A 2 B 2 1 B 2 2 y2 0 If B is s t r i c t l y concave in a, then | B | * 0 and B - 1 exists. Therefore, -1 1 B22 B12 A l I.e., (3) ^ = A2 B12 A l B22 iBl and V21 \" V l l (4) U2 rgi 112 If B is s t r i c t l y concave in a_, then | B | > 0 and B±i < 0, i=l,2. Now assume < 0 and < 0. Then by Lemma 3A.1, A^ > 0 and A 2 > 0. 
From (3) and (4), we have A2 B12 ~ A1 B22 < 0 a n d A1 B21 \" A2 B11 < ° * These imply that B 1 2 < 0 (note: B 1 2 = B 2 1 ) . But i f B 1 2 < 0, then (1) and (2) cannot be satisfied. Therefore, not both and u 2 can be nonpositive. Q.E.D. Lemma 3A.2 below deals with the problem of allocating effort to two tasks considered simultaneously. Lemma 3A.2. (First Best, Additive Effort) Suppose E(x^) = k j ^ * i=l,-..,n. (1) If k^ = k, for a l l i , then k = X V ( Ea^) implies that any nonnegative vector a_such that Ea^ satisfies [1] below is Pareto optimal. (2) If some k^ a boundary solution results. That i s , a l l the a^'s are zero except one. In the n=2 case with k^ > k 2, a^* > 0 and a 2* = 0. Proof of Lemma 3A.2. The principal's problem is Maximize / (x-s(x)) g(x|a) dx s(x), a subject to / [ U(s(x)) - V(a) ] g(x|a) dx > u. H = / (x-s(x)) g(x|a) dx + X { / [ U(s(x)) - V(a) ] g(x|a) dx - u } . 31 1 -g- = -g + XU'g = 0 implies that U'(s(x)) = ~y which implies that s(x) = U ' - 1 ( i ) = C. 113 -g-= /(x-s(x)) g (x|a) dx + X { / [ U(s(x)) g ( •) ] dx - V(Ea.) } = 0 . 3E(x|a) - 0 + X(0-V'(Ea )) = 0 implies that k = XV ( E a ) for a l l i . [1] This establishes result (1) of Lemma 3A.2. To establish result (2), recall that a^* and a 2* are nonnegative by assumption. Let s*(x) = C*, where s*(x) is the optimal sharing rule corre-sponding to the optimal choices a^* and a 2*« ^ e t (ai'» a2'^» w n e r e ai' * 0 and a 2 1 > 0, be a feasible effort pair given C*. The agent's expected u t i l i t y for any feasible (ai,a 2) is C* - V(a1 + a 2) = u . Since (a 1',a 2') is feasible, C* - V(a x' + a 2') = u. Consider the pai r ( a 1 \" , a 2 \" ) = ( a ^ + a 2', 0). This pair is also feasible, since a^ ' + a 2 \" = a^' + a 2', and the principal is s t r i c t l y better off with ( a i \" , a 2 ' ' ) since his expected return is k ^ \" + k 2 a 2 \" - C* = ^ ( a ^ + a 2') - C* > k ^ ' + k 2 a 2 ' - C* i f k L > k2. Therefore, a 2' > 0 Is not optimal, and hence the optimal effort pair i s such that a^* > 0 and a 2* = 0 . Q.E.D. Proposition 3A.3 below compares the solutions to two one-task problems Proposition 3A.3. (First Best, Additive Effort) Suppose E(x^) = K^a^, i =l>2, and that k^ > k 2. Consider the two sepa-rate problems where effort is devoted only to task i . Then (1) a^* > a 2* i f V is increasing and convex, (2) a^* > a 2* implies that s^* > s 2* (i.e., the agent is paid more for exerting a^* at task 1 than for exerting a 2* at task 2), and 114 (3) the principal is better off with a^* > 0 and a.^* = 0 than with a^* = 0 and a^* > 0. Proof of Proposition 3A.3. The principal's problem i f effort is devoted only to task I is Problem i : Maximize / (x - s^(x)) g^(x|a^) dx s ^ x ) ^ subject to J u ( S l ( x ) ) g i(x|a 1) dx - V(a ±) >u . 9H. . . = 0 implies that s (x) = U'~ ( _ ) = C and [1] dS^ X A^ X 3H 1 „ . 1 V'< ai> = 0 implies that k = A V'(a ) - -r- - — r — = - . [2] X X X A- (c. ^ i \" i i Feasibility requires that U ( U'\"1 ( 1- ) ) - V(a±) = u , i which implies that U'\"1 ( i - ) = U'\"1 [ u + V(a.) ] . [3] A i 1 Equation [2] implies that A i k i Result (1). k^ > k 2 implies that a^* > a 2* i f V is increasing and convex. Proof. 
Suppose a^* < a 2 * « Then U _ 1(G + V(a L*)) < U~1(G + V(a 2*)) (since U-^ and V are increasing) -1 , V ' ( a l * ) x -1 f V ' < a 2 * ) > U' ( J— ) < U' ( ^ i — ) by [3] and [4], which implies that implies that _ ^ , - _ ^k l k 2 V»( a i*) V ( a 2 * ) _ x — r > r (since U' is decreasing), or K l k 2 k 2 V ' ( a 2 * ) k~~ * V (a *) > * ( s i n c e v ' i s increasing and a^* < a 2*), so that 2^ ^ 1^ * Therefore, > k 2 implies that a^* > a 2* . Result (2). a^* > a 2* implies that s^* > s 2* . Proof. a ^ > a 2* implies that u + V(a^) > u + V(a 2*), which implies that U - 1(u + V(a L*)) > U _ 1(u + V(a 2*)) (since U-^ is Increasing), so that U'\"1 ( V ) > u ' - 1 ik- ) b y I3!* Therefore, s j * > s 2 * by [1] . Remark : a^* > a 2* also implies that > ^ Result (3). If ki > k 2, the principal is better off with aj * > 0 and a 2* = 0 than with a ^ = 0 and a 2* > 0 . U6 Proof. It is necessary to show that 1 2 / (x - S j * ) g ( x ^ * ) dx > / (x - s 2*) g (x|a 2*) dx, that i s , k ^ * - > k 2 a 2 * - C 2 . [5] Note that (a 2*, s 2*) is feasible for Problem 1: / U(s 2*) g 1(x|a 2*) dx - V(a 2*) = U(C2) - V(a 2*) = u . Therefore, k j 3 ] * ~ s j * > k l a 2 * ~~ s2* because °^ fe a s i b i l i t y of (a 2*, s 2*) for Problem 1 and optimality of (a^*, s^*) for Problem 1. Furthermore, k^a 2* - s 2* > k 2 a2* ~ s2* b e c a u s e k l > k2 ^ ^ » a n d n e n c e t 5 l holds. Q.E.D. Proof of Proposition 3.5.1: In Lemma 2A.1 of Appendix 2, i t was shown that i f x± ~N(ka l t and an interior solution (a^* > 0, a 2* > 0) is optimal, then i t must be that = Uj,- Let u = , a* = (a^*, a 2*), and a* = a^* + a 2 * ' This interior solution satisfies the Nash conditions f a (xjlaj) ) f(x x|a 1)f(x 2|a 2)dx - V'( a i + a 9) = 0, i=l,2. The condition for i=l is f a ( x j a ^ ) 2 v {\"857 1 f k i l * i * > * f ( x l K ) d x l f a ( x2' a2* ) 3 , a2 1 L + -£7 I f ( x 2 l a 2 * ) ' K^W^V** 1 \" + a2> \" °» or 2 p - i - / (kx L - k 2 3 l*) •f (x 1 |a 1 )dx 1 - V ' ^ + a 2) = 0 117 i.e., 2 pk - V'(a*) = 0, which would also result from the i=2 condition. Hence, there is really only one Nash condition. The principal's expected u t i l i t y is f (x.la.*) / ( X l + x 2 - ( X + y Z 3 i ^ ) ) f(x 1|a 1*)f(x 2|a 2*)dx 1dx 2 = ka* - X 2 - 2u 2k 2 (see equation (A2.2) in Appendix 2). The agent's expected u t i l i t y i f effort a* is exerted is 2 / ( X+ y j f./x^a.*) f / x | a *) > f(x 1|a 1*)f(x 2|a 2*)dx 1dx 2 - V(a*) = u , j 1 j which implies that 2X - V(a*) = u . Now suppose that a2=0, the minimum effort, and that x 2 is ignored for , f a(x|a) compensation purposes. Consider s(x) = [ X + u fOcTa) ) where X, u, and a* are the same as in the interior solution above. The Nash condition is now a f ( x | a ) ) f(x|a)dx - V*(a) = 0 , la\" or 2 U - L - / (kx - ka*)f(x|a)dx - V'(a) = 0 , that i s , 2vk - V'(a) = 0 , which is satisfied at a=a* The principal's expected u t i l i t y If a=a* i s / ( x - ( X+ y f a(x|a) f(x|a) ) ) f(x|a*)dx 2 2 2 = ka* - X - y k 2 2 2 > ka* - X - 2y k , which is the principal's expected u t i l i t y with an interior solution. Since the agent's expected u t i l i t y is 118 unaffected, the principal is s t r i c t l y better off, and the Nash condition holds, a boundary solution is optimal. Q.E.D. Proof of Proposition 3.5.2; The Hamiltonian for the two-task problem is H = E(x L + x 2 - s*(x)) + X [ EU(s*(x)) - + a 2) - u ] + Ji,_ - j ^ - [ EU(s*(x)) - V(a x + a 2) ] + ^ [ EU(s*(x)) - V(a x + a 2) ] The f i r s t order conditions are 3H \"5T7 = o. 
[i] 3H = 0 pointwise, [2] [ EU(s*(x)) - V ( 3 l + a 2) ] - 0 , [3] and EU(s*(x)) - V(a L* + a 2*) = u As before, [2] implies that U'(s*(x)) 2 V ' j IV X + Z U. -=4 1 r -j = 1 J ^jl^j) 3H 3a, = / ^ + x 2 - s*(x))f (x L |a 1)f(x 2 |a2)dx + 0 + ^ -1^ [ EU(s*(x)) - V( .) ] 3 a l 119 = 0 . = - g i / ( x t + x 2 - s*(x))f(x 1|a 1)f(x 2|a 2)dx + 0 + u, [ EU(s*(x)) - V( •) ] 9a2 = 0 . These imply that 3H 9a, 9H 9a, [5] Similarly, i t is necessary that / U(s*(x))f(x 1|a 1)f(x 2|a 2)dx = V ' ( a 1 * + a 2*) = / U(s*(x))f(x 1|a 1)f(x 2|a 2)dx [6] It is clear that (a^* = a2* and u^* = v^*) constitute a solution to condi-tions [5] and [6]. Therefore, i f a unique interior solution is optimal, then i t has a^* = a2* and u^* = v^*• The particular values of a^*, u^*, and X are determined from conditions [1], [3], and [4]. Q.E.D. 120 Proof of Proposition 3.5.3: Consider f i r s t the situation where a 2 = 0 and the agent's compensation is based only on xj_. Dropping the subscript for convenience, the optimal sharing rule Is f a(x|a) t ( x ) = [ X0 + *b -fUTaT ] 2 I * = [ XQ + u 0z'(a*)(x - E(x|a*)) ] 2 , where UQ > 0 (Holmstrom, 1979). Recall that E(x|a) = B'(z(a)). The princi-pal's expected return is / (x - t(x))f(x|a*)dx = B*(z(a*)) - X2, - uj(z'(a*)) 2 / (x-E(x| a*)) 2 f (x | a*)dx = B'(z(a*)) - X2, - v g(z'(a*)) 2 B\"(z(a*)) , [1] since Var(x|a*) = B \" ( z ( a * ) ) . The agent's expected u t i l i t y is f (x|a*) 2 ' ( \\) + % f(x|a*) ^ f ( x l a * ) d x \" V( a*) - u . which implies that 2 XQ - V(a*) = u . The Nash condition i s 2 ' ( \\) + H> ff(x|a*) ) fa(x|a*)dx - V(a*> = 0, f 2 (x|a*) O T 2 V 0 / f(xla*) d X - V'<\"> = ° • 121 Now consider the two-task situation, where fCx-^ja) = f(x2|a) i f x^ = x2« Let 2 V ^ i ' V s(x 1,x 2) = ( ^ + u Z i = 1 f C x ^ ) )2 . , _ a* where a' = (a^', a 2') and a^' = a 2 - y- . The Nash conditions are now 2 f a < XiK'> 2 / ( + y ^ f C x J a p >f a iU 1|a 1)f(x 2|a 2)dx 1dx 2 - V'(a L + a 2) = 0 and 2 y x i ! a i ' > 2 1 ( ^ + \" J l f U J a ^ ) ) f ( x 1 | a 1 ) f a 2 ( x 2 | a 2 ) d x 1 d x 2 - V * ( a i + a 2) = 0 . When evaluated at a', the Nash conditions reduce to £ a , ( x l l a l , ) fa ^ 2 ^ 2 ^ 2 \" J fCxJa,') d ' l • T' ' 2 » / f(x 2|a 2') d x 2 The Nash conditions w i l l thus hold at a' i f ^ \" l ' V * fj(x|a*) P / f C x ^ a ^ ) d x l = Ho / f(x|a*) d x that i s , i f C = yCz'Ca^)) 2 B , ,(z ( a 1 ' ) ) = ^(z'Ca*)) 2 B\"(z(a*)) (see equation (Al.9) in Appendix 1). [2] 122 Equation [2] is true i f (z'(a*)) 2 B\"(z(a*)) _ z'(a*)M'(a*) r,. a* a* z f (— }M' f—} (z'Cy-)) B\"(z(|-)) 4 ; M 4 ' The principal's expected return is / (x L + x 2 - s(x 1,x 2))f(x 1|a 1')f(x 2|a 2')dx 1dx 2 = 2B'(z(^L)) \" \\ j 2 - 2 u 2 ( z ' ( ^ ) ) 2 B\"(z(|^)) . [4] Since M(a) i s concave, M(a*) < 2M(|^ -) . (Proof: V2M(0) + !/2M(a*) < M(|-) because M( •) is concave. If M(0) > 0, thenV2M(a*) < M(|-)). That i s , B'(z(a*)) < 2B'(z(-2 -^)) . [5] Suppose that Z'' ^ \" ' ^ < V 2 . [6] z'(f-) M'(|-) The difference between [4] and [1] is 2B'(z(4r-)) \" B'(z(a*)) - 2pC + i^C > 0 because [5] holds and because [3] and [6] imply that H, - 2 - H , 11 - 2 , , y ) K , y ) i > o. 0 U z'(|_)M' If M(a) is s t r i c t l y concave, then the Inequality in [5] becomes a strict inequality, and hence the s t r i c t inequality in [6] can be relaxed to be a nonstrict Inequality (<). Finally, the agent's expected u t i l i t y in the two-task situation described above is s t i l l 2XQ - V(a*) = u. 
Since the principal is better off with an interior solution which satisfies the Nash conditions, a boundary solution is not optimal. 123 Proof of Corollary 3.5.4: If M(a) = ka, then (3.5.5) reduces to z'(a*)/z'(^*-) V2 . (iv) For the Poisson distribution with mean ka, z(a) = ln ka = ln k + In a (see Table II in Appendix 1). z'(a)/z'(f) = i / | = V 2 . Q.E.D. Proof of Proposition 3.5.5: In this case, B'(z(a)) = ka, B\"(z(a))z'(a) = k, and B1 ' , ( z ( a ) ) ( z , ( a ) ) ( z ' ( a ) ) 2 + B' • ( z ( a ) ) z \" (a) = 0 . Equation (A1.8) reduces to k + |£ z' ( a 1 * ) z \" ( a 1 * ) B \" ( z ( a 1 * ) ) = k + z' ( a 2 * ) z \" (a 2*)B' ' (z(a 2*)) , which implies that ^ z \" ( a i * ) = ^ z\"(a 2*) . [1] Equation (A1.9) reduces to U l Z'( a i*) = li2z'(a 2*) . [2] 124 [1] and [2] together imply that z \" ( a * ) z \" ( a * ) [3] ( z ' ( a i * ) ) 2 ( z ' ( a 2 * ) ) 2 ' 2 Let v(a) = z''(a)/(z'(a)) . If v(a) i s s t r i c t l y monotone, then [3] Implies that a^* = a 2 * . This i n turn implies, from [1] or [2], that Examples Q.E.D. -1 1 -2 1) Exponential: z(a) = ^ , z'(a) = — j , z''(a) = — ^ » a n a* ka ka k a 3 v (a) = : = -2ka . e l / i,2 4 k a v (a) i s s t r i c t l y decreasing i n a, so a,* = a 9 * and u. * = u *. 2) Gamma: z(a) = 7 - ° - , so v (a) i s a constant multiple of v ( a ) . lea g e Therefore, a^* = a 2 * and p^* = u^* • 3) Normal with unit variance: z(a) = ka, z'(a) = k, z''(a) = 0, and v R ( a ) = 0. Recall that i n t h i s case, a boundary s o l u t i o n i s optimal. If an i n t e r i o r s o l u t i o n i s required, [2] indicates that the multi-p l i e r s Pj and p 2 would have to be equal. 4) Poisson: z(a) = ln ka, z'(a) = , z''(a) = - and a _ J _ 2 v = — H-1 . Therefore, the s o l u t i o n i s not unique and we ~2 a cannot say whether p^ = P2 . 125 Proof of Proposition 3.5.6: Equation (A1.6) can be written as ^ 1 ( 8 ^ ) = UjKaj*). [1] d f a 2(x|a) 2f f f a 3 Note that I* (a) = S f ( x | a ) d x = / ( ~ ) d x> and hence equation (A1.7) can be written as I'( a i*) = I'(a 2*) . [2] Equations [1] and [2] together imply that I'(a 1*) _ I'(a 2*) ~2 2 ' r ( 3 l * ) I (a 2*) Therefore, i f T.'(a)/IZ(a) is s t r i c t l y monotonic, then aj* = a 2*, which implies that u^* = l ^ * (equation [1]). Q.E.D. Proof of Corollary 3.5.7. For cases (i) - ( i i i ) , an interior solution is optimal i f (3.5.6) holds. (i) z(a) = M(a), and hence (3.5.6) requires that ( ^ f f or 2 2 - 2 .2 , fM'(a) 2 ; ,2 126 or 2 2 o r • (—|— ) 2 c t , which is satisfied when 0 < a< 1, since ,2cr2-2a 1 . 1 4 ^ 2 * ( i i i ) z(a) = ln M(a) , and hence z'(a) = ^ ' ( a ) = -M(a) a Equation (3.5.6) requires that a orl 3 ^ 0 e l 2 1 2 since k^ > k 2. Therefore, a^* > a 2*, and hence, u-^* > u 2* (from [3]). (ii) z(a) - - £- , z'(a) = -Ar , and z\"(a) = - 2 n ka ' * v\"' . 2 ' \" v o / , 3 ka ka An analysis similar to that in (i) establishes the result. ( i i i ) z(a) = ka, z'(a) = k, and z''(a) = 0. Equation [2] becomes k l = k2 » which contradicts the assumption that kj > k 2. Therefore, the optimal solution is a boundary solution. Suppose the optimal solution has a^' = 0 and a 2' > 0' It wi l l be shown that there is a Pareto superior solution ( a ± * , a 2 * ) , with a^* > 0 and a 2* = 0. The optimal sharing rule i f only task two has nonzero effort is f (x 2|a 2«) a2 1 Z 2^ 5 - ( X + \"2 f(x 2|a 2«) ^ The Nash condition is (see equation (A2.3) in Appendix 2) 2k2U2 - V ( a 2 ' ) - 0 . 
The agent's expected u t i l i t y is 2A - V(a 2') = u , and the principal's expected return is (see equation (A2.2) in Appendix 2) 2 2 2 k2 a2' ~ * ~ 2 w2 k2 ' 128 Now consider the pair (a^*,a 2*), where a^* = a2' and a 2* = u i and consider the sharing rule f (x.la,*) t ( x l > = ( X + h f ^ x j a ^ ) where = — j - • The agent's expected u t i l i t y (with effort k l exerted only at task one) is s t i l l u, and the Nash condition is 2 2 satisfied, since 2^ = 2k2 and a 2' = a ^ . Furthermore, the principal is s t r i c t l y better off because k 2 k ^ * - X2 - l{ k 2 = k l a 2 ' - X2 - 2U* k 2 ( J - ) > k 2a 2' - X2 - 2u 2 k 2 . (iv) z(a) = ln ka, z'(a) = —,and z''(a) = ~ —« • Equations [1] and [2] 3. £• a become U1*k1 u 2*k 2 = ^ and a l a 2 P 2 ~ k l 2 k 2 k l + \" L * ' — 2 = k 2 + V '—71 a l * a2 which together imply that 2 2 R R k - -E. = k - _E K l k x K2 k 2 * R P < ^ \" 17 > • k i \" k 2 > 0 since ki > ko. Therefore, -r— ~ 17- > 0, which implies that k l *2 k l ^ k 2 (contradiction). Hence, a boundary solution is optimal. 129 Suppose the optimal solution has a^' =0 and a 2' > 0. It w i l l be shown that there is a Pareto superior solution (ai*,a 2*), with aj* > 0 and a 2* = 0. The optimal sharing rule i f only task two has nonzero effort is f (x |a2') s(x 2> = ( X + \"2 f ( x 2 l a 2 ' ) ^ » X2 = ' The Nash condition, evaluated at 8 2 ' » is - f - ! ( x 2 l * 2 f ) E \"2 f(x la ') \" V ' ( a 2 , : > = ° » x2=0 1 u x 2 l a 2 ; that i s , (see Appendix 1 ) , ^ ( z ' t a ^ ) ) 2 B \" ( z ( a 2 ' ) ) - V ( a 2 ' ) = 0 , \"2 \" -^2 k 2 a 2 ' ~ V , ( a 2 , : > - 0 » a 2 \"2k2 which implies that ; V'(a ') = 0 . a 2 The agent's expected u t i l i t y , evaluated at a2', is 2X - V(a 2') = u , and the principal's expected return is . , .2 \"2 k2 k 2 a 2 \" X ~ * Now consider the pair (a^*,a 2*), where aj* = a 2' and a 2* = 0, and consider the following sharing rule, 130 where = ^ — • The agent's expected u t i l i t y (with effort exerted only at task one) is s t i l l u, and the Nash condition is satisfied, since k^ = k2 2^ a n d a l * = a 2 ' * Furthermore, the principal is s t r i c t l y better off, because \\ i 2 2 2 \\ 1 2 k,a * - A - -!-4 = k,a ' - X -*1=1 1 2 V 2 _ k l V 2 2 \"2 2 > k 2a 2' ~ X - — - j - . Q.E.D, Proof of Proposition 3.5.9. The Nash conditions require that 2 4>a ( x l a ) ^ / 2 ( x + j = \\ ^-wnr-j=l , 2 . ) <«x|a)dx V ' ( a i * + a2*) = 0, For j=l, the condition is 8 a 2 ( x 2 l a 2 * ) 2 \\ I f ' x j a ^ ) d x l + Zv2 I g(x 2|a 2*) * f a i(x 1!a 1*)g(x 2|a 2*)dx - V ' ( a i * + a2*) = 0, which reduces to f a 2 ( x l | a l * > 2 \\ $ f f r j a ^ ) d x l = V ' ( a i * + a 2*) . Since f( •) belongs to Q, the condition can be written as (see Appendix 1) 2 u i ( z ' ( a 1 * ) ) 2 B \" ( z ( a i * ) ) = V'(a x* + a 2*) . Since V > 0 and B\" ( z ^ * ) ) = V a r ^ ^ a ] * ) > 0, j ^ * > 0 • A similar analysis for j =2 shows that j ^ * > 0. 132 Appendix 4 First Best The principal's problem is to Maximize / / W(x-s(x^ .x^) ) (x^ ,x21 a^,a 2( •) ) dx2dx^ s( •) ,a 1,a 2( •) subject to / / [U(s(x 1,x 2))-V(a 1,a 2(«))]'()(x 1,x 2|a 1,a 2(0) dx^x^^ > u, where ^(x^ ,x2 |a^ ^ ,a 2( •)) = f (x^ |a 1)g(x 2 \\-x^ ,a 2( •)) and a 2( •) indicates that the agent's second-stage effort is in general not a constant, but rather can depend on any information available at the time of choice. In the scenario described, a 2 may depend on x^. The Hamiltonian Is / / W(x-s(-))(Odx 2dx 1. 
Differentiating the Hamiltonian pointwise with respect to s yields -M'lf + X U'<|> = 0 for almost every (x^.x^, which implies that W'(x-s(x 1,x 2)) , . , r-r— = X for almost every (x. , x „ ) . (A4.1) u Q S ( , X ^ , X 2 , ) } 1 i. 1) Risk averse principal, risk neutral agent (i.e., U' = 1). Equation (A4.1) implies that W(x-s(xi,x 2)) = X for almost every (xi,x 2 ) , which implies that x-s(xi,x 2) i s constant for almost every (x i , x 2 ) , which in turn implies that s(x 1,x 2) = x-c, where c is a constant. It w i l l be shown below that a 2 is independent of x^ i f x^ and x 2 are conditionally independent, in which case c = E(x|a*) - V(a*) - u. 2) Risk neutral principal, risk averse agent (W'(x-s(«)) = 1). Equation (A4.1) implies that U'(s(xi,x 2)) = constant for almost every (xi , x 2 ) , which implies that s( •) is a constant for almost every ( x j _ , x 2 ) ' « If x^ and x 2 are conditionally independent, then a 2 i s independent of x^ and s( •) = U - 1(u + V(a*)). 133 3) Both individuals risk averse. Equation (A4.1) implies that x-s(x 1,x 2) = W'-1( AU' (s(x x ,x 2))) = G(s(x)), where G' > 0. Therefore, x = G(s(x)) + s(x) = H(s(x)), where H' > 0. Thus, s(x) = H _ 1(x), where H'\"1 > 0. 4) Both individuals risk neutral. ^ In this case, the agent's expected u t i l i t y constraint implies that s ( X l , x 2 ) = u + V(a*). The choice of the agent's effort decisions w i l l f i r s t be examined in the simplest case, where the principal is risk neutral, the agent is risk averse, and the outcomes are conditionally independent. That Is, (Kx - p X^a^.a^ •)) = f (x^ | a^)g(x 2 | a 2( •) ) > where we allow for the possibility that the second effort decision depends on the f i r s t outcome. Since the optimal sharing rule Is s(xi,x 2) = s (constant), the function to be maxi-mized is / / (x-s) ^ (x! ,x2 l a j ^ ,a 2( •))dx 2dx 1 + A[ / / (U(s) - V(a 1 >a 2(0)}«x 1,x 2|a 1,a 2(«))dx 2dx 1 - il] , or, ignoring constants, / x 1f(x 1|a 1)dx 1 + / / x 2f(x!|a 1)g(x 2|a 2(•))dx 2dx 1 - A / / V(a 1,a 2(.))f(x 1|a 1)g(x 2|a 2(.))dx 2dx 1. (A4.2) (A4.2) can be rewritten as / [xj_ + { / [x 2 - X V(a 1,a 2(0)]g(x 2|a 2(«))dx 2}]f(x 1|a 1)dx 1. (A4.3) For each fixed xj_, maximizing the expression inside the braces with respect to a 2 w i l l maximize (A4.3) with respect to a 2. Since the expression depends on x^ only through a 2, a 2( •) is the same for almost every x^. That i s , a 2 134 does not depend on x^. A s i m i l a r analysis can be done f or the case where the p r i n c i p a l i s r i s k averse and the agent i s r i s k n e u t r a l . F i n a l l y , If both i n d i v i d u a l s are r i s k averse, then the function to be maximized i s / [ / W ( x - s ( x ) ) g ( x 2 | a 2 ( - ) ) d x 2 ] f ( x 1 | a 1 ) d x 1 + X [ / { / U ( s ( x ) ) g ( x 2 | a 2 ( »))dx2 - V ( a 1 > a 2 ( 0 ) } f ( x 1 | a 1 ) d x 1 ] . In this case, a 2 ( 0 w i l l generally depend on x^. Maximizing (A4.3) with respect to a^ re s u l t s i n the condition that aE(x |a' 1)/aa 1 = X . Maximizing (A4.3) with respect to a 2 (which i s independent of x^) results i n the condition that 3E(x 2|a 2 )/aa 2 = X 3V(-)/3a 2. Proof of Proposition 4.1.1. Under the given assumptions, a 2 ( •) w i l l depend on xj . Let M ^ a j ) denote the mean of x^ given a^, and l e t M^x^ ,a 2( •)) denote the condi-t i o n a l mean of x 2 with respect to g( • ) . 
The function to be maximized i s J J (xj + x2)( •)dx 2dx 1 - X / / V ( a i , a 2 ( •))•( Odxjdxj = / X j f U j l a ^ d x j + / M^ • ) f ( x 1 |a 1)dx 1 - X / V(-)f (Xj |aj )dxj = MjCaj) + E 1M 2( •) - X EjVC • ) , where E^ represents expectation with respect to f( • ) . The f i r s t order con-dit i o n s with respect to e f f o r t are then 3M 1(a 1)/3a 1 + B E ^ C O ^ = X ffi^C and 3M 2(»)/3a 2 = X 3V(0/aa 2 (A4.5) 135 for almost every and for a^ = ai£. The sign of a^'Cx^) can be determined by taking the derivative of (A4.5) with respect to x^. Let the second and third subscripts of j on M 2 denote partial differentiation of M 2 with respect to the j-th argument of M2(xpa^,a2( •)) • Taking the derivative of (A4.5) with respect to x^ results in M233 a* 2' + M 2 3 1 = X [3 2V ( 0/3a 2]a 2« or a*'( X l) = -M 2 3 1/[M 233 - X[3 2V(0/3a2]] Q.E.D. Second Best Let <( + X U' + UjU*^ + p 2(x 1)U'g a f = 6 for almost every ( x 1 , x 2 ) . That i s , •a ^a 1 , . 1 . , x 2 u ' C ( x l t x 2 ) ) = X + y i — + W T' where the subscript a 2 represents differentiation with respect to a 2 for each fixed x^. 136 (b) A t a - a * , | - - / / ( x - s C x j . X g ) ) * (-)dx + h *rr I / / u d x i l + ' W \" a l^ ^ [ ' U ( S ( * ) ) 8 a 2 ( 0 d x 2 \" V a 2(-)]f(x 1|a 1)dx 1} = 0. (c) At a = a*, and for every fixed x^, -g- = / (x-s(x 1,x 2))g(x 2|a 2<•))dx 1dx 2 = X 2 + 2 X U J Z J U J X M J U J ) - Mj(a*)) + ^ z J 2 ( a * ) t V a r ( X l l a p + M 2 ^ ) - a t^apMj U j ) + M2(a*)] , 2 2 2 since E(x-a*) = Var x + (Ex) - 2a*Ex + a* . E(D) = X 2 + u J z ^ C a j O V a r C x J a * ) E(F) = 2 X / / u 2(x 1)z^(a*)[x 2-M 2(a*(«))]f(x 1| a i)g(x 2|a 2( •))dx 1dx 2 + 2 Uj, / / z'(aJ)(x 1-M 1(a{))u 2(x 1)z^(a§(0)(x 2^M 2(a§(«)))f (0g(0dx 1dx 2 = 2 X / u 2(x 1)z^(a*)[M 2(a 2(x 1)) - (a*(x x))]f(x x|a y)dXj + 2u 1z^(a*) / z^(a*('))P 2(x 1)(x 1-M 1(a*))(M 2(a 2 ( 0 ) \" M 2(a$( 0 ) ) f ( OdXj, E(F)| = 0. Is* E(G) = / / u 2(x 1)z 2 2(a*(-))(x 2-M 2(a*(-))) 2f(')g(*)dx 1dx 2 = / z 2 2 ( a * ( - ) ) u 2 ( x 1 ) [ V a r ( x 2 | a 2 ( - ) ) +M 2(a 2(-)) - 2 M 2 ( a * ( 0 ) M 2 ( a 2 ( 0 ) + M 2(a*( •)) 1 f ( OdXj . 2 2 E(G) ' , Z ' \" = / z ^ ( a * ( • ) ) ^ ( x 1 ) V a r ( x 2 | a * ( • ) ) f ( x 1 | a * ) d x 1 Therefore, (2) - 2 2 ' 2 - + u^zj ( a * ) V a r ( X l | a * ) * + / z^ 2(a*( • ) ) ^ ( x 1 ) V a r ( x 2 | a * ( • ) ) f ( x 1 | a * ) d X l (3) / / 2/s(x) f(x 1|a 1)g(x 2|a 2(»))dx 1dx 2 - / V(aj ,a 2( 0 ) f (xj |aj )dxj = 2 / / [ X + y 1 z j ( a * ) ( x 1 ^ 1 ( a * ) ) + (xj )z^(a*( • ) ) ( x ^ (a*( •)) ) ] • f( ')g( •)dx 1dx 2 - / V(a 1 >a 2(»))f(x 1|a 1)dx 1 = 2X+ 2y 1z{(a*)(M 1(a 1) - Mj(a*)) + 2 / M 2 ( x 1 ) z 2 , ( a * ( 0 ) [ M 2 ( a 2 ( . ) ) - ML, (a*( •) ) ] f ( OdXj - / V ( a i , a 2 ( • ) ) f ( x 1 | a 1 ) d x 1 . (3) - 2 X - / V ( a i , a 2 ( • ) ) f ( x 1 | a 1 ) d x 1 = Z. (4) = 2y 1z|(a*)Mj(a 1) + 2 / ^ ( x j ) z ' ( a * ( •)) [ M 2 ( a 2 ( • ) ) - M 2(a|( 0 ) ] f a d x j - / V a f ( x 1 | a 1 ) d x 1 - / V ( - ) f ^ ( x j | a 1 ) d x 1 . (4) 1 = 2w l Z*(a*)M^(a*) - Jv& f C x J a - p d X j - / V( - ) f & (x 1|a*)dx 1 l a * 1 1 (5) F i x x x. ^p- = 2 H 2 ( x 1 ) z ' ( a * ( 0 ) M ^ ( a 2 ( 0 ) f ( x 1 | a 1 ) - V (•)f(x.|a 1) = 0, which implies that &2 V (a*) W \" 2 Z ' ( a * ( x 1 ) ) M ^ ( a * ( x 1 ) ) * C l e a r l y , U ^ X j ) > 0 i f V & ( •) > 0. 
This establishes r e s u l t ( i ) ( a ) , i s 139 After i s r e a l i z e d , the agent's expected u t i l i t y given xj and a^(x^) 2 / Tsjx) g ( x 2 | a * ( X l ) ) d x 2 - V(a*,a*(x 1)) = 2 J [\\+ UjzJCajXxj-MjCa*)) + j^Cxj ) z ' ( a * ( X j ))(x 2^M 2(a*(x 1))] • g(x 2| Odx 2 - V ( a * , a * ( X l ) ) = 2 [ A + UjzJCaJXxj-MjCa*))] - V(a* . a * ^ ) ) . D i f f e r e n t i a t i n g with respect to x^ y i e l d s 2 V l < a l > - V a 2 ( ' ) a 2 , ( X l ) * The agent's expected u t i l i t y for the second stage pecuniary return ( i . e . , s(x)) i s an increasing function of x^ (assuming p^ > 0). Assuming that V ( •) > 0, a s u f f i c i e n t condition for the agent's expected second stage net a2 u t i l i t y to be increasing i n x^ i s that a^(x^) be a decreasing function of x j . This establishes r e s u l t s ( i ) ( b ) and ( i i ) ( a ) . Now f i x xj and l e t a 2 denote a 2 ( x ^ ) , and f denote f ( x j j a ^ ) . f - = M ' ( a 2 ) f - 2 X u 2 ( x 1 ) z ' ( a * ) M ' ( a 2 ) f - 2 u 1 z J ( a * ) z ^ ( a * ) p 2 ( x 1 ) ( x 1 - M 1 ( a * ) ) M ^ ( a 2 ) f - z 2 2 ( a * ) p ^ ( X l ) [ B 2 \" ( z 2 ( a 2 ) ) z ^ ( a 2 ) + 2M 2(a 2)M 2(a 2) - 2M 2(a*)M 2(a 2)]f + P 1 [ 2 u 2 ( x 1 ) z 2 ( a * ) M ' ( a 2 ) f a i - V ^ f - V ^ C O f ^ l + u 2 ( x 1 ) [ 2 p 2 ( x 1 ) z ^ ( a * ) M ^ ' ( a 2 ) f - V ( O f ] . 3H - H J U * ) - 2y 2(x 1)[M 2(a*)Xz 2(a*) + u l V a f a / f l a 2 al 1 a i a 2 140 + y^(x 1)[2z 2(a*)M' ,(a*) - z 2 3 ( a * ) B ' \" ( z 2 ( a * ) ) ] = 0. (Note that f /f a l - zJ(a*)(x 1-M 1(a*)) = 0.) Substituting the expression for v^ix^) from (5) above and l e t t i n g subscripts j on V represent p a r t i a l d i f f e r e n t i a t i o n with respect to the j - t h e f f o r t variable y i e l d s V2 V22 M2 \" W 2 - 2 ^ \" ^ V i ^ l v V ' V 2 Z : B : * ' V 0 z ! (x, -M. ) - y , V 1 0 + „ - f = 0. 112 2 2z^M 2 Z 4M ,2 D i f f e r e n t i a t i n g with respect to x^, M2 a2 XV 2 2a 2 a' V fi r 22 2 1 z 2M 2 + V2 V222 V 2 V 2 2 ( z - M 2 + z 2M 2') ( z * M p 2 \" l [ V 1 2 2 a 2 + V 2 2 a 2 z l ( x r M l ) a' 2V V M'' + V p[*' ' 2 2 22 2 2 2 V 2zJ] + 4 ( . ) z'M'2 Z2 M2 a 2 V 2 2M 2*(z^'M 2 2 + z^2M2M2') 2 ( z ^ 2 ) 2 a' 2V V z'B1 ' 1 f 2 , 2 22 Z2 2 4 2 2 2 2 V z''B' ' 1 V z'B'' ' ' 2 z2 H2 + 2 z2 B2 M, ,2 M, ,2 V 2z'B'''2M'' v2 2 2 2 M ,3 -] = 0. Recall that M 1(a i) = a ^ so that = 1 and M|' =0. Thus, the expression above reduces to £a V 2 + V V . . . , . . , / , U „ . 22 2 222 2 22 2 a 2 < X l > = \" \" l V 2 Z i / D ' W h e r e D = (X + U l — ) V 2 2 + — —j-141 V 2 V 2 2 z 2 B 2 \" ' V 2 2 ( z 2 ' B 2 \" + z 2 2B 2\"') + 1V122 + Recall that i t is assumed that V 2 > 0, V 2 2 > 0, V 2 2 2 > 0, and V 1 2 2 > 0. It is easily checked that for the exponential, gamma, and Poisson distributions in 0, z' > 0, z \" < 0, B , , ,» > 0, and z'^'** + z , 2B'*\" > 0. These facts, plus the f i r s t order condition requiring that X + y. f /f be positive, guar-antee that the denominator of a 2(x^) is positive. The sign of the numerator is the same as the sign of y^. Hence, i f y^ > 0, then a*,( •) is a decreasing function of x^. This establishes result ( i i ) ( b ) . Q.E.D. Proof of Corollary 4.4.1: It is necessary to show that V* > 0, V*2 > 0, V* 2 2 > 0, and V* 2 2 > 0. The derivatives of M 1 w i l l f i r s t be calculated. Dropping subscripts for convenience, 1) M _ 1(M(a)) = a implies that M - 1 '(M(a))M'(a) = 1. Therefore, M_1'(M(a)) = 1/M'(a) > 0. 2) M _ 1\"(M(a))M'(a) = - M\"(a)/(M'(a)) 2. Therefore, M _ 1\"(M(a)) = - M\"(a)/(M'(a)) 3 > 0. 
3 2 -1'\" M\" 'M' - M''3M' M1 ' 3) M 1 (M(a))M'(a) = - [- ^ ]. M' ™. * \\ \\ 3M , , 2-M , , ,M' Therefore, M (M(a)) = = M,:> Let subscripts j on V* denote partial differentiation with respect to ej. Then V* = V'M^' = V'/M'(a2) > 0, 142 V* V22 -1' 2 -1'' V 1 V \" ( M 2 1 ) Z + V'M^ j V'M 2'(a 2) ( M ^ ( a 2 ) ) ' (M^(a 2))--1* -1' 2 -1'' V * 2 2 = [ V ' \" ( M 2 i y + V \" M 2 l ] 1 ^ v,,, V\"M 2'(a 2) M i ( a l } [ ( M 2 ( a 2 ) ) 2 (M 2(a 2)) — ] > 0, and V* 222 V \" ' ( M 2 ) + V\"2M 2 M 2L + V \" M 2 M 2 + V'M2 3 V \" M 2 \" ( a 2 ) ^ V'(3M''2 - M 2\"M 2) (M ' ( a 2 ) ) 3 ( M 2 ( a 2 ) ) 4 M ,5 Thus, I f (3M1' 2 - M^'*Mp > 0 at a*, then V* 2 2 > 0, as required. Q.E.D. "@en ; edm:hasType "Thesis/Dissertation"@en ; edm:isShownAt "10.14288/1.0096655"@en ; dcterms:language "eng"@en ; ns0:degreeDiscipline "Business Administration"@en ; edm:provider "Vancouver : University of British Columbia Library"@en ; dcterms:publisher "University of British Columbia"@en ; dcterms:rights "For non-commercial purposes only, such as research, private study and education. Additional conditions apply, see Terms of Use https://open.library.ubc.ca/terms_of_use."@en ; ns0:scholarLevel "Graduate"@en ; dcterms:title "Multifaceted aspects of agency relationships"@en ; dcterms:type "Text"@en ; ns0:identifierURI "http://hdl.handle.net/2429/25642"@en .