Essays on the Persistence of Leverage in Residual-Based Portfolio Sorts

by

Michael Mueller

B.Com., Queen's University, 2002
M.Sc.B., The University of British Columbia, 2005

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY in The Faculty of Graduate Studies (Business Administration)

THE UNIVERSITY OF BRITISH COLUMBIA (Vancouver)

September 2012

© Michael Mueller 2012

Abstract

Firm leverage is a slow-moving, persistent variable. This persistence remains after controlling for leverage determinants. When firms are sorted into portfolios on the basis of residuals from a regression of leverage on various factors, and then tracked for 20 years, the mean leverage level of these portfolios still exhibits long-term persistence and slow convergence over time, as documented by Lemmon et al. (2008). My thesis focuses on measurement error in the explanatory variables as a possible driver behind this long-run persistence.

I show theoretically in Chapter 2 that if a firm's leverage dynamics are driven by a persistent explanatory variable that is measured with error, using the mismeasured explanatory variable in a regression can create persistence in residual-sorted portfolios: conditional on an observed residual, future expectations of leverage are no longer equal to the unconditional mean. Instead, a large positive residual will forecast above-average future leverage. This is because the estimated residual is correlated with the unobservable explanatory variable, which in turn predicts leverage.

In Chapter 3, I quantify the amount of measurement error that is consistent with the documented persistence of leverage in residual-based portfolio sorts. If we assume that a single factor drives leverage (we can think of this factor as a composite of many tradeoff theory-based explanatory variables), then the measurement error variance of this single "composite" variable needs to be 42% larger than its cross-sectional variance. While this seems large, even smaller levels of measurement error produce a remarkable level of persistence in residual-based portfolio sorts. I then examine several explanatory variables used in regressions in the literature, namely a firm's profitability, the tangibility of its assets, the market-to-book ratio, and industry leverage. I find that low quantities of measurement error in profitability, tangibility, and industry leverage, coupled with a measurement variance equal to about 80% of the cross-sectional variation in the market-to-book ratio, produce a good fit of simulated sample data moments to empirical moments. This is an interesting finding, since it suggests that unobserved investment opportunities may play an important role in explaining leverage ratios.

Table of Contents

Abstract
Table of Contents
List of Tables
List of Figures
Acknowledgments
Dedication
1 Introduction
  1.1 Related Literature
2 Persistence as a Consequence of Measurement Error: Theory
  2.1 Introduction
  2.2 Recent Evidence of Long-Term Persistence in Leverage
    2.2.1 Replication of Lemmon et al.'s (2008) Findings
  2.3 Possible Explanations
  2.4 Benchmark Model
  2.5 Initial Leverage Dispersion is a Function of $R^2$
  2.6 Time Series Persistence in Leverage Portfolios as a Result of Measurement Error: Intuitive Explanation
    2.6.1 Base Case: A Correctly Specified Model
    2.6.2 Case II: Persistence as a Consequence of Measurement Error
  2.7 Comparative Statics and Numerical Examples
  2.8 Conclusion
3 Extracting Measurement Error From Explanatory Variables: A Calibration
  3.1 Introduction
  3.2 Estimating Target Leverage Dynamics
  3.3 Estimating Measurement Error by Extracting the Mismeasured Target
  3.4 Multi-Variable Calibration with iid Measurement Error
    3.4.1 Identification
    3.4.2 Results
  3.5 Multi-Variable Calibration with Autocorrelated Measurement Error
    3.5.1 Exogenously Autocorrelated Measurement Error
    3.5.2 Estimated Measurement Error Autocorrelations
  3.6 Conclusion
4 Conclusion
Bibliography
Appendices
A Variable Definitions
B Derivations for Chapter 2
  B.1 Correlation between a Regression's Dependent Variable and the Estimated Residuals
  B.2 Attenuation Bias
  B.3 Conditional Expectation of Leverage Under Measurement Error
C Derivations for Chapter 3
  C.1 Derivation of $\alpha_0$
  C.2 Derivation of $\alpha_1$
  C.3 Derivation of $\sigma^2_e$

List of Tables

2.1 Summary Statistics
3.1 Estimated Structural Parameters, with iid Measurement Error
3.2 Actual and Simulated Moments, with iid Measurement Error
3.3 Measurement Error Ratio with iid Measurement Error
3.4 Estimated Structural Parameters with Persistent Measurement Error
3.5 Actual and Simulated Moments with Persistent Measurement Error
3.6 Measurement Error Ratio with Persistent Measurement Error
3.7 Estimated Structural Parameters with Estimated Measurement Error Persistence
3.8 Actual and Simulated Moments with Estimated Measurement Error Persistence
3.9 Measurement Error Ratio with Estimated Measurement Error Persistence

List of Figures

2.1 Average Leverage of Book Leverage Portfolios
2.2 Average Leverage of Book Leverage Portfolios, Sorted on Unexpected Leverage
2.3 Average Leverage of Book Leverage Portfolios, Sorted on Unexpected Leverage, with Firm Fixed Effect
2.4 Portfolio Convergence at Different Speeds
2.5 Portfolio Convergence when Sorted on Residuals
2.6 Portfolio Convergence when Sorted on Residuals Contaminated by Measurement Error
2.7 Comparison of Portfolio Leverage Dispersion as a Function of Measurement Error
2.8 Portfolio Leverage Dispersion as a Function of β
3.1 Average Leverage of Book Leverage Portfolios
3.2 Average Leverage of Book Leverage Portfolios, Sorted on Unexpected Leverage
3.3 Data-Implied Target Leverage Dynamics
3.4 Sample Mismeasured Target Leverage Paths
3.5 Implied Measurement Error from Residual-Based Sorts
3.6 Residual-Sorted Leverage Portfolios at Different Implied Levels of Measurement Error
3.7 Average Leverage of Portfolios Sorted on Simulated 'Actual' and 'Unexpected' Leverage with iid Measurement Error
3.8 Average Leverage of Portfolios Sorted on Simulated 'Actual' and 'Unexpected' Leverage with Persistent Measurement Error
3.9 Average Leverage of Portfolios Sorted on Simulated 'Actual' and 'Unexpected' Leverage with Estimated Measurement Error Persistence

Acknowledgments

First, I would like to express my deepest gratitude to my supervisor, Murray Carlson, as well as to Adlai Fisher, Ron Giammarino and Russell Lundholm: Your knowledge, advice and helpfulness have been invaluable, and I particularly would like to thank you for your unwavering support through what sometimes have been difficult times. Without your continued help and encouragement this thesis could not have been completed.

I would furthermore like to thank all the finance faculty members and fellow students at Sauder for providing so much insightful advice in seminars, presentations and meetings. I am particularly indebted to Alberto Romero for taking the time to share some of his knowledge of Matlab with me.

I finally would like to thank my parents Ellen and Hans-Joachim, who have made so many sacrifices to support me, and my girlfriend Cheryl, who never stopped believing in me: I cannot even begin to tell you how much this means to me!

Dedication

For my parents Ellen and Hans-Joachim, and for Aunt Elfriede and Grandma Berta, who would have been so delighted.
Chapter 1

Introduction

Firm leverage, the ratio of a firm's debt to its assets, is generally accepted to be a very slow-moving variable. When modeled as an AR(1) process, the estimated coefficient of autocorrelation for the leverage ratio tends to be around 0.9. Ex ante, it is not clear whether this value is too large, and whether there exists an unconditional leverage mean towards which firms' leverage levels should converge. Intuitively, a technology company like Google may be quite different from a manufacturer like General Motors, with a resulting persistent disparity in leverage levels. This disparity could be driven by differences in the costs and benefits of debt each firm faces. For instance, a manufacturing firm has more tangible assets which can serve as debt collateral than a technology company whose assets may contain more intangibles in the form of human capital. This would lead to the former being able to sustain a larger amount of debt than the latter.

Interestingly, this persistence in capital structure hardly diminishes after controlling for factors that we believe drive leverage. When firms are sorted into portfolios on the basis of residuals from a regression of leverage on various determinants motivated by the tradeoff theory of capital structure, and then tracked for 20 years post portfolio formation, the mean leverage level of these portfolios still exhibits long-term persistence and slow convergence over time, as documented by Lemmon et al. (2008). To be precise, there is a wide dispersion between the portfolios in the formation period, which slowly narrows in subsequent periods, but even after 20 years, the portfolio leverage levels have not converged to the unconditional mean. Consequently, in the Lemmon et al. (2008) study, the variable that explains most of the initial cross-sectional variation and subsequent slow convergence is a firm fixed effect. In this thesis, I seek to provide an explanation of what could be behind this firm fixed effect in the context of the Lemmon et al. (2008) portfolio sorts.

My study does not address the issue of whether firms are slowly or quickly adjusting towards a target. Instead, I seek to provide an answer to the question of whether the apparent slow convergence of leverage towards an unconditional mean after controlling for determinants of leverage must be taken as evidence against firms having a time-varying leverage target that is not driven by a firm fixed effect.

My investigation focuses on measurement error in the explanatory variables as a possible driver behind long-run persistence in leverage portfolio sorts. The assumption of measurement error in empirical proxies of the true underlying economic variable is a reasonable one. For instance, under the tradeoff theory of capital structure, the more profitable a firm is, the more debt it will take on to take advantage of the interest tax shield, ceteris paribus. However, accounting profitability does not necessarily equal economic profitability. In setting an optimal capital structure, expected future profitability would be the relevant driver for the firm, while accounting profitability is a backward-looking historical figure. Similarly, more tangible capital makes better collateral, but using PP&E normalized by assets does not tell the full story, as it ignores the liquidity of the collateral. For instance, a construction company and an airplane manufacturer may have a similar proportion of tangible assets.
However, in bankruptcy the construction company's cranes and excavators are easily transferable to another construction company, while the tooling used to assemble a Boeing 777 would be next to useless in the assembly of an Airbus A-330. This illustrates how an empirical proxy like the tangibility measure will not always properly reflect the true underlying economic determinant.

I show that if a firm's leverage dynamics are governed by a persistent but stationary explanatory variable that is measured with error, using the mismeasured explanatory variable in a regression is problematic and can create the illusion of persistence in residual-sorted portfolios: conditional on an observed residual, future expectations of leverage are no longer equal to the unconditional mean. Instead, a large positive residual will forecast above-average future leverage. This is because the estimated residual is correlated with the unobservable explanatory variable, which in turn predicts leverage. In the presence of measurement error, sorting firms into portfolios on the basis of regression residuals and observing the leverage of these portfolios over time will give rise to the persistence phenomenon that Lemmon et al. (2008) describe. I also quantify the amount of measurement error that would be consistent with the documented persistence of leverage in residual-based portfolio sorts. If we assume that a single factor drives leverage (we can think of this factor as a composite of many tradeoff theory-based explanatory variables), then the measurement error variance of this single "composite" variable needs to be 42% larger than its cross-sectional variance. While this seems large, even smaller levels of measurement error produce a remarkable level of persistence in residual-based portfolio sorts. Furthermore, in my simulated model, I find that reasonable levels of measurement error in four actual explanatory variables (profitability, asset tangibility, the market-to-book ratio, and industry leverage) can reproduce the stylized empirical facts.

My work builds on the Lemmon et al. (2008) observations. By analyzing leverage portfolios, Lemmon et al. (2008) provide new evidence for the long-term persistence in leverage; they sort firms into quartile portfolios, and track the portfolios' average leverage levels for 20 years. Regardless of the sorting method, they find a large initial dispersion between the leverage portfolios. Over the years, the portfolios do converge to some extent, but significant differences remain even after 20 years. Since controlling for known determinants of capital structure leaves their results largely unchanged, Lemmon et al. (2008) conclude that this continued persistence supports firm fixed effects as an important determinant of leverage.

In search of an explanation for the documented fixed effect-like feature of the data, two phenomena thus need to be accounted for: the large initial dispersion of quartile portfolio leverages, and the sluggish convergence to a common mean. The first, relatively minor phenomenon is that in the portfolio formation period, large differences between the portfolio leverage levels exist. This is the case even when firms are sorted into portfolios on the basis of their "unexpected" leverage, which is the residual obtained from a regression of leverage on various determinants. I show that using the residual as the sort criterion automatically generates dispersion between the portfolios in the formation period.
This is because the regression residuals and the dependent variable, leverage, are related by construction. The worse the model's fit, the higher the correlation between residual and dependent variable, and the more sorting on the residual will resemble sorting on leverage itself. This effect does not require the model's fit to be abysmally bad: for instance, even a fairly high $R^2$ of 50% still implies a correlation between leverage and the estimated residual of 0.71, and will thus generate significant dispersion between the portfolios in the sorting period.

The second, and much more important, phenomenon is that of slow convergence: in Lemmon et al. (2008)'s study, significant differences between the leverage levels of the portfolios remain after 20 years. While the $R^2$ effect above accounts for dispersion in the portfolio formation period, it does not account for dispersion thereafter: if residuals are not autocorrelated and the explanatory variable is properly measured, the dispersion should completely vanish after the sorting period. Therefore, I analytically examine portfolio sorts in two environments: in a correctly specified model, and in one where the explanatory variable is mismeasured. Forming portfolios on the basis of the residuals of a well-specified regression does not produce persistent leverage time series. The expectation of next period's leverage, conditional on this period's regression residual, is simply equal to the unconditional mean leverage level. However, in the presence of measurement error, this conditional expectation no longer reduces to the unconditional mean, but instead is a function of essentially all parameters of the model. The implication is that the estimated regression residuals will now predict future leverage.

To investigate the effect of measurement error, I model leverage as a function of a single stationary but persistent explanatory variable. This is arguably a reasonable assumption, as in a tradeoff framework both the latent capital structure determinants and their empirical proxies are slow-moving. It is difficult to directly observe the factors that give rise to the marginal benefits and costs of debt; the researcher can only measure the explanatory variables with error. For instance, the market-to-book ratio is an imperfect measure of future growth opportunities. Using mismeasured explanatory variables in a regression leads to the well-documented attenuation bias. The bias causes estimated residuals to be correlated with the regressor, which in my case is persistent. Therefore, a large positive estimated residual will predict future above-average values of the regressor, and thus leverage. The model parameters, such as the AR(1) coefficient that governs regressor dynamics, the β-coefficient of the cross-sectional relationship between leverage and the regressor, and the cross-sectional variability of the regressor, all affect the magnitude of this effect. The relationship among the parameters is such that an increase in magnitude of most parameters will increase the artificial wedge between expected leverage conditioned on a residual and the unconditional expectation of leverage. Even at moderate levels of measurement error, the interplay between the parameters may lead to an apparently long-lasting persistence in residual-sorted leverage portfolios. The effect is more pronounced the more persistent the underlying regressor is.
Measurement error is therefore an ingredient capable of producing persistence in residual-sorted portfolios, and a potential contributor to the slow portfolio leverage convergence documented by Lemmon et al. (2008).

Knowing that measurement error can contribute to persistence in residual-based portfolio sorts, a key question is: what amount of measurement error (relative to the cross-sectional variation in an explanatory variable) is necessary to reproduce the stylized portfolio leverage effects? To study this, I first determine the implied underlying dynamics of a true leverage target, which would be consistent with the portfolio leverage data points generated by sorting firms into portfolios based on actual leverage. I then establish an analytical relationship between this true target and the mismeasured target implied by mismeasured explanatory variables. For simplicity, I again assume that a single explanatory variable drives leverage. This is not limiting, as we can think of this variable as a composite of multiple underlying factors rooted in the tradeoff theory. Without explicitly modeling the underlying explanatory variable, I recover the implied ratio of measurement error to cross-sectional variation in this explanatory variable, which would reproduce the residual-based portfolio sorts. I find that the measurement error variance needs to be 42% larger than the cross-sectional variance of the true, but unobserved, explanatory variable. However, I also show that even at lower levels of measurement error, a remarkable amount of persistence in the residual-based portfolio sorts remains. For instance, if the ratio of measurement error to state noise in the explanatory variable is as low as 25%, the residual-based portfolios nonetheless exhibit a sizeable amount of persistence; the effect amounts to between 50% and 75% of what Lemmon et al. (2008) document. Therefore, even if one takes the view that measurement error alone is not sufficient to fully account for the persistence in residual-sorted leverage portfolios, it nonetheless is likely to be an important contributor.

I proceed to examine several explanatory variables, namely a firm's profitability, the tangibility of its assets, the market-to-book ratio, and industry leverage. These variables are chosen to be consistent with the Lemmon et al. (2008) study, and because previous work, for example Rajan and Zingales (1995) and Frank and Goyal (2007), has identified them as important predictors of leverage. The objective is to ascertain the amount of measurement error needed specifically in these variables to reproduce the stylized facts. I do this via an estimation that matches simulated data moments to empirical moments. I find that low quantities of measurement error in profitability, tangibility, and industry leverage, coupled with a measurement variance equal to about 80% of the cross-sectional variation in the market-to-book ratio, produce a good fit of simulated sample data moments to empirical moments. This is an interesting finding, since it is consistent with other studies such as Erickson and Whited (2006), who document that a large amount of the variability in the market-to-book ratio can be attributed to measurement error, and not true Tobin's q. My finding suggests that unobserved investment opportunities may play an important role in explaining leverage ratios. I finally show that the model fit can be improved if the measurement error terms themselves exhibit some degree of positive autocorrelation.
1.1 Related Literature

Myers (1984) remarked that "we do not know how firms choose the debt, equity or hybrid securities they issue." Since then, much effort has gone into better understanding corporate capital structure, yet the question of whether firms have a target capital structure towards which they adjust their debt/equity mix is open. Titman and Wessels (1988) find several variables that help predict a firm's capital structure, yet the variables do not correspond to any one theory. Fischer et al. (1989) propose a dynamic capital structure model, where firms adjust towards an optimum, but are hampered by adjustment costs. Hovakimian et al. (2001) provide evidence that firms behave in a fashion consistent with a tradeoff model, a finding that is echoed in Leary and Roberts (2005). Roberts (2001) shows that firms appear to adjust towards firm-specific time-varying targets, that adjustment speeds vary considerably across industries, and that accounting for measurement error increases the speed of adjustment. In their extensive survey, Graham and Harvey (2001) find some, though not particularly strong, support for the tradeoff theory. Baker and Wurgler (2002), on the other hand, suggest that firms' issuing behavior is driven by attempts to time the market, while Welch (2004) shows that firms appear to do nothing to counteract mechanistic stock return effects on market leverage. In Hennessy and Whited (2005) there is no leverage target towards which firms adjust, in spite of their optimizing behaviour. Chang and Dasgupta (2009) argue that the evidence for the tradeoff theory is not as strong as it may seem, as random financing generates data similar to what actually is observed. Overall, the evidence for the existence of an optimal capital structure towards which firms adjust is mixed, and Lemmon et al. (2008) also cast doubt on rebalancing behaviour, since they find firms' capital structures to be remarkably persistent over time.

By analyzing leverage portfolios, Lemmon et al. (2008) provide new evidence for the long-term persistence in leverage; they sort firms into quartile portfolios on the basis of firm leverage, and track the portfolios' average leverage levels for 20 years. They find a large initial dispersion between the leverage portfolios. Over the years, the portfolios do converge to some extent (most of the convergence happens early on), but significant differences remain even after 20 years. Controlling for known determinants of capital structure leaves their results largely unchanged. Instead of sorting on actual leverage, they then sort on the residual from a regression of leverage on lagged explanatory variables; the evolution of the average leverage of the resulting portfolios is similar to sorting on actual leverage directly. Persistent differences between the portfolios remain even 20 years after formation.

DeAngelo et al. (2009) offer a potential resolution of the Lemmon et al. (2008) results. In their model, firms can finance investment either out of retained earnings (cash), by issuing debt, or by issuing equity. Carrying cash forces the firm to incur agency costs proportional to the cash balance. Issuing debt is costless, but there is credit rationing in place, which caps a firm's debt capacity. Equity issuance, on the other hand, is costly. Generally, firms will avoid carrying a cash balance due to the associated agency costs. Instead, they will free up debt capacity, so as to avoid a costly equity issuance when installing new capital.
This "transitory" debt, coupled with various frictions in the model and cross-sectional dispersion in profitability shocks, leads to leverage sorts that resemble those of Lemmon et al. (2008). However, the sorts are for raw leverage only: correctly controlling for the cross-sectional dispersion may reduce persistence in their model. Another recent effort to explain leverage persistence is by Menichini (2010). His model, which includes agency costs and endogenous investment, leverage and dividend payouts, generates portfolio sorts on both actual and unexpected leverage that contain long-term persistence as in Lemmon et al. (2008). This obtains largely because in his model there is no single long-term mean towards which firms revert. On the other hand, DeAngelo and Roll (2011) question capital structure stability altogether, and argue that it is the exception, and not the rule. They find that many firms which have been listed for 20 or more years have leverage levels in at least three different quartiles.

The goal of this thesis is to reconcile some of the recent empirical findings regarding the persistence of capital structure. Both Roberts (2001) and Flannery and Rangan (2006) suggest that measurement error may be partly responsible for the sluggish convergence in leverage ratios towards their mean, as measured by the adjustment speed parameter in a partial adjustment framework. The latter authors also show that including firm fixed effects speeds up estimated convergence speeds. Lemmon et al. (2008) also provide evidence for leverage persistence via their portfolio sorts, and suggest that a fixed effect may be responsible. My paper expands on this literature in the following way: I make precise the channel through which measurement error may add to the persistence of leverage and resemble a fixed effect in the Lemmon et al. (2008) portfolio sort setting.

Chapter 2

Persistence as a Consequence of Measurement Error: Theory

(A version of this chapter will be submitted for publication.)

2.1 Introduction

In this chapter I first discuss Lemmon et al. (2008)'s methodology, and then I reproduce their results. I briefly discuss possible explanations of what they document, before focusing on measurement error as a potential culprit. I show that if a firm's leverage is driven by a persistent explanatory variable that is measured with error, persistence in residual-sorted portfolios will obtain. This occurs because, conditional on an observed residual, future expectations of leverage are not equal to the unconditional mean. Instead, a large positive residual will forecast above-average future leverage. This is because the estimated residual is correlated with the latent state variable, which in turn predicts leverage. Sorting firms into portfolios on the basis of these regression residuals and tracking the leverage of these portfolios over time will cause persistence in the portfolio leverage levels.

2.2 Recent Evidence of Long-Term Persistence in Leverage

Lemmon et al. (2008)'s findings are problematic for the tradeoff theory of capital structure, as the authors provide compelling evidence of long-term persistence in leverage. They find that "high (low) levered firms tend to remain as such for over two decades", which seems to run counter to a world where firms actively rebalance their capital structures towards targets. While they document initial convergence in leverage ratios, significant long-term differences remain.
Furthermore, these differences are hardly reduced even after controlling for determinants of capital structure that may account for some of the differences in leverage levels. As the particular nature of Lemmon et al. (2008)'s methodology is central to this thesis, I begin by replicating their findings and explaining their approach in some detail.

2.2.1 Replication of Lemmon et al.'s (2008) Findings

I start with a sample that is comparable to that of Lemmon et al. (2008). The sample consists of firms listed in the annual Compustat database between 1965 and 2003. Financials and firms with missing asset or debt values are excluded. Leverage is constrained to lie in the closed unit interval. Definitions of all variables are given in Appendix A. All explanatory variables are winsorized at the 1st and 99th percentiles. Table 2.1 presents summary statistics, which are similar to those in Lemmon et al. (2008). The table also displays a prominent feature of the data, namely the existence of zero-leverage firms, whose proportion is, in fact, sizeable (see e.g. Strebulaev and Yang (2006)).

Next, I carry out the Lemmon et al. (2008) portfolio sorts. The procedure is as follows: Starting in 1965, and then every year thereafter, I sort firms into 4 quartile portfolios on the basis of their book leverage level. I then compute the mean leverage of each portfolio for the next 20 years, keeping its composition constant (e.g. after sorting firms into portfolios in 1965, the portfolio will be unchanged, save for potential exits from the sample, until 1986). I repeat this procedure for all years until the end of the sample period; starting in 1983, the portfolios' time series length will decrease by one year every year. This results in 38 portfolio time series of length 20 years, or less if the portfolios were formed in 1983 or later. The portfolios are then averaged cross-sectionally in event time, i.e. I compute the average of the 38 portfolio leverages for each of the 20 years in the time series. Figure 2.1 shows the results of this procedure.

Figure 2.1 is virtually identical to that presented in Lemmon et al. (2008). There is wide cross-sectional dispersion among portfolios in the initial sorting period. This dispersion is followed by an initially quick convergence towards the overall mean, which starts to noticeably taper off as we move further away from the portfolio formation year. After 20 years, there is still a large difference between the mean leverage levels of the portfolios. The highest portfolio's leverage is 35% in year 20, down from 55% in year 0, whereas the lowest portfolio's leverage is 19%, up from 3%. Therefore, after 20 years there is still a 16 percentage point difference between the highest and lowest leverage portfolios. This is the long-term persistence in raw leverage that Lemmon et al. (2008) document.

Lemmon et al. (2008) note that the pattern uncovered in Figure 2.1 could be the consequence of cross-sectional variation in an underlying determinant of firm leverage. For this reason, they first regress leverage on lagged explanatory variables, namely firm size, profitability, tangibility, market-to-book equity, and Fama-French 38-industry dummies (size, profitability and an industry dummy are used in e.g. Titman and Wessels (1988), while tangibility and market-to-book equity are used e.g. in Rajan and Zingales (1995)). The regressions are estimated every year, which allows for time-varying coefficient estimates. Then Lemmon et al. (2008) obtain the estimated residuals of these regressions (dubbed "unexpected leverage"), and carry out the portfolio sorts on this unexpected leverage, instead of on actual leverage.
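To make the mechanics of this sort concrete, the following is a minimal sketch of the event-time averaging just described (illustrative only; it is not the code used to produce the tables and figures in this thesis). It assumes a long panel with hypothetical column names 'firm', 'year' and 'lev' for the firm identifier, fiscal year and book leverage, and takes the sort key as an argument so the same routine can be reused for residual-based sorts below.

```python
import pandas as pd

def event_time_leverage(panel: pd.DataFrame, sort_col: str = "lev",
                        n_ports: int = 4, horizon: int = 20) -> pd.DataFrame:
    """Average leverage of portfolios, tracked in event time.

    `panel` is a long panel with (hypothetical) columns 'firm', 'year', 'lev'
    plus the sort column. Each formation year, firms are sorted into
    `n_ports` portfolios on `sort_col`; portfolio composition is then held
    fixed and mean *actual* leverage is tracked for `horizon` years, before
    averaging across formation years.
    """
    paths = []
    for t0 in sorted(panel["year"].unique()):
        cohort = panel.loc[panel["year"] == t0, ["firm", sort_col]].copy()
        cohort["port"] = pd.qcut(cohort[sort_col], n_ports,
                                 labels=False, duplicates="drop")
        window = panel[(panel["year"] >= t0) & (panel["year"] <= t0 + horizon)]
        window = window.merge(cohort[["firm", "port"]], on="firm")  # composition fixed
        window = window.assign(event_time=window["year"] - t0)
        paths.append(window.groupby(["port", "event_time"])["lev"].mean())
    # Average the formation-year paths cross-sectionally in event time.
    return pd.concat(paths, axis=1).mean(axis=1).unstack("port")
```

Sorting on a residual column instead of 'lev' (see the next sketch) gives the "unexpected leverage" version, while average actual leverage remains the quantity being tracked.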
Again, firms are sorted into quartile portfolios, with the initial portfolio composition being obtained by sorting on the regression residuals. As before, actual leverage is then tracked over the next 20 years. The point of sorting on residuals instead of on actual leverage is to reduce dispersion during the initial sort, by controlling for the cross-sectional variation in leverage that can be attributed to explanatory variables. Under a well-specified regression, convergence of the portfolio leverage averages towards the overall mean should also speed up, since the residuals would not contain any information about firms' future leverage levels. Lemmon et al. (2008) state that "there should be less cross-sectional variation in the formation period as a result of sorting on residual leverage [...] any differences in the average leverage levels across portfolios should quickly disappear as the impact of the random shock dissipates."

I again follow the Lemmon et al. (2008) methodology in order to replicate their portfolios using the regression residuals as the sort criterion. I modify their regression slightly: instead of industry dummies, I use mean industry leverage (identified by Frank and Goyal (2007) as an important determinant of leverage), without materially affecting the results. Figure 2.2 reproduces Lemmon et al. (2008)'s findings virtually identically.

Just as in their study, Figure 2.2 and Figure 2.1 look very similar. In fact, the cross-sectional portfolio dispersion in the formation year is still large, albeit slightly reduced as compared to sorting on actual leverage. In addition, the portfolio dispersion remains persistent, and significant differences between the portfolios remain over the entire 20 years. By casual visual inspection, the speed of convergence appears not to have been significantly altered by controlling for known determinants of leverage. Furthermore, while overall cross-sectional dispersion has been reduced at all horizons, leverage still is persistent. After 20 years, average book leverage in the highest portfolio is about 33%, compared to 20% in the lowest leverage portfolio. This difference between leverage levels in the portfolios at long horizons is seen by Lemmon et al. (2008) as evidence that "...an important factor is missing from existing specifications of leverage, and that this factor contains a significant, permanent, or time-invariant component..." In a further variance decomposition of different ANCOVA models, Lemmon et al. (2008) point out that a firm fixed effect is the largest component of the explained sum of squares, and largely subsumes other determinants of leverage.

Lemmon et al. (2008)'s findings are striking, and more importantly, the persistent differences between leverage portfolios cast doubt on theories of capital structure that have the firm adjust towards some kind of optimal mix of debt and equity. However, before turning towards new theories, it would be useful to know to what extent existing theories are able to accommodate the Lemmon et al. (2008) leverage portfolio graphs.
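For reference, a minimal sketch of the "unexpected leverage" construction just described (again with hypothetical column names, the determinant columns assumed to be pre-lagged, and plain OLS standing in for the exact specification): leverage is regressed on its determinants year by year, and the estimated residual becomes the sort key passed to the event-time routine above.

```python
import numpy as np
import pandas as pd

def add_unexpected_leverage(panel: pd.DataFrame,
                            regressors=("size", "profit", "tang", "mtb", "ind_lev")) -> pd.DataFrame:
    """Year-by-year cross-sectional OLS of leverage on (lagged) determinants.

    Column names are illustrative placeholders. The estimated residual,
    'unexpected leverage', is stored in a new column 'unexp_lev'.
    """
    out = []
    for _, grp in panel.groupby("year"):
        X = np.column_stack([np.ones(len(grp))] +
                            [grp[c].to_numpy() for c in regressors])
        y = grp["lev"].to_numpy()
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)   # time-varying coefficients
        out.append(grp.assign(unexp_lev=y - X @ coef))  # estimated residuals
    return pd.concat(out)

# Usage: event_time_leverage(add_unexpected_leverage(panel), sort_col="unexp_lev")
# forms the portfolios on the residual while still tracking actual leverage.
```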
In particular, the challenge is to provide explanations of the following features that are evident in Figure 2.1:

- Leverage portfolios exhibit much cross-sectional dispersion in the portfolio formation period, regardless of whether the sort is based on actual or residual leverage obtained from a regression that controls for determinants of capital structure.
- During the periods following portfolio formation, large differences in average leverage between the portfolios remain, and convergence between the portfolios towards a common mean is slow, taking in excess of 20 years.

2.3 Possible Explanations

There are several possible channels that could give rise to the persistence of residual-based leverage portfolios. However, they are all manifestations of the same underlying cause: in order for the residual-based portfolio sorts to exhibit persistence in leverage, the regression residuals must contain information about future levels of leverage. They do this by containing information about future values of the variables that in turn determine leverage.

The first channel is that empirical specifications of leverage regressions are plagued by an omitted variable problem. In its simplest form, it is possible that leverage is largely determined by a time-invariant firm fixed effect, as suggested by Lemmon et al. (2008). A firm fixed effect can be thought of as every firm having its own intercept in the regression. Omitting this fixed effect in empirical specifications would move it into the error term: a large error would likely contain a large intercept value, which in turn would imply that the firm is a high-leverage firm. Since the residual contains information about leverage, sorting on it would lead to leverage persistence in the portfolios.

Another possibility is that regressions omit one or more time-varying variables that determine leverage. As with a fixed effect, the regression residuals are no longer just noise, but contain important information. This effect would likely depend on the size of the cross-sectional coefficient of the omitted variable, as well as on the persistence of the omitted variable. The larger the cross-sectional β, the bigger the component of the regression residual would be that is due to the omitted variable, and not due to noise. If the residual were mostly noise, then the residual-based sorts would generate much less initial dispersion than sorting on leverage itself. Also, persistence in the omitted variable is needed: today's residual needs to predict high leverage today, as well as high leverage tomorrow. This happens if the omitted variable is itself persistent. An example of this strand of literature is the recent paper by DeAngelo et al. (2009), who model firms as incurring transitory debt obligations that represent deliberate, but temporary, deviations from a target capital structure. Carrying out portfolio sorts on their simulated firms also results in persistent leverage portfolios. While they do not carry out sorts on the basis of regression residuals, some of the persistence would likely remain. The reason for this is that not properly accounting for the level of transitory debt would leave an omitted variable embedded in the residual.

A third possibility is that the regressions are misspecified from an economic viewpoint. For instance, if the firms face adjustment costs as in Fischer et al. (1989) or Hennessy and Whited (2005), there would be no target leverage as implied by the regression.
Instead, the firm may choose to not alter its capital structure while leverage is within a certain range. In all likelihood, adjustment costs would need to be large in order for firms' leverage levels to not converge even after 20 years. Finally, it is possible that our economic models are correct, but our empirical proxies for the benefits and costs of debt are inaccurate. Having mismeasured explanatory variables would again create a correlation between the regression residual and leverage itself, and sorting on the residual would resemble sorting on leverage itself to some extent.

Generally, distinguishing conclusively between these alternatives is difficult. Including firm fixed effects in the regression explains much of the cross-sectional variation between firms, because each firm is now allowed its own intercept. However, it does not eliminate the interesting portfolio patterns in residual-based sorts, as shown in Figure 2.3. While a fixed effect reduces initial dispersion, there still is no convergence, as the average leverage level of the low leverage portfolio now is substantially higher than that of the high leverage portfolio after 20 years. In essence, including a fixed effect demeans the portfolio leverage time series, but the patterns still remain.

In the sections to follow, I take a more mechanical route towards a possible explanation of the stylized facts: I show that the first observation above is a more or less direct consequence of least squares regression mechanics. The second observation obtains if we allow for the fact that we cannot observe all determinants of leverage perfectly, and we model leverage as a function of one or more mismeasured explanatory variables that exhibit a certain degree of persistence. I will argue that measurement error in explanatory variables is certainly consistent with the data, which studies like Flannery and Rangan (2006), Roberts (2001), and Erickson and Whited (2006) confirm, but I have to concede that my results cannot conclusively prove that measurement error is, in fact, the culprit of the phenomena I study.

2.4 Benchmark Model

In what follows, I present my analysis from the empiricist's perspective; therefore, the relationship between leverage and its determinants is couched in a regression framework. This is perhaps not very intuitive, as it somewhat obfuscates the firm's exact optimization problem in a tradeoff framework. For this reason, I present a simple model, which can be thought of as the theoretical counterpart to the empirical specification, and which produces data that would resemble those modeled in the regression frameworks that I study.

The simple model is a multi-period extension of Kraus and Litzenberger (1973)'s model. In a dynamic setting, firms trade off bankruptcy costs with the tax benefits of debt financing. The firm has an entitlement to a perpetually lived project, which generates random cash flows (EBIT) of $X_t$. The EBIT process follows a mean-reverting Markov process. The firm has access to a capital market in each time period, and it can issue one-period debt with face value $F_t$ each period. The firm thus receives the debt's market value today in exchange for a promise to repay the face value next period. Default occurs when the face value of debt exceeds the firm's market value $V_t$ in any given period. In default, the firm is liquidated, and bondholders incur bankruptcy costs $cV_t$ ($0 < c < 1$).
Debtholders receive bond cash flows $B_t$ according to:

\[
B_t = \begin{cases} F_t & \text{if } F_t \le V_t \\ (1-c)\,V_t & \text{if } F_t > V_t \end{cases} \tag{2.1}
\]

Under the risk-neutral measure $Q$, the current market value of debt $D_t$, with face value $F_{t+1}$ maturing next period, is:

\[
D_t = \frac{1}{1+r}\, E^Q_t(B_{t+1}) \tag{2.2}
\]

The interest portion of the debt repayment in period $t$, $rD_{t-1}$, is tax-deductible. All earnings are paid out as dividends:

\[
Div_t = (1-\tau)(X_t - rD_{t-1}) + D_t \tag{2.3}
\]

where $\tau$ is the corporate tax rate. Consistent with the earlier definition of corporate bankruptcy, dividend payments are:

\[
\begin{cases} Div_t & \text{if } (V_t - F_t) \ge 0 \\ 0 & \text{if } (V_t - F_t) < 0 \end{cases} \tag{2.4}
\]

The market value of equity at time $t$, $E_t$, is comprised of the current period's dividend plus next period's discounted expected value of equity:

\[
E_t = \max\left\{ 0,\; Div_t + \frac{1}{1+r}\, E^Q(E_{t+1}) \right\} \tag{2.5}
\]

The market value of the firm is thus $V_t = B_t + E_t$. Managers act in the best interest of shareholders; they will maximize equity value by adjusting debt levels in each period to take advantage of the tax shield. Below is the optimality equation for the managers' optimization problem, where $E^*_t$ denotes the maximized value of equity:

\[
E^*_t = \max_{F_{t+1}} \left\{ \max\left\{ 0,\; Div_t + \frac{1}{1+r}\, E^Q(E^*_{t+1}) \right\} \right\} \tag{2.6}
\]

In the absence of adjustment costs, nothing hampers the firm from attaining its optimal capital structure each period. In every state, the firm observes an EBIT value along with the face value of last period's debt, and chooses an optimal debt policy according to

\[
F^*_{t+1}(X_t, F_t) = \arg\max_{F_{t+1}} \left\{ \max\left\{ 0,\; Div_t + \frac{1}{1+r}\, E^Q(E^*_{t+1}) \right\} \right\} \tag{2.7}
\]

In a world where adjusting the firm's capital structure is costless, the state space is reduced to just one dimension. The previous period's debt face value is irrelevant: the current EBIT value is the only state variable which drives the optimal policy. Consequently, the firm's optimal capital structure policy closely follows the EBIT process. Specifically, the firm's time-varying target leverage is given by the ratio of optimal debt $F^*$ to total assets.

Knowing the theoretical model, an exact empirical test of the model would regress current debt face value on lagged EBIT values, which would result in a perfect fit. Not knowing the true theoretical model, we may justify regressing leverage on contemporaneous EBIT values, which would still result in a good, albeit imperfect, model fit (due to the nonlinearity of leverage and using current instead of lagged EBIT). In the theoretical model, the firm adjusts to its time-varying leverage target every period, by selecting the equity value-maximizing amount of debt. In the regression framework, we can reasonably infer the target by computing the predicted values of the regression (evaluating the firm's EBIT at the estimated coefficient), which would correspond to the optimal leverage levels towards which the firm adjusts in the theoretical model, given a certain EBIT realization. Going forward then, the regression framework that I use can be thought of as a representation of an underlying theoretical model similar in spirit to the one above. Sorting firms into portfolios based on actual leverage in this model and then tracking the portfolios would produce a portfolio leverage time series whose persistence is driven by the persistence in EBIT. Tracking portfolio leverage after regressing leverage on EBIT and then performing the residual-based sorts would remove most of the persistence in the leverage portfolios, as EBIT in this model would be a very good predictor. The portfolios would converge to the unconditional mean of leverage quickly.
Here, the unconditional leverage mean would be determined by the unconditional mean of the EBIT process.

2.5 Initial Leverage Dispersion is a Function of $R^2$

When firms are sorted into quartile portfolios on the basis of their actual leverage, there are significant cross-sectional differences between the leverage levels of the portfolios, both at formation and thereafter. Figure 2.1 illustrates this, and Section 2.2.1 provides a detailed description of how exactly the sort algorithm works. The initial differences in leverage, while sizeable, may in fact be just a manifestation of cross-sectional differences in an explanatory variable, as Lemmon et al. (2008) point out. However, the differences in portfolio leverage levels remain sizeable even after controlling for explanatory variables in a cross-sectional regression. Sorting firms into leverage portfolios on the basis of residuals obtained from the regression produces an almost identical picture as sorting on leverage itself (see Figure 2.2). At this point one should note that the large dispersion in the sorting period is expected, while the subsequent slow convergence is not.

At first glance, intuition suggests that sorting on the residuals (i.e. noise) should not produce significant differences between the portfolios. However, after sorting on the residual or unexpected component of leverage, portfolio averages are computed based on actual leverage. Regression residuals, despite being orthogonal to the regressors, are necessarily not orthogonal to the dependent variable, i.e. leverage. In fact, the worse the model fit is, the higher the correlation will be between residuals and the dependent variable. In a standard linear model of the form $lev = x\beta + u$, where leverage $lev$ is the dependent variable, $x$ is a matrix of regressors, and $\beta$ is the coefficient vector, the correlation between the regressand $lev$ and the estimated residual $\hat{u}$ is given by:

\[
Corr(lev, \hat{u}) = \sqrt{1 - R^2} \in (0, 1] \tag{2.8}
\]

where $R^2$ is the coefficient of determination. For a proof, see Appendix B.1.

Consequently, as the model fit deteriorates, sorting on the residuals will produce results that are increasingly similar to sorting on leverage itself, since the portfolios will, to an increasing extent, contain the same firms. The moderate $R^2$s in leverage regressions therefore explain why, in the sorting period, the leverage levels of portfolios sorted on actual leverage resemble those sorted on residual leverage. The average $R^2$ of the yearly regressions that I run to replicate Lemmon et al. (2008) is equal to 19.5%, which results in a correlation coefficient between actual leverage and the regression residual of 0.90. This high level of correlation explains why sorting on the residual does not produce markedly different portfolio leverage averages in the formation period than does sorting on leverage itself. This $R^2$ effect explains initial dispersion, yet it does not explain the continued persistence in periods after the sorting period.

2.6 Time Series Persistence in Leverage Portfolios as a Result of Measurement Error: Intuitive Explanation

In the previous section, I show how differences in leverage portfolios naturally arise during the formation period when firms are sorted into portfolios on the basis of regression residuals. The only requirement is a moderate regression $R^2$, which leads to the dependent variable being highly correlated with the estimated residuals.
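Equation (2.8) is easy to confirm numerically. The short sketch below (my own illustration, not code from the thesis) simulates a single cross-section whose parameters are chosen so that the population $R^2$ is roughly 20%, in the neighbourhood of the yearly regressions above, and compares the leverage-residual correlation with $\sqrt{1-R^2}$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, beta = 50_000, 0.5
x = rng.normal(size=n)
lev = beta * x + rng.normal(size=n)          # population R^2 = 0.25 / 1.25 = 20%

X = np.column_stack([np.ones(n), x])
coef, *_ = np.linalg.lstsq(X, lev, rcond=None)
resid = lev - X @ coef
r2 = 1.0 - resid.var() / lev.var()

# Both numbers come out near 0.89, i.e. sqrt(1 - 0.2).
print(np.corrcoef(lev, resid)[0, 1], np.sqrt(1.0 - r2))
```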
The caveat, however, is that while this phenomenon explains the leverage differences across residual-sorted leverage portfolios in the formation period, it does not explain why these differences persist in the periods to follow. If the regression residuals are uncorrelated over time, then any initial difference between the portfolios should completely vanish in the subsequent period.

In this section, I formally examine leverage sorts, starting with the case where we can observe all variables perfectly. Here, sorting on the regression residual will produce initial dispersion that immediately vanishes in the periods after portfolio formation. I then show how persistence arises when the explanatory variable is measured with error. It may seem that by considering measurement error, I am just substituting one kind of firm fixed effect for another. This is not the case, since I do not assume measurement error to be persistent or firm-specific. Instead, the only persistent variable is the true, but unobserved, regressor.

2.6.1 Base Case: A Correctly Specified Model

When firms are sorted into leverage portfolios on the basis of regression residuals, the portfolios will exhibit cross-sectional dispersion in the formation period by construction. However, this dispersion will immediately vanish in the subsequent time period if the sort is carried out with residuals of a correctly specified regression model of leverage. To see this, I begin with a world where leverage is a function of a single persistent explanatory variable, which is perfectly measured.

There exists a panel of firms, where $i$ indexes a firm, and $t$ indexes time. The dependent variable of interest, leverage, is denoted by $lev_{it}$. Its true relationship to the explanatory variable $x_{it}$ (e.g. size, profitability, or the book-to-market ratio) is given by:

\[
lev_{it} = \beta x_{it} + u_{it} \tag{2.9}
\]

where $u_{it} \sim N(0, \sigma^2_u)$ is an error term. From a tradeoff theory point of view, the economic interpretation of this specification is as follows: Firm $i$'s target leverage at time $t$ is given by $\beta x_{it}$. Every time period, the firm's actual leverage $lev_{it}$ equals its target, plus a random deviation $u_{it}$. Thus, the firm attempts to fully adjust towards its target in every time period. The random deviation $u_{it}$, which Lemmon et al. (2008) refer to as unexpected leverage, can be thought of as an exogenous shock that takes place after adjustment to the target has taken place. For instance, a change in the market value of the firm's equity would cause actual leverage to deviate from the target. Finally, suppose that the leverage determinant $x_{it}$ follows an AR(1) process of the form:

\[
x_{it} = \phi x_{it-1} + \epsilon_{it} \tag{2.10}
\]

where $\epsilon_{it} \sim N(0, \sigma^2_\epsilon)$. In the above, I am implicitly assuming that the explanatory variable, and hence leverage, have a mean of 0. (This assumption is not crucial. We could easily add a mean to the explanatory variable $x_{it}$ without affecting any of the conclusions. Furthermore, since it is possible in my setup for leverage to be negative, it is perhaps most natural to think of $lev_{it}$ as logit leverage, $\ln\left(\frac{lev}{1-lev}\right)$. An inverse logit transformation would map leverage from the real line back to the unit interval.) Under this specification, leverage directly inherits the dynamics of the explanatory variable. Tomorrow's expected leverage, conditional on today's observed leverage, is governed by the magnitude of the autocorrelation coefficient of the AR(1) process, since $E(lev_{it}|lev_{it-1}) = \phi\, lev_{it-1}$.
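As an illustration of this data-generating process (my own sketch, with arbitrary parameter values), the following simulates a panel according to (2.9) and (2.10), regresses leverage on the correctly measured regressor in the formation period, and sorts on the estimated residual. The top-minus-bottom leverage spread is sizeable at formation and essentially gone one period later, previewing the result derived below.

```python
import numpy as np

rng = np.random.default_rng(1)
n_firms, T = 5_000, 21
beta, phi, sigma_eps, sigma_u = 0.5, 0.85, 1.0, 0.3   # illustrative values

# Simulate x_it as a stationary AR(1) and lev_it = beta * x_it + u_it.
x = np.zeros((n_firms, T))
x[:, 0] = rng.normal(scale=sigma_eps / np.sqrt(1 - phi**2), size=n_firms)
for t in range(1, T):
    x[:, t] = phi * x[:, t - 1] + rng.normal(scale=sigma_eps, size=n_firms)
lev = beta * x + rng.normal(scale=sigma_u, size=(n_firms, T))

# Regress leverage on the *perfectly measured* x at t = 0; sort on the residual.
b_hat = np.cov(x[:, 0], lev[:, 0])[0, 1] / x[:, 0].var()
resid = lev[:, 0] - b_hat * x[:, 0]
hi = resid >= np.quantile(resid, 0.75)
lo = resid <= np.quantile(resid, 0.25)

print(lev[hi, 0].mean() - lev[lo, 0].mean())   # dispersion at formation (the R^2 effect)
print(lev[hi, 1].mean() - lev[lo, 1].mean())   # approximately zero one period later
```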
For instance, if the explanatory variable and hence target leverage are persistent with φ close to 1, then leverage is persistent as well. The assumption of a slow-moving explanatory variable is reasonable, since both empirical factors and the underlying capital structure determinants they proxy for are arguably persistent. Empirically, the persistence of the aforementioned leverage determinants ranges from φ = 0.64 for the market-to-book ratio to φ = 0.95 for the tangibility ratio. Profitability has an AR(1) coefficient of φ = 0.80, while that of industry leverage is φ = 0.93. Firm size is not stationary, as firms tend to grow over time. Overall, the empirical estimates suggest that the persistent regressor assumption is justified.

If the value of φ is large, then if we form portfolios by sorting on leverage and track their evolution over time, the high leverage portfolios decline only slowly towards the unconditional mean, while the leverage of low leverage portfolios increases equally slowly towards the mean. The persistent difference between a high leverage portfolio and a low leverage portfolio reflects the persistence in the explanatory variable. Figure 2.4 illustrates this via a simulation. Leverage is a function of a persistent explanatory variable $x_{it}$, whose autocorrelation coefficient is φ = 0.85 in Panel A, and φ = 0.95 in Panel B. The persistence in the explanatory variable is clearly reflected in the slow convergence of the leverage portfolios. Even at the lower value of φ = 0.85, the high leverage and low leverage portfolios have not converged to the unconditional mean of 0 after 20 time periods.

If instead of sorting on actual leverage, we sort on unexpected leverage, i.e. on the residuals obtained from a regression of $lev_{it}$ on $x_{it}$, convergence happens immediately after the sorting period. Since there is no information in the regression residual about future values of the regressor, and hence leverage, next period's average portfolio leverage drops to its unconditional mean of zero right away, irrespective of the magnitude of the residual that we condition on. To see this analytically, combine (2.9) with (2.10) to obtain the following sample regression equation:

\[
lev_{it+1} = \beta\phi\, x_{it} + \beta\epsilon_{it+1} + u_{it+1} \tag{2.13}
\]

The expectation of next period's leverage $lev_{it+1}$, conditional on this period's estimated regression residual $\hat{u}_{it}$, obtained by running regression (2.9), is:

\[
E[lev_{it+1}|\hat{u}_{it}] = \beta\phi\, E[x_{it}|\hat{u}_{it}] + \beta\, E[\epsilon_{it+1}|\hat{u}_{it}] + E[u_{it+1}|\hat{u}_{it}] = 0 \tag{2.14}
\]

since all three expectations on the RHS are equal to zero. $E[x_{it}|\hat{u}_{it}] = E[x_{it}] = 0$ follows from the orthogonality of the residuals to the regressor. The second expectation vanishes due to the independence of $\epsilon_{it+1}$ and $\hat{u}_{it}$, while the last expectation equals zero because of the temporal independence of the regression residuals. Thus, under a correctly specified model of leverage, conditioning on the estimated residuals does not produce persistent differences between leverage portfolios. Irrespective of the level of persistence in the regressor, the portfolio leverages converge to their unconditional mean in the first time period after formation.

I redo the portfolio sorts for my panel of simulated firms, this time sorting on unexpected leverage (the estimated regression residual) instead of on leverage itself. Figure 2.5 shows the results. There is initial dispersion between the portfolio leverage levels in the formation period, a manifestation of the $R^2$ effect discussed in Section 2.5.
However, since the regression is well-specified, today’s residual contains no information about tomorrow’s leverage, and both portfolios converge to the unconditional mean after one time period. 2.6.2 Case II: Persistence as a Consequence of Measurement Error In the previous section I explain how, under a correctly specified model of leverage, con- ditioning on the estimated residuals would not produce persistent differences between leverage portfolios. This is no longer true if we measure a slowly moving explanatory 23 variable with error. In fact, measurement error will artificially create a persistent dif- ference in portfolio leverage levels if firms are sorted into percentile portfolios each year on the basis of the residuals from a cross sectional regression of leverage on its determi- nants. To understand the channel through which this artificial difference works, I first assume that the regressor xit is not directly observable, but a mismeasured regressor x∗it is: x∗it = xit + ηit (2.17) where ηit ∼ N(0, σ2ηit) is measurement error, and uit, it and ηit are independent. If only x∗it, but not xit, is available as a regressor, and we run a regression of levit on x ∗ it, the sample regression equation (* indicates a coefficient or variable that is affected by measurement error, and ̂ denotes a regression estimate) is: levit = β̂∗x∗it + û ∗ it (2.18) Figure 2.6 illustrates the effect of a noisy regressor, again for two levels of persistence in the regressor xit. Recall that under the baseline model of a perfectly measured regressor, there is no persistence in leverage portfolios that are sorted on the regression residuals (see Figure 2.5). However, when measurement error is introduced, sorting on regression residuals does produce persistence in the leverage portfolios, as shown in Figure 2.6. In obtaining the figure, I assume that measurement error is of the same magnitude as the variance of the innovations to xit, i.e. σ2ηit = σ 2 it . Both panels show that when firms are sorted into leverage portfolios on the ba- sis of contaminated residuals, the average leverage of these portfolios exhibits initial dispersion and long-term persistence, which is in stark contrast to the setup without measurement error. More persistence in the explanatory variable (via a larger φ) trans- lates directly into more persistent leverage. The kink in the first period in both graphs results mainly from the R2 effect discussed in Section 2.5. Persistence beyond the kink is driven by measurement error alone. The level of measurement error in Figure 2.6 is such that its variance is the same as the variance of the error in the process for the true explanatory variable. This results in leverage dispersion that is about 50% of the 24 dispersion when sorting is done on leverage itself, which suggests that even moderate amounts of measurement can have significant and lasting effects on persistence. To understand how measurement error gives rise to a seeming persistence in leverage portfolios, I trace the effect of the well-documented attenuation bias in the estimated slope coefficient. With a mismeasured regressor, the estimated slope coefficient of re- gression equation (2.18) is no longer unbiased (see Appendix B.2): β̂∗ = cov(x∗it, levit) σ2x∗it = β σ2xit σ2xit + σ 2 ηit ≤ β (2.22) This bias is the necessary ingredient for persistent differences in leverage portfo- lios. 
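The attenuation formula (2.22) is easy to check numerically. The sketch below is an illustration under the same assumed parameter values as above, with measurement error variance σ²_η = 1; it compares the OLS slope from regressing leverage on the noisy regressor with the right-hand side of (2.22).

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative parameters; sigma_eta is the measurement-error standard deviation
n, beta, phi = 200_000, 1.0, 0.85
sigma_u, sigma_e, sigma_eta = 0.5, 1.0, 1.0
sigma_x2 = sigma_e**2 / (1 - phi**2)          # stationary variance of the AR(1) regressor

x = rng.normal(0.0, np.sqrt(sigma_x2), n)     # draw x from its stationary distribution
lev = beta * x + rng.normal(0.0, sigma_u, n)  # leverage equation (2.9)
x_star = x + rng.normal(0.0, sigma_eta, n)    # mismeasured regressor (2.17)

# OLS slope of lev on x_star versus the attenuation formula (2.22)
beta_hat = np.cov(x_star, lev)[0, 1] / np.var(x_star)
beta_attenuated = beta * sigma_x2 / (sigma_x2 + sigma_eta**2)
print(f"OLS slope: {beta_hat:.3f}   formula (2.22): {beta_attenuated:.3f}   true beta: {beta:.3f}")
```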
Recall that without measurement error, regression equation (2.9) is well-specified, and sorting on residuals means that next period’s expected portfolio leverage is zero. Now suppose that instead of running the accurate, yet unavailable regression (2.9), we run (2.18) with mismeasured regressors instead. We then form portfolios at time t, based on the regression residual obtained at time t, û∗it, and track the leverage of these portfolios over time. Unlike before, sorting on residuals no longer implies that next pe- riod’s expected portfolio leverage is equal to zero (or to the unconditional mean, more generally): Proposition 1. Suppose leverage is determined by levit = βxit + uit, where xit = φxit−1 + it, and all noise terms are normally distributed. If we regress leverage on the mismeasured observable variable x∗it = xit + ηit, then expected leverage next period, con- ditional on this period’s estimated regression residual û∗it is a function of the estimated residual: E(levit+1|û∗it) = φ [ 1 + σ2uit β2 ( 1 σ2ηit + 1 σ2xit )]−1 ︸ ︷︷ ︸ = c≥ 0 û∗it (2.23) Proof. See Appendix B.3. Equation (2.23) shows that next period’s expected leverage is directly linked to this period’s regression residual via the coefficient c. Its sign is positive, which implies that the expected leverage conditional on a positive residual will overstate the true expected 25 leverage (and understate true expected leverage for a negative residual). This creates an artificial leverage dispersion when we track leverage portfolios. Furthermore, the rate at which the dispersion disappears is directly governed by the coefficient φ, the persistence in the underlying latent explanatory variable. The link between regression residual and expected leverage is that the residual now contains information about the magnitude of the true explanatory variable xit, which in turn determines leverage. Recall the general expression for expected portfolio leverage, conditional on sorting on the regression residual: E[levit+1|û∗it] = βφE[xit|û∗it] + βE[it+1|û∗it] + E[uit+1|û∗it] (2.24) As under the no-measurement error scenario, the second and third expectations on the RHS are still equal to zero. E[it+1|û∗it] = 0 since next period’s innovation in the explanatory variable is independent of this year’s estimated residual. Similarly, next period’s true residual in the leverage regression is independent of this period’s estimated residual, so E[uit+1|û∗it] = 0. The first expectation on the RHS, however, is no longer equal to 0, but instead is given by: Lemma 1. Suppose leverage is determined by levit = βxit+uit, where xit = φxit−1+it, and all noise terms are normally distributed. If we regress leverage on the mismeasured observable variable x∗it = xit + ηit, then the expectation of the regressor, conditional on the estimated regression residual û∗it is: E(xit|û∗it) = E(xit) + Cov(xit, û∗it) V ar(û∗it) [û∗it − E(û∗it)] = [ β + σ2uit β ( 1 σ2ηit + 1 σ2xit )]−1 ︸ ︷︷ ︸ = b û∗it (2.25) Proof. See Appendix B.3, beginning with (B.11). This expression relates the expectation of xit, conditional on an estimated residual û∗it to the true parameters of the underlying processes, which are compactly captured in the coefficient b. Importantly, knowing a particular value of û∗it tells us something about the 26 value of the true xit. This is because the estimated residual is not orthogonal to the true regressor, i.e. E(xit|û∗it) 6= E(xit), unlike in the setup without measurement error. 
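Proposition 1 can likewise be verified by Monte Carlo: the population slope from regressing next-period leverage on this period's contaminated residual should equal the coefficient c in (2.23). The sketch below, again my own illustration under the parameter values used in the figures, does this for one pair of adjacent cross sections.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative parameters, as before
n, beta, phi = 500_000, 1.0, 0.85
sigma_u, sigma_e, sigma_eta = 0.5, 1.0, 1.0
sigma_x2 = sigma_e**2 / (1 - phi**2)

# Two consecutive cross sections of the latent AR(1) regressor
x_t = rng.normal(0.0, np.sqrt(sigma_x2), n)
x_t1 = phi * x_t + sigma_e * rng.standard_normal(n)

lev_t = beta * x_t + sigma_u * rng.standard_normal(n)
lev_t1 = beta * x_t1 + sigma_u * rng.standard_normal(n)
x_star = x_t + sigma_eta * rng.standard_normal(n)        # mismeasured regressor

# Residual from the contaminated cross-sectional regression of lev_t on x*_t
slope = np.cov(x_star, lev_t)[0, 1] / np.var(x_star)
resid = lev_t - slope * x_star

# Slope of next-period leverage on today's contaminated residual vs. the coefficient c in (2.23)
c_mc = np.cov(resid, lev_t1)[0, 1] / np.var(resid)
c_formula = phi / (1 + (sigma_u**2 / beta**2) * (1 / sigma_eta**2 + 1 / sigma_x2))
print(f"Monte Carlo slope: {c_mc:.3f}   formula (2.23): {c_formula:.3f}")
```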
With a latent explanatory variable, if the true relationship between levit and xit is positive (i.e. β > 0), then a larger residual û∗it predicts a true xit that is above its unconditional mean. Conversely, if β < 0, then a larger residual û∗it predicts a true xit that is below its unconditional mean. The estimated residual contains no information about xit only if b = 0. This is the case if there is no measurement error, i.e. limσ2ηit↓0 b = 0, or if there is no relationship between levit and xit (β = 0). It would also be the case if σ2xit = 0, which is an obviously vacuous scenario, since there would be an assumed lack of any cross-sectional variation in the explanatory variable to begin with. In summary, when measurement error is present in a persistent explanatory variable, a residual û∗it that is affected by attenuation bias does contain information about the true but unobserved xit, and hence can predict future leverage. As is typical in situations of measurement error in an explanatory variable, tractability becomes an issue: the coefficient b depends on virtually all the parameters in the model, and as such it is difficult to disentangle how much each parameter contributes to persistence in leverage portfolios. However, and perhaps contrary to intuition, it is clear that in the presence of measurement error more factors than just the persistence of the regressor matter and combine with each other in the one period ahead conditional expectation of leverage. The next section isolates the effect of each parameter, and examines the interplay among them. 2.7 Comparative Statics and Numerical Examples In this section I determine how changes in the important parameters of the model affect the expected leverage conditional on an estimated residual. Via the attenuation bias, this filters directly down into the dynamics of portfolio leverages obtained after carrying out a regression residual-based sort. An interesting complementarity exists among the parameters that determine the expectation of next period’s leverage E(levit+1|û∗it), and hence the behaviour of any residual-sorted leverage portfolios. Measurement error is a 27 necessary ingredient; given its presence, an increase in magnitude in a coefficient, ceteris paribus, will generally increase the wedge between the leverage portfolios, and/or slow the apparent convergence of the portfolios. Alternatively, an increase in magnitude of a parameter could compensate for a decrease in another, to hold the wedge between portfolios constant. For the discussion to follow, I assume: 1. The leverage determinant is mismeasured. 2. Conditioning is based on a positive residual, i.e. on a firm whose leverage level is “unexpectedly” above the unconditional mean. 3. The AR(1) coefficient φ is positive, i.e. there is persistence in the regressor. A negatively autocorrelated regressor, on the other hand, will reverse the sign of most of the effects below. First, I consider the effect between next period’s expected leverage and the AR(1) coefficient φ that governs the regressor dynamics: Corollary 1. The bigger φ and hence the more persistent the explanatory variable is, the larger will be the expected leverage next period, conditional on observing a large estimated residual û∗it this period: ∂E(levit+1|û∗it) ∂φ = [ 1 + σ2uit β2 ( 1 σ2ηit + 1 σ2xit )]−1 ︸ ︷︷ ︸ ≥ 0 û∗it (2.26) Taken in the context of leverage sorts, more persistence in the explanatory variable translates directly into more persistent leverage portfolios, as seen in Figure 2.6. 
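As a quick numerical illustration of Corollary 1 (and of Corollary 2 below), the coefficient c from (2.23) can be evaluated over a grid of φ and σ_η; the default values in this hypothetical helper match the chapter's simulations.

```python
import numpy as np

# Coefficient c from (2.23): it links today's contaminated residual to next period's expected leverage
def c_coef(phi, beta=1.0, sigma_u=0.5, sigma_e=1.0, sigma_eta=1.0):
    sigma_x2 = sigma_e**2 / (1 - phi**2)   # stationary variance of the AR(1) regressor
    return phi / (1 + (sigma_u**2 / beta**2) * (1 / sigma_eta**2 + 1 / sigma_x2))

for phi in (0.5, 0.85, 0.95):
    for sigma_eta in (0.5, 1.0):
        print(f"phi={phi:.2f}  sigma_eta={sigma_eta:.1f}  c={c_coef(phi, sigma_eta=sigma_eta):.3f}")
# c rises with phi (Corollary 1) and with the amount of measurement error (Corollary 2 below).
```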
A larger value of φ (as in Panel B) slows down the speed of mean reversion in the leverage determinant and thus leverage. As far as the sorts are concerned, this raises dispersion in t1 compared to in Panel A, and in addition also notably slows convergence, as evidenced by the wider remaining gap after 20 years. More measurement error also results in seemingly more persistent leverage when leverage portfolios are formed on the basis of regression residuals. This is easy to see in the univariate setup: 28 Corollary 2. Increasing the variance of ηit (i.e. measurement error) will increase next period’s conditional expected leverage, so long as φ is positive. ∂E(levit+1|û∗it) ∂σ2ηit = φc2 σ2uit β2[σ2ηit ] 2︸ ︷︷ ︸ ≥ 0 û∗it; c = βb = [ 1 + σ2uit β2 ( 1 σ2ηit + 1 σ2xit )]−1 (2.27) It follows that the more mismeasured the regressor is, the easier it is to conclude that important leverage dynamics have not been captured by the model, when in fact the problem is one of measurement. In Figure 2.7 I illustrate the effect of more measurement error when the value of the AR(1) coefficient of the regressor is φ = 0.85. Figure 2.7 plots the relationship between portfolio leverage dispersion and the magnitude of measurement error, when firms are sorted based on regression residuals. I include two lines for reference: the solid line shows the sort based on leverage itself, while the dotted line shows a residual based sort without measurement error. In the latter case, the portfolios collapse to the unconditional mean immediately after the sorting period, as discussed before. The dashed lines show sorts for two levels of measurement error: ση ∈ {0.5, 1}. The ratio of measurement noise to state noise in the regressor is thus also ση/σ ∈ {0.5, 1}. The larger the quantity of measurement error is, the more do the residual-based sorts start to resemble leverage-based sorts. At the higher level of measurement error, the dispersion in portfolio leverage is about 50% of the dispersion when sorting is done on leverage itself. The cross-sectional relationship between leverage and its determinant also plays a role: Corollary 3. The larger the cross-sectional coefficient β is, the larger will be the upward bias on next period’s conditional expected leverage levt+1: ∂E(levit+1|û∗it) ∂β = 2φc2 σ2uit β3σ2ηit︸ ︷︷ ︸ sign(β) û∗it (2.31) The β-effect is dependent on the sign of the true regression parameter β, but an increase in the magnitude of β will always translate into more initial dispersion in 29 expected leverage. For a negative β, the derivative is negative, which implies dispersion increases as β decreases (i.e. increases in magnitude). Figure 2.8 illustrates this point by comparing sorts at two levels of β: β = 0.5 for the left panels (A1 and A2), and β = 1 for the right panels. The top panels (A1 and B1) show leverage portfolios that have been sorted on actual leverage, while the bottom panels show leverage portfolios that have been sorted on residual leverage. Comparing A1 and B1 shows that doubling β results in approximately double the dispersion in portfolio leverage. Comparing the bottom panels A2 and B2 shows that this effect persists when the sort is on the regression residual: a larger β again leads to increased portfolio leverage dispersion. Clearly, this effect would not occur in a world without measurement error. While β affects dispersion, it does not have much of an effect on convergence. 
The more dispersed portfolios also converge at a faster rate, as evidenced by the steeper slopes in Panels B1 and B2 as compared to A1 and A2. Corollary 4. Conditional expected leverage is increasing in the variance of the regressor σ2xit: ∂E(levit+1|û∗it) ∂σ2xit = φc2 σ2uit β2[σ2xit ] 2︸ ︷︷ ︸ ≥ 0 û∗it (2.32) The variance σ2xit measures the cross-sectional dispersion of the regressor, so the effect above is quite natural. Increasing the variance does not affect the unconditional mean of a symmetric distribution, but it affects the conditional means. For instance, if the variance of a symmetric zero-mean random variable is increased, the unconditional mean remains at 0. However, the conditional mean of observations greater than zero will increase, while the the conditional mean of observations less than zero will decrease. In my simulated panel, all firms are carbon copies of each other, and so the cross-sectional variance at any given time is equal to the time-series variance of the underlying AR(1) process V art(xi) = σ2 1−φ2 . Thus, one way of increasing the variance of the regressor σ 2 xit is by increasing the amount of persistence in the regressor via a higher value of φ. The resulting higher portfolio dispersion and persistence is shown e.g. in Figure 2.6, which 30 illustrates this point. The above findings can be summarized as follows: in the presence of measurement error, an increase in the magnitude of any of the parameters of the model will increase the wedge between high- and low-leverage portfolios. The persistence in the regressor has the biggest impact on portfolio leverage dispersion and persistence. An increase in the amount of measurement error present also increases the initial portfolio dispersion, but most visibly affects the speed of convergence at high values of φ. An increase in the magnitude of the true β coefficient of the predictive leverage regression also increases initial dispersion, but has less of an impact on the speed of convergence. Overall, the effects of measurement error are clearly unpalatable from an empirical viewpoint. Nevertheless, it is important to recognize that measurement error easily biases one’s findings towards large and persistent differences in leverage portfolios. This effect may be an illusion, however, driven by the mismeasured persistent regressor x∗it. The effect would immediately disappear if one had access to the true, but unobserved regressor xit. 2.8 Conclusion While persistence in residual-based leverage portfolios may be a consequence of a firm fixed effect or omitted time-varying variables, I show in this chapter that it can also arise when slow-moving explanatory variables in a leverage regression are measured with error. If firms are assigned to portfolios on the basis of tainted regression residuals, the residuals contain information about the true level of the unobserved regressor. Since the explanatory variable is persistent, its present value predicts its future value, on which leverage in turn depends. Taken together, these facts can potentially explain the persistent leverage portfolio differences that Lemmon et al. (2008) document. In the presence of measurement error, sorting firms into portfolios based on the regression residuals, or “unexplained leverage”, will resemble sorting firms into portfolios based on actual leverage. 
In addition, two other studies, Flannery and Rangan (2006) and Roberts (2001), argue that accounting for measurement error increases the adjustment speed towards a leverage target, as measured in a partial adjustment model. This suggests that measurement error does play an important role in analyzing corporate capital structure.

Table 2.1: Summary Statistics. Summary statistics over the sample period 1965-2003 for nonfinancial firms on Compustat. Variable definitions are provided in Appendix A.

Variable   Mean    Minimum   Median   Maximum   Std Dev
lev        0.27      0.00      0.24      1.00      0.21
profit     0.05     -2.37      0.11      0.44      0.32
tang       0.34      0.00      0.28      0.93      0.25
MB         1.73      0.18      1.00     21.21      2.45
LnSize     4.18     -1.47      4.03     10.45      2.38

Figure 2.1: Average Leverage of Book Leverage Portfolios. Using the 1965-2003 sample of nonfinancial Compustat firms, I sort firms into 4 portfolios on the basis of their book leverage level. I then compute the mean leverage of each portfolio for the next 20 years, keeping its composition constant. I repeat this procedure for all years until the end of the sample period. The resulting 38 portfolio time series are then averaged in event time.

Figure 2.2: Average Leverage of Book Leverage Portfolios, Sorted on Unexpected Leverage. Using the 1965-2003 sample of nonfinancial Compustat firms, I sort firms into 4 portfolios based on residuals from a regression of book leverage on lagged size, market-to-book, profitability, tangibility and mean industry leverage. I then compute the mean leverage of each portfolio for the next 20 years, keeping its composition constant. I repeat this procedure for all years until the end of the sample period. The resulting 38 portfolio time series are then averaged in event time. Variables are defined in Appendix A.

Figure 2.3: Average Leverage of Book Leverage Portfolios, Sorted on Unexpected Leverage, with Firm Fixed Effect. Using the 1965-2003 sample of nonfinancial Compustat firms, I sort firms into 4 portfolios based on residuals from a regression of book leverage on lagged size, market-to-book, profitability, tangibility and mean industry leverage. Firm fixed effects are included. I then compute the mean leverage of each portfolio for the next 20 years, keeping its composition constant. I repeat this procedure for all years until the end of the sample period. The resulting 38 portfolio time series are then averaged in event time. Variables are defined in Appendix A.

[Figure 2.4: two line plots of Mean Leverage against Event Time (0-20) for a High Lev and a Low Lev portfolio; Panel A: φ = 0.85, Panel B: φ = 0.95.]
Figure 2.4: Portfolio Convergence at Different Speeds. The two panels show the evolution of leverage portfolios, where simulated firms are sorted into either a high or a low leverage portfolio based on actual leverage at time 0. The firms are kept in their respective portfolios for 20 years. The sort is carried out every year for 40 years, giving rise to 40 time series, each being 20 years long. The time series are then averaged in event time within each portfolio, resulting in the graphs above. Individual firm time series are produced as follows: each period, leverage is determined as a function of an explanatory variable x:

lev_it = β x_it + u_it    (2.11)

where β = 1 and u_it ∼ N(0, 0.25). The leverage determinant x_it follows an AR(1) process:

x_it = φ x_it−1 + ε_it    (2.12)

with φ = 0.85 in Panel A, φ = 0.95 in Panel B, and ε_it ∼ N(0, 1).
The time series for x is simulated for 160 time periods, of which only the last 60 are retained to approximate a steady state. I simulate a cross section of 5,000 firms.

[Figure 2.5: two line plots of Mean Leverage against Event Time (0-20) for a High Lev and a Low Lev portfolio; Panel A: φ = 0.85, Panel B: φ = 0.95.]
Figure 2.5: Portfolio Convergence when Sorted on Residuals. The two panels show the evolution of leverage portfolios, where simulated firms are sorted into either a high or a low leverage portfolio based on unexpected leverage at time 0. Unexpected leverage is the residual obtained from a cross-sectional regression of leverage on its determinant, which is estimated each year. The firms are kept in their respective portfolios for 20 years. The sort is carried out every year for 40 years, giving rise to 40 time series, each being 20 years long. The time series are then averaged in event time within each portfolio, resulting in the graphs above. Individual firm time series are produced as follows: each period, leverage is determined as a function of an explanatory variable x:

lev_it = β x_it + u_it    (2.15)

where β = 1 and u_it ∼ N(0, 0.25). The leverage determinant x_it follows an AR(1) process:

x_it = φ x_it−1 + ε_it    (2.16)

with φ = 0.85 in Panel A, φ = 0.95 in Panel B, and ε_it ∼ N(0, 1). Unexpected leverage is the residual from regressing lev_it on x_it in each time period t. The time series for x is simulated for 160 time periods, of which only the last 60 are retained to approximate a steady state. I simulate a cross section of 5,000 firms.

[Figure 2.6: two line plots of Mean Leverage against Event Time (0-20) for a High Lev and a Low Lev portfolio; Panel A: φ = 0.85, Panel B: φ = 0.95.]
Figure 2.6: Portfolio Convergence when Sorted on Residuals Contaminated by Measurement Error. The two panels show the evolution of leverage portfolios, where simulated firms are sorted into either a high or a low leverage portfolio based on unexpected leverage at time 0. Unexpected leverage is the residual obtained from a cross-sectional regression of leverage on its imperfectly measured determinant, which is estimated each year. The firms are kept in their respective portfolios for 20 years. The sort is carried out every year for 40 years, giving rise to 40 time series, each being 20 years long. The time series are then averaged in event time within each portfolio, resulting in the graphs above. Individual firm time series are produced as follows: each period, leverage is determined as a function of an explanatory variable x, which follows an AR(1) process. x is not available as a regressor, but a mismeasured x* is:

lev_it = β x_it + u_it    (2.19)
x_it = φ x_it−1 + ε_it    (2.20)
x*_it = x_it + η_it    (2.21)

where β = 1, u_it ∼ N(0, 0.25), φ = 0.85 in Panel A, φ = 0.95 in Panel B, ε_it ∼ N(0, 1), and η_it ∼ N(0, 1) is measurement error. Unexpected leverage is the residual from regressing lev_it on x*_it in each time period t. The time series for x is simulated for 160 time periods, of which only the last 60 are retained to approximate a steady state. I simulate a cross section of 5,000 firms.
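The portfolio-formation procedure described in the captions above is straightforward to reproduce. The sketch below is my own simplified version (a single formation date rather than 40 repeated sorts, and a median split into two portfolios) using the Panel A parameters of Figure 2.6.

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative parameters matching the description of Figure 2.6, Panel A
n_firms, horizon, burn_in = 5000, 20, 100
beta, phi = 1.0, 0.85
sigma_u, sigma_e, sigma_eta = 0.5, 1.0, 1.0

x = np.zeros(n_firms)
for _ in range(burn_in):
    x = phi * x + sigma_e * rng.standard_normal(n_firms)

lev = np.empty((horizon + 1, n_firms))
x_star0 = None
for t in range(horizon + 1):
    lev[t] = beta * x + sigma_u * rng.standard_normal(n_firms)
    if t == 0:
        x_star0 = x + sigma_eta * rng.standard_normal(n_firms)  # mismeasured regressor at formation
    x = phi * x + sigma_e * rng.standard_normal(n_firms)

# Cross-sectional regression of leverage on the mismeasured regressor at t = 0,
# then sort firms on the contaminated residual ("unexpected leverage")
slope = np.cov(x_star0, lev[0])[0, 1] / np.var(x_star0)
resid = lev[0] - slope * x_star0
high = resid >= np.median(resid)

gap = lev[:, high].mean(axis=1) - lev[:, ~high].mean(axis=1)
for t in (0, 1, 5, 10, 20):
    print(f"t={t:2d}  high-low gap = {gap[t]:5.2f}")
# With sigma_eta = 0 the gap collapses to roughly zero at t = 1, as in Figure 2.5;
# with measurement error it persists, as in Figure 2.6.
```

Rerunning the sketch for different values of sigma_eta traces out the pattern summarized in Figure 2.7 below.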
[Figure 2.7: a line plot of Mean Leverage against Event Time (0-20), comparing residual-based sorts for ση = 0, ση = 0.5 and ση = 1 with the sort on actual leverage.]
Figure 2.7: Comparison of Portfolio Leverage Dispersion as a Function of Measurement Error. The simulation setup is as before, e.g. in Figure 2.6. I simulate the following system 5,000 times:

lev_it = β x_it + u_it    (2.28)
x_it = φ x_it−1 + ε_it    (2.29)
x*_it = x_it + η_it    (2.30)

where β = 1, u_it ∼ N(0, 0.25), φ = 0.85, and ε_it ∼ N(0, 1). The available regressor x* is imperfectly measured. I perform the residual-based portfolio sorts as before, for 3 levels of measurement error: ση ∈ {0, 0.5, 1}. The ratio of measurement noise to state noise in the regressor is thus also ση/σε ∈ {0, 0.5, 1}. The leverage-based portfolio sort (solid line) is included for reference. Shown are the average portfolio leverage levels over an event horizon of 20 time periods.

[Figure 2.8: four line plots of Mean Leverage against Event Time (0-20) for a High Lev and a Low Lev portfolio. Panel A1: Sort on actual leverage with β = 0.5; Panel A2: Sort on residual leverage with β = 0.5; Panel B1: Sort on actual leverage with β = 1.0; Panel B2: Sort on residual leverage with β = 1.0.]
Figure 2.8: Portfolio Leverage Dispersion as a Function of β. I simulate the system of equations in Figure 2.7 for β = 0.5 (left panels) and β = 1.0 (right panels). The remaining coefficients are: u_it ∼ N(0, 0.25), φ = 0.85, ε_it ∼ N(0, 1), and η_it ∼ N(0, 1). Top panels show portfolio leverages when sorting on actual leverage; bottom panels show portfolio leverages when sorting on the residual from regressing leverage on the mismeasured regressor.

Chapter 3

Extracting Measurement Error From Explanatory Variables: A Calibration

(Footnote 4: A version of this chapter will be submitted for publication.)

3.1 Introduction

In Chapter 2, I show both analytically and in simulations how measurement error in a slow-moving explanatory variable can lead to leverage persistence in residual-based portfolio sorts. In particular, I regress leverage on a mismeasured factor and sort firms into two portfolios based on whether the regression residuals are above or below the median. These portfolios exhibit the two phenomena documented by Lemmon et al. (2008): first, there is a wide initial dispersion between the leverage levels of the two portfolios; and second, depending on the parametrization of the model, the leverage levels of the two portfolios can take a long time to converge to the unconditional mean leverage. While Chapter 2 thus shows the theoretical channel through which measurement error can produce persistence in portfolio sorts, it does not answer the important question of how much measurement error is needed to reproduce wide initial dispersion between the residual-based portfolios, followed by slow convergence. The objective of the current chapter is to assess whether a reasonably calibrated model with measurement error in explanatory variables can satisfactorily explain the data. I do this with two different approaches: in the first approach, described in Sections 3.2-3.3, I use the Lemmon et al. (2008) actual leverage portfolio sorts as the starting point.
Recall that in these sorts firms are sorted into portfolios based on their actual leverage levels, and average portfolio leverage is then tracked for 20 years. If leverage is determined cross-sectionally by a model of the form levit = βxit + uit, then in every time period a firm’s leverage is equal to target leverage βxit plus an error term uit. Therefore, the portfolio leverage time series from the actual leverage-based sorts display the same dynamics as the true leverage target, and thus can be used to infer the target’s law of motion. Furthermore, if the regressions underpinning the residual -based sorts correctly identified this true target, the leverage levels of the residual -sorted portfolios should converge to the unconditional mean immediately. Since they do not, I use the portfolio leverage time series for the residual-based sorts to establish how mismeasured the leverage target needs to be in order to be consistent with the residual-based sorts. I do this by fleshing out an analytical link between the true target and the mismeasured target, which is directly responsible for generating the residual-based leverage portfolios. The mismeasured target is in turn a function of a mismeasured determinant, but the advantage of my approach is that the underlying determinant does not have to be explicitly modeled. In fact, this determinant does not have to be viewed as single variable, but could instead be interpreted as a linear combination of potentially many underlying explanatory variables. The quantity of measurement error consistent with the portfolio sort data is thus a form of aggregate measure, independent of any particular variable we may have in mind. I find that variability in measurement error would need to be 42% larger than the variation of an underlying state variable in order for the portfolio sort time series to obtain. While this is large, lower levels of measurement error still produce a remarkable amount of persistence: measurement error variances as low as 75% of the state variable variance produce portfolio sorts that still fit the data well. This suggest that even if measurement error is not the sole cause of persistence in residual-based leverage sorts, it nonetheless is capable of being a likely contributor. The second approach of quantifying the amount of measurement error needed to reproduce the Lemmon et al. (2008) portfolio sorts is outlined in Sections 3.4 and 3.5. 43 There, I examine four actual explanatory variables used in the Lemmon et al. (2008) study and determine how mismeasured they need to be in order to be consistent not only with the portfolio sorts, but also with other observed data moments. The four variables I use are a firm’s profitability, tangibility (which measures how tangible a firm’s collateral assets are), the market-to-book ratio as a proxy for investment opportunities, and industry leverage as a measure of industry-specific leverage targets. While these variables cover a number of relevant factors in a tradeoff model, it is nonetheless possible that I am omitting other variables which could arguably influence a firm’s leverage level. Still, I find that a reasonable amount of iid measurement error in these four variables, particularly in the market-to-book ratio, reproduces the portfolio sorts and other data moments to a large extent. Importantly, measurement error in proxies for investment opportunities is consistent with other studies, e.g. Erickson and Whited (2006), which find that market-to-book is a noisy proxy for true Tobin’s q. 
While I obtain a good model fit initially with iid measurement error, I can improve the fit by allowing for a small amount of autocorrelation in measurement error itself. My results cannot definitively prove that measurement error in explanatory variables is in fact the culprit behind the portfolio sort phenomena documented by Lemmon et al. (2008), but they do identify a reasonable channel through which these stylized facts obtain. 3.2 Estimating Target Leverage Dynamics In order to determine the amount of measurement error in explanatory variables that would be consistent with the Lemmon et al. (2008) portfolio sorts, consider again the setup from Chapter 2, where leverage levit is a function of a slow-moving factor xit. This factor evolves according to an AR(1) process, but the true realizations of the process are latent. The observed values x∗it contain iid measurement error ηit: levit = βxit + uit (3.1) xit = φ0 + φ1xit−1 + it (3.2) x∗it = xit + ηit (3.3) 44 where uit ∼ N(0, σ2u), it ∼ N(0, σ2 ), and ηit ∼ N(0, σ2η). An intercept φ0 is included in the AR(1) process for the leverage determinant to allow for a non-zero mean. This is necessary because actual leverage is bounded between 0 and 1; the leverage portfolios have a positive mean, as seen in Figure 2.1, for instance. In the cross-sectional specification in equation (3.1), actual leverage levit can be viewed as the sum of two components: a leverage target l̂evit ≡ βxit, which the firm adjusts to every period, and a random deviation from the target uit. Implicit in this rep- resentation is the assumption that there are no adjustment costs that would cause the firm to deviate systematically from its target for multiple periods. In a regression con- text, target leverage is the conditional mean of the dependent variable l̂evit = E[lev|x]. While the target in (3.1) is determined by just a single variable, using the target leverage representation does allow the flexibility of viewing the target as a function of potentially many explanatory variables, so the above setup of only one explanatory factor does not result in a loss of generality. Substituting target leverage in (3.1) and (3.2) above gives the following system: levit = l̂evit + uit (3.4) l̂evit = ϕ0 + ϕ1 l̂evit−1 + εit (3.5) x∗it = xit + ηit (3.6) The law of motion for target leverage l̂evit in (3.5) is the same as that for the original factor in (3.2), scaled by the constant β. If more than one explanatory factor were included, the target dynamics can still be thought of as a linear combination of AR(1) processes, which would result in an ARMA representation5 for the target. To estimate parameter values in (3.4) - (3.6), I proceed as follows: in the first step, I use the Lemmon et al. (2008) portfolio leverage time series (sorted on actual leverage) to parameterize (3.4) and (3.5). After gaining an estimate of target leverage dynamics, I then determine how mismeasured (by virtue of mismeasuring the underlying factors) the target needs to be in the cross-sectional regressions for the patterns in the 5see e.g. Granger and Newbold (1977) 45 residual-based leverage sorts to obtain. Therefore, I use the residual-based sorts to back out an estimate of the measurement error variance σ2η in the second step. I simplify the analysis by examining only two portfolios, a “high leverage” portfolio and a “low leverage” portfolio, as opposed to the 4 portfolios in their original study. This does not affect the results; the main Lemmon et al. 
(2008) conclusions, namely initial convergence and long-term persistence, are still evident with only 2 portfolios. Figure 3.1 shows the actual leverage-based sorts with 2 portfolios, and Figure 2.2 depicts the sorts based on unexpected leverage, as given by regression residuals. To determine what target leverage dynamics are consistent with the picture in Figure 3.1, I calibrate a time-varying leverage target to produce similar portfolio patterns. Under this approach, four parameters in equations (3.4) and (3.5) need to be estimated: the cross-sectional error variance σ2u, and for the AR(1) process governing target leverage the intercept ϕ0, slope coefficient ϕ1 and error variance σ2ε . To estimate parameter values, I simulate the system (3.4) - (3.5) above for both realized leverage and the target, and then find parameter values that make the simulated data resemble the actual leverage portfolios. Let PFlevit denote the leverage of portfolio i (i indexes high and low leverage) at time t. The objective then is to estimate parameter values that minimize the sum of the squared differences between actual portfolio leverage and simulated portfolio leverage, i.e. min Φ ∑ i ∑ t ( PFlevsimit − PFlevactit )2 (3.7) where the parameter vector Φ = {σ2u, ϕ0, ϕ1, σ2ε}. The parameter estimates are as follows: ϕ0 ϕ1 σε σu Estimate 0.021 0.930 0.066 0.080 Std. Error (0.012) (0.009) (0.003) (0.010) The estimated coefficients are of reasonable magnitudes, roughly in line with what a pooled regression would yield. In addition, simulating the Lemmon et al. (2008) actual 46 leverage-based portfolio sorts using the parameter values above provides a good fit to the real data, as shown in Figure 3.3. 3.3 Estimating Measurement Error by Extracting the Mismeasured Target Section 3.2 analyzes the dynamics of a leverage target by calibrating an AR(1) process for the true target to the Lemmon et al. (2008) portfolios sorted on actual leverage. This true target is given by the following cross-sectional relationship: l̂evit = βxit. The objective now is to extract a mismeasured leverage target l̂ev∗it consistent with the residual-based portfolios. The reason for backing out a mismeasured target is that it will allow us to compute the mismeasured residual which forms the basis of the residual- based portfolio sorts. To see this, recall that the residual-based sorts are carried out by first regressing leverage levit on a noisy determinant x∗it, and then using the residual u∗it from this regression to sort firms into portfolios. Using a noisy determinant in the regression implies that the target leverage (i.e. the regression’s predicted leverage value) is also mismeasured. Specifically, we have the following relationship between observed leverage and the mismeasured target: levit = l̂ev∗it + u ∗ it (3.11) Equation (3.11) decomposes a leverage observation into a mismeasured target and a mismeasured residual. As mentioned before, I avoid explicitly modeling an explanatory variable xit, or x∗it in its mismeasured form, but focus on the target instead. It is nonetheless possible to recover the mismeasured target leverage l̂ev∗it, because we can express it as a function of the true target: Proposition 2. Suppose that leverage dynamics are given by equations (3.1) through (3.3). Using a mismeasured explanatory variable x∗it in the cross-sectional leverage regression will cause target leverage l̂ev∗it (the fitted regression value) to be mismeasured 47 as well. 
This mismeasured target can be expressed in terms of the true target l̂evit by the following regression: l̂ev∗it = α0 + α1 l̂evit + eit (3.12) where α0 = (1− α1)E(l̂evit) (3.13) α1 = 1 1 + a (3.14) σ2e = V ar(l̂evit) a (1 + a)2 (3.15) a = σ2η σ2x (3.16) Proof. See Appendix C. Proposition 2 states that knowledge of the true target dynamics, via the methodology in Section 3.2, permits an explicit solution for the mismeasured target leverage in (3.12). Several observations about Proposition 2 are in order. The unknown parameters α0, α1 and σ2e are functions only of known data moments and a given ratio of measurement noise to cross-sectional variation σ 2 η/σ2x = a. This ratio can thus be used to indirectly quantify the amount of measurement error in equation (3.3), and also allows for an explicit solution for the parameters in Proposition 2. Furthermore, as long as measurement error is present, the variance of the mismeasured target V ar(l̂ev∗it) is always less than the variance of the true target V ar(l̂evit) (see (C.15) in Appendix C for a proof). In fact, the larger the amount of measurement error in the underlying leverage determinant, the smaller will be the variation in estimated target leverage. This phenomenon warrants a short explanation. As we increase the amount of measurement error on the right hand side in a univariate regression, we increase attenuation in the slope coefficient. This naturally results in a larger estimated intercept. For instance, consider an extreme example where the signal-to-noise ratio of the explanatory variable goes to zero, i.e. the observed x∗it is (almost) completely white noise. In this case, the estimated intercept 48 will approach the unconditional mean of the dependent variable. This makes intuitive sense: the ’best’ predicted value of the dependent variable in the presence of an (almost) useless explanatory variable should just be the dependent variable’s unconditional mean. This reasoning translates directly to the relationship between estimated mismeasured target leverage and the true target, as given by (3.12). If the target were perfectly mea- sured, then α0 = 0, α1 = 1 and σ2e = 0. As the measurement noise in the observed explanatory variable increases, the mismeasured target will become more stable relative to the true target: α0 > 0 and α1 < 1, while σ2e will increase at first and then decrease again. σ2e = 0 reaches its maximum at a = 1, i.e. when the magnitude of the measure- ment error equals the cross-sectional variation in x. In the limit, with the signal-to-noise ratio of x∗it approaching 0, the mismeasured target is constant with α0 = E(lev), α1 = 0, and σ2e = 0. While the mismeasured target will always be less variable than the true target, it still equals the true target, on average: E(l̂ev∗) = E(l̂ev) (see (C.7) in Appendix C for a proof). Intuitively, this is due to regression mechanics: the mean predicted value will equal the dependent variable’ unconditional mean, regardless of whether there is measurement error in the explanatory variable. Naturally, this only holds in an unconditional sense; if a given true xit is above its unconditional mean, the mismeasured target will underestimate the true target, and vice versa. Figure 3.4 illustrates this point for various levels of the noise-to-signal ratio a. As a increases, the effect becomes more visible. For instance, in Panel 4 with a = 1.25, when true target leverage is below its unconditional mean of 0.27, the mismeasured target tends to be larger than the true target, i.e. 
it is closer to the unconditional mean of the leverage variable. I next recover the implied ratio of measurement noise to cross-sectional variation in the explanatory variable a that best reproduces the Lemmon et al. (2008) residual-based portfolio sorts. I do not specify an explicit process for the explanatory variable xit, but instead extract the implied mismeasured target l̂ev∗it from the true target l̂evit, which is simulated with the parameters obtained in Section 3.2. For a given value of a, this is done by first computing the mismeasured target via (3.12), and then solving equation 49 (3.11) for the residual u∗it. After computing the residual, the firms are sorted either into the high-leverage or the low-leverage portfolio. I then track the average portfolio leverage level for 20 years. The objective is to determine the noise-to-signal ratio a that produces simulated portfolio leverage levels that most closely resemble those of the empirical residual-based portfolios. As before, this is accomplished by minimizing the sum of squared differences between the simulated and actual portfolios: min a ∑ i ∑ t ( PFlevsimit − PFlevactit )2 (3.20) PFlevit denotes the leverage of portfolio i, where i indexes high and low leverage at time t. Figure (3.5) shows the results of this minimization. The simulated residual-based portfolio sorts most closely match the empirical ones with a noise-to-signal ratio of a = 1.42 (std. error = 0.12), i.e. the variance of the measurement error needs to be 42% larger than the cross-sectional variation of the true but unobserved explanatory variable x. Clearly, this amount of measurement error seems large, but one needs to keep in mind that the sole factor x in my setup serves as a stand-in for all the determinants of leverage. For instance, in a multivariate world, high levels of measurement error in one variable may counterbalance low levels of measurement error in another. Another consideration is that in the previous calibration each portfolio received an equal weighting, which resulted in a = 1.42. Weighting some observations more heavily than others may also reduce the value of a. Furthermore, a = 1.42 results in the best fit, but it is instructive to assess the impact that lower noise-to-signal ratios a have on the residual-based portfolio sorts. Therefore, I simulate the portfolios for various levels of a. The results are shown in Figure 3.6. The simulations show that even small quantities of measurement error relative to the variance of the explanatory variable produce a surprising amount of persistence in the residual-based sorts. For instance, Panel 2 depicts portfolios where a = 0.25; even after 20 time periods, a persistent difference remains between the high and low leverage 50 portfolios. Furthermore, the difference in year 20 between the simulated portfolios is almost as large as that of the actual portfolios. Panel 4 shows that a noise-to-signal ratio as low as 0.75 produces a good fit to the actual portfolios in year 5 and beyond. This proves that while large quantities of measurement error are needed to reproduce the stylized facts exactly, much more moderate levels still result in a good fit. This reinforces the view that measurement error is likely a contributing factor to persistence in residual-based portfolio sorts. 3.4 Multi-Variable Calibration with iid Measurement Error In Section 3.3, I obtain an estimate of the noise-to-signal ratio that best reproduces the residual-based portfolio sorts in an implied single variable framework. 
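Before moving to the multi-variable calibration, the single-variable machinery of Sections 3.2-3.3 can be summarized in a short sketch. The code below is an illustration rather than the estimation code behind the reported results: it simulates the target process with the parameter estimates from Section 3.2, constructs the mismeasured target via Proposition 2 for a given noise-to-signal ratio a (treating the error term e as independent noise, as in the proposition's regression representation), and reports the high-minus-low gap of the residual-sorted portfolios. Matching these simulated gaps to the empirical portfolio series over a grid of a is what delivers the estimate a = 1.42.

```python
import numpy as np

rng = np.random.default_rng(4)

# Target-leverage process parameters as estimated in Section 3.2
phi0, phi1, sigma_eps, sigma_u = 0.021, 0.930, 0.066, 0.080
n_firms, horizon, burn_in = 5000, 20, 200

def residual_sort_gap(a):
    """High-minus-low leverage gap over the event horizon when sorting on the
    contaminated residual implied by a noise-to-signal ratio a (Proposition 2)."""
    # Simulate the true target and realized leverage
    target = np.full(n_firms, phi0 / (1 - phi1))
    for _ in range(burn_in):
        target = phi0 + phi1 * target + sigma_eps * rng.standard_normal(n_firms)
    lev = np.empty((horizon + 1, n_firms))
    target0 = None
    for t in range(horizon + 1):
        lev[t] = target + sigma_u * rng.standard_normal(n_firms)
        if t == 0:
            target0 = target.copy()
        target = phi0 + phi1 * target + sigma_eps * rng.standard_normal(n_firms)

    # Mismeasured target implied by Proposition 2, then the contaminated residual (3.11)
    alpha1 = 1.0 / (1.0 + a)
    alpha0 = (1.0 - alpha1) * target0.mean()
    sigma_e = np.sqrt(np.var(target0) * a / (1.0 + a) ** 2)
    target_star = alpha0 + alpha1 * target0 + sigma_e * rng.standard_normal(n_firms)
    resid = lev[0] - target_star

    high = resid >= np.median(resid)
    return lev[:, high].mean(axis=1) - lev[:, ~high].mean(axis=1)

for a in (0.0, 0.25, 0.75, 1.42):
    gap = residual_sort_gap(a)
    print(f"a={a:4.2f}  gap at t=1: {gap[1]:.3f}   gap at t=20: {gap[20]:.3f}")
# a = 0 reproduces immediate convergence; larger a produces wider, more persistent gaps.
```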
I next investigate whether measurement error in the explanatory variables used by Lemmon et al. (2008) is able to reproduce the leverage time series of both the actual- and residual-based portfolio sorts. I focus on profitability, tangibility, market-to-book, and industry leverage (in lieu of an industry fixed effect) as explanatory variables. Firm size is excluded, since it is not stationary and thus would not conform to my setup of modeling the explanatory variables as AR(1) processes. The estimation procedure, which can be viewed as a form of the simulated method of moments framework, proceeds in a similar fashion to that in Section 3.3. In particular, the economy consists of simulated firms whose leverage dynamics are governed by the following system of equations: 51 levit = β′(1 xit)′ + uit (3.25) = ( β0 βProf βTang βMB βIndLev )  1 Profit Tangit MBit IndLevit  + uit (3.26) xit = φ0 + φ1xit−1 + it (3.27) =  φProf0 φTang0 φMB0 φIndLev0 +  φProf1 0 0 0 0 φTang1 0 0 0 0 φMB1 0 0 0 0 φIndLev1 xit−1 +  Profit Tangit MBit IndLevit  (3.28) x∗it = xit + ηit (3.29) The errors are all normally distributed with uit ∼ N(0, σ2u), it ∼ N(0,Σ), and ηit ∼ N(0,Ση). Leverage is determined in the cross-section by an intercept and the four explanatory factors, which are all modeled as AR(1) processes. Firms differ in terms of the realization of a particular variable, but the coefficients in the model are the same for all firms. The true explanatory variable vector xit is latent; the observable x∗it is measured with error. This reflects the fact that the explanatory variables are imperfect proxies for the true economic fundamentals driving leverage. For instance, in the case of the market-to-book ratio, the underlying fundamental variable is investment opportunities, which may diverge widely from the ratio of firm market value to book value. Similarly, the observed industry leverage may be an imperfect measure of a certain industry target that firms keep in mind when adjusting their capital structures. The covariance matrix of the innovations of the AR(1) processes Σ is diagonal, as is the covariance matrix of the measurement error terms Ση: 52 Σ =  σ2Prof 0 0 0 0 σ2Tang 0 0 0 0 σ2MB 0 0 0 0 σ2IndLev  (3.30) Ση =  σ2ηProf 0 0 0 0 σ2ηTang 0 0 0 0 σ2ηMB 0 0 0 0 σ2ηIndLev  (3.31) There are a total of 22 unknown parameters in this formulation: the intercepts, slopes, and error variances of the AR(1) process (12 parameters), the cross-sectional betas and the error variance σ2u (6 parameters), and the measurement error variances (4 parameters). To reduce the number of free parameters in the model, I infer the unconditional means of the noisy explanatory variables directly from the data. This is possible, since mismeasured and latent explanatory variables have the same mean: µx∗ = E(x∗it) = E(xit + ηit) = E(xit) = µx. This allows me to express the intercepts of the latent AR(1) processes as functions of the empirical means of the respective variables and estimates of φ1, which still is a free parameter matrix: φ0 = (I4 − φ1)µx (3.32) = (I4 − φ1)µx∗ (3.33) I use In to denote an n × n identity matrix. In fact, several other parameters could be inferred directly from the data6, namely the variance of leverage, V ar(lev), and the variance matrix of the noisy explanatory variables Σx∗ . 
However, forcing the constraints that the model imposes on the variances to hold exactly is too restrictive and results in a poor fit. Instead, the variances are added as moment conditions, which results in the simulated values being close to the data values without the need to match them exactly. 6We could relate the variance matrix for the AR(1) innovations Σ to the variance of the noisy 53 3.4.1 Identification To reduce the dimensionality of the parameter space, I compute the intercepts for the autoregressive processes directly from the data via (3.33). This pares down the free structural parameters to a total of 18: the matrix φ1, which contains the slope coefficients for the explanatory variables, the innovation standard deviation matrix Σ, and the measurement error variance matrix Ση need to be estimated. Furthermore, the parameter vector β, which governs the cross-sectional relationship between leverage and its determinants, along with the standard deviation of the cross-sectional residual σu, has to be estimated. The structural parameters underlying the latent processes are obtained by matching simulated sample moments to data moments. Broadly speaking, the data moments consist of sample statistics for leverage and the explanatory variables, the parameter estimates of the mismeasured AR(1) processes driving the explanatory variables, the regression parameters from a panel regression of leverage on its noisy determinants, and the portfolio leverage levels of the Lemmon et al. (2008) portfolio sorts. Since I assume that the actual data on explanatory variables is contaminated by measurement error, all data moments involving explanatory variables are mismeasured as well. In particular, I use the following moments: 1. The intercepts φ∗0 and slope coefficients φ∗1 for each explanatory variable (i.e. prof- factors Σx∗ , and the variance of the regression residual σ 2 u to the variance of leverage V ar(lev) via: Σ = ( I4 − φ1′φ1 ) Σx (3.34) = ( I4 − φ1′φ1 ) (Σx∗ −Ση) (3.35) σ2u = V ar(lev)− βΣxβ′ (3.36) The first equation is a rewritten expression for the variance of a vector of AR(1) processes. Solving for the variances of the error terms  requires the slope coefficients and the variances of the latent explanatory variables, which I express as the difference between the variances of the observed mismeasured variables and the variances of the measurement error terms. The last expression computes the variance of (3.26) to solve for the variance of the residual. Forcing these constraints to hold exactly is too restrictive and results in a poor fit. 54 itability, tangibility, market-to-book and industry leverage), which are obtained by regressing each observed mismeasured explanatory variable on its lagged value (8 moments): x∗it = φ ∗ 0 + φ ∗ 1x ∗ it−1 +  ∗ it (3.37) 2. The variance of each mismeasured explanatory variable σ2∗x , and the variance of leverage σ2lev (5 moments). 3. The cross-sectional coefficients β∗ from a regression of leverage on the noisy de- terminants (5 moments): levit = β∗ ′ (1 x∗ ′ it ) + u ∗ it where (3.38) β∗ ′ = (β∗0 β ∗ Prof β ∗ Tang β ∗ MB β ∗ IndLev) (3.39) and x∗it is the vector of mismeasured explanatory variables. 4. The time series of portfolio leverage levels obtained after sorting on both actual and unexpected leverage (80 moments in total; a time series consists of 20 portfolio leverage levels for each ‘high leverage’ and ‘low leverage’ portfolio). For both actual and simulated data, the moments are collected in vectors mact and msim, respectively. 
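As an illustration of how the simulated moment vector might be assembled (the function and array layout below are my own, hypothetical choices rather than the thesis code), moments 1-3 can be computed from a panel as follows.

```python
import numpy as np

def ar1_coefs(z):
    """Pooled intercept and slope from regressing a panel variable on its own lag.
    z has shape (T, N): time periods in rows, firms in columns."""
    y, ylag = z[1:].ravel(), z[:-1].ravel()
    X = np.column_stack([np.ones_like(ylag), ylag])
    return np.linalg.lstsq(X, y, rcond=None)[0]                # (intercept, slope)

def moment_vector(lev, x_star):
    """Moments 1-3 for a (simulated or actual) panel.
    lev: (T, N) leverage panel; x_star: (T, N, K) panel of K mismeasured regressors."""
    T, N, K = x_star.shape
    ar_moments = np.concatenate([ar1_coefs(x_star[:, :, k]) for k in range(K)])  # 2K moments
    var_moments = np.r_[x_star.reshape(-1, K).var(axis=0), lev.var()]            # K + 1 moments
    X = np.column_stack([np.ones(T * N), x_star.reshape(-1, K)])
    betas = np.linalg.lstsq(X, lev.ravel(), rcond=None)[0]                       # K + 1 moments
    return np.concatenate([ar_moments, var_moments, betas])

# Moment 4, the 80 event-time portfolio leverage levels, is appended to this vector
# using the actual- and residual-leverage sorting procedure sketched in Chapter 2.
```

Applying the same function to the Compustat panel gives the corresponding entries of the empirical vector mact.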
The structural parameters collected in the vector Φ = (φ1 Σ Ση β σu) are found by minimizing the sum of squared differences between actual moments and simulated moments: min Φ (mact −msim)′(mact −msim) (3.40) This minimization makes the simulated moments as close to their actual counterparts by picking the ‘best’ structural parameter values. 3.4.2 Results The estimated structural parameters of this procedure, along with their standard er- rors7, are listed in Table 3.1. Table 3.2 presents a comparison of empirical data mo- 7The standard errors are obtained by bootstrapping: First, all empirical moments are recalculated for subsets of the Compustat universe. I then estimate structural parameters for each of the subsamples. 55 ments, their simulated counterpart based on mismeasured variables, and moments that are based on the estimated true latent parameters. For each explanatory variable, I present intercept and slope coefficient of the AR(1) process and its variance. For the cross-sectional relationship between leverage and its determinants, I present the β-coefficients for each variable (including an intercept), as well as the leverage vari- ance. Table 3.3 gives two estimates of the ratio of measurement noise to state noise for each simulated explanatory variable. The first estimate is the ratio of measurement error variance to variance of the latent underlying variable, while the second estimate is the ratio of measurement error variance to variance of the observed variable, which thus includes the measurement error variance in the denominator. Finally, Figure 3.7 shows the portfolio sorts on actual and residual leverage, which are obtained with the estimated parameter values. For both the tangibility and industry leverage ratio, the calibrated values of the latent processes are very close to the empirical data values. As measured by the AR(1) parameter and shown in Table 3.2, the estimated persistence for tangibility is 0.936 (empirical data value of 0.952), while it is 0.891 for industry leverage (empirical data value of 0.908). The estimated magnitude of the measurement error standard deviation ση is small in both instances, and well below the variance of the innovation σ in the respective AR(1) process (see Table 3.1). This results in a ratio of measurement error variance to latent variable variance σ2η/σ 2 x of 0.021 for tangibility and 0.018 for industry leverage (see Table 3.3, column (1)). Very similar values for the measurement error ratio are obtained if the variance of the observed explanatory variable is used instead. Consistent with the low quantity of estimated measurement error, the structural β- coefficients for both variables are close to their empirical counterparts: 0.115 vs. 0.099 for tangibility, and 0.859 vs. 0.835 for industry leverage (see Table 3.2). Table 3.2 reveals that latent profitability (φ1 = 0.832) is more persistent than ob- served profitability (φ∗1 = 0.775). The depressed observed φ∗1 coefficient is caused by measurement error in observed profitability with an estimated standard deviation of The standard errors are then given by the standard deviations of the estimated structural parameters. 56 0.105 (see Table 3.1), which also induces a slight downward bias in the cross-sectional β∗. Relative to tangibility and industry leverage, the measurement error ratios for prof- itability have increased to 0.09 and 0.083, respectively (see Table 3.3). 
These value are still low; the latter value of 0.083 implies that only 8.3% of the variation in observed profitability is due to measurement error. The most interesting result obtains for the market-to-book ratio. The latent AR(1) process has an estimated value of φ1 = 0.931, while the empirical process has a value of φ∗1 = 0.534 (see Table 3.2). Note that the simulated φ∗1 value, obtained by regressing simulated mismeasured market-to-book on its lagged value, is 0.530, which is very close to the empirical estimate. The discrepancy between latent and observed φ1 is caused by a measurement error standard deviation that is large compared to that in the other variables. Its value is ση = 1.476, which exceeds the standard deviation of the innovation term in the AR(1) process, whose value is σ = 0.603, as shown in Table 3.1. The resulting measurement error ratio is σ2η/σ 2 x = 0.802, which drops if we use the variance of the observed market-to-book ratio in the denominator: σ2η/σ 2 x∗ = 0.445 (see Table 3.3). This latter value implies that 44.5% of the observed variation in the market-to- book ratio is driven by noise. While this seems large, the market-to-book ratio as a proxy for investment opportunities can ex-ante be expected to be noisy. As Erickson and Whited (2006) note, “all observable measures or estimates of the true incentive to invest [...] are likely to contain measurement error.” This is because accounting information inaccurately reflects both the market value of debt and the replacement value of assets, and because strong assumptions are needed for Tobin’s q to accurately reflect a firm’s incentive to invest. Using a classical errors-in-variables model with the investment-to-capital ratio on the left-hand side and average q on the right-hand side, Erickson and Whited (2006) report that approximately 59% of the variation in book value-based measures of Tobin’s q is driven by noise, and only 41% is driven by variation in the true unobservable q. This is broadly consistent with my model, where 55% of the variation in the market-to-book ratio is due to variation in true q. My estimates of the structural parameters produce a variance in the observed market- 57 to-book ratio of 4.895, which is exactly equal to its empirical counterpart. Thus, the results are not driven by an unnaturally high total variance in the market-to-book ra- tio. In the simulated cross section, this means that the true latent β-coefficient for the market-to-book ratio is -0.105, which is much larger than the empirical value of -0.006 (Table 3.2). The simulated mismeasured observed value for βMB is -0.058, which is still larger than the data value, but my results nonetheless suggest that a market-to-book ratio which is a poor proxy for true investment opportunities plays an important role in the persistence of the residual-based portfolio sorts. Since an option to invest is riskier than the investment itself, firms with a high true q would optimally choose to carry lower amounts of leverage. However, this effect is obscured in the data due to the high amount of measurement error inherent in the market-to-book ratio. Overall, the estimation produces sensible parameter values, and the simulated mo- ments closely resemble their empirical data counterparts, as a comparison of the “Data Value” and “Sim. Value” columns in Table 3.2 reveals. Finally, Figure 3.7 shows the results of the portfolio sorts. 
Using the estimated values of the structural parameters in Table 3.1 produces a close fit between empirical and simulated portfolio leverage time series, regardless of whether the sort is done on actual or residual leverage. While the simulated residual-based portfolios exhibit less dispersion than their empirical counterparts in years 2-5, they track the empirical time series closely in the other time periods. This shows that low levels of measurement error in profitability, tangibility, and industry leverage, coupled with a larger, yet realistic, amount of measurement error inherent in using book value-based proxies of Tobin's q, can produce close matches to the Lemmon et al. (2008) portfolio sorts, and thus offers a potential explanation of their findings.

3.5 Multi-Variable Calibration with Autocorrelated Measurement Error

In Section 3.4, as throughout this thesis, I have maintained the assumption of identically and independently distributed measurement error terms. As the analysis in Section 3.4 shows, this already produces a remarkably good fit in terms of simulated sample moments being close to their actual empirical counterparts. In this section I relax the assumption of iid measurement errors in order to further improve the fit between simulated and actual portfolio sorts.

While in the iid setup in Figure 3.7 the simulated portfolio sorts closely resemble the empirical sorts, the dispersion of the simulated residual-based portfolios is smaller than the dispersion of the empirical ones. To match the simulated data more closely to the real data, I now allow for a small amount of persistence in the measurement error terms themselves. This is not an unreasonable assumption; consider the tangibility measure, for instance, and assume that a systematic difference exists between accounting depreciation and real economic depreciation. It is then entirely plausible that accounting values of property, plant, and equipment systematically over- or understate their true economic values. This would result in a certain amount of autocorrelation in the measurement error terms. Alternatively, a manager's private valuation, i.e. the one relevant for financial decisions, may diverge from the market's valuation for extended periods of time. Note, however, that specifying persistence in measurement error is not equivalent to assuming the result: what drives the slow convergence in the portfolio leverage sorts is the persistence in leverage, which is entirely independent of the persistence that measurement error induces in the observed explanatory variable. Leverage is a function of the true factors x_it, not the mismeasured factors x*_it.

3.5.1 Exogenously Autocorrelated Measurement Error

As before in Section 3.4, the system of equations that governs firms in the cross section and time series is given by:

lev_{it} = \beta'(1\ x_{it})' + u_{it}    (3.41)
x_{it} = \phi_0 + \phi_1 x_{it-1} + \epsilon_{it}    (3.42)
x^*_{it} = x_{it} + \eta_{it}    (3.43)

However, the measurement error term now is persistent as well, with a zero mean:

\eta_{it} = \rho\, \eta_{it-1} + \zeta_{it}    (3.44)

with ζ_it ~ N(0, Σ_ζ). The innovation covariance matrix Σ_ζ is diagonal. Furthermore, in this section I exogenously specify that ρ = 0.3 is a scalar that is the same for all four variables; the innovation variances contained in the matrix Σ_ζ are still quantities to be estimated. Specifying an exogenous AR(1) coefficient is arbitrary and will be relaxed in Section 3.5.2; a simulation sketch of this data-generating process follows below.
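The following sketch illustrates the data-generating process in equations (3.41) through (3.44) for a single explanatory variable. It is a minimal illustration only: the parameter values below are placeholders rather than the estimates reported later, and the actual calibration simulates all four explanatory variables jointly.

```python
import numpy as np

# Minimal sketch of the system (3.41)-(3.44) for one explanatory variable;
# all parameter values here are illustrative placeholders.
rng = np.random.default_rng(0)

N, T = 1000, 60              # firms, time periods
phi0, phi1 = 0.0, 0.93       # AR(1) of the latent factor x, eq. (3.42)
sigma_eps = 0.10             # std. dev. of the state innovation
rho, sigma_zeta = 0.3, 0.05  # AR(1) measurement error, eq. (3.44)
beta0, beta1, sigma_u = 0.25, 0.10, 0.08

x = np.zeros((N, T))
eta = np.zeros((N, T))
for t in range(1, T):
    x[:, t] = phi0 + phi1 * x[:, t - 1] + rng.normal(0, sigma_eps, N)
    eta[:, t] = rho * eta[:, t - 1] + rng.normal(0, sigma_zeta, N)

lev = beta0 + beta1 * x + rng.normal(0, sigma_u, (N, T))  # leverage depends on the true x
x_obs = x + eta                                           # the econometrician sees x* = x + eta
```

In the full calibration, moments computed from panels such as this one are stacked into m_sim and matched to their empirical counterparts via the objective in (3.45).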
However, the estimation results show that using this common but low level of persistence for all explanatory variables provides a good balance between fit, in terms of the portfolio sorts, and reasonable values for the underlying structural parameters. These parameters are collected in the vector Φ = (φ_1, Σ_ε, Σ_ζ, β, σ_u), and are found by minimizing the sum of squared differences between actual and simulated moments:

\min_{\Phi}\ (m_{act} - m_{sim})'(m_{act} - m_{sim})    (3.45)

The moments in this calibration are the same as in the previous section and consist of sample statistics for leverage and the explanatory variables, the estimates of the AR(1) processes driving the explanatory variables, the regression parameters from a panel regression of leverage on its determinants, and the portfolio leverage levels of the Lemmon et al. (2008) portfolio sorts.

Results

A comparison between the structural parameters estimated under the iid measurement assumption in Table 3.1 and those under the exogenous autocorrelated specification in Table 3.4 shows that most of the parameter values are comparable. All of the cross-sectional β-coefficients have slightly increased in magnitude. The most pronounced change is for the market-to-book variable: while the estimate of the latent AR(1) coefficient, φ_1 = 0.924, remains close to its previously estimated value of 0.931, the estimated standard deviation of the innovation term of the AR(1) process, σ_ε, has decreased from 0.603 under the iid specification to 0.515 under the autocorrelated specification. At the same time, the standard deviation of the measurement error has increased. In the iid specification, the standard deviation was σ_η = 1.476, while in the autocorrelated setup specified by equation (3.44), the standard deviation of the innovation term is σ_ζ = 1.671. This implies that relatively more of the persistence in observed mismeasured market-to-book is driven by persistence in the measurement error, as opposed to persistence in latent market-to-book. This translates directly into a larger measurement error ratio, shown in Table 3.6. Using the variability of latent market-to-book as the denominator, the ratio has more than doubled from the iid specification value of 0.802 to 1.682. However, if the total variance of the mismeasured market-to-book variable (including measurement noise) is used as the denominator, the increase is more moderate, from 0.445 to 0.627. This is still close to the Erickson and Whited (2006) value of 0.59, and therefore a reasonable estimate of the amount of measurement error in the market-to-book ratio.

Under the autocorrelated measurement error setup, simulated moments are generally closer to their empirical counterparts, as Table 3.5 reveals. It is noteworthy that with low autocorrelation in the measurement error terms, as given by ρ = 0.3, the simulated portfolio sorts mirror the empirical portfolio sorts (see Figure 3.8). Compared to the iid setup, the simulated sort on residual 'unexpected' leverage matches the empirical one almost perfectly: a small gap between empirical and simulated portfolios remains only in year 2. This reinforces the view that mildly autocorrelated measurement error in explanatory variables is a possible explanation of the Lemmon et al. (2008) portfolio sorts, and could therefore be at the root of the effects documented by the authors.
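As a check on the magnitudes just discussed, the measurement error ratios of Table 3.6 can be recovered from the Table 3.4 estimates under the assumption that both the latent factor and its measurement error are stationary AR(1) processes. The short sketch below does this for the market-to-book variable; the stationary-variance formulas are my reconstruction of the calculation, not code taken from the thesis.

```python
# Implied measurement error ratios from AR(1) parameters, assuming stationarity.
# Inputs are the market-to-book estimates from Table 3.4 (with rho = 0.3).
phi1, sigma_eps = 0.924, 0.515   # latent AR(1) slope and innovation std. dev.
rho, sigma_zeta = 0.3, 1.671     # measurement error AR(1) slope and innovation std. dev.

var_x = sigma_eps**2 / (1 - phi1**2)       # stationary variance of the latent factor
var_eta = sigma_zeta**2 / (1 - rho**2)     # stationary variance of the measurement error

ratio_latent = var_eta / var_x                  # column (1) of Table 3.6, approx. 1.68
ratio_observed = var_eta / (var_x + var_eta)    # column (2) of Table 3.6, approx. 0.63
print(round(ratio_latent, 3), round(ratio_observed, 3))
```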
3.5.2 Estimated Measurement Error Autocorrelations

In Section 3.5.1, I estimate the latent model parameters by assuming that the measurement error terms for all four explanatory variables follow an AR(1) process with a common slope coefficient of 0.3. This produces a good model fit, but the assumption of a common exogenous AR(1) coefficient is somewhat arbitrary. I now relax this assumption, and instead consider all AR(1) coefficients of the measurement error terms as quantities to be estimated. While the structural setup is still as in Section 3.5.1, the measurement errors now evolve according to:

\eta_{it} = \rho\, \eta_{it-1} + \zeta_{it}    (3.46)

\begin{pmatrix} \eta^{Prof}_{it} \\ \eta^{Tang}_{it} \\ \eta^{MB}_{it} \\ \eta^{IndLev}_{it} \end{pmatrix}
= \begin{pmatrix} \rho_{Prof} & 0 & 0 & 0 \\ 0 & \rho_{Tang} & 0 & 0 \\ 0 & 0 & \rho_{MB} & 0 \\ 0 & 0 & 0 & \rho_{IndLev} \end{pmatrix}
\begin{pmatrix} \eta^{Prof}_{it-1} \\ \eta^{Tang}_{it-1} \\ \eta^{MB}_{it-1} \\ \eta^{IndLev}_{it-1} \end{pmatrix}
+ \begin{pmatrix} \zeta^{Prof}_{it} \\ \zeta^{Tang}_{it} \\ \zeta^{MB}_{it} \\ \zeta^{IndLev}_{it} \end{pmatrix}    (3.47)

with ζ_it ~ N(0, Σ_ζ). The innovation covariance matrix Σ_ζ is diagonal. The parameters are again estimated by minimizing the sum of squared differences between actual moments m_act and simulated moments m_sim:

\min_{\Phi}\ (m_{act} - m_{sim})'(m_{act} - m_{sim})    (3.48)

where Φ = (φ_1, Σ_ε, Σ_ζ, β, σ_u, ρ).

Results

This model produces simulated moments that closely resemble empirical moments, as can be seen by comparing the 'Sim. Value' and 'Data Value' columns in Table 3.8. In addition, both portfolio sorts produce highly persistent portfolio leverage time series (see Figure 3.9), whose initial dispersion dissipates very slowly, as in the real data. While this specification fares best in the sense that it produces the lowest sum of squared errors, it does so at a cost: some of the estimated parameter values diverge to a large degree from previous estimates. For instance, the estimated AR(1) coefficient of the latent profitability measure has dropped from 0.831 in the previous section to 0.602 (see Tables 3.4 and 3.7). However, the AR(1) coefficient for simulated mismeasured profitability is 0.775 (Table 3.8), which is exactly equal to its empirical counterpart. The reason for this phenomenon is that the autocorrelated measurement error now absorbs a sizeable portion of the persistence in observed profitability: the estimated value of ρ_Prof, the AR(1) coefficient for measurement error in profitability, is 0.792, as shown in Table 3.7. A second aspect of this phenomenon, evident from Table 3.7, is that the standard deviation of the measurement error innovation (σ_ζ = 0.212) is now larger than the standard deviation of the state noise innovation (σ_ε = 0.085). This, in turn, drives up the measurement error ratio for profitability, as shown in Table 3.9. Using the variance of latent profitability as the denominator, the ratio now stands at 3.916, compared to a value of 0.128 when ρ was exogenously assumed to be 0.3 (see Table 3.6). The effect is even more pronounced for the market-to-book variable, whose measurement error ratio now stands at 17.416, i.e. the measurement error variance is 17 times as large as the state variance. A value this large is arguably unrealistic, even though it is reduced to 0.701 if the ratio of measurement noise to observed market-to-book variance is computed instead. The shift of variability from the explanatory variables' innovation variance terms σ_ε into the measurement error innovation terms σ_ζ is evident for all variables. As the variability of the latent explanatory variables decreases, the estimated cross-sectional coefficients increase in magnitude, as a comparison between Tables 3.4 and 3.7 reveals.
This effect takes place to maintain the cross-sectional dispersion in the portfolio sorts: the lower variance of the state variable is magnified by a larger latent cross-sectional β-coefficient. Furthermore, comparing the 'Struc. Value' with the 'Data Value' column in Table 3.8 reveals that, with the exception of the industry leverage variable, there is now a marked discrepancy between the estimated structural β-coefficients and their empirical counterparts. This raises the question of whether allowing for individual estimates of the ρ coefficients comes at the expense of implausible parameter values (standard errors have also increased relative to previous specifications). If one compares the Lemmon et al. (2008) portfolio sort results from this section with those from the previous one (Figure 3.9 vs. Figure 3.8), it is not clear that the current specification offers a better fit than the previous one with exogenously autocorrelated measurement error terms.

3.6 Conclusion

This chapter addresses the question of how much measurement error in explanatory variables is necessary to reproduce the Lemmon et al. (2008) portfolio sort findings. I first consider the situation where we are agnostic about exactly which variable(s) drive leverage, and assume that it is determined by a single underlying factor. This is not as limiting as it may seem, because the factor could serve as a composite measure of many potential underlying variables. I determine that the measurement error variance of this factor would need to be 42% larger than its latent cross-sectional variation in order to replicate the portfolio leverage time series after a sort on actual leverage and on residual leverage, respectively. This number is large, but nonetheless a useful measure, as one can interpret it as an aggregate estimate of how mismeasured the explanatory variables would need to be. Perhaps more importantly, lower amounts of measurement error still produce a sizeable amount of persistence in the residual-based sorts. For example, a measurement error variance that is only about 75% as large as the state noise variance still produces residual-based portfolio leverage time series that closely mimic their empirical counterparts.

The second contribution of this chapter is an examination of the ability of several important, yet arguably mismeasured, explanatory variables to reproduce the Lemmon et al. (2008) stylized facts. The variables are a firm's profitability, the tangibility of its assets, the market-to-book ratio, and industry leverage. I find that low quantities of iid measurement error in profitability, tangibility, and industry leverage, coupled with a measurement error variance equal to about 80% of the cross-sectional variation in the market-to-book ratio, produce a good fit of simulated sample data moments to empirical moments, including the portfolio sort time series. Furthermore, this level of measurement error in the market-to-book variable, which proxies for Tobin's q, is consistent with other studies such as Erickson and Whited (2006), and suggests that unobserved investment opportunities play an important role in explaining leverage ratios. I then show that the model's fit can be improved by allowing for a small amount of exogenous autocorrelation in the measurement error terms themselves. Finally, while treating the autocorrelation coefficients in measurement error as free parameters produces the best fit in a statistical sense, the estimates of some of the structural parameters can be viewed as unrealistic.
Overall, this chapter highlights that a reasonable amount of measurement error in explanatory variables can reproduce the Lemmon et al. (2008) stylized facts to a large degree. Therefore, measurement error constitutes a plausible driver behind those findings.

Figure 3.1: Average Leverage of Book Leverage Portfolios. Using the 1965-2003 sample of nonfinancial Compustat firms, I sort firms into 2 portfolios on the basis of their book leverage level. I then compute the mean leverage of each portfolio for the next 20 years, keeping its composition constant. I repeat this procedure for all years until the end of the sample period. The resulting 38 portfolio time series are then averaged in event time. The main Lemmon et al. (2008) conclusions are still evident.

Figure 3.2: Average Leverage of Book Leverage Portfolios, Sorted on Unexpected Leverage. Using the 1965-2003 sample of nonfinancial Compustat firms, I sort firms into 2 portfolios based on residuals from a regression of book leverage on lagged size, market-to-book, profitability, tangibility and mean industry leverage. I then compute the mean leverage of each portfolio for the next 20 years, keeping its composition constant. I repeat this procedure for all years until the end of the sample period. The resulting 38 portfolio time series are then averaged in event time. Explanatory variables are as in Chapter 1.

[Figure 3.3 plot: mean leverage vs. event time (years 0-20); series: High Lev (sim), Low Lev (sim), Actual.]

Figure 3.3: Data-Implied Target Leverage Dynamics. I model leverage lev as a function of its target l̂ev, which in turn is an AR(1) process:

lev_t = \hat{lev}_t + u_t    (3.8)
\hat{lev}_t = \varphi_0 + \varphi_1 \hat{lev}_{t-1} + \varepsilon_t    (3.9)

The red crosses correspond to actual portfolio leverage levels. I simulate a panel of 1,000 firms, and choose parameter values for the system above such that the simulated data most closely resembles the actual data points by minimizing the sum of squared deviations:

\min_{\Phi} \sum_i \sum_t \left( PFlev^{sim}_{it} - PFlev^{act}_{it} \right)^2    (3.10)

where i indexes whether a data point belongs to a high or low leverage portfolio at time t, and the parameter vector Φ = {σ²_u, φ_0, φ_1, σ²_ε}. The parameters are, respectively: the cross-sectional error variance in (3.8), as well as the intercept, slope and error variance of the AR(1) process governing target leverage in (3.9). The estimates are as follows:

             φ_0      φ_1      σ_ε      σ_u
Estimate     0.021    0.930    0.066    0.080
Std. Error   (0.012)  (0.009)  (0.003)  (0.010)

[Figure 3.4 plots: four panels of mean leverage vs. event time (periods 0-60); series: True Target, Mismeasured Target; Panel 1: a = 0.1, Panel 2: a = 0.5, Panel 3: a = 0.75, Panel 4: a = 1.25.]

Figure 3.4: Sample Mismeasured Target Leverage Paths. I first simulate a true target based on the parameters recovered via (3.7): Φ = {φ_0 = 0.021, φ_1 = 0.93, σ_ε = 0.066, σ_u = 0.080}. The mismeasured target is then given by

\hat{lev}^* = \alpha_0 + \alpha_1 \hat{lev} + e    (3.17)
\alpha_0 = (1 - \alpha_1) E(\hat{lev})    (3.18)
\alpha_1 = \frac{1}{1+a}, \qquad \sigma^2_e = Var(\hat{lev}) \frac{a}{(1+a)^2}    (3.19)

The four panels show sample leverage paths for different levels of the noise-to-signal ratio a = σ²_η/σ²_x. The true target is the same in all panels.
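The sketch below generates one true and one mismeasured target path in the spirit of Figure 3.4, using the calibrated target dynamics above and the mapping in (3.17)-(3.19). The choice a = 0.5 and the random seed are illustrative assumptions; the thesis plots several values of a.

```python
import numpy as np

# One mismeasured target leverage path, as in Figure 3.4 (illustrative sketch).
rng = np.random.default_rng(1)
T, phi0, phi1, sigma_eps = 60, 0.021, 0.93, 0.066   # calibrated target dynamics
a = 0.5                                             # noise-to-signal ratio (assumed here)

mean_target = phi0 / (1 - phi1)                     # unconditional mean of the target
var_target = sigma_eps**2 / (1 - phi1**2)           # unconditional variance of the target

target = np.empty(T)
target[0] = mean_target
for t in range(1, T):
    target[t] = phi0 + phi1 * target[t - 1] + rng.normal(0, sigma_eps)

alpha1 = 1 / (1 + a)                                # eq. (3.19)
alpha0 = (1 - alpha1) * mean_target                 # eq. (3.18)
sigma_e = np.sqrt(var_target * a / (1 + a)**2)      # eq. (3.19)
mismeasured = alpha0 + alpha1 * target + rng.normal(0, sigma_e, T)   # eq. (3.17)
```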
[Figure 3.5 plot: mean leverage vs. event time (years 0-20); series: High Lev (sim), Low Lev (sim), Actual.]

Figure 3.5: Implied Measurement Error from Residual-Based Sorts. I recover the implied ratio of measurement noise to variation in the true explanatory variable x. Previously, I obtained the parameter values governing the dynamics of the true target by calibrating simulated leverage portfolios to those obtained by the Lemmon et al. (2008) leverage-based sorts. Knowing the true target then allows the mismeasured target l̂ev* to be backed out via

\hat{lev}^* = \alpha_0 + \alpha_1 \hat{lev} + e    (3.21)
\alpha_0 = (1 - \alpha_1) E(\hat{lev})    (3.22)
\alpha_1 = \frac{1}{1+a}, \qquad \sigma^2_e = Var(\hat{lev}) \frac{a}{(1+a)^2}    (3.23)

The red crosses correspond to actual portfolio leverage levels. I simulate a panel of 1,000 firms, and choose the noise-to-signal ratio a = σ²_η/σ²_x for the system above such that the simulated data most closely resembles the actual data points by minimizing the sum of squared deviations:

\min_{a} \sum_i \sum_t \left( PFlev^{sim}_{it} - PFlev^{act}_{it} \right)^2    (3.24)

where i indexes whether a data point belongs to a high or low leverage portfolio at time t. The minimum of the objective function is reached at a = 1.42 (std. error = 0.12). The resulting fit is shown above.

[Figure 3.6 plots: four panels of mean leverage vs. event time (years 0-20); series: High Lev (sim), Low Lev (sim), Actual; Panel 1: a = 0.1, Panel 2: a = 0.25, Panel 3: a = 0.5, Panel 4: a = 0.75.]

Figure 3.6: Residual-Sorted Leverage Portfolios at Different Implied Levels of Measurement Error. I simulate the set of equations in Figure 3.5 for different values of the noise-to-signal ratio a = σ²_η/σ²_x. Firms are sorted into portfolios based on residuals.

[Figure 3.7 plots: Panel A: Sort on Actual Leverage; Panel B: Sort on Unexpected Leverage; mean leverage vs. event time (years 0-20); series: High Lev (sim), Low Lev (sim), Actual.]

Figure 3.7: Average Leverage of Portfolios Sorted on Simulated 'Actual' and 'Unexpected' Leverage with iid Measurement Error. Panel A shows the evolution of the high and low leverage portfolios when firms are sorted into portfolios based on simulated leverage. Firms are simulated using the parameters from Table 3.1, which are obtained by the calibration described in Section 3.4. Every period, simulated firms are sorted into either a high- or low-leverage portfolio, whose composition is held constant for 20 time periods. The figure shows the average leverage of the simulated portfolios in each year (solid and dashed lines). The simulated portfolios closely resemble the real data, depicted by the red crosses. Panel B shows the results of the residual-based sort: leverage is regressed on mismeasured profitability, tangibility, market-to-book and industry leverage, and firms are then sorted into portfolios on the basis of the regression residual. The portfolio leverage levels in years 5 and onward again closely resemble the real data, while the initial dispersion is lower than in the data.
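The following sketch illustrates the residual-based sort behind Panel B of Figures 3.7-3.9, reduced to a single mismeasured regressor and a single formation year; in the thesis the sort is repeated for every formation year and averaged in event time, and four regressors are used. All parameter values here are illustrative assumptions.

```python
import numpy as np

# Residual-based portfolio sort on simulated data (illustrative, one regressor, one formation year).
rng = np.random.default_rng(2)
N, T = 2000, 21
phi1, beta, sigma_eps, sigma_eta, sigma_u = 0.93, 0.5, 0.07, 0.07, 0.08

x = np.zeros((N, T))
for t in range(1, T):
    x[:, t] = phi1 * x[:, t - 1] + rng.normal(0, sigma_eps, N)
lev = 0.25 + beta * x + rng.normal(0, sigma_u, (N, T))   # leverage driven by the true factor
x_obs = x + rng.normal(0, sigma_eta, (N, T))             # iid measurement error

# Formation year 0: regress leverage on the mismeasured regressor, keep the residual.
X = np.column_stack([np.ones(N), x_obs[:, 0]])
coef, *_ = np.linalg.lstsq(X, lev[:, 0], rcond=None)
resid = lev[:, 0] - X @ coef

high = resid >= np.median(resid)          # two portfolios, composition held fixed afterwards
mean_high = lev[high].mean(axis=0)        # average portfolio leverage in event time
mean_low = lev[~high].mean(axis=0)
```

With a persistent latent factor and non-trivial measurement error, mean_high stays above mean_low for many periods, which is the convergence pattern plotted in the figures.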
[Figure 3.8 plots: Panel A: Sort on Actual Leverage; Panel B: Sort on Unexpected Leverage; mean leverage vs. event time (years 0-20); series: High Lev (sim), Low Lev (sim), Actual.]

Figure 3.8: Average Leverage of Portfolios Sorted on Simulated 'Actual' and 'Unexpected' Leverage with Persistent Measurement Error. Panel A shows the evolution of the high and low leverage portfolios when firms are sorted into portfolios based on simulated leverage. Firms are simulated using the parameters from Table 3.4, which are obtained by the calibration described in Section 3.5.1. The measurement errors for all four variables are modeled as an AR(1) process: η_it = ρ η_it-1 + ζ_it. I exogenously specify that ρ = 0.3. Every period, simulated firms are sorted into either a high- or low-leverage portfolio, whose composition is held constant for 20 time periods. The figure shows the average leverage of the simulated portfolios in each year (solid and dashed lines). The simulated portfolios closely resemble the real data, depicted by the red crosses. Panel B shows the results of the residual-based sort: leverage is regressed on mismeasured profitability, tangibility, market-to-book and industry leverage, and firms are then sorted into portfolios on the basis of the regression residual. Compared to the iid measurement error case, model fit has improved and the portfolio leverage levels now closely resemble the real data in all years.

[Figure 3.9 plots: Panel A: Sort on Actual Leverage; Panel B: Sort on Unexpected Leverage; mean leverage vs. event time (years 0-20); series: High Lev (sim), Low Lev (sim), Actual.]

Figure 3.9: Average Leverage of Portfolios Sorted on Simulated 'Actual' and 'Unexpected' Leverage with Estimated Measurement Error Persistence. Panel A shows the evolution of the high and low leverage portfolios when firms are sorted into portfolios based on simulated leverage. Firms are simulated using the parameters from Table 3.7, which are obtained by the calibration described in Section 3.5.2. The measurement errors for all four variables are modeled as AR(1) processes: η_it = ρ η_it-1 + ζ_it, where ρ = diag(ρ_Prof, ρ_Tang, ρ_MB, ρ_IndLev) is a diagonal matrix of free parameters. Every period, simulated firms are sorted into either a high- or low-leverage portfolio, whose composition is held constant for 20 time periods. The figure shows the average leverage of the simulated portfolios in each year (solid and dashed lines). The simulated portfolios closely resemble the real data, depicted by the red crosses. Panel B shows the results of the residual-based sort: leverage is regressed on mismeasured profitability, tangibility, market-to-book and industry leverage, and firms are then sorted into portfolios on the basis of the regression residual. Compared to the iid measurement error case, model fit has improved and the portfolio leverage levels now resemble the real data in all years.
Variable            Parameter   Estimate   Std. Error
Profitability       φ_1         0.832      0.013
                    σ_ε         0.194      0.007
                    σ_η         0.105      0.006
Tangibility         φ_1         0.936      0.011
                    σ_ε         0.090      0.011
                    σ_η         0.038      0.009
Market-to-Book      φ_1         0.931      0.011
                    σ_ε         0.603      0.059
                    σ_η         1.476      0.067
Industry Leverage   φ_1         0.891      0.009
                    σ_ε         0.039      0.010
                    σ_η         0.012      0.004
Cross-sectional     β_0         0.129      0.014
Parameters          β_Prof      -0.070     0.016
                    β_Tang      0.115      0.011
                    β_MB        -0.105     0.007
                    β_IndLev    0.859      0.031
                    σ_u         0.082      0.005

Table 3.1: Estimated Structural Parameters, with iid Measurement Error. This table lists the structural parameters governing the time series and cross-sectional properties of the latent variables profitability, tangibility, market-to-book, and industry leverage in the four-variable calibration modeled via equations (3.25) through (3.31), as well as standard errors. The parameter values are found by minimizing the squared distance between simulated sample moments and actual data moments. The chosen moments are described in Section 3.4.1.

Variable            Parameter   Data Value   Sim. Value   Struc. Value
Profitability       φ*_0        0.009        0.009        0.007
                    φ*_1        0.775        0.764        0.832
                    σ²_x*       0.132        0.134        0.123
Tangibility         φ*_0        0.017        0.031        0.023
                    φ*_1        0.952        0.916        0.936
                    σ²_x*       0.057        0.067        0.066
Market-to-Book      φ*_0        0.616        0.616        0.092
                    φ*_1        0.534        0.530        0.931
                    σ²_x*       4.895        4.895        2.717
Industry Leverage   φ*_0        0.028        0.036        0.033
                    φ*_1        0.908        0.879        0.891
                    σ²_x*       0.007        0.008        0.008
Cross-sectional     β*_0        0.013        0.077        0.129
Parameters          β*_Prof     -0.066       -0.063       -0.070
                    β*_Tang     0.099        0.112        0.115
                    β*_MB       -0.006       -0.058       -0.105
                    β*_IndLev   0.835        0.834        0.859
Leverage            σ²_lev      0.034        0.044        0.044

Table 3.2: Actual and Simulated Moments, with iid Measurement Error. This table lists actual data moments in the "Data Value" column, their simulated counterparts (excluding the portfolio leverage levels) in the "Sim. Value" column, and the latent structural values in the "Struc. Value" column. The simulated moments are computed from simulated mismeasured variables using the estimated structural parameters from Table 3.1, and are described in Section 3.4.1. The latent structural values are obtained with the estimated structural parameter values from Table 3.1, and are included here again for ease of comparison.

                    (1) σ²_η/σ²_x   (2) σ²_η/σ²_x*
Profitability       0.090           0.083
Tangibility         0.021           0.021
Market-to-Book      0.802           0.445
Industry Leverage   0.018           0.018

Table 3.3: Measurement Error Ratio with iid Measurement Error. For each explanatory variable, column (1) shows estimates of the ratio of measurement noise σ²_η to variance in the latent explanatory variable σ²_x, while column (2) shows the ratio of measurement noise σ²_η to total variance σ²_x*. The total variance is the variance of the mismeasured observed variable, and thus includes the measurement error variance. The values shown are computed with the structural parameter values in Table 3.1, which minimize the calibration's sum of squared errors.

Variable            Parameter   Estimate   Std. Error
Profitability       φ_1         0.831      0.016
                    σ_ε         0.190      0.009
                    σ_ζ         0.117      0.008
Tangibility         φ_1         0.955      0.009
                    σ_ε         0.071      0.009
                    σ_ζ         0.033      0.007
Market-to-Book      φ_1         0.924      0.006
                    σ_ε         0.515      0.051
                    σ_ζ         1.671      0.086
Industry Leverage   φ_1         0.907      0.011
                    σ_ε         0.027      0.011
                    σ_ζ         0.007      0.004
Cross-sectional     β_0         0.158      0.024
Parameters          β_Prof      -0.082     0.015
                    β_Tang      0.118      0.012
                    β_MB        -0.129     0.013
                    β_IndLev    0.867      0.050
                    σ_u         0.079      0.005

Table 3.4: Estimated Structural Parameters with Persistent Measurement Error.
This table lists the structural parameters governing the time series and cross-sectional properties of the latent variables profitability, tangibility, market-to-book, and industry leverage in the four-variable calibration modeled via equations (3.41) through (3.43), as well as standard errors. The parameter values are found by minimizing the squared distance between simulated sample moments and actual data moments. The chosen moments are described in Section 3.4.1. The measurement errors for all four variables are modeled as an AR(1) process: η_it = ρ η_it-1 + ζ_it. I exogenously specify that ρ = 0.3.

Variable            Parameter   Data Value   Sim. Value   Struc. Value
Profitability       φ*_0        0.009        0.009        0.007
                    φ*_1        0.775        0.772        0.831
                    σ²_x*       0.132        0.132        0.119
Tangibility         φ*_0        0.017        0.021        0.016
                    φ*_1        0.952        0.942        0.955
                    σ²_x*       0.057        0.059        0.056
Market-to-Book      φ*_0        0.616        0.624        0.100
                    φ*_1        0.534        0.533        0.924
                    σ²_x*       4.895        4.895        1.825
Industry Leverage   φ*_0        0.028        0.030        0.028
                    φ*_1        0.908        0.902        0.907
                    σ²_x*       0.007        0.004        0.004
Cross-sectional     β*_0        0.013        0.060        0.158
Parameters          β*_Prof     -0.066       -0.068       -0.082
                    β*_Tang     0.099        0.115        0.118
                    β*_MB       -0.006       -0.048       -0.129
                    β*_IndLev   0.835        0.844        0.867
Leverage            σ²_lev*     0.034        0.041        0.041

Table 3.5: Actual and Simulated Moments with Persistent Measurement Error. This table lists actual data moments in the "Data Value" column, their simulated counterparts (excluding the portfolio leverage levels) in the "Sim. Value" column, and the latent structural values in the "Struc. Value" column. The simulated moments are computed from simulated mismeasured variables using the estimated structural parameters from Table 3.4, and are described in Section 3.4.1. The latent structural values are obtained with the estimated structural parameter values from Table 3.4, and are included here again for ease of comparison. The measurement errors for all four variables are modeled as an AR(1) process: η_it = ρ η_it-1 + ζ_it. I exogenously specify that ρ = 0.3.

                    (1) σ²_η/σ²_x   (2) σ²_η/σ²_x*
Profitability       0.128           0.114
Tangibility         0.020           0.020
Market-to-Book      1.682           0.627
Industry Leverage   0.014           0.014

Table 3.6: Measurement Error Ratio with Persistent Measurement Error. For each explanatory variable, column (1) shows estimates of the ratio of measurement noise σ²_η to variance in the latent explanatory variable σ²_x, while column (2) shows the ratio of measurement noise σ²_η to total variance σ²_x*. The total variance is the variance of the mismeasured observed variable, and thus includes the measurement error variance. The values shown are computed with the structural parameter values in Table 3.4, which minimize the calibration's sum of squared errors. The measurement errors for all four variables are modeled as an AR(1) process: η_it = ρ η_it-1 + ζ_it. I exogenously specify that ρ = 0.3.

Variable            Parameter   Estimate   Std. Error
Profitability       φ_1         0.602      0.064
                    σ_ε         0.085      0.013
                    ρ           0.792      0.025
                    σ_ζ         0.212      0.008
Tangibility         φ_1         0.831      0.062
                    σ_ε         0.058      0.006
                    ρ           0.963      0.031
                    σ_ζ         0.060      0.008
Market-to-Book      φ_1         0.925      0.012
                    σ_ε         0.169      0.036
                    ρ           0.519      0.029
                    σ_ζ         1.852      0.088
Industry Leverage   φ_1         0.938      0.012
                    σ_ε         0.033      0.005
                    ρ           0.467      0.096
                    σ_ζ         0.022      0.002
Cross-sectional     β_0         0.288      0.054
Parameters          β_Prof      -0.820     0.321
                    β_Tang      0.627      0.117
                    β_MB        -0.340     0.055
                    β_IndLev    0.885      0.032
                    σ_u         0.056      0.012

Table 3.7: Estimated Structural Parameters with Estimated Measurement Error Persistence.
This table lists the structural parameters governing the time series and cross-sectional properties of the latent variables profitability, tangibility, market-to-book, and industry leverage in the four-variable calibration modeled via equations (3.41) through (3.43), as well as standard errors. The parameter values are found by minimizing the squared distance between simulated sample moments and actual data moments. The chosen moments are described in Section 3.4.1. The measurement errors for all four variables are modeled as AR(1) processes: η_it = ρ η_it-1 + ζ_it, where ρ = diag(ρ_Prof, ρ_Tang, ρ_MB, ρ_IndLev) is a diagonal matrix of free parameters.

Variable            Parameter   Data Value   Sim. Value   Struc. Value
Profitability       φ*_0        0.009        0.009        0.016
                    φ*_1        0.775        0.775        0.602
                    σ²_x*       0.132        0.133        0.011
Tangibility         φ*_0        0.017        0.023        0.061
                    φ*_1        0.952        0.937        0.831
                    σ²_x*       0.057        0.059        0.011
Market-to-Book      φ*_0        0.616        0.616        0.100
                    φ*_1        0.534        0.535        0.925
                    σ²_x*       4.895        4.895        0.197
Industry Leverage   φ*_0        0.028        0.027        0.019
                    φ*_1        0.908        0.912        0.938
                    σ²_x*       0.007        0.009        0.009
Cross-sectional     β*_0        0.013        0.025        0.288
Parameters          β*_Prof     -0.066       -0.067       -0.820
                    β*_Tang     0.099        0.102        0.627
                    β*_MB       -0.006       -0.013       -0.340
                    β*_IndLev   0.835        0.837        0.885
Leverage            σ²_lev*     0.034        0.043        0.034

Table 3.8: Actual and Simulated Moments with Estimated Measurement Error Persistence. This table lists actual data moments in the "Data Value" column, their simulated counterparts (excluding the portfolio leverage levels) in the "Sim. Value" column, and the latent structural values in the "Struc. Value" column. The simulated moments are computed from simulated mismeasured variables using the estimated structural parameters from Table 3.7, and are described in Section 3.4.1. The latent structural values are obtained with the estimated structural parameter values from Table 3.7, and are included here again for ease of comparison. The measurement errors for all four variables are modeled as AR(1) processes: η_it = ρ η_it-1 + ζ_it, where ρ = diag(ρ_Prof, ρ_Tang, ρ_MB, ρ_IndLev) is a diagonal matrix of free parameters.

                    (1) σ²_η/σ²_x   (2) σ²_η/σ²_x*
Profitability       3.916           0.340
Tangibility         0.335           0.060
Market-to-Book      17.416          0.701
Industry Leverage   0.051           0.048

Table 3.9: Measurement Error Ratio with Estimated Measurement Error Persistence. For each explanatory variable, column (1) shows estimates of the ratio of measurement noise σ²_η to variance in the latent explanatory variable σ²_x, while column (2) shows the ratio of measurement noise σ²_η to total variance σ²_x*. The total variance is the variance of the mismeasured observed variable, and thus includes the measurement error variance. The values shown are computed with the structural parameter values in Table 3.7, which minimize the calibration's sum of squared errors. The measurement errors for all four variables are modeled as AR(1) processes: η_it = ρ η_it-1 + ζ_it, where ρ = diag(ρ_Prof, ρ_Tang, ρ_MB, ρ_IndLev) is a diagonal matrix of free parameters.

Chapter 4
Conclusion

Persistence in residual-based leverage portfolios is a well-documented fact. While this persistence may be a consequence of a firm fixed effect or omitted time-varying variables, I show in Chapter 2 that it can also arise when slow-moving explanatory variables in a leverage regression are measured with error. If firms are assigned to portfolios on the basis of mismeasured regression residuals, the residuals contain information about the true level of the unobserved regressor. Since the explanatory variable is persistent, its current value predicts its future value, on which leverage in turn depends.
Taken together, these facts can potentially explain the persistent leverage portfolio differences that Lemmon et al. (2008) document. In the presence of measurement error, sorting firms into portfolios based on the regression residuals, or "unexplained leverage", will resemble sorting firms into portfolios based on actual leverage.

In Chapter 3 I address the question of magnitude: what parameter values are needed to generate significant portfolio persistence, and are they plausible enough to fully explain the stylized facts observed in leverage portfolios? To provide an answer, I first determine the implied underlying dynamics of a true leverage target. This allows me to quantify how mismeasured the target needs to be in order to generate the residual-based portfolio sorts. I find that if we view the target as being determined by a single composite factor of a number of possible tradeoff theory variables, then the measurement error variance of this latent factor needs to be 42% larger than its cross-sectional variance. This number is large, but nonetheless a useful measure, as one can interpret it as an aggregate estimate of how mismeasured the explanatory variables would need to be. In addition, a measurement error variance that is only about 75% as large as the state noise variance still produces very persistent residual-based portfolio sorts. Therefore, even if one takes the view that measurement error alone is not sufficient to fully account for the persistence in residual-sorted leverage portfolios, it is nonetheless likely to be an important contributor, since sizeable persistence in the residual-based portfolios arises even at low ratios of measurement error to state noise in the explanatory variable.

The second contribution of Chapter 3 is that I examine several important explanatory variables, namely a firm's profitability, the tangibility of its assets, the market-to-book ratio, and industry leverage. I find that low quantities of measurement error in profitability, tangibility, and industry leverage, coupled with a measurement error variance equal to about 80% of the cross-sectional variation in the market-to-book ratio, produce a good fit of simulated sample data moments to empirical moments. Furthermore, this level of measurement error in the market-to-book variable, which proxies for Tobin's q, is consistent with other studies such as Erickson and Whited (2006), and suggests that unobserved investment opportunities may play an important role in explaining leverage ratios, and the persistence of the residual-based portfolio sorts. I focus the analysis on measurement error that is iid over time. In a sense, this is the most 'conservative' variety of mismeasurement, yet it is sufficient to generate the Lemmon et al. (2008) phenomena. On the other hand, I also show that if the measurement error itself exhibits mild persistence, the results are strengthened and model fit is improved. A persistent measurement error term allows the proxy to systematically under- or overestimate the underlying latent factor for more than one time period. I do not consider cross-sectionally correlated measurement error, nor do I consider the possibility that leverage itself is mismeasured, even though it arguably is: the quasi-market leverage measure I use does not accurately reflect true leverage, which would use market-based estimates of debt and possibly other non-financial liabilities suggested e.g. by Welch (2009).
While iid measurement error in leverage would only increase its cross-sectional variation and as such not pose a problem, the measurement error in leverage could also be correlated with measurement error terms on the right-hand side. Depending on the sign of the correlation, this would amplify or attenuate the underlying structural relationship in the observed data.

My thesis has focused on capital structure, which is an aspect of corporate finance. On the asset pricing side, however, portfolio sorts are also a very popular tool to evaluate the returns of certain trading strategies and to test asset pricing models. Measurement quality is an important consideration for the risk factors in these models, so my work may have implications for the asset pricing applications of portfolio sorts as well.

Bibliography

Baker, M. and J. Wurgler (2002). Market Timing and Capital Structure. Journal of Finance 57, 1–32.

Chang, X. and S. Dasgupta (2009). Target Behaviour and Financing: How Conclusive is the Evidence? Journal of Finance.

DeAngelo, H., L. DeAngelo, and T. Whited (2009). Capital structure dynamics and transitory debt. Working Paper.

DeAngelo, H. and R. Roll (2011). How stable are corporate capital structures? Working Paper.

Erickson, T. and T. M. Whited (2006). On the accuracy of different measures of q. Financial Management, 5–33.

Fischer, E., R. Heinkel, and J. Zechner (1989). Dynamic Capital Structure Choice: Theory and Tests. Journal of Finance 44, 19–40.

Flannery, M. and K. Rangan (2006). Partial Adjustment toward Target Capital Structures. Journal of Financial Economics 79, 469–506.

Frank, M. and V. Goyal (2007). Capital Structure Decisions: Which Factors are Reliably Important? University of Minnesota Working Paper.

Graham, J. and C. Harvey (2001). The theory and practice of corporate finance: Evidence from the field. Journal of Financial Economics 60, 187–243.

Granger, C. W. J. and P. Newbold (1977). Forecasting Economic Time Series. Academic Press.

Greene, W. H. (2003). Econometric Analysis. Prentice Hall.

Hennessy, C. and T. Whited (2005). Debt dynamics. Journal of Finance 60, 1129–1165.

Hovakimian, A., T. Opler, and S. Titman (2001). The Debt-Equity Choice. Journal of Financial and Quantitative Analysis 36 (1), 1–24.

Kraus, A. and R. Litzenberger (1973). A state-preference model of optimal financial leverage. Journal of Finance 28, 911–922.

Leary, M. and M. Roberts (2005). Do Firms Rebalance Their Capital Structure? Journal of Finance 60 (6), 2575–2620.

Lemmon, M., M. Roberts, and J. Zender (2008). Back to the Beginning: Persistence and the Cross-Section of Corporate Capital Structure. Journal of Finance 63, 1575–1608.

Menichini, A. (2010). Financial frictions and capital structure choice: A structural estimation. Working Paper.

Myers, S. (1984). The Capital Structure Puzzle. Journal of Finance 39 (3), 575–592.

Rajan, R. and L. Zingales (1995). What Do We Know about Capital Structure: Some Evidence from International Data. Journal of Finance 50 (5), 1421–1460.

Roberts, M. (2001). The Dynamics of Capital Structure: An Empirical Analysis of a Partially Observable System. Working Paper.

Strebulaev, I. and B. Yang (2006). The Mystery of Zero-Leverage Firms. Stanford Working Paper.

Titman, S. and R. Wessels (1988). The Determinants of Capital Structure Choice. Journal of Finance 43 (1), 1–19.

Welch, I. (2004). Capital Structure and Stock Returns. Journal of Political Economy 112, 106–131.

Welch, I. (2009). Common flaws in capital structure research. Working Paper.
Appendix A
Variable Definitions

Data are taken from the annual Compustat database between 1965 and 2003. The variable definitions mirror those in Lemmon et al. (2008). Financials and firms with missing asset or debt values are excluded from the sample. Leverage is constrained to lie in the closed unit interval. Size, profitability, tangibility, and the market-to-book ratio are winsorized at the 1st and 99th percentiles. The construction of each variable is as follows:

Leverage = (Short Term Debt [34] + Long Term Debt [9]) / Book Assets [6]
Total Debt = Short Term Debt + Long Term Debt
Size = ln(Book Assets [6])
Profitability = Operating Income before Depreciation [13] / Book Assets [6]
Tangibility = PPE [13] / Book Assets [6]
Market Equity = Share Price [199] * Shares Outstanding [54]
Market-to-Book = (Market Equity + Total Debt + Pref. Stock Liq. Value [10] - Def. Taxes [35]) / Book Assets [6]

Appendix B
Derivations for Chapter 2

B.1 Correlation between a Regression's Dependent Variable and the Estimated Residuals

In a standard linear model of the form y = xβ + u, where y is the dependent variable (here, leverage), x is a matrix of regressors, and β is the coefficient vector, the correlation between the regressand y and the estimated residual û is given by:

Corr(y, \hat{u}) = \sqrt{1 - R^2} \in (0, 1]    (B.1)

where R² is the coefficient of determination, or the goodness of fit.

Proof. Start with the definition:

Corr(y, \hat{u} \mid x) = \frac{Cov(y, \hat{u} \mid x)}{\sigma_{y|x}\, \sigma_{\hat{u}|x}} = \frac{E(y\hat{u} \mid x) - E(y \mid x) E(\hat{u} \mid x)}{\sigma_{y|x}\, \sigma_{\hat{u}|x}}    (B.2)

where σ_{y|x} is the standard deviation of the dependent variable y and σ_{û|x} is the standard deviation of the estimated residuals û. By the zero conditional mean error assumption E(û|x) = 0, which eliminates the second term in the numerator. Substitute the identity û = y - ŷ, where ŷ is the fitted value, into the first term to obtain:

Corr(y, \hat{u}) = \frac{E(y^2) - E(y\hat{y})}{\sigma_y \sigma_{\hat{u}}}    (B.3)

All moments are still conditional on the regressor x, but to avoid cluttered notation I omit the "|x". From the definition of the variance, E(y²) = σ²_y + E(y)², so

Corr(y, \hat{u}) = \frac{\sigma_y^2 + E(y)^2 - E(y\hat{y})}{\sigma_y \sigma_{\hat{u}}}    (B.4)

Next, decompose E(yŷ):

E(y\hat{y}) = Cov(y, \hat{y}) + E(y)E(\hat{y}) = Cov(\hat{y} + \hat{u}, \hat{y}) + E(y)E(\hat{y}) = Cov(\hat{y}, \hat{y}) + Cov(\hat{u}, \hat{y}) + E(y)^2 = \sigma_{\hat{y}}^2 + E(y)^2    (B.5)

since E(y) = E(ŷ) and Cov(û, ŷ) = 0, again as a consequence of the zero conditional mean error assumption. Substitute (B.5) into (B.4) and simplify:

Corr(y, \hat{u}) = \frac{\sigma_y^2 - \sigma_{\hat{y}}^2 + E(y)^2 - E(y)^2}{\sigma_y \sigma_{\hat{u}}} = \frac{\sigma_y^2 - \sigma_{\hat{y}}^2}{\sigma_y \sigma_{\hat{u}}}    (B.6)

Since σ_û = \sqrt{\sigma_y^2 - \sigma_{\hat{y}}^2}, it follows that

Corr(y, \hat{u}) = \frac{\sigma_{\hat{u}}}{\sigma_y}    (B.7)

Finally, note that the regression's R² is defined as:

R^2 = \frac{ESS}{TSS} = 1 - \frac{RSS}{TSS} = 1 - \frac{\sigma_{\hat{u}}^2}{\sigma_y^2}    (B.8)

where ESS is the explained sum of squares, TSS is the total sum of squares, and RSS is the residual sum of squares. Substituting (B.8) into (B.7) gives the result.

From equation (B.6), it follows that the correlation cannot be negative: (σ²_y - σ²_ŷ)/(σ_y σ_û) ≥ 0 since σ²_y ≥ σ²_ŷ. This is intuitively sensible: the variance of the dependent variable cannot be less than the variance of the fitted values.
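The following quick simulation illustrates result (B.1) numerically. The data-generating process and coefficient values are arbitrary placeholders; the point is only that the sample correlation between y and the OLS residuals matches √(1 - R²).

```python
import numpy as np

# Numerical check of (B.1): Corr(y, u_hat) = sqrt(1 - R^2) in an OLS regression.
rng = np.random.default_rng(3)
n = 100_000
x = rng.normal(size=n)
y = 0.3 + 0.5 * x + rng.normal(scale=0.8, size=n)   # arbitrary linear model with noise

X = np.column_stack([np.ones(n), x])
b, *_ = np.linalg.lstsq(X, y, rcond=None)
u_hat = y - X @ b                                   # estimated residuals

r2 = 1 - u_hat.var() / y.var()                      # R^2 as in (B.8)
print(np.corrcoef(y, u_hat)[0, 1], np.sqrt(1 - r2)) # the two numbers should agree
```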
B.2 Attenuation Bias

Intermediate steps:

\hat{\beta}^* = \frac{Cov(x^*_{it}, lev_{it})}{\sigma^2_{x^*_{it}}}
= \frac{E[(x_{it} + \eta_{it})(\beta x_{it} + u_{it})] - E(x_{it} + \eta_{it})\, E(\beta x_{it} + u_{it})}{E[(x_{it} + \eta_{it})^2] - E(x_{it} + \eta_{it})^2}
= \frac{\beta E(x_{it}^2) - \beta E(x_{it})^2}{E(x_{it}^2) + E(\eta_{it}^2) - E(x_{it})^2}
= \beta \frac{\sigma^2_{x_{it}}}{\sigma^2_{x_{it}} + \sigma^2_{\eta_{it}}}    (B.9)

B.3 Conditional Expectation of Leverage Under Measurement Error

We want to compute expected portfolio leverage, conditional on sorting on the mismeasured regression residual:

E[lev_{it+1} \mid \hat{u}^*_{it}] = \beta\phi\, E[x_{it} \mid \hat{u}^*_{it}] + \beta\, E[\epsilon_{it+1} \mid \hat{u}^*_{it}] + E[u_{it+1} \mid \hat{u}^*_{it}]    (B.10)

The second and third expectations on the RHS are equal to zero. E[ε_{it+1} | û*_{it}] = 0 since next period's innovation in the explanatory variable is independent of this year's estimated residual. Similarly, next period's true residual in the leverage regression is independent of this period's estimated residual, so E[u_{it+1} | û*_{it}] = 0.

The first expectation on the RHS, however, is not equal to 0. The residual û*_{it} contains information about the true x_{it}, so E(x_{it} | û*_{it}) ≠ E(x_{it}). To see this, assume that x_{it} and û*_{it} are normally distributed random variables. Start with a scalar version of the conditional expectation of multivariate normal random variables (let x_1, ..., x_N be multivariate normal; collect (x_1 ... x_m)' in a vector x_a and (x_{m+1} ... x_N)' in a vector x_b, 1 ≤ m ≤ N - 1; stack the vectors with mean (μ_a, μ_b)' and covariance matrix Σ = [Σ_a Σ_ab; Σ_ba Σ_b]; then E(x_a | x_b) = μ_a + Σ_ab Σ_b^{-1}(x_b - μ_b), where Σ_ab Σ_b^{-1} can be interpreted as the coefficients of a regression of x_a on x_b; see e.g. Greene (2003)):

E(x_{it} \mid \hat{u}^*_{it}) = E(x_{it}) + \frac{Cov(x_{it}, \hat{u}^*_{it})}{Var(\hat{u}^*_{it})}\, [\hat{u}^*_{it} - E(\hat{u}^*_{it})]    (B.11)

Now express û*_{it} as û*_{it} = lev_{it} - β̂* x*_{it} = β x_{it} + u_{it} - β̂*(x_{it} + η_{it}) = (β - β̂*) x_{it} - β̂* η_{it} + u_{it} and substitute:

E(x_{it} \mid \hat{u}^*_{it})
= E(x_{it}) + \frac{E[(\beta - \hat{\beta}^*) x_{it}^2 - \hat{\beta}^* \eta_{it} x_{it} + u_{it} x_{it}] - E(x_{it}) E[(\beta - \hat{\beta}^*) x_{it}]}{E[(\beta - \hat{\beta}^*)^2 x_{it}^2 + (\hat{\beta}^*)^2 \eta_{it}^2 + u_{it}^2] - [(\beta - \hat{\beta}^*) E(x_{it})]^2}\, \hat{u}^*_{it}
= E(x_{it}) + \frac{(\beta - \hat{\beta}^*)[E(x_{it}^2) - E(x_{it})^2]}{(\beta - \hat{\beta}^*)^2 [E(x_{it}^2) - E(x_{it})^2] + (\hat{\beta}^*)^2 E(\eta_{it}^2) + E(u_{it}^2)}\, \hat{u}^*_{it}
= E(x_{it}) + \frac{(\beta - \hat{\beta}^*) \sigma^2_{x_{it}}}{(\beta - \hat{\beta}^*)^2 \sigma^2_{x_{it}} + (\hat{\beta}^*)^2 \sigma^2_{\eta_{it}} + \sigma^2_{u_{it}}}\, \hat{u}^*_{it}    (B.12)

Expanding the quadratic in the denominator and substituting β̂* = β σ²_x / (σ²_x + σ²_η) gives

E(x_{it} \mid \hat{u}^*_{it})
= E(x_{it}) + \frac{(\beta - \hat{\beta}^*) \sigma^2_{x_{it}}}{(\beta^2 - 2\beta\hat{\beta}^* + (\hat{\beta}^*)^2) \sigma^2_{x_{it}} + (\hat{\beta}^*)^2 \sigma^2_{\eta_{it}} + \sigma^2_{u_{it}}}\, \hat{u}^*_{it}
= E(x_{it}) + \frac{(\beta - \hat{\beta}^*) \sigma^2_{x_{it}}}{\beta(\beta - 2\hat{\beta}^*) \sigma^2_{x_{it}} + \hat{\beta}^* \beta \frac{\sigma^2_{x_{it}}}{\sigma^2_{x_{it}} + \sigma^2_{\eta_{it}}} (\sigma^2_{x_{it}} + \sigma^2_{\eta_{it}}) + \sigma^2_{u_{it}}}\, \hat{u}^*_{it}
= E(x_{it}) + \frac{(\beta - \hat{\beta}^*) \sigma^2_{x_{it}}}{\beta(\beta - \hat{\beta}^*) \sigma^2_{x_{it}} + \sigma^2_{u_{it}}}\, \hat{u}^*_{it}
= E(x_{it}) + b \cdot \hat{u}^*_{it}, \qquad b = \left( \beta + \frac{\sigma^2_{u_{it}}(\sigma^2_{x_{it}} + \sigma^2_{\eta_{it}})}{\beta\, \sigma^2_{x_{it}} \sigma^2_{\eta_{it}}} \right)^{-1}    (B.13)

In my setup, E[x_{it}] = 0, so the expectation of x_{it} conditional on the regression residual û*_{it} is

E(x_{it} \mid \hat{u}^*_{it}) = b \cdot \hat{u}^*_{it} = \left[ \beta + \frac{\sigma^2_{u_{it}}}{\beta} \left( \frac{1}{\sigma^2_{\eta_{it}}} + \frac{1}{\sigma^2_{x_{it}}} \right) \right]^{-1} \hat{u}^*_{it}    (B.14)

Finally, substitute (B.14) into (2.24) to obtain an expression for the conditional expectation of next period's leverage:

E(lev_{it+1} \mid \hat{u}^*_{it}) = \beta\phi\, b \cdot \hat{u}^*_{it} = \underbrace{\phi \left[ 1 + \frac{\sigma^2_{u_{it}}}{\beta^2} \left( \frac{1}{\sigma^2_{\eta_{it}}} + \frac{1}{\sigma^2_{x_{it}}} \right) \right]^{-1}}_{=\, c\, \geq\, 0} \hat{u}^*_{it}    (B.15)

Appendix C
Derivations for Chapter 3

Implied Target Leverage Derivations

Note that firm and time subscripts are suppressed in the following proofs.

C.1 Derivation of α_0

Begin with the relationship between the mismeasured target and the true target:

\hat{lev}^* = \alpha_0 + \alpha_1 \hat{lev} + e    (C.1)

Taking expectations:

E(\hat{lev}^*) = \alpha_0 + \alpha_1 E(\hat{lev}) + 0    (C.2)

Mismeasured target and true target are equal, on average, i.e. E(l̂ev*) = E(l̂ev).
To see this, start with the regression specification where the explanatory variable x* is measured with error (* denotes that a variable or parameter is affected by measurement error):

lev = \beta_0^* + \beta_1^* x^* + \epsilon^*    (C.3)

Taking expectations:

E(lev) = \beta_0^* + \beta_1 \frac{\sigma_x^2}{\sigma_x^2 + \sigma_\eta^2} E(x + \eta) + E(\epsilon^*) = \beta_0^* + \frac{\sigma_x^2}{\sigma_x^2 + \sigma_\eta^2} E(lev)    (C.4)

Therefore,

\beta_0^* = E(lev) \left( 1 - \frac{\sigma_x^2}{\sigma_x^2 + \sigma_\eta^2} \right)    (C.5)

Mismeasured target leverage is given by

\hat{lev}^* = \beta_0^* + \beta_1^* x^*    (C.6)

Substituting for β*_0 and β*_1 shows that the mismeasured target equals the true target (and hence actual leverage), on average:

E(\hat{lev}^*) = E(lev)\left( 1 - \frac{\sigma_x^2}{\sigma_x^2 + \sigma_\eta^2} \right) + \frac{\sigma_x^2}{\sigma_x^2 + \sigma_\eta^2} \beta_1 E(x + \eta) = E(lev)\left( 1 - \frac{\sigma_x^2}{\sigma_x^2 + \sigma_\eta^2} \right) + \frac{\sigma_x^2}{\sigma_x^2 + \sigma_\eta^2} E(lev) = E(lev)    (C.7)

Substituting (C.7) into (C.2) then yields an expression for α_0:

E(\hat{lev}^*) = E(\hat{lev}) = \alpha_0 + \alpha_1 E(\hat{lev}) + 0 \quad \Rightarrow \quad \alpha_0 = (1 - \alpha_1) E(\hat{lev})    (C.8)

C.2 Derivation of α_1

Since (C.1) above is a regression equation,

\alpha_1 = \frac{Cov(\hat{lev}, \hat{lev}^*)}{Var(\hat{lev})}    (C.9)

Expanding the numerator:

Cov(\hat{lev}, \hat{lev}^*) = Cov\big(\beta_1 x,\ \beta_0^* + \beta_1^*(x + \eta)\big)
= Cov\!\left( \beta_1 x,\ E(lev)\left(1 - \frac{\sigma_x^2}{\sigma_x^2 + \sigma_\eta^2}\right) + \beta_1 \frac{\sigma_x^2}{\sigma_x^2 + \sigma_\eta^2}(x + \eta) \right)
= Cov\!\left( \beta_1 x,\ \beta_1 \frac{\sigma_x^2}{\sigma_x^2 + \sigma_\eta^2} x \right) + \underbrace{Cov\!\left( \beta_1 x,\ \beta_1 \frac{\sigma_x^2}{\sigma_x^2 + \sigma_\eta^2} \eta \right)}_{=0}
= \beta_1^2 \frac{\sigma_x^2}{\sigma_x^2 + \sigma_\eta^2} Var(x)    (C.10)

Substituting:

\alpha_1 = \frac{Cov(\hat{lev}, \hat{lev}^*)}{Var(\hat{lev})} = \frac{\beta_1^2 \frac{\sigma_x^2}{\sigma_x^2 + \sigma_\eta^2} Var(x)}{\beta_1^2 Var(x)} = \frac{\sigma_x^2}{\sigma_x^2 + \sigma_\eta^2}    (C.11)

Finally, let a = σ²_η/σ²_x and substitute into (C.11):

\alpha_1 = \frac{1}{1 + a}    (C.12)

C.3 Derivation of σ²_e

Start again with (C.1), and compute the variance:

Var(\hat{lev}^*) = \alpha_1^2 Var(\hat{lev}) + \sigma_e^2    (C.13)

We can compute Var(l̂ev) from calibrating the true target to resemble the leverage-sorted portfolios. To compute Var(l̂ev*), start again with

\hat{lev}^* = \beta_0^* + \beta_1^* x^* = \beta_0^* + \beta_1 \frac{\sigma_x^2}{\sigma_x^2 + \sigma_\eta^2} (x + \eta)    (C.14)

Then compute the variance:

Var(\hat{lev}^*) = Var\!\left( \beta_1 \frac{\sigma_x^2}{\sigma_x^2 + \sigma_\eta^2} (x + \eta) \right) = \left( \beta_1 \frac{\sigma_x^2}{\sigma_x^2 + \sigma_\eta^2} \right)^2 (\sigma_x^2 + \sigma_\eta^2) = \beta_1^2 \frac{(\sigma_x^2)^2}{\sigma_x^2 + \sigma_\eta^2} = Var(\hat{lev}) \frac{\sigma_x^2}{\sigma_x^2 + \sigma_\eta^2} \leq Var(\hat{lev})    (C.15)

Substitute (C.15) into (C.13) and solve for σ²_e:

Var(\hat{lev}) \frac{\sigma_x^2}{\sigma_x^2 + \sigma_\eta^2} = \alpha_1^2 Var(\hat{lev}) + \sigma_e^2    (C.16)

Again express the measurement error as a fraction of the variability of the true x: σ²_η = a σ²_x. We can now solve for the implied variance of the residual e as a function of the amount of measurement error present:

\sigma_e^2 = Var(\hat{lev}) \left[ \frac{\sigma_x^2}{\sigma_x^2 + \sigma_\eta^2} - \left( \frac{\sigma_x^2}{\sigma_x^2 + \sigma_\eta^2} \right)^2 \right] = Var(\hat{lev}) \left[ \frac{1}{1 + a} - \frac{1}{(1 + a)^2} \right] = Var(\hat{lev}) \frac{a}{(1 + a)^2}    (C.17)
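The results (C.12) and (C.17) can be verified by simulation. The sketch below regresses a mismeasured target on the true target in a large cross section; the values of β_1, σ_x and a are arbitrary placeholders, and the attenuated slope on x* is imposed at its probability limit rather than estimated.

```python
import numpy as np

# Numerical check of (C.12) and (C.17): alpha1 = 1/(1+a) and
# residual variance = Var(target) * a / (1+a)^2.
rng = np.random.default_rng(5)
n, beta1, sigma_x, a = 1_000_000, 0.4, 1.0, 0.7
sigma_eta = np.sqrt(a) * sigma_x                 # a = sigma_eta^2 / sigma_x^2

x = rng.normal(0, sigma_x, n)
target = beta1 * x                               # true target: lev_hat = beta1 * x
beta1_star = beta1 / (1 + a)                     # attenuated slope on the mismeasured x*
target_mis = beta1_star * (x + rng.normal(0, sigma_eta, n))

alpha1_sim = np.cov(target, target_mis)[0, 1] / target.var()
alpha0_sim = target_mis.mean() - alpha1_sim * target.mean()
resid = target_mis - alpha0_sim - alpha1_sim * target

print(alpha1_sim, 1 / (1 + a))                          # eq. (C.12)
print(resid.var(), target.var() * a / (1 + a)**2)       # eq. (C.17)
```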
