MONEY, OUTPUT AND THE UNITED STATES' INTER-WAR FINANCIAL CRISIS: AN EMPIRICAL ANALYSIS

by PATRICK JAMES COE
B.A., University of Essex, 1990
M.A., University of Essex, 1992

A THESIS SUBMITTED IN PARTIAL FULFILMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY in THE FACULTY OF GRADUATE STUDIES, Department of Economics

We accept this thesis as conforming to the required standard

THE UNIVERSITY OF BRITISH COLUMBIA
March 1998
© Patrick James Coe, 1998

In presenting this thesis in partial fulfilment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the head of my department or by his or her representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

Department of Economics, The University of British Columbia, Vancouver, Canada

ABSTRACT

In the first essay of this thesis I test long-run monetary neutrality (LRMN) using the long-horizon approach of Fisher and Seater [18]. Using United States' data on M2 and Net National Product they reject LRMN for the sample 1869-1975. However, I show that this result is not robust to the use of the monetary base instead of M2. Nor is it robust to the use of United Kingdom data instead of United States data. These results are consistent with the interpretation that Fisher and Seater's result is a consequence of the financial crisis of the 1930s causing inside money and output to move together. Using a Monte Carlo study I show that Fisher and Seater's rejection of LRMN can also be accounted for by size distortion in their test statistic. This study also shows that at longer horizons, power is very low.
In the second essay I consider the financial crisis of the 1930s in the United States as a change in regime. Using a bivariate version of Hamilton's [24] Markov switching model I estimate the probability that the underlying regime was one of financial crisis at each point in time. I argue that there was a shift to the financial crisis regime following the first banking crisis of 1930. The crucial reform in ending the financial crisis appears to have been the introduction of the Federal Deposit Insurance Corporation in January 1934. I also find that the time series of probabilities over the state of the financial sector contain marginal explanatory power for output fluctuations in the inter-war period. A problem when testing the null hypothesis of a linear model against the alternative of the Markov switching model is the presence of nuisance parameters. Consequently, the likelihood ratio test statistic does not possess the standard chi-squared distribution. In my third essay I perform a Monte Carlo experiment to explore the small sample properties of the pseudo likelihood ratio test statistic under these non-standard conditions. I find no evidence of size distortion. However, I do find that size adjusted power is very poor in small samples.
TABLE OF CONTENTS

Abstract ii
Table of Contents iv
List of Tables v
List of Figures v
Acknowledgement vi
Chapter 1 Introduction 1
Chapter 2 Long-run Monetary Neutrality in the United States and United Kingdom, 1869-1995 7
2.1 Introduction 7
2.2 Methodology 9
2.3 Time Series Properties of the Data 14
2.4 Long-run Neutrality in the United States 15
2.5 Long-run Neutrality in the United Kingdom 19
2.6 Small Sample Considerations 21
2.7 Conclusions 26
Chapter 3 Financial Crisis and Financial Reform During the Great Depression 37
3.1 Introduction 37
3.2 Financial Crisis and the Great Depression 39
3.3 Banking and Monetary Reform in the New Deal 42
3.3.1 Banking Reform 43
3.3.2 Monetary Reform 45
3.4 A Model of Time Series Subject to a Change in Regime 47
3.5 Estimation Results 51
3.6 Financial Crisis and Output Growth 57
3.7 Conclusions 59
Chapter 4 The Markov Switching Model and Linear Time Series 66
4.1 Introduction 66
4.2 A Two State and a One State Model of GDP 68
4.3 Estimates Using Canadian GDP 73
4.4 Monte Carlo Experiment 75
4.4.1 Distribution of Test Statistic 77
4.4.2 Predictive Accuracy 79
4.5 Conclusions 81
Bibliography 89
Appendix 1 Data for Chapter Two 93
Appendix 2 Estimation of VAR Under the Null and the Alternative 94
Appendix 3 Estimated Conditional Probabilities 1919Q3 to 1941Q4 95
Appendix 4 Filter for Two State Model of Chapter Four 98

LIST OF TABLES

Table 2.1 Unit Root Statistics 28
Table 2.2 Results for Monte Carlo Simulation 30
Table 2.3 Monte Carlo Results When the Null Hypothesis is not Rejected 31
Table 3.1 Estimates of Markov Switching Model and Linear Model 61
Table 3.2 LM Tests for Conditional Heteroskedasticity of Residuals 62
Table 3.3 Results for Monte Carlo Experiment for Bias in 2SLS Estimates 63
Table 4.1 Markov Switching and AR(1) Models 83
Table 4.2 Small Sample Properties of Pseudo Likelihood Ratio Test Statistic 84
Table 4.3 Relative Forecasting Performance: True Model has One State 85
Table 4.4 Relative Forecasting Performance: True Model has Two States 86

LIST OF FIGURES

Figure 2.1 United States Long-run Derivatives: Money Stock 32
Figure 2.2 United States Long-run Derivatives: Monetary Base 33
Figure 2.3 United States Growth Rates and Yield Spread 34
Figure 2.4 United Kingdom Long-run Derivatives 35
Figure 2.5 Fisher and Seater's Long-run Derivatives 36
Figure 3.1 Real Deposits and Yield Spread 1919Q1 to 1941Q4 64
Figure 3.2 Conditional Probabilities 1919Q3 to 1941Q4 65
Figure 4.1 Canadian Real Per Capita GDP 1960 to 1995 87
Figure 4.2 Mean and Standard Deviations of Forecasts (T=100) 88

ACKNOWLEDGEMENT

I would like to thank my supervisory committee, Anji Redish, Jim Nason, Bob Allen and Mick Devereux, for the support and guidance they have given me while writing this thesis. While at the University of British Columbia I have also benefited from the advice of John Cragg and Ron Shearer. Some of the work on this thesis was undertaken while I was at the University of Essex, where Tim Hatton and Gordon Kemp provided helpful comments on parts of this thesis. I would also like to thank my wife and daughter, Pam and Katie, and my parents, John and Sheila, for the encouragement and support they have given me over the years. Financial support from the Albert Whitley Memorial Scholarship, a UBC Graduate Fellowship and the Anthony Scott Fellowship is gratefully acknowledged.

Chapter 1 Introduction

The severity of the Great Depression in the United States, relative to other periods of economic contraction, has sometimes been attributed to the financial crisis experienced by the United States in the early 1930s. It has long been argued that the financial crisis played a role through monetary channels; see, for example, Friedman and Schwartz [24]. However, it has also been argued that the financial crisis played a role through non-monetary channels; see Bernanke [3].
Although many economists have some sympathy for the latter view, the empirical support for this argument is mixed at best. This thesis is an empirical study to help clarify the role of the financial crisis experienced by the United States economy in the 1930s. This thesis consists of three related essays in empirical macroeconomics. In the first, I argue that the results of long-run monetary neutrality (LRMN) tests which include United States' data from the 1930s can be influenced by the financial crisis. I also undertake a Monte Carlo study into the small sample properties of the test statistic I employ in this essay. In the second essay, I model the financial crisis as a change in regime. I argue that the state of the financial sector contains additional explanatory power for United States' output fluctuations in the inter-war period. I also argue that the establishment of the Federal Deposit Insurance Corporation was the reform that ended the financial crisis. This is consistent with a claim made by Friedman and Schwartz. One problem in testing between a model incorporating changes in regime and a linear model is the presence of nuisance parameters. Therefore, in my final essay, I explore these problems in a Monte Carlo experiment. In a recent paper, Fisher and Seater [22] find that long-run monetary neutrality does not hold for the United States over the period 1869-1975. They test the null hypothesis that a change in the current money stock, as measured by M2, has no effect on future real income at horizons of one to thirty years. They find that they can reject this null hypothesis in favor of the alternative that the effect is non-zero for almost all horizons. This leads them to reject LRMN. In my first essay, I examine the robustness of this result to a change in the monetary aggregate and a change in country. Their result is not robust to either change.
Using United States data on the monetary base I am unable to reject the long-run neutrality of money. Using United Kingdom data on the monetary base and on a broad monetary aggregate I am again unable to reject LRMN. The fact that the result of non-neutrality in the United States is critically sensitive to the monetary aggregate, and that the result is specific to the United States, suggests that it reflects the financial crisis there. Boschen and Otrok [7] show that LRMN holds for M2 in the United States both before and after the 1930s. They suggest that Fisher and Seater's result is a function of either the Great Depression or the financial crisis of the 1930s. My results suggest that it is the latter of these. That is, the rejection of LRMN found by Fisher and Seater is a result of the 1930s financial crisis in the United States. As Friedman and Schwartz point out, the United States monetary base was largely unaffected by the financial crisis. They argue that the decline in the money stock came through the money multiplier as the public substituted away from deposits and banks increased reserve ratios. According to Bernanke, the financial crisis also had real effects as the breakdown in credit intermediation led to a decline in real income. As a result, during the 1930s, changes in the broad money stock and real income became highly correlated. The fact that I do not reject LRMN using the monetary base is consistent with Boschen and Otrok's second suggestion. My results for the UK are also consistent with their explanation. Although there was a recession in the United Kingdom, there was no financial crisis there, and I cannot reject LRMN for either the broad or the narrow monetary aggregate. In the second half of this essay, I look at the small sample properties of the test statistic.
The test developed by Fisher and Seater is similar to the long-horizon regressions which have proved relatively successful in detecting predictable components in stock prices, interest rates and inflation.¹ Campbell [10] suggests that this relative success is either due to power advantages or to greater size distortion. Since size distortion is a potential explanation for Fisher and Seater's rejection of LRMN, I perform a Monte Carlo experiment to investigate the size and power properties of Fisher and Seater's test statistic. I find that there are serious size distortions at the longer horizons. In fact, this size distortion is sufficient to account for Fisher and Seater's rejection of LRMN. Using empirical critical values, generated by Monte Carlo simulation, I am unable to reject LRMN using their dataset and their sample period. On the other hand, I also find that size adjusted power is poor at long horizons.²

The theme of financial crisis is continued in the second essay. During the Great Depression the United States experienced several waves of bank failure. As mentioned above, financial crisis can have both real and monetary effects. After Roosevelt took office in March 1933 he undertook a number of reforms intended to stabilize the financial sector, and the economy in general. Friedman and Schwartz argue that the crucial reform that delivered stability to the financial sector was the introduction of the Federal Deposit Insurance Corporation in January 1934. On the other hand, Wigmore [56] argues that it was the decision to abandon the gold standard in March 1933 that ended the financial crisis.

In my second essay, I model the financial sector as being in one of two states, financial crisis or financial calm. This underlying state of the financial sector is unobservable. However, by using Hamilton's [28] Markov switching model and observable data, I am able to estimate the conditional probability of the financial crisis state at each point in time. The observable data I use are the growth rate of real deposits and a yield spread. The latter proxies the cost of credit intermediation. My estimated conditional probabilities suggest that the financial crisis began in late 1930, following the First Banking Crisis, and ended in early 1934, following the introduction of federal deposit insurance. This is consistent with Friedman and Schwartz's view, but inconsistent with Wigmore's view. Testing between Hamilton's Markov switching model and a linear model is problematic due to the presence of nuisance parameters. I tackle this problem in two ways. Firstly, I generate an empirical distribution for the pseudo likelihood ratio test statistic under the null hypothesis that the data were generated by the linear model. I then compare my pseudo likelihood ratio test statistic from my sample data to this distribution. Using the empirical distribution suggests that the null of a linear model can be rejected in favor of the Markov switching model at the 1% significance level. The second way I tackle this problem is to follow the suggestion of Hamilton [27].

¹ For stock prices, see, for example, Fama and French [20] [21]; for interest rates, see Campbell and Shiller [12]; for inflation see Mishkin [41] [40]. This success is relative to short-horizon regressions. Consider the case of stock returns: a long-horizon regression requires measuring the return on the stock over a period that is longer than the sampling frequency of the data.
² Kilian [33] also finds a lack of high power at long horizons when looking at exchange rate predictability for conventional long-horizon tests. He suggests that a bootstrap test has smaller size distortions than the conventional long-horizon regression tests.
I conduct a Lagrange multiplier test on the one state model to see if a two state model is required. Again the evidence supports the use of a two state model. In this essay, I also ask whether treating the financial crisis as a change in regime provides additional explanatory power for output fluctuations during the inter-war period. I add the conditional probability of financial crisis to a reduced form equation for the growth rate of real output. This equation also contains current and lagged values of the growth rate of the money supply, the linear measures of financial crisis mentioned above, and lagged values of real output. F-tests based on two-stage least squares estimation show that the null hypothesis that these conditional probabilities contain no additional explanatory power can be rejected at the 5% significance level. Recently non-linear models such as Hamilton's have been used to model the business cycle. However, as noted above, testing between the Markov switching model and a linear model is problematic. Under the null hypothesis that the data are generated by a single-state linear model, the parameters which define the second state are undefined. As a result, the information matrix becomes singular, and the likelihood ratio test statistic does not possess an asymptotic χ² distribution. Therefore, in my final essay I look at the problem of testing between the non-linear model and a linear model. I also assess the extent of bias induced from fitting a one state model when the true process has two states, and vice versa. In both cases model mis-specification is a source of bias. In the latter case there is an additional source of bias due to Jensen's inequality. I explore these issues using artificial data series for the growth rate of quarterly Canadian real per capita GDP. These artificial series are generated using parameter estimates from a linear AR(1) model and Hamilton's Markov switching model using data from 1960 to 1995.
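The generation of two-state artificial series of this kind can be sketched with a short simulation. The sketch below is a minimal illustration of a Hamilton-type two-state Markov switching AR(1); the function name, the AR(1)-in-deviations form, and the parameter values are illustrative assumptions, not the estimates obtained from the Canadian data.

```python
import numpy as np

def simulate_switching_ar1(T, p00, p11, mu, phi, sigma, seed=0):
    """Simulate T growth-rate observations from a two-state Markov
    switching AR(1): the mean mu[s_t] switches with a latent state s_t
    that stays in state 0 with probability p00 and in state 1 with p11."""
    rng = np.random.default_rng(seed)
    s = np.zeros(T, dtype=int)
    y = np.zeros(T)
    y[0] = mu[0]
    for t in range(1, T):
        stay = p00 if s[t - 1] == 0 else p11
        s[t] = s[t - 1] if rng.random() < stay else 1 - s[t - 1]
        # AR(1) in deviations from the state-dependent mean
        y[t] = mu[s[t]] + phi * (y[t - 1] - mu[s[t - 1]]) + rng.normal(0.0, sigma)
    return y, s

# Illustrative parameter values, NOT the estimates used in the thesis
y, s = simulate_switching_ar1(T=200, p00=0.95, p11=0.80,
                              mu=(0.8, -0.5), phi=0.3, sigma=1.0)
```

In a Monte Carlo of this type, each replication draws a fresh series like `y` above and the model of interest is re-estimated on it.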
I generate artificial samples of sizes 100, 200, 300 and 400. For all the sample sizes I consider, I find no evidence of size distortion in the pseudo likelihood ratio test statistic. However, for all sample sizes with the exception of 400, I find that the power of this test statistic is poor. Currently, most quarterly macroeconomic time series consist of between 100 and 200 observations. For a sample size of 100, size adjusted power is approximately 0.41, and for a sample size of 200 it is approximately 0.79. It is only when the sample size reaches 400, approximately double the number of quarterly observations currently available for most macroeconomic aggregates, that size adjusted power reaches 0.95. I evaluate the overall effect of bias induced from estimating the mis-specified model on the basis of out-of-sample predictive accuracy. My results suggest that the extent of bias induced from estimating the Markov switching model when the true process has only one state is minimal. The loss in forecast accuracy is typically less than 1%. This is also true, though to a slightly lesser extent, for the estimation of a single state model when the true process is the Markov switching model. Here the loss in forecast accuracy is typically less than 3%.

Chapter 2 Long-run Monetary Neutrality in the United States and the United Kingdom, 1869-1995

2.1 Introduction

Classical macroeconomics asserts that permanent movements in nominal-side economic variables, such as the money stock and the rate of inflation, will have no long-run effect on real-side economic variables, such as real income and the real interest rate. Economists refer to these conjectures as the long-run neutrality propositions. These neutrality propositions have important implications for policy-makers. For example, according to LRMN, a permanent change in the level of the money stock will have no long-run effect on real income.
However, if the world is non-classical in the long-run, then changes in the money stock do not produce proportionate changes in the price level. Instead these changes lead to changes in both the price level and real income. In this world, the policymaker can use monetary policy to alter the long-run trend of real income. In this chapter I investigate the long-run neutrality of money using over a hundred years of data for both the United States and the United Kingdom. I test the proposition that permanent changes in the level of the money stock have no permanent effect on real income. In their recent paper, Fisher and Seater [22] estimate long-run derivatives of real income with respect to the money stock using annual United States data on M2 and Net National Product from 1869 to 1975. They find that they can reject the hypothesis that the derivatives of real income with respect to the money stock are equal to zero. Hence, using their dataset, they reject LRMN. Boschen and Otrok [7] argue that this result is driven by observations from the Great Depression. Conducting similar tests, but omitting the observations of the 1930s, they find that LRMN cannot be rejected for the United States for most of the period since 1869.¹ One interpretation of this result is that the Great Depression was a unique event in US monetary history. Consequently, one might conclude that any long-run monetary policy implications drawn from this episode would be irrelevant. I offer evidence on the robustness of Fisher and Seater's result in two ways. To begin, I re-estimate their long-run derivative using the monetary base, M0, as the monetary aggregate instead of M2. I find that their result is not robust to this change. Next, I re-estimate their long-run derivative using data from the United Kingdom over a similar sample period. Again their result is not robust to the change.
This is true regardless of whether I use the monetary base or a broad measure of the money stock (M3).² I argue that these two results, and that of Boschen and Otrok, are consistent with the explanation that Fisher and Seater's rejection of long-run neutrality is due to the existence of an omitted variable, that omitted variable being the financial crisis experienced by the United States in the 1930s. Campbell [10] suggests that long-horizon tests of predictability, such as those used by Fisher and Seater, Boschen and Otrok, and in this chapter, may have greater power than their short-horizon counterparts or greater size distortions. Therefore, I also perform a small Monte Carlo experiment to investigate the small sample properties of Fisher and Seater's long-run neutrality test. I find that there is considerable size distortion in the test statistic. This offers an alternative explanation for Fisher and Seater's rejection of long-run neutrality. I use empirical critical values rather than asymptotic critical values to calculate the confidence bands around their long-run derivatives. Using these empirical confidence bands I am unable to reject long-run neutrality using their dataset. I also find that in addition to size distortion, Fisher and Seater's test has low power at the longer horizons. This suggests the test contains very little economic information.

¹ They re-estimate the long-run derivative using the same dataset, but split into two sub-periods. They find that long-run neutrality holds for both the 1869-1929 and the 1940-92 sub-periods.
² Haug and Lucas [32] show that, in the absence of a financial crisis, LRMN also holds for the broad monetary aggregate in Canada. They use Canadian data on M2 and GNP for the period 1914 to 1994.
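The logic of generating empirical critical values under the null can be sketched as follows. This is a simplified stand-in for the actual experiment: it assumes independent random walks for log money and log income under the neutrality null and uses plain OLS t-statistics rather than Newey-West corrected ones; the function names and default sample size are illustrative.

```python
import numpy as np

def longhorizon_t(y, m, k):
    """OLS t-statistic on beta_k in the horizon-k regression
    (y_t - y_{t-k-1}) = alpha_k + beta_k (m_t - m_{t-k-1}) + e_t."""
    dy = y[k + 1:] - y[:-(k + 1)]
    dm = m[k + 1:] - m[:-(k + 1)]
    X = np.column_stack([np.ones(len(dm)), dm])
    b, *_ = np.linalg.lstsq(X, dy, rcond=None)
    e = dy - X @ b
    s2 = e @ e / (len(dy) - 2)                 # residual variance
    cov = s2 * np.linalg.inv(X.T @ X)          # conventional OLS covariance
    return b[1] / np.sqrt(cov[1, 1])

def empirical_critical_values(T=107, k=20, reps=1000, seed=0):
    """Tabulate the 2.5% and 97.5% quantiles of the t-statistic when money
    and income are independent random walks, so the neutrality null holds."""
    rng = np.random.default_rng(seed)
    stats = [longhorizon_t(np.cumsum(rng.normal(size=T)),
                           np.cumsum(rng.normal(size=T)), k)
             for _ in range(reps)]
    return np.quantile(stats, [0.025, 0.975])

lo, hi = empirical_critical_values(reps=500)
```

Comparing the sample t-statistic to `lo` and `hi`, rather than to asymptotic critical values, is the size correction described above; overlapping long-horizon observations typically push these empirical quantiles well outside the usual ±1.96.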
In section 2.2 I discuss the problem with reduced form neutrality tests identified by Sargent [49] and Lucas [36] [37], and, using the example of King and Watson [34], show that there is a straightforward test of long-run neutrality if the process generating money contains a unit root. In this section, I also set up the framework of Fisher and Seater [22]. Section 2.3 presents the results of unit root tests on the data. In section 2.4, I show that the result of Fisher and Seater is not robust to a change in the monetary aggregate. Section 2.5 presents the neutrality results for the United Kingdom. In section 2.6, I report the results from the Monte Carlo study. Finally, section 2.7 concludes.

2.2 Methodology

Consider a rational expectations model in which real income is the dependent variable and money is the independent variable. Sargent [49] and Lucas [36] [37] argue that in such a model, estimated reduced form coefficients are not solely the monetary policy parameters. Instead, these estimated coefficients are a combination of monetary policy and structural parameters. Consequently, in a rational expectations model with short-run non-neutrality of money, the use of reduced form equations to test long-run neutrality can lead to inappropriate conclusions. However, it has recently been shown that in the case of integrated variables these tests are valid under certain restrictions. These restrictions involve the order of integration of the money stock and real income processes and the endogeneity of money.³ This point, and the Lucas-Sargent critique, is illustrated by the following example from King and Watson [34]. Consider a simple macroeconomic model consisting of the following three equations:

y_t = δ(p_t − E_{t−1}p_t),  (2.1)
p_t = m_t − γy_t,  (2.2)
m_t = ρm_{t−1} + u_t.  (2.3)

Here y_t is the natural logarithm of real income, p_t is the natural logarithm of the price level, E_{t−1}p_t is its rational expectation based on the information available at time t−1, and m_t is the natural logarithm of the nominal money stock. δ, γ and ρ are fixed parameters. The mean zero random variable u_t represents a temporary, unanticipated component of monetary policy. Equation (2.1) is an aggregate supply function, equation (2.2) is a monetary equilibrium condition and equation (2.3) is a money supply rule. The model can be solved to give the reduced form for income:

y_t = λ(1 − ρL)m_t,  (2.4)

where λ = δ/(1 + γδ) and L is the lag operator.⁴ King and Watson construct this model in such a way that only the unanticipated component of monetary policy (i.e. u_t) has any effect on real income. LRMN requires that a permanent change in the level of the money stock has no permanent effect on the level of real income. However, equation (2.4) shows that a one unit increase in the money stock will have a non-zero effect on income. Suppose one were to estimate a distributed lag model based on (2.4). Typically monetary neutrality would be rejected, even though the model has been constructed such that only transitory components of the money stock can affect income.⁵ In this case, a test of long-run neutrality should be conducted by looking at the structural parameters of the model rather than the reduced form coefficients. There are two exceptions to this: when δ = 0 and when ρ = 1. The second of these cases is the more interesting. Equation (2.4) shows that when ρ = 1 an increase in the money stock will have a zero effect on real income.

³ For example see Fisher and Seater [22]; King and Watson [34]; Boschen and Mills [8].
⁴ The model is solved by substituting (2.2) into (2.1) to give y_t = δ[m_t − γy_t − E_{t−1}(m_t − γy_t)]. Then, using (2.3), E_{t−1}u_t = 0 and, from (2.1), E_{t−1}y_t = 0, gives y_t = δ[m_t − γy_t − ρm_{t−1}]. This can be rearranged to give (2.4).
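This special case can be illustrated numerically. The sketch below assumes values for δ and γ purely for illustration, simulates the money rule and the reduced form, and recovers the long-run multiplier λ(1 − ρ) from a distributed lag regression of income on money: it is non-zero for ρ < 1 (spuriously suggesting non-neutrality) but zero when ρ = 1.

```python
import numpy as np

# Assumed structural parameters (illustration only)
delta, gamma = 2.0, 0.5
lam = delta / (1.0 + gamma * delta)    # lambda in equation (2.4)

def longrun_multiplier(rho, T=500, seed=1):
    """Simulate m_t = rho*m_{t-1} + u_t and y_t = lam*(m_t - rho*m_{t-1}),
    then recover the long-run multiplier from a regression of y on m
    and its first lag (the distributed lag implied by (2.4))."""
    rng = np.random.default_rng(seed)
    u = rng.normal(size=T)
    m = np.zeros(T)
    for t in range(1, T):
        m[t] = rho * m[t - 1] + u[t]
    m_lag = np.concatenate(([0.0], m[:-1]))
    y = lam * (m - rho * m_lag)                      # reduced form (2.4)
    X = np.column_stack([np.ones(T - 1), m[1:], m[:-1]])
    b, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
    return b[1] + b[2]                               # sum of lag coefficients

print(longrun_multiplier(0.9))  # approx lam*(1 - 0.9): spurious non-neutrality
print(longrun_multiplier(1.0))  # approx 0: no long-run effect when rho = 1
```

Because y is an exact linear function of m and its lag here, the regression recovers λ and −λρ, so the sum of coefficients matches footnote 5's long-run multiplier β(1) = λ(1 − ρ).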
This implies that if the money stock process contains a unit root then there is a straightforward test of the LRMN proposition. This result is exploited by Fisher and Seater. The remainder of this section summarizes their methodology for the case where the money stock and real income are both I(1). For a fuller treatment see Fisher and Seater [22]. Consider the following stationary, unrestricted VAR, where m is the natural logarithm of the nominal money stock, y is the natural logarithm of real income and Δ = (1 − L):

a(L)Δm_t = b(L)Δy_t + u_t
d(L)Δy_t = c(L)Δm_t + v_t  (2.5)

The error terms are distributed as follows:

(u_t, v_t)′ ~ i.i.d. (0, Ω), where Ω = [σ²_u, σ_uv; σ_uv, σ²_v].  (2.6)

Fisher and Seater express the long-run derivative of y with respect to a permanent change in m as:

LRD_{y,m} = lim_{k→∞} (∂y_{t+k}/∂u_t) / (∂m_{t+k}/∂u_t),  (2.7)

which is defined when lim_{k→∞} ∂m_{t+k}/∂u_t ≠ 0, that is, when permanent changes to the money stock process exist. Here money is long-run neutral when LRD_{y,m} = 0.⁶ As suggested earlier in this section, the degree of integration of the two time series plays an important role in the interpretation of the estimates of these long-run derivatives. Firstly, consider the case where the nominal money stock is integrated of order zero. Here no permanent changes to the money stock process exist: lim_{k→∞} ∂m_{t+k}/∂u_t = 0. Therefore, the long-run derivative is undefined and so long-run neutrality cannot be addressed. Next, consider the case where the nominal money stock is I(1) but real income is I(0). Here a permanent change in the money stock cannot lead to a permanent change in real income as no permanent changes in real income exist.⁷ Finally, consider the case where both the nominal money stock and real income are I(1).

⁵ Consider the distributed lag model y_t = β(L)m_t + v_t where v_t ~ i.i.d. N(0, σ²). Estimation of this model would suggest that a one unit increase in money would lead to an increase in real income equal to the long-run multiplier, β(1) = λ(1 − ρ).
⁶ More generally, long-run neutrality implies that LRD_{y,m} = 0 when y is a real variable or the nominal interest rate. It implies that LRD_{y,m} = 1 when y is a nominal variable.
⁷ If a series is stationary around a deterministic trend it is treated as being I(0).
Now tests of long-run neutrality are valid because permanent changes to both m and y exist. Here long-run neutrality of money implies the following restriction:

LRD_{y,m} = c(1)/d(1) = 0.  (2.8)

The restriction for long-run neutrality above contains only parameters from the second equation of (2.5), which can be consistently estimated by ordinary least squares under either of two recursive identification schemes. However, as Fisher and Seater [22] point out, the individual parameters of c(L) and b(L) are not of interest. What is of interest is the long-run derivative, that is, c(1)/d(1). Therefore, I follow Fisher and Seater and allow b₀ and c₀ to be unrestricted. I use the long-run exogeneity of money as an identifying restriction. This implies

b(1) = 0,  (2.9)

which states that a permanent change in y has no long-run effect on m. This is also one of the identifying restrictions considered by King and Watson [34]. An alternative long-run identifying restriction is the long-run neutrality of money, c(1) = 0. Alternatively, short-run identifying restrictions can be used: c₀ = 0 or b₀ = 0. However, in this chapter I am interested in the robustness of Fisher and Seater's result and the performance of their test statistic in small samples. For this reason, I restrict my attention to their identifying restriction. The second identifying restriction is placed on the covariance matrix; that is, σ_uv = 0. Under these restrictions I now have a structural VAR.
Fisher and Seater show that, with b(1) = σ_uv = 0, equation (2.8) can be estimated by lim_{k→∞} β_k, where β_k comes from the estimation of the following equation:

Σ_{j=0}^{k} Δy_{t−j} = α_k + β_k Σ_{j=0}^{k} Δm_{t−j} + ε_{kt}.  (2.10)

This can be simplified to

y_t − y_{t−k−1} = α_k + β_k(m_t − m_{t−k−1}) + ε_{kt}.  (2.11)

Under the null hypothesis of LRMN, β_k = 0. Fisher and Seater estimate equation (2.11) for values of k from one to thirty. They then plot the β_k together with 95% asymptotic confidence intervals obtained using Newey and West's [46] procedure. These confidence intervals are from a t-distribution with T/k degrees of freedom, where T is the sample size.⁸ They find that zero is below the lower 95% confidence band for all values of k except 25 < k < 27. This implies a rejection of LRMN. As Boschen and Otrok [7] point out, this is especially striking as the degrees of freedom used to calculate these confidence bands decline dramatically as k becomes large. In this chapter I examine whether this result extends to another monetary aggregate and another country. However, as pointed out earlier in this section, to be able to test LRMN using this framework, I require that the money stock process and the real income process are integrated of at least order one. The next section looks at the order of integration of the money stock and real income series.

⁸ The number of degrees of freedom is T/k rather than T − k as this is the number of non-overlapping observations. See, for example, Hansen and Hodrick [31].

2.3 Time Series Properties of the Data

This section presents some time series properties of United States and United Kingdom annual data on the nominal monetary base (b_t), the nominal money stock (m_t), real income (y_t), the price level (p_t) and the nominal interest rate (i_t). The time period for this data is 1869 to 1995 for the United States and 1871 to 1982 for the United Kingdom. All variables, except the nominal interest rate, are expressed in natural logarithms.
Appendix one describes the data and lists sources. Using a long span of data is particularly advantageous when the presence or otherwise of a unit root has important implications for the long-run neutrality tests. This is because the power of unit root tests increases not so much with the number of observations as with the time span of the data. See, for example, Pierse and Snell [47]. Table 2.1 presents the results of unit root tests. These are Augmented Dickey-Fuller (ADF) test statistics. I also report 95% confidence intervals for the largest autoregressive root. These confidence intervals are calculated using the procedure developed by Stock [51]. The number of lags used to correct for autocorrelation in the ADF regression is chosen according to the procedure outlined in Campbell and Perron [11].⁹ These statistics show that I can reject the null hypotheses that real income, the monetary base and the money stock are I(2) for both countries in favor of the alternative that they are I(1) or less.¹⁰ For the United Kingdom, I am unable to reject that these three series are I(1) in favor of the alternative that they are stationary around a deterministic trend. Therefore, the long-run neutrality of money can be tested using equation (2.11) for both the monetary base and the money stock. For the United States I am unable to reject the null hypothesis of an I(1) process against the trend stationary alternative for either the monetary base or the money stock.

⁹ I begin with a maximum lag order of k_max = 10. If this 10th lag is significant, at the 10% level, I set k = 10. If not, I reduce the number of lags by one until the last lag is significant. If no lags are significant, I set k = 0.
¹⁰ This means that the issue of the superneutrality of money cannot be addressed using this dataset, because there are no permanent changes to the growth rate of the money stock.
However, for the United States real income series, I am able to reject the null hypothesis of an I(1) series against this alternative. This suggests that there are no permanent changes in real income and therefore that LRMN holds. However, the debate that began with Nelson and Plosser [45] over whether US real income contains a stochastic or a deterministic trend remains unresolved. Therefore, I also report the results of neutrality tests under the assumption that there are permanent changes to income, that is, that the process for real income does contain a unit root. As I show in the next section, the results based on this assumption are not inconsistent with the long-run neutrality of money.^11

2.4 Long-run Neutrality in the United States

Figure 2-1 shows the long-run derivatives of real income with respect to the money stock. It contains the estimates of β_k from equation (2.11) for values of k from 1 to 30, and 95% confidence bands calculated using the Newey-West [46] procedure. Panel (a) shows the long-run derivative of real income with respect to the money stock for the United States for the period 1869 to 1975. This replicates the result of Fisher and Seater: using asymptotic confidence bands, I reject the null hypothesis that the long-run derivative is equal to zero for most values of k. The only exception I find is for 21 ≤ k ≤ 27. These positive and statistically significant estimates of the long-run derivatives are inconsistent with the hypothesis of LRMN. The dataset I use in this chapter is slightly different from that of Fisher and Seater, and this explains why the range of k for which the null hypothesis cannot be rejected differs slightly.^12 Next I extend the sample period to 1995. Panel (b) shows the result of estimating the same equation using data from 1869 to 1995. Again the result does not appear to be completely consistent with the notion of long-run neutrality. For all values of k < 18, zero does not lie between the 95% asymptotic confidence bands. However, it does lie between these bands for all values of k ≥ 18. This result is consistent with the observation made by Friedman and Kuttner [23] using quarterly, post-war data. Although the frequency and the time frame of the data are different, the general conclusion is the same: including data from the 1980s weakens the evidence that there is a significant relationship between money and income. In addition, for the higher values of k, the point estimates of β_k are lower than those for the 1869-1975 sample. I also estimate the long-run derivatives of output with respect to money using this dataset for the two sub-samples that Boschen and Otrok [7] identify. They show that if the sample period is split into two sub-samples, one before the Great Depression and one after, then zero lies within the 95% asymptotic confidence bands in both sub-samples. I find that this result also extends to this dataset. However, one should note that the degrees of freedom used to construct the confidence intervals around β_k are calculated as T/k, and therefore decline rapidly as k increases. For example, the confidence bands on β_30 for the pre-1930 sub-period are calculated using only two degrees of freedom. The results of Boschen and Otrok suggest that Fisher and Seater's result is not robust to a change in the sample period. Here I show that Fisher and Seater's result is also not robust to a change in the monetary aggregate.

^11 In addition to the test statistics and confidence bands reported in table 2.1, I also calculated the test statistic proposed by Elliott, Rothenberg and Stock [18]. This tests the null of a unit root against the alternative with the most power. The conclusions drawn from these tests are the same as those drawn from the results reported in table 2.1.
Figure 2-2 shows the estimates of the long-run derivatives of real income with respect to the monetary base for the same two sample periods as in figure 2-1. Unlike Fisher and Seater's result for the money stock, I find that, in general, zero is contained within the asymptotic 95% confidence bands.^13 Panel (a) shows that for the sample 1869 to 1975 I am unable to reject the null hypothesis that the long-run derivative is equal to zero for all values of k except k < 4 and k = 13, 14. This is Fisher and Seater's sample period. When the sample is extended to 1995 the lower band is only above zero for k < 4. This result is shown in panel (b).^14 Therefore Fisher and Seater's result is dependent not only on the sample period, as suggested by Boschen and Otrok, but also on the choice of monetary aggregate. The point estimates of the long-run derivatives also differ across monetary aggregates. When the money stock is used the point estimate is approximately 0.4 for all values of k = 1, ..., 30. When the monetary base is used the point estimate of the long-run derivative is 0.25 for horizons of up to 10 years. After this it falls, and is at zero by the 20 year horizon. Boschen and Otrok [7] point out that, with the exception of the 1930s, M2 and income both exhibited a positive trend between 1869 and 1992, and that these trends were at different rates. During the 1930s, however, the two series both moved downwards in a parallel fashion. They offer two possible explanations for Fisher and Seater's non-neutrality result.

^12 Fisher and Seater used data on M2 and Net National Product from Friedman and Schwartz [25]. Here I use the data from Balke and Gordon [2] as it provided a cleaner match with the data from the Economic Report of the President [17]. The Balke and Gordon dataset is also used in other studies. For a recent example, see Bordo, Goldin and White [6].
The first is that exogenous monetary shocks during the Great Depression were sufficient to cause a breakdown of long-run neutrality. The second is that the events in the financial markets during the 1930s were reflected in the broad monetary aggregate. This implies that the assumption that changes in this monetary aggregate reflect purely exogenous monetary shocks may be inappropriate. Boschen and Otrok suggest that if one believes the second explanation, the maintained hypothesis of the long-run exogeneity of money is inappropriate. The results presented above using the monetary base help shed some light upon this issue. If shocks to outside money during the Great Depression were in fact sufficient to break long-run neutrality, then LRMN should be rejected for the monetary base as well as M2. However, if the rejection is a result of shocks to the financial sector affecting inside money and real income in a similar fashion, then one would expect to see long-run neutrality hold for the monetary base. Consider Friedman and Schwartz's [24] proximate determinants approach to the money stock,

M = ψ(D/C, R/D) × B,   (2.12)

where M is the broad money stock, B is the monetary base and ψ is the money multiplier. The latter depends positively on the deposit-currency ratio, D/C, and negatively on the reserve-deposit ratio, R/D. If, in a financial crisis, the public reduces its deposit-currency ratio and the banks increase reserve ratios, then, for a given level of the monetary base, the money stock must fall.

^13 I also use the data on high powered money from Friedman and Schwartz [25] to represent the monetary base, and their Net National Product series used by Fisher and Seater [22] to represent output, and find that I am unable to reject long-run neutrality for the 1869-1975 sample.

^14 I also find that zero lies between the two asymptotic confidence bands for the two sub-periods considered by Boschen and Otrok when using the monetary base.
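Equation (2.12) can be illustrated with a small numerical sketch. Writing M = C + D and B = C + R gives the multiplier ψ = (1 + D/C) / (1 + (R/D)(D/C)). The deposit-currency ratios below (11.2 and 5.1) are the 1930 and 1933 United States figures quoted later in this chapter; the reserve-deposit ratios are hypothetical round numbers chosen purely for illustration.

```python
def money_multiplier(dep_cur, res_dep):
    """psi(D/C, R/D), derived from M = C + D and B = C + R:
    psi = (1 + D/C) / (1 + (R/D) * (D/C))."""
    return (1.0 + dep_cur) / (1.0 + res_dep * dep_cur)

B = 100.0                                   # a fixed monetary base (arbitrary units)
M_calm = money_multiplier(11.2, 0.10) * B   # D/C = 11.2 (U.S., 1930); R/D hypothetical
M_crisis = money_multiplier(5.1, 0.15) * B  # D/C = 5.1 (U.S., 1933); R/D hypothetical
# With B held constant, the multiplier falls and the money stock falls with it.
```

The point of the sketch is qualitative: a flight from deposits to currency combined with higher bank reserve ratios shrinks ψ, so inside money can contract sharply even while outside money is unchanged.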
Therefore, a financial crisis can have a negative effect on the money stock through the adjustment of agents' portfolios. Bernanke [3] argues that the financial crisis of the 1930s had non-monetary effects in addition to its monetary effects. He argues that the financial crisis led to a breakdown in credit intermediation. This in turn meant that less lending took place in equilibrium. The lower levels of lending translated into lower investment and lower real income. Consequently, the financial crisis also had real, non-monetary effects. Therefore, during a financial crisis it is possible that the broad money stock and real income could fall while the monetary base remains unchanged, or even rises. Figure 2-3 shows the growth rates of real income, the money stock and the monetary base for the period 1927 to 1941. It also shows the spread between the yield on bonds rated Baa by Moodys and the yield on long-term government bonds for the same period.^15 This spread is used as a proxy for financial crisis.^16 It is apparent that, beginning in 1931, the growth rates of real income and the money stock become very similar, while this is not the case for the growth rates of real income and the monetary base. At this point the yield spread also begins to rise, from about 2% in 1927-30 to a peak of almost 6% in 1932. This is consistent with a period of financial crisis. If the financial crisis caused the money stock to fall through the money multiplier, and also had an effect on real income through the channel proposed by Bernanke, then one would expect to see the growth rates of real income and the money stock follow a similar path. The fact that the monetary base does not share this path is consistent with the financial crisis causing the declines in the money stock. However, it is not consistent with an explanation in which the money stock falls through an exogenous fall in high powered money. This suggests that the apparent non-neutrality Fisher and Seater find is produced by an omitted variable.^17 That is, the co-movement of the money stock and real income arises not because the money stock causes income, but because, during the 1930s, the financial crisis caused both variables.

2.5 Long-run Neutrality in the United Kingdom

If the rejection of long-run neutrality Fisher and Seater find is due to the financial crisis in the United States in the early 1930s, one would expect long-run neutrality to hold in the absence of a financial crisis. The United Kingdom did not experience such a financial crisis during the sample period. This suggests that long-run neutrality should not be rejected using United Kingdom data regardless of the choice of monetary aggregate. Therefore, in this section, I repeat the LRMN tests using data from the United Kingdom.

^15 The data on these yields comes from table 128 of The Board of Governors of the Federal Reserve System [4].

^16 Banks perform an intermediary role between lenders and borrowers. They screen and monitor borrowers. The charge for doing this is the cost of credit intermediation. This is the difference between the interest rate a borrower pays and the interest rate the depositor receives. If, during a financial crisis, banks fear a run they will increase reserve-deposit ratios and substitute into safer and more liquid assets. This will reduce their role as a credit intermediary and increase the cost of credit intermediation. Unfortunately, no perfect measure of the cost of credit intermediation exists. In this scenario the observed interest rates on commercial loans will not reflect the shadow cost of funds to the representative borrower. However, this yield spread does represent the cost of funds to different classes of borrowers and therefore is used here as a proxy. See Bernanke [3] and Mishkin [42] for further justification.

^17 See, for example, Lütkepohl [38].
Figure 2-4 contains estimates of the long-run derivatives of income with respect to the money stock and the monetary base using United Kingdom data from 1871 to 1982. These figures show that zero lies between the asymptotic 95% confidence bands for both the money stock and the monetary base. In each case this is true for all values of k. In fact, the point estimates of these long-run derivatives are typically close to zero, and never above 0.2. The United Kingdom did not experience a financial crisis as severe as that in the United States during the Great Depression at any point in this sample period, and long-run neutrality holds for the broad money stock as well as for the monetary base. For example, while the United States experienced a loss of depositor confidence in its financial sector in the early 1930s, with the deposit-currency ratio falling from 11.2 in 1930 to 5.1 in 1933, depositor confidence in the UK appeared unchanged, with the deposit-currency ratio remaining constant at about 7.5 during these years.^18 In the previous section I used a yield spread as an indicator of financial crisis for the United States. Unfortunately, to the best of my knowledge, no comparable spread exists for the United Kingdom for the interwar period.^19 In the next chapter, I suggest that the negative growth rate of real deposits in the United States in the early 1930s is consistent with a period of financial crisis. There I present data which show that real deposits grew steadily in the early 1930s in the United Kingdom. A result in which long-run neutrality holds for the money stock in the United Kingdom is consistent with the apparent rejection of long-run neutrality of M2 in the United States being the result of the financial crisis of the 1930s.

^18 The figures for the United States are from Friedman and Schwartz [24] appendix B. The figures for the United Kingdom are from Capie and Webber [14] table 1(3).
While some of the fall in deposits in the United States was due to bank suspensions, this accounted for only $0.8bn of the $12.3bn decline. Therefore the decline in deposits was mostly due to portfolio readjustment. These two figures are also from Friedman and Schwartz, table 16 and appendix A, respectively.

^19 Capie and Collins [13] provide a summary of some of the data on interest rates available for the United Kingdom during the interwar period.

2.6 Small Sample Considerations

Campbell [10] points to the relative success of long-horizon regressions, such as those used in this chapter, in detecting predictable components in stock returns, interest rates and inflation. For stock prices, see, for example, Fama and French [20] [21]; for interest rates, see Campbell and Shiller [12]; for inflation, see Mishkin [41] [40]. He suggests that these findings indicate that long-horizon tests of predictability either have greater power than their short-horizon counterparts or have greater size distortions. Size distortion in the test statistic offers an alternative explanation for Fisher and Seater's [22] rejection of LRMN. On the other hand, low power could account for my failure to reject LRMN. Therefore, in this section I perform a small Monte Carlo experiment to investigate the small sample properties of the test statistic. I generate artificial data for the Monte Carlo experiment based on parameter estimates from the system described by equation (2.5). To estimate this system I follow the instrumental variables estimation procedure of King and Watson [34], which I outline in appendix two.^20 I use the same data on real output and the monetary base from the United States as earlier in this chapter.^21 First I estimate the system with only the identifying restrictions, b(1) = α_uv = 0, imposed. These parameter estimates are then used to generate artificial data for the monetary base and real income under the alternative hypothesis.
I then estimate the system with the long-run neutrality restriction, c(1) = 0, also imposed. This second set of parameter estimates is then used to generate artificial data for the two series under the null hypothesis. In each case I generate T + 200 observations, where T is the sample size. The first 200 observations are discarded to minimize the influence of starting values. I generate samples of sizes T = 50, 75, 100, 125 and 150. Using these artificial data series I then estimate equation (2.11) for all values of k = 1, 2, ..., 30 and calculate the t statistic for the test of the null hypothesis that β_k = 0 against the alternative that β_k ≠ 0. For each sample size I repeat the experiment 2500 times. I then calculate the type I error rate and the empirical 95% critical value when the artificial data are generated under the null hypothesis, and the size adjusted type II error rate when they are generated under the alternative hypothesis. For each value of k Fisher and Seater construct a 95% confidence interval around β_k, where β_k is estimated from equation (2.11). This confidence interval is constructed using the asymptotic t distribution with T/k degrees of freedom. The null hypothesis, that c(1) = 0, is then rejected if zero does not lie within the confidence interval. This is equivalent to a t test of the null hypothesis of β_k = 0 against the alternative that β_k ≠ 0.

^20 Instrumental variables estimation is used to account for the potential correlation between the current value of the growth rate of output and the error term in the equation for the growth rate of money, and between the current value of the growth rate of money and the error term in the equation for the growth rate of output.

^21 I also estimated the system using the money supply series as opposed to the monetary base series, and then generated artificial data using these parameter estimates. This did not alter the results of the Monte Carlo experiment.
The critical value for this test comes from the asymptotic t distribution with T/k degrees of freedom. Consider the case where the sample size is T = 150 and k = 30. For a two-sided test, the 95% critical value from the asymptotic t distribution with T/k = 5 degrees of freedom is 2.571. For each set of artificial data I calculate the t statistic described above. If, when the artificial data are generated under the null hypothesis, only 5% of the artificial t statistics are above 2.571, then there is no size distortion in the test statistic. However, if more than 5% of the artificial test statistics are above 2.571, then the type I error rate is above 0.05. This would imply that there is size distortion in the test statistic. On the other hand, when the data are generated under the alternative, then only 5% of the artificial test statistics should be below 2.571 if the test is to have the correct power. Table 2.2 contains the results of this Monte Carlo study. Panel (a) shows the type I error rates for values of k = 1, 2, 5, 10, 20 and 30. These results show that there are size distortions for all combinations of k and T. That is, whatever the horizon, k, and the sample size, T, the type I error rate is greater than 0.05. I also find that these size distortions are greater at the longer horizons.^22 When k = 1 the true null hypothesis is rejected for between 15% and 20% of the simulated time series. When k = 30 this figure rises to between 25% and 45%. This is consistent with Campbell's suggestion that the observed relative success of long-horizon regressions in detecting predictable components may be due to more serious size distortions than in short-horizon regressions. Panel (b) of table 2.2 contains the empirical 95% critical values for the test statistics. Not surprisingly, given the size distortion, these are all above their asymptotic counterparts.
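The size calculation can be sketched in a few lines. This simplified version generates independent random walks (so β_k = 0 holds by construction), uses plain OLS standard errors rather than the Newey-West correction, and hard-codes the critical value 2.571 quoted above for T = 150 and k = 30; the thesis instead generates data from the estimated system (2.5) with c(1) = 0 imposed and uses 2500 repetitions.

```python
import numpy as np

def t_stat(y, m, k):
    """OLS t statistic for beta_k = 0 in equation (2.11)
    (plain OLS standard errors, a simplification of Newey-West)."""
    dy = y[k + 1:] - y[:-(k + 1)]
    dm = m[k + 1:] - m[:-(k + 1)]
    X = np.column_stack([np.ones_like(dm), dm])
    b, *_ = np.linalg.lstsq(X, dy, rcond=None)
    e = dy - X @ b
    s2 = (e @ e) / (len(dy) - 2)
    return b[1] / np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])

rng = np.random.default_rng(2)
T, k, crit, reps = 150, 30, 2.571, 500   # 2.571: 95% t value with T/k = 5 d.o.f.
rejections = sum(
    abs(t_stat(np.cumsum(rng.normal(size=T)),
               np.cumsum(rng.normal(size=T)), k)) > crit
    for _ in range(reps)
)
type1_rate = rejections / reps           # well above 0.05 when the test is size-distorted
```

Because the long differences overlap heavily, the effective number of independent observations is far smaller than the regression sample, which is exactly why the rejection frequency under a true null drifts above the nominal 5%.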
Campbell also finds that empirical critical values are above their asymptotic counterparts.^23 Campbell suggests that an alternative reason for the relative success of long-horizon regressions is that they have power advantages over their short-horizon counterparts. I find evidence to suggest that this is not the case. I use my empirical critical values from table 2.2 to calculate the size adjusted power of the test statistic for different values of k and T. These figures are reported in panel (c) of table 2.2. These results show that power is poorer at the longer horizons. For example, consider a sample size of T = 100. For the shorter horizons of k = 1 and k = 2, the size adjusted type II error rate is below 0.10. However, for the longer horizons of k = 20 and k = 30 the size adjusted type II error rate is above 0.50. When k = 30, the false null hypothesis is rejected in only 35% of the simulations. Power at all horizons increases with sample size. However, even at a sample size of T = 150, larger than any considered in this chapter, power is only 0.51 at the longest horizon of k = 30. This result, combined with the evidence of size distortion, suggests that the relative success of long-horizon regressions in detecting predictable components is the result of size distortion rather than power advantages. It also suggests the possibility that Fisher and Seater's [22] rejection of long-run neutrality could be the result of size distortion.

^22 There is one exception to this: for T = 50 the type I error rate is lower for k = 30 than it is for k = 20. This can be explained by the fact that the degrees of freedom for these test statistics are calculated as T/k. Therefore for T = 50 there is a large jump in the asymptotic critical value between k = 20 and k = 30. As a result a much higher value of the test statistic is required to reject the null hypothesis.

^23 Campbell presents results for weighted and unweighted long-horizon regressions.
The results presented here are analogous to his results for unweighted regressions.

For this reason I generate empirical critical values using Fisher and Seater's dataset. To do this, I estimate the system described by (2.5) under the null hypothesis, using their dataset. I then use these parameter estimates to generate artificial data for M2 and Net National Product.^24 Finally, I estimate equation (2.11) using this artificial data and calculate the t statistic for the test of β_k = 0 for k = 1, 2, ..., 30. I repeat this experiment 2500 times and generate an empirical 95% critical value for the test statistic for each value of k. With these empirical critical values I then calculate 95% confidence intervals for Fisher and Seater's long-run derivative. Figure 2-5 shows their long-run derivative, first with asymptotic confidence intervals, and second with the empirical confidence intervals. By using empirical confidence intervals I am able to overturn their rejection of long-run neutrality. I am only able to reject the null hypothesis that the long-run derivative is equal to zero for k < 3 and 10 ≤ k ≤ 15. For all other values of k the null hypothesis cannot be rejected. This suggests another reason for Fisher and Seater's rejection of LRMN: size distortion in the test statistic. I also generate empirical confidence intervals for my long-run derivatives contained in figures 2-2 and 2-4. That is, the long-run derivatives for the United States data using the monetary base, and for the United Kingdom using both the money stock and the monetary base. In each of these cases I am unable to reject the null hypothesis when using asymptotic confidence intervals. Unsurprisingly, given the results of table 2.2, the empirical confidence intervals are wider than their asymptotic counterparts and so do not alter my conclusions.
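The empirical critical values are obtained by simulation: generate data under the null many times, compute the t statistic each time, and take the 95th percentile of its absolute value. In the minimal sketch below, independent random walks and OLS standard errors stand in for the thesis's estimated system (2.5) and Newey-West errors; T = 107 matches Fisher and Seater's sample size, while the horizon and repetition count are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

def null_t(T, k):
    """One draw of the t statistic for beta_k = 0 under a simple null DGP
    of independent random walks (a stand-in for system (2.5) with c(1) = 0)."""
    y = np.cumsum(rng.normal(size=T))
    m = np.cumsum(rng.normal(size=T))
    dy = y[k + 1:] - y[:-(k + 1)]
    dm = m[k + 1:] - m[:-(k + 1)]
    X = np.column_stack([np.ones_like(dm), dm])
    b, *_ = np.linalg.lstsq(X, dy, rcond=None)
    e = dy - X @ b
    s2 = (e @ e) / (len(dy) - 2)
    return b[1] / np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])

draws = np.abs([null_t(107, 10) for _ in range(500)])
empirical_crit = np.quantile(draws, 0.95)   # replaces the asymptotic critical value
```

Multiplying the Newey-West standard error of β̂_k by this empirical value, rather than by the asymptotic one, produces the wider confidence bands shown in the second panel of figure 2-5.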
To summarize these results, panel (a) of table 2.3 reports the ratio of the empirical to the asymptotic critical value for these long-run derivatives. While size distortion may account for Fisher and Seater's rejection of the null hypothesis, the power properties of the test statistic in table 2.2 suggest that my failure to reject long-run neutrality could be a result of the low power of the test statistic. As Andrews [1] points out, failure to reject the null hypothesis does not mean that the whole of the alternative parameter space is inconsistent with the data.^25 Andrews shows that by using the inverse power function it is possible to determine the region of the alternative parameter space that is inconsistent with the data. He also shows that it is possible to determine regions of high probability of type II error. Andrews refers to this as the region of low power. Even when an alternative parameter value in this region of low power is true, one still has a greater chance of failing to reject the false null hypothesis than of rejecting it. That is, the probability of type II error in this region is greater than 50%. Panel (b) of table 2.3 shows the regions of low power for the cases in which I could not reject the null hypothesis of LRMN. Following Andrews, I calculate the region of low power as

{β : 0 < |β| < b},   (2.13)

where b is calculated as

b = λ_{1,α}(1/2) σ_β̂,   (2.14)

σ_β̂ is the estimate of the standard error of β̂, and λ_{1,α}(1/2) is a positive constant whose value is taken from table 1 of Andrews [1]. α refers to the significance level, in this case α = 0.05.

^24 Fisher and Seater use a sample size of T = 107. As before, I generate an artificial data series of length T + 200 and discard the first 200 observations in order to minimize the effect of starting values.

^25 Inconsistent with the data in the following sense: the probability of type II error when the null cannot be rejected at the 5% significance level is also 5%.
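Equations (2.13) and (2.14) amount to a one-line calculation. In the sketch below the constant λ_{1,0.05}(1/2) is approximated by 1.96 (for a single restriction, roughly the noncentrality at which a 5% two-sided test has 50% power); the standard error is a hypothetical value chosen so that b matches the 0.14 figure for the United Kingdom monetary base at k = 30. The final two lines reproduce the decomposition of a 10% base increase into output and price effects discussed in the text.

```python
def low_power_bound(se_beta, lam=1.96):
    """b = lambda_{1,alpha}(1/2) * se(beta_hat), equation (2.14).
    lam = 1.96 approximates Andrews' constant for one restriction, alpha = 0.05."""
    return lam * se_beta

b = low_power_bound(0.0714)          # hypothetical se chosen so that b is about 0.14

# Interpreting beta_30 = 0.14 with constant base velocity: a 10% increase
# in the monetary base splits into real output and price level effects.
output_effect = 0.10 * 0.14          # 1.4% higher real output after 30 years
price_effect = 0.10 - output_effect  # 8.6% higher price level
```

The design choice here is deliberate: the bound scales one-for-one with the standard error, so an imprecise long-horizon estimate mechanically produces a wide region in which the test is effectively uninformative.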
When the test fails to reject the null hypothesis it does not provide any evidence that the true parameter value lies outside the region described by equation (2.13). For the cases where I am unable to reject the null hypothesis of β_k = 0, the upper bounds on the regions of low power are typically between 0.1 and 0.2. For example, for the United Kingdom results using the monetary base, the inverse power function implies that the test provides no evidence that the long-run derivative is less than 0.14 for k = 30. Assuming that the velocity of circulation of the monetary base is constant, this figure of β_30 = 0.14 implies that a 10% increase in the monetary base today would cause real output to be 1.4% higher, and the price level to be 8.6% higher, in 30 years' time. In other words, the test cannot rule out that a change in the monetary base today has a non-zero long-run effect on real output. These figures are calculated using an inverse power function derived from the asymptotic χ²(1) distribution. In small samples the Student's t distribution approximates the χ² distribution. If I use my empirical t distribution to calculate the inverse power function, the regions of low power are much larger. This is especially true as k becomes larger. For example, the figure of 0.14 above becomes 0.55. In summary, the results of this section suggest that Fisher and Seater's test does not perform well in small samples. I present evidence to show that there is serious size distortion in the test statistic at all horizons and that power is low at the longer horizons. This implies that in small samples there is very little economic information in their methodology.

2.7 Conclusions

In this chapter, I argue that the apparent violation of long-run neutrality of money found by Fisher and Seater [22] is dependent upon their choice of monetary aggregate.
I find that their result is not robust to replacing the broad money stock with the monetary base. This is consistent with Boschen and Otrok's [7] explanation that long-run neutrality is violated when using M2 because the financial crisis of the 1930s had non-monetary effects as well as monetary effects. This led income and broad money changes to become highly correlated during the Great Depression and therefore caused an apparent violation of long-run neutrality. In addition, I show that Fisher and Seater's result does not carry over to the United Kingdom, where there was no financial crisis. This provides further evidence that Fisher and Seater's rejection of long-run neutrality in the United States is a consequence of the financial crisis of the 1930s. I also present a Monte Carlo experiment to examine the size and power of Fisher and Seater's test statistic. The results of this experiment show that these tests have serious size distortions. In fact, these size distortions are great enough that, when using empirical critical values, the confidence intervals on Fisher and Seater's long-run derivatives become sufficiently wide that I am unable to reject LRMN using their dataset. In general the test also has poor power properties. Power decreases with the length of the long-run derivative. For example, at the longer horizons of 20 and 30 years power is typically below 50%, while for the shorter horizons of one and two years it is typically above 90%. These results suggest that any observed success of long-horizon regressions, relative to short-horizon regressions, in detecting predictable components is due to size distortion rather than power advantages. This implies that in small samples there is very little economic information in the methodology proposed by Fisher and Seater.

Table 2.1: Unit Root Statistics

              Demeaned                        Detrended
       t_μ     95% C.I.        k,T      t_τ     95% C.I.        k,T
(a) United States 1869-1995
 y_t    -3.802  (-, 0.971)      1,125   -2.770  (0.819, 1.029)  1,125
 m_t    -3.568  (0.715, 1.014)  1,125
 Δy_t   -4.760  (-, 0.81)       2,117
 Δb_t   -3.395  (0.707, 0.960)  9,116
 Δm_t   -6.016  (-, -)          1,124
 p_t    -1.645  (0.927, 1.040)  5,121
 i_t    -2.237  (0.860, 1.023)  2,124   -2.189  (0.870, 1.035)  2,124

(b) United Kingdom 1871-1982
 y_t    -2.065  (0.878, 1.041)  1,110
 b_t    -2.314  (0.851, 1.039)  1,110
 m_t    -0.130  (1.015, 1.049)  2,109
 Δy_t   -5.572  (-, 0.674)      3,107
 Δb_t   -5.127  (-, 0.748)      0,110
 Δm_t   -3.545  (0.665, 0.942)  1,109
 p_t    -1.618                           1.386  (1.021, 1.057)  7,104
 i_t     1.522  (1.012, 1.050)  4,107    0.403  (1.018, 1.053)  4,107

All statistics are based on the Augmented Dickey-Fuller regression

Δx_t = μ_0 + μ_1 τ + ρ x_{t−1} + ∑_{i=1}^{k} γ_i Δx_{t−i} + ε_t

where τ is a linear time trend and ε_t ~ i.i.d. N(0, σ²). The term demeaned refers to the case where μ_0 is unrestricted but μ_1 = 0. The term detrended refers to the case where both μ_0 and μ_1 are unrestricted. t_μ and t_τ are the test statistics in the demeaned and detrended cases respectively. The figures in parentheses are 95% confidence intervals for the largest autoregressive root, calculated using the procedure outlined by Stock [51]. The lag length k was chosen using the procedure described in the text. T refers to the number of observations used to estimate the statistics.
Table 2.2: Results from Monte Carlo Simulations

(a) Type I Error Rate
   T     k=1    k=2    k=5    k=10   k=20   k=30
  50     0.15   0.16   0.19   0.28   0.37   0.26
  75     0.15   0.14   0.17   0.27   0.36   0.37
 100     0.15   0.13   0.16   0.24   0.34   0.36
 125     0.15   0.13   0.15   0.23   0.36   0.41
 150     0.17   0.14   0.15   0.23   0.37   0.43

(b) Empirical 95% Critical Values
   T     k=1    k=2    k=5    k=10   k=20   k=30
  50     2.92   3.00   3.59   5.54   8.17   8.96
  75     2.85   2.85   3.22   4.52   7.09   8.48
 100     2.73   2.68   3.03   4.19   6.40   7.89
 125     2.74   2.63   2.88   3.82   5.67   7.49
 150     2.78   2.66   2.77   3.84   5.67   7.21

(c) Size Adjusted Power
   T     k=1    k=2    k=5    k=10   k=20   k=30
  50     0.62   0.64   0.56   0.37   0.27   0.21
  75     0.82   0.84   0.77   0.57   0.36   0.30
 100     0.92   0.94   0.88   0.69   0.44   0.35
 125     0.97   0.98   0.95   0.82   0.57   0.44
 150     0.99   0.99   0.97   0.88   0.63   0.51

These results are based on 2500 repetitions of the Monte Carlo experiment described in the text. The type I error rate is based on the 95% asymptotic critical values. The size adjusted power is based on a nominal size of 0.05.

Table 2.3: Monte Carlo Results When the Null Hypothesis is not Rejected

(a) Ratio of Empirical to Asymptotic Critical Values
                              k=1    k=2    k=5    k=10   k=20   k=30
 U.S. LRD_{y,b} 1869-1975     1.39   1.34   1.42   1.85   2.41   2.95
 U.S. LRD_{y,b} 1869-1995     1.35   1.28   1.36   1.74   2.35   2.59
 U.K. LRD_{y,b} 1871-1982     1.27   1.18   1.33   1.66   2.34   2.58
 U.K. LRD_{y,m} 1871-1982     1.07   1.08   1.33   1.71   2.39   2.67

(b) Region of High Probability of Type II Error, |β| < b, where b is
                              k=1    k=2    k=5    k=10   k=20   k=30
 U.S. LRD_{y,b} 1869-1975     0.16   0.16   0.22   0.21   0.17   0.18
 U.S. LRD_{y,b} 1869-1995     0.16   0.16   0.21   0.21   0.14   0.15
 U.K. LRD_{y,b} 1871-1982     0.18   0.17   0.14   0.09   0.11   0.14
 U.K. LRD_{y,m} 1871-1982     0.14   0.16   0.15   0.12   0.11   0.14

The ratios of empirical to asymptotic critical values are based on a Monte Carlo experiment similar to that which produced the results of table 2.2.
For the United States results, artificial samples of size T = 107 and T = 127 are produced from parameter estimates of equation (2.5) using United States data on real income and the monetary base. For the United Kingdom results, two samples of size T = 112 are produced: one based on parameter estimates using United Kingdom data on real income and the monetary base, and the other based on parameter estimates using real income and the money supply. In each case 2500 repetitions of the experiment are performed. The region of high probability of type II error is the region of the alternative parameter space for which the probability of type II error is greater than one half. In each case, the first two rows refer to the long-run derivatives shown in figure 2-2 and the last two rows to the long-run derivatives shown in figure 2-4.

[Figure 2.1: United States Long-run Derivatives: Money Stock. Fig 2-1a: Income on Money 1869-1975; Fig 2-1b: Income on Money 1869-1995.]

[Figure 2.2: United States Long-run Derivatives: Monetary Base. Fig 2-2a: Income on Monetary Base 1869-1975; Fig 2-2b: Income on Monetary Base 1869-1995.]

[Figure 2.3: United States Growth Rates and Yield Spread. Fig 2-3a: Growth Rates for y, m & b; U.S. 1919-1941.]

[Figure 2.4: United Kingdom Long-run Derivatives. Fig 2-4a: Income on Money 1871-1982; Fig 2-4b: Income on Monetary Base 1871-1982.]

[Figure 2.5: Fisher and Seater's Long-run Derivatives. Fig 2-5a: Income on Money, Asymptotic Critical Values; Fig 2-5b: Income on Money, Empirical Critical Values.]
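The type I error rates and empirical critical values of tables 2.2 and 2.3 follow a generic simulation recipe. The sketch below illustrates that recipe for a toy |t|-ratio; the thesis statistic itself requires the full Fisher and Seater long-horizon setup, so this shows only the method, not the actual experiment.

```python
import numpy as np

rng = np.random.default_rng(1)
reps, T = 2500, 100

# Simulated null distribution of a toy |t|-ratio standing in for the test statistic.
stats = np.empty(reps)
for r in range(reps):
    e = rng.standard_normal(T)
    stats[r] = abs(np.sqrt(T) * e.mean() / e.std(ddof=1))

asympt_cv = 1.96                           # asymptotic two-sided 5% critical value
type1 = (stats > asympt_cv).mean()         # empirical type I error rate (as in table 2.2a)
empirical_cv = np.quantile(stats, 0.95)    # empirical 95% critical value (as in table 2.2b)
print(round(float(type1), 3), round(float(empirical_cv), 2))
```

For this toy statistic the empirical and asymptotic critical values nearly coincide; the tables show that for the long-horizon statistic at large k they diverge sharply, which is the size distortion discussed in the text.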
Chapter 3

Financial Crisis and Financial Reform during the Great Depression

3.1 Introduction

In this chapter I argue that the financial crisis experienced in the United States during the Great Depression should be considered as a change in regime. I provide evidence to suggest that the disruptions to the financial sector during the early 1930s should be thought of as a shift to a regime of financial crisis from one in which the financial sector was relatively calm. I find that treating the financial crisis as a change in regime provides additional explanatory power for output during the interwar period. My results are consistent with Friedman and Schwartz's [24] view that the most important policy reform for the ending of the financial crisis was the introduction of the Federal Deposit Insurance Corporation (F.D.I.C.) in January 1934.

I model the financial sector as being in one of two possible regimes. The first is a regime of financial crisis, and the second a regime I call financial calm. Clearly, the state of the financial sector in the 1930s is not directly observable by the modern-day econometrician, or even by the econometrician of the 1930s. However, using a bivariate version of Hamilton's [28] Markov switching model I am able to estimate probabilities over the underlying state of the financial system using two observable time series. The behavior of these two observable time series depends on the underlying state of the financial system: the growth rate of real deposits is lower in the financial crisis regime and the cost of credit intermediation is higher. The evidence suggests that the financial sector was in a state of crisis from late 1930 until early 1934.
I find that these estimated probabilities over the state of the financial sector contain additional explanatory power for output fluctuations, above their linear counterparts, during the interwar period.

One of the more striking features of the Great Depression was the high incidence of bank failure. In 1930 a total of 1350 banks suspended payments; in 1931 this figure was 2293; in 1932 it was 1453; and by April 1933 over 3800 banks had suspended payments. However, in the remainder of 1933 fewer than 200 banks suspended payments, and the following two years saw a total of only 91 suspensions.¹ Friedman and Schwartz [24] attribute this sudden decline in the number of bank suspensions to the introduction of the F.D.I.C. in January 1934. Wigmore [56], however, sees the crucial reforms as being the gold embargo and the exchange controls which began to be introduced by Roosevelt in the Spring of 1933. He argues that the introduction of these foreign exchange controls insulated the US banking system from further runs on its gold reserves. The results presented in this chapter are inconsistent with the view proposed by Wigmore, but consistent with the view of Friedman and Schwartz: the introduction of the F.D.I.C. was crucial to the ending of the financial crisis and the recovery from the Great Depression.

The remainder of this chapter is organized as follows. In the next section I discuss the implications of the financial crisis for the Great Depression and suggest two measures of financial crisis. In section 3.3, I give a summary of New Deal policy with respect to the financial sector. In section 3.4, I present the version of Hamilton's model I use to estimate the conditional probabilities over the state of the financial system, and, in section 3.5, present the results of the estimation of this model.

1. Board of Governors of the Federal Reserve System [5], p. 907.
In section 3.6, I show that the estimated state of the financial system can help to explain output fluctuations during the interwar period. Section 3.7 concludes.

3.2 Financial Crisis and the Great Depression

Friedman and Schwartz [24] argue that the bank failures of the 1930s had an adverse effect on income during the Great Depression through two channels: a negative wealth effect on bank shareholders and, more importantly, a contraction in the money supply.² According to Friedman and Schwartz the bank failures of the 1930s made deposits a less attractive proposition to the public, who therefore substituted away from deposits and into other assets such as currency.³ This led to a large drop in deposits during the financial crisis and, as a result, a fall in the money stock through the money multiplier.⁴ Therefore, I assume that the growth rate of real deposits is lower in the financial crisis state than in the financial calm state.⁵

One potential problem with using the growth rate of real deposits is that it reflects not only depositor confidence in the banking sector but also bank failures. If a bank fails with losses to depositors, then deposits will fall. Therefore, any observed reduction in the growth rate of real deposits will also reflect losses to depositors in failing banks. However, while deposits at commercial banks fell by $12.3bn between December 1929 and December 1932, only approximately $0.8bn of this was in losses due to bank suspensions.⁶ This suggests that most of the decline in deposits was due to portfolio reallocation. Figure 3.1a shows the quarterly growth rate of real deposits between 1919 and 1941.⁷ As one can see, there is a sharp fall in the growth rate of real deposits in the early 1930s, consistent with a period of financial crisis. From early 1931 until late 1933 there is a prolonged period in which, with the exception of one quarter, the growth rate of real deposits was negative. In contrast, in the United Kingdom, where there was no financial crisis, real deposits grew steadily between 1929 and 1933; the average annual growth rate was 4.35%.⁸

Bernanke [3] offers a third channel through which the financial crisis could have had an adverse effect on income: the collapse of credit intermediation. Given that loan markets are incomplete, there is a role for intermediation between borrowers and lenders. Banks specialize in this activity by screening and monitoring borrowers. The charge for this service is the cost of credit intermediation: the differential between the interest rate that the lender receives from the bank and the interest rate that the borrower pays the bank.

2. Temin [52] finds little evidence to support the hypothesis that there was a negative wealth effect following the stock market crash of 1929. He attributes this to the fact that stocks were only a small proportion of wealth. This also suggests a minor effect on income from any negative wealth effect following bank failure.
3. This is the 'contagion of fear' argument. Suppose that following the failure of one insolvent bank the public begin to fear that other banks may be insolvent. When they have imperfect information about the solvency of a particular bank, their best strategy may be to withdraw their deposits from that bank. Therefore, the overall fall in deposits may be much greater than that due to the initial bank failure. A similar argument is made by Wicker [55].
4. For a full explanation of this see Friedman and Schwartz [24], appendix B.
5. One could also use the level of real deposits or the deposit-currency ratio to infer when the financial sector was in a state of crisis. Both of these variables would also be lower in the financial crisis state. The problem with these two measures is that they are not stationary over the sample period.
Bernanke argues that during the financial crisis of the early 1930s the cost of credit intermediation began to rise. This meant that some borrowers found it difficult and expensive to obtain credit.

Banks typically hold a significant proportion of their assets in the form of illiquid loans. On the other hand, their liabilities are for the most part in the form of deposits and therefore payable on demand. This can make their position somewhat tenuous: during a bank run an illiquid yet solvent bank could be forced to suspend payments. Bernanke argued that as banks began to fail, non-failing banks began to fear a run. In response they increased their reserve-deposit ratios and substituted into safer and more liquid assets. Both of these factors reduced their role in credit intermediation, which led to a rise in the cost of credit intermediation. Consequently the borrower's rate rose and the number of loans fell. Therefore, I assume that when the economy is in a state of financial crisis the cost of credit intermediation is higher than when it is in a state of financial calm.

6. The first figure is from Friedman and Schwartz [24], appendix B. The second figure and the one which follows are from Friedman and Schwartz, table 16. Total deposits in suspended banks between these dates totalled approximately $3.2bn.
7. The data series for deposits comes from Friedman and Schwartz [24], appendix A. It is deflated using the GNP deflator from Balke and Gordon [2].
8. In 1929 real net deposits in the United Kingdom were £2244m; in 1930 they were £2269m; in 1931, £2279m; in 1932, £2412m; and in 1933, £2661m. The data on nominal net deposits are from Capie and Webber [14], table 11(2). I deflate these data using a GDP deflator with 1958=100, constructed from nominal and real GDP data in Mitchell [43].
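The 4.35% figure for the United Kingdom can be recovered from the 1929 and 1933 deposit levels reported in footnote 8; a one-line compound-growth check:

```python
# Cross-check of footnote 8: average annual growth rate of U.K. real net
# deposits, 1929-1933, from the reported levels (in £m).
deposits_1929, deposits_1933 = 2244, 2661
growth = (deposits_1933 / deposits_1929) ** (1 / 4) - 1   # four years of compounding
print(f"{growth:.2%}")   # 4.35%
```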
Unfortunately, as Bernanke points out, there is no direct, satisfactory measure of the cost of credit intermediation.⁹ Here I follow Bernanke [3] and Mishkin [42], and use as a proxy the spread between the yield on corporate bonds rated Baa by Moody's and the yield on long-term government bonds.¹⁰ Admittedly this is not a perfect measure of the cost of credit intermediation. However, it does represent the different costs of funds to two different classes of borrowers. Bernanke argues that the only classes of borrower unaffected by the disruption of credit intermediation were the Federal government and some blue-chip corporations.¹¹ Therefore, all else equal, we would not expect the yield on government bonds to rise during the financial crisis. On the other hand, in their 1919 explanation of the different bond ratings, Moody's described Baa bonds as having "....an element of uncertainty regarding the permanently strong position of such issues" ([44], p. 13). As a result we would expect the yields on these bonds to rise as lenders began to prefer safer assets. Therefore, I use the spread between these two classes of bonds as a proxy for the cost of credit intermediation.¹²

9. If banks were only making the safest and highest quality loans, then the reported rates for commercial loans do not reflect the shadow cost of bank funds to a representative borrower. In addition, in periods of high default on loans we would expect to see a rise in the cost of borrowing as lenders attempted to prevent any decline in the expected value of the loan.
10. Both of these yields come from the Board of Governors of the Federal Reserve System [4], table 128.
11. I also used the yield on Moody's highest rated bonds (Aaa) in place of the yield on long-term government bonds. The results were unaffected.

Figure 3.1b shows the spread between Baa rated bonds and long-term government bonds.
This indicates that there is a sharp increase in this spread in the early 1930s; again, this is consistent with a period of financial crisis. Also consistent with a financial crisis is the fact that this increase came from an increase in the yield on the risky asset rather than a fall in the yield on the risk-free asset.¹³

In this chapter I propose that in the interwar period in the United States there were two states of the world. In one, the incidence of bank failure was high and all but the safest class of borrower found it expensive to obtain credit. This was reflected in a lower growth rate of real deposits and a higher cost of credit intermediation in this state. I refer to this state as the financial crisis state. The second state had a low incidence of bank failure and more equal costs of credit across classes of borrower.¹⁴ I refer to this as the financial calm state. In this state the growth rate of real deposits was higher and the cost of credit intermediation was lower.

12. As Bernanke points out, this measure is biased downwards as it does not allow for the reclassification of firms into higher risk categories. Assuming that the riskiest Baa rated bonds have the highest yields among Baa rated bonds, then when they are downgraded the average yield on Baa rated bonds will fall. Therefore, downgrading would bias the yield spread downwards. Despite this, the spread was considerably higher during the years of the Great Depression than during the rest of the interwar period.
13. In January 1930 the yield on Baa rated bonds was 5.29%. It reached a peak of 11.63% in May 1932, which is also when the spread peaked. In January 1930 the return on long-term government bonds was 3.43%; in May 1932 it was relatively unchanged at 3.76%. (Board of Governors of the Federal Reserve System [4], table 128.)
14. That is, relative to the cost of funds to the safest borrowers, the cost of funds to other borrowers was lower than in the financial crisis state.

3.3 Banking & Monetary Reform in the New Deal

At midnight on March 6th 1933, President Roosevelt declared that there would be a banking holiday from the 6th to the 9th of March. On the 9th of March, Congress passed the Emergency Banking Act (E.B.A.), the first of many banking and monetary reforms contained in the New Deal. It appears that these reforms were intended to stabilize the financial system and, coupled with Roosevelt's "fireside chats," restore confidence in it. One of the questions asked in this chapter is whether or not these reforms were successful in restoring confidence. Therefore, this section summarizes the main aspects of these policies.¹⁵

3.3.1 Banking Reform

The E.B.A. extended the wartime measures that allowed the President to control the payment of deposits by banking institutions in periods of national emergency. Any institution that wished to do banking business during a period of emergency needed the authorization of the Secretary of the Treasury and Presidential approval. The E.B.A. also facilitated the re-opening of national banks. On the 10th of March 1933, Roosevelt issued an executive order which gave the Secretary of the Treasury the power to grant licenses to member banks allowing them to reopen. To obtain these licenses member banks had to apply to their local Federal Reserve Bank, which acted as the Secretary's agent. In a press statement on the 11th of March and a radio address on the 12th, Roosevelt announced a program for the re-opening of licensed banks on the 13th, 14th and 15th of March. This order also allowed state banking authorities to open sound non-member banks. Of the 17,796 commercial banks in operation at the beginning of 1933, 447 had disappeared by the end of the banking holiday. Of the remainder, fewer than 12,000 were granted licenses to re-open.
Although another 3398 reopened later, 2132 were suspended, liquidated or merged with other banks.¹⁶

The E.B.A. gave the Reconstruction Finance Corporation (R.F.C.) the power to invest equity in banks without taking collateral. Mason [39] argues that the overcollateralization of R.F.C. loans up to this point in time had meant that R.F.C. loans were creating a liquidity problem for the very banks they were supposed to help. After March 1933 the R.F.C. was able to invest in the equity of a bank through the preferred stock program. After examining panel data on banks in the Illinois portion of the Chicago Federal Reserve district, Mason concludes that this program helped banks survive the depression.

The E.B.A. also removed the R.F.C.'s obligation to publish the names of the banks it was assisting.¹⁷ By publishing the names of banks it was assisting, the R.F.C. would have had two opposite effects on depositor confidence. On the one hand it would have indicated to the depositor that his or her bank had Federal backing. On the other hand it would have suggested that the bank might not be able to meet all of its liabilities. The first was likely to increase confidence, thus reducing the probability of the bank failing, while the latter would have the opposite effect. Butkiewicz [9] presented evidence to suggest that it was the latter effect that dominated, and that the obligation to publish the names of banks receiving assistance increased bank failures. This view was shared by Friedman and Schwartz, who argued that the appearance of a bank's name on the list was often interpreted as a sign of weakness and led to a run on the bank.

15. A more comprehensive treatment can be found in chapter 8 of Friedman & Schwartz [24] or in Wicker [55].
16. Friedman & Schwartz [24], pp. 422-28. The licensed banks accounted for $23.3bn of deposits, while the unlicensed banks accounted for $3.4bn; see also Friedman & Schwartz [24], p. 423.
They went on to argue that by restoring confidence in the monetary and economic system the E.B.A. contributed to recovery from the depression. They pointed to falling currency balances relative to income as evidence of increased confidence in the banking system. However, they also argued that the introduction of the F.D.I.C. was the structural change that did the most to restore monetary stability.

The Banking Act of June 1933 announced the introduction of deposit insurance to the banking system. All member banks of the Federal Reserve System had to have their deposits insured by the F.D.I.C.; those banks that were not members could also apply for coverage. The introduction of deposit insurance occurred on the 1st of January 1934. By July almost 14,000 of the 15,348 commercial banks were covered. These banks accounted for 97% of all deposits.¹⁸ To receive coverage banks had to pay a premium which was a percentage of their deposits. If they were not members of the Federal Reserve System they were also subject to examination by the F.D.I.C.¹⁹ Initially there was a limit on insured funds of $2500; however, it was doubled on the 1st of July 1934.

Friedman and Schwartz argued that the number of bank failures was greatly reduced in 1934 for two reasons. Firstly, because small depositors knew that if their bank failed they would be reimbursed, and therefore bank runs did not spread from one bank to another. Secondly, bad banks were not allowed to fail; instead they were merged with good banks or re-organized under new management, with the F.D.I.C. assuming any losses.

17. A provision of the Emergency Relief and Construction Act of July 1932 was interpreted by the Speaker of the House, John N. Garner, as requiring the R.F.C. to make public the names of the banks it had made loans to in the previous month. In fact the R.F.C. was only required to inform the President and Congress to which banks it had extended loans.

Wigmore [56] questions whether the banking reforms of the E.B.A.
and the 1933 Banking Act were sufficient to explain the subsequent calm in the financial sector. He points out that the F.D.I.C. only covered accounts up to $2500, leaving $25bn in large accounts uncovered (two-thirds of total deposits in insured banks), and that it did not become effective until 1934. The ability of the R.F.C. to provide capital is also seen as relatively unimportant by Wigmore; only $15m of authorizations had been made by the end of March 1933. Wigmore goes on to argue that much of the calm can be attributed to the gold embargo and foreign exchange controls that began in April 1933.

18. Friedman & Schwartz [24], pp. 436-37.
19. Until 1950 the F.D.I.C. required the permission of the Comptroller of the Currency or the Governor of the Federal Reserve System to examine member banks.

3.3.2 Monetary Reform

By Roosevelt's proclamation of March 6th 1933, banks were prohibited from paying out gold or dealing in foreign exchange during the banking holiday. The E.B.A. extended this and gave the President emergency powers over foreign exchange dealings, currency and gold movements, and banking transactions. On the 10th of March, Roosevelt issued an executive order which forbade gold payments by banking and non-banking institutions except under license and with the permission of the Secretary of the Treasury. However, despite this effective suspension of gold payments, the exchange rate between the dollar and other currencies remained around its previous gold standard level for over a month. Friedman and Schwartz attributed this to the fact that the suspension was probably regarded as part of the emergency measures and therefore considered temporary. They point to the fact that there was no official announcement of devaluation, that after a few weeks several licenses had been granted, and that the United States' technical gold position was strong enough to defend parity.
On the 5th of April 1933 public hoarding of gold was prohibited and all gold had to be delivered to the Federal Reserve Banks by the 1st of May.²⁰ Then, on the 20th of April, Roosevelt made it clear that it was the intent of his administration to allow the dollar to devalue against other currencies in order to raise prices. This became official with the Thomas Amendment to the Agricultural Adjustment Act on the 12th of May, which allowed Roosevelt to lower the gold content of the dollar to as little as 50% of its previous weight. By the end of July the dollar had devalued by almost 60%.²¹

On the 31st of January 1934 Roosevelt's administration fixed the price of gold at $35 per fine ounce under the authority of the Gold Reserve Act. Note that this did not represent a devaluation of the dollar; it had already fallen to this price by July of 1933. However, the Treasury had previously valued its gold holdings at $20.67 per ounce, and so in January 1934 it realized a paper profit from the devaluation of the dollar. It could print additional paper money in the form of gold certificates which, although they could not be held by the public, could be held by the Federal Reserve Banks. These certificates had a nominal value of nearly $3bn and did not require the acquisition of additional gold to maintain backing. Therefore they represented the potential for significant monetary expansion.

If these reforms did have a positive effect on the financial system, then this would be reflected in the time series of estimated probabilities over the current state of the financial system. If these reforms were credible, then, given that the early 1930s was a period of financial crisis, one would expect the crisis to end in 1933 or 1934. Wigmore's view that the gold embargoes and exchange controls were the more important reform would suggest a regime change in the Spring of 1933.

20. There were some exceptions for the arts and industry.
21. Wigmore [56], p. 750.
On the other hand, Friedman and Schwartz's view that the introduction of the F.D.I.C. was the more important reform would date the regime change as early 1934.

3.4 A Model of Time Series Subject to a Change in Regime

The model presented in this section is a straightforward variant of Hamilton's [27], [28], [29] Markov switching model. Two time series follow a single regime. Assume the vector x_t, consisting of the growth rate of real deposits (x_{1t}) and the yield spread (x_{2t}), follows the stochastic process

\[
\begin{bmatrix} x_{1t} - \mu_1(S_t) \\ x_{2t} - \mu_2(S_t) \end{bmatrix}
=
\begin{bmatrix} \rho_{11} & 0 \\ 0 & \rho_{21} \end{bmatrix}
\begin{bmatrix} x_{1,t-1} - \mu_1(S_{t-1}) \\ x_{2,t-1} - \mu_2(S_{t-1}) \end{bmatrix}
+
\begin{bmatrix} \rho_{12} & 0 \\ 0 & \rho_{22} \end{bmatrix}
\begin{bmatrix} x_{1,t-2} - \mu_1(S_{t-2}) \\ x_{2,t-2} - \mu_2(S_{t-2}) \end{bmatrix}
+
\begin{bmatrix} \sigma_1(S_t)\, v_{1t} \\ \sigma_2(S_t)\, v_{2t} \end{bmatrix}
\tag{3.1}
\]

where v_{it} ~ i.i.d. N(0,1) and 1 - \rho_{i1} z - \rho_{i2} z^2 \neq 0 for any z with |z| \le 1, for i = 1, 2. Here \mu_1 refers to the mean of the first series, the growth rate of real deposits, and \mu_2 to the mean of the yield spread. \rho_{11} and \rho_{21} are the first-order autoregressive parameters for the two series; \rho_{12} and \rho_{22} are the second-order autoregressive parameters.²² \sigma_1 and \sigma_2 are the standard deviations of the innovations v_{1t} and v_{2t}. This model differs from the usual AR(2) model in that it allows the means of the two series and the standard deviations of their innovations to vary with the underlying regime, S_t. I constrain this unobserved regime to be the same across the two series, which allows me to use observations on the two observed time series to draw inference about the common underlying regime. The regime is modelled as the outcome of an unobserved, discrete time, discrete state, first order Markov process. There are two states, the financial crisis state and the financial calm state. Let state zero be the state of financial calm and state one be the state of financial crisis.

22. The model described here contains two lagged dependent variables for each series. This matches the model chosen in the empirical section by sequentially testing downwards from a maximum lag order of four.
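The mechanics of equation (3.1) can be seen by simulating it. All parameter values below are hypothetical illustrations chosen for the sketch, not the estimates reported in section 3.5.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical parameter values for illustration only.
p, q = 0.9, 0.95                           # persistence of crisis / calm states
alpha = {0: np.array([2.0, 1.0]),          # state-0 (calm) means of (x1, x2)
         1: np.array([-3.0, 3.0])}         # state-1 (crisis) means
sigma = {0: np.array([1.0, 0.3]),          # state-dependent innovation std devs
         1: np.array([2.0, 0.8])}
rho1 = np.array([0.3, 0.5])                # (rho_11, rho_21)
rho2 = np.array([0.1, 0.2])                # (rho_12, rho_22)

T = 200
s = np.zeros(T, dtype=int)                 # S_t: 0 = calm, 1 = crisis
for t in range(1, T):
    stay = p if s[t - 1] == 1 else q
    s[t] = s[t - 1] if rng.random() < stay else 1 - s[t - 1]

x = np.zeros((T, 2))
for t in range(2, T):
    x[t] = (alpha[s[t]]
            + rho1 * (x[t - 1] - alpha[s[t - 1]])
            + rho2 * (x[t - 2] - alpha[s[t - 2]])
            + sigma[s[t]] * rng.standard_normal(2))

# Deposit growth (first series) should center near 2.0 in calm, -3.0 in crisis.
print(round(float(x[s == 0, 0].mean()), 1), round(float(x[s == 1, 0].mean()), 1))
```

The simulated data exhibit the two features the chapter exploits: the first series drops and the second rises in the crisis state, and the innovations are noisier there.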
In each period there is a fixed probability of switching between the states. These transition probabilities are

\[
\begin{aligned}
P(S_t = 1 \mid S_{t-1} = 1) &= p; & P(S_t = 0 \mid S_{t-1} = 1) &= 1 - p; \\
P(S_t = 1 \mid S_{t-1} = 0) &= 1 - q; & P(S_t = 0 \mid S_{t-1} = 0) &= q.
\end{aligned}
\tag{3.2}
\]

The means of the two observed series and the standard deviations of their innovations are given by

\[
\mu_i(S_t) = \alpha_{i0} + \alpha_{i1} S_t; \qquad \sigma_i(S_t) = \omega_{i0} + \omega_{i1} S_t; \qquad i = 1, 2. \tag{3.3}
\]

In the financial crisis state the growth rate of real deposits is lower and the yield spread is higher than in the financial calm state. This implies that \alpha_{11} < 0 and \alpha_{21} > 0.

As mentioned before, the econometrician does not directly observe the current regime, or indeed any past regimes. Instead he or she draws inferences about these regimes based on what has been observed, in this case the growth rate of real deposits and the yield spread. This is done using the non-linear filter described below. Throughout the five steps of the filter, the parameter values (p, q, \rho_{11}, \rho_{21}, \rho_{12}, \rho_{22}, \alpha_{10}, \alpha_{20}, \alpha_{11}, \alpha_{21}, \omega_{10}, \omega_{20}, \omega_{11}, \omega_{21}) are assumed by the econometrician to be known constants. Using these, he or she then calculates a time series of probabilities over the current regime.

The five steps of the filter are given below. In order to keep notation simpler, P(z) denotes P(Z = z) when z is a discrete valued variable and the density function f(z) when z is a continuous variable. Iteration t of the filter takes as input the conditional probability P(s_{t-1}, s_{t-2} \mid x_{t-1}, \ldots, x_0) and gives as output P(s_t, s_{t-1} \mid x_t, x_{t-1}, \ldots, x_0), where x_t is a vector containing x_{1t} and x_{2t}.

Step One: Using the Markov transition probabilities in equation (3.2), calculate

\[
P(s_t, s_{t-1}, s_{t-2} \mid x_{t-1}, \ldots, x_0) = P(s_t \mid s_{t-1}) \times P(s_{t-1}, s_{t-2} \mid x_{t-1}, \ldots, x_0), \tag{3.4}
\]

where the second term on the right-hand side is the output from the previous iteration of the filter.

Step Two: The joint conditional density of x_t and (S_t, S_{t-1}, S_{t-2}) is given by

\[
P(x_t, s_t, s_{t-1}, s_{t-2} \mid x_{t-1}, \ldots, x_0) = P(x_t \mid s_t, s_{t-1}, s_{t-2}, x_{t-1}, \ldots, x_0) \times P(s_t, s_{t-1}, s_{t-2} \mid x_{t-1}, \ldots, x_0), \tag{3.5}
\]

where

\[
\begin{aligned}
P(x_t \mid s_t, s_{t-1}, s_{t-2}, x_{t-1}, \ldots, x_0) = \frac{1}{2\pi\,\sigma_1(s_t)\,\sigma_2(s_t)} \exp\Bigl\{
&-\frac{1}{2\sigma_1(s_t)^2}\bigl[(x_{1t} - \mu_1(s_t)) - \rho_{11}(x_{1,t-1} - \mu_1(s_{t-1})) - \rho_{12}(x_{1,t-2} - \mu_1(s_{t-2}))\bigr]^2 \\
&-\frac{1}{2\sigma_2(s_t)^2}\bigl[(x_{2t} - \mu_2(s_t)) - \rho_{21}(x_{2,t-1} - \mu_2(s_{t-1})) - \rho_{22}(x_{2,t-2} - \mu_2(s_{t-2}))\bigr]^2
\Bigr\}. \tag{3.6}
\end{aligned}
\]

Step Three: The branch of the likelihood function for observation t is

\[
P(x_t \mid x_{t-1}, \ldots, x_0) = \sum_{s_t = 0}^{1} \sum_{s_{t-1} = 0}^{1} \sum_{s_{t-2} = 0}^{1} P(x_t, s_t, s_{t-1}, s_{t-2} \mid x_{t-1}, \ldots, x_0). \tag{3.7}
\]

Step Four: Steps two and three combined give

\[
P(s_t, s_{t-1}, s_{t-2} \mid x_t, x_{t-1}, \ldots, x_0) = \frac{P(x_t, s_t, s_{t-1}, s_{t-2} \mid x_{t-1}, \ldots, x_0)}{P(x_t \mid x_{t-1}, \ldots, x_0)}. \tag{3.8}
\]

Step Five: The output of iteration t of the filter is

\[
P(s_t, s_{t-1} \mid x_t, x_{t-1}, \ldots, x_0) = \sum_{s_{t-2} = 0}^{1} P(s_t, s_{t-1}, s_{t-2} \mid x_t, x_{t-1}, \ldots, x_0). \tag{3.9}
\]

As mentioned before, each iteration of the filter assumes that the parameters of the model are known constants. However, step three of the filter gives as a by-product the conditional likelihood of observation t for those constants. This can be summed over the T observations to give

\[
\ln P(x_T, x_{T-1}, \ldots, x_2 \mid x_1, x_0) = \sum_{t=2}^{T} \ln P(x_t \mid x_{t-1}, \ldots, x_0), \tag{3.10}
\]

which allows the econometrician to estimate the parameters of the model by maximum likelihood. Also note that if in step five of the filter the econometrician were to sum over s_{t-1} as well as s_{t-2}, he or she would calculate P(s_t \mid x_t, x_{t-1}, \ldots, x_0): the conditional probabilities over the current state given what has been observed up to, and including, the current period. Alternatively, summing over s_t and s_{t-2} yields P(s_{t-1} \mid x_t, x_{t-1}, \ldots, x_0): the conditional probabilities over the state in the previous period given what has been observed up to, and including, the current period.

If the disruptions to the financial markets during the Great Depression can be thought of as a shift to a regime of financial crisis, then this would be reflected in the estimated probabilities from steps four and five of the filter.
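The five steps translate directly into code. The sketch below tracks the joint probabilities over (S_t, S_{t-1}, S_{t-2}) as in equations (3.4)-(3.9); the parameter values and the white-noise input data in the example call are hypothetical, not the chapter's estimates or data.

```python
import numpy as np

def hamilton_filter(x, p, q, alpha, sigma, rho1, rho2):
    """Sketch of the five-step filter for the bivariate switching model.
    x: (T, 2) data; alpha[s], sigma[s]: per-state mean / std-dev pairs;
    rho1, rho2: AR(1) and AR(2) coefficient pairs. Returns P(S_t = 1 | x_t, ..., x_0)."""
    T = x.shape[0]
    trans = np.array([[q, 1.0 - q],            # trans[s_{t-1}, s_t], state 0 = calm
                      [1.0 - p, p]])
    mu, sd = np.asarray(alpha, float), np.asarray(sigma, float)
    joint = np.full((2, 2), 0.25)              # P(s_{t-1}, s_{t-2} | data), flat start
    out = []
    for t in range(2, T):
        # Step 1: P(s_t | s_{t-1}) times the previous iteration's output.
        pred = trans.T[:, :, None] * joint[None, :, :]   # index [s_t, s_{t-1}, s_{t-2}]
        # Step 2: multiply by the conditional density of x_t, equation (3.6).
        dens = np.empty((2, 2, 2))
        for st in range(2):
            for st1 in range(2):
                for st2 in range(2):
                    e = (x[t] - mu[st]
                         - rho1 * (x[t - 1] - mu[st1])
                         - rho2 * (x[t - 2] - mu[st2]))
                    dens[st, st1, st2] = np.prod(
                        np.exp(-0.5 * (e / sd[st]) ** 2) / (np.sqrt(2 * np.pi) * sd[st]))
        post = pred * dens
        post /= post.sum()                     # Steps 3-4: normalize by the likelihood branch
        joint = post.sum(axis=2)               # Step 5: sum out s_{t-2}
        out.append(joint[1].sum())             # P(S_t = 1 | x_t, ..., x_0)
    return np.array(out)

# Illustrative call on white-noise data with hypothetical parameter values.
rng = np.random.default_rng(3)
x = rng.standard_normal((60, 2))
probs = hamilton_filter(x, p=0.9, q=0.95,
                        alpha=[(2.0, 1.0), (-3.0, 3.0)],
                        sigma=[(1.0, 0.3), (2.0, 0.8)],
                        rho1=np.array([0.3, 0.5]), rho2=np.array([0.1, 0.2]))
print(len(probs))   # 58
```

Wrapping the sum of log-likelihood branches from steps three and four in a numerical optimizer yields the maximum likelihood estimates described around equation (3.10).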
More specifically, one would expect to see a probability of almost one assigned to the financial crisis regime during the early 1930s. Then, if the banking and monetary reforms did bring the financial crisis to an end, one would expect to see a probability of almost zero assigned to the financial crisis regime beginning in 1933 or early 1934, depending upon which were the crucial reforms. If Wigmore is correct, and leaving the gold standard was the more important reform, the regime change back to the state of financial calm should occur in the Spring of 1933. However, if Friedman and Schwartz are correct, and the more important reform was the introduction of the F.D.I.C., then it should occur in early 1934.

3.5 Estimation Results

In this section I present the results of the estimation of the model described in the previous section. To estimate the model I use quarterly data on the growth rate of real deposits and the yield spread from 1919 to 1941.²³ The results from maximum likelihood estimation of the Markov switching model and a single state model appear in table 3.1. The single state model is a special case of equation (3.1) in which there is only one state; in this model, the means of the two series and the standard deviations of their innovations are constant. A likelihood ratio test of the null hypothesis that the data were generated by the single state model against the alternative that they were generated by the Markov switching model yields a test statistic of 127.8. This suggests a strong rejection of the null hypothesis. However, under the null hypothesis of a one state model the parameters which describe the second state are unidentified. This means that the likelihood ratio test statistic does not have a standard χ² distribution.²⁴ I tackle this problem in two ways. In the first instance I perform a small Monte Carlo experiment to generate an empirical critical value for this test statistic.
In this experiment, I generate artificial data under the null hypothesis that the data were generated by the single state model. To do this I use the parameter estimates for the single state model in table 3.1 to create artificial time series for the growth rate of real deposits and the yield spread. I generate series of length 200 + T, where T is the sample size, and then discard the first 200 observations to minimize the influence of starting values. Next, I use the remaining artificial data to estimate both the single state model and the Markov switching model by maximum likelihood. Finally, I calculate the likelihood ratio test statistic for the test of the null hypothesis of the single state model against the alternative of the Markov switching model. I repeat this procedure 500 times. The percentiles of my simulated distribution are slightly above those of the asymptotic χ²(6) distribution. For example, the empirical 95% and 99% critical values are 17.21 and 27.34 respectively. However, these are still much lower than the test statistic calculated using the sample data. In fact, of the 500 artificial test statistics I generate by Monte Carlo simulation, not one is above my sample statistic of 127.8.

Footnote 23: To determine the lag order, I estimate a version of the model with one, two, three and four autoregressive parameters. Likelihood ratio tests do not allow the rejection of the model with three lags in favour of the one with four, or of the model with two lags in favour of the one with three. However, the one lag model can be rejected in favour of the two lag model.

Footnote 24: For the likelihood ratio test to have an asymptotic χ² distribution we require that the information matrix is non-singular. If one tries to fit an n state model when the true process has n − 1 states, this condition will not hold. See chapter four of this thesis for a more detailed discussion of this issue.

Footnote 25: The standard deviations of the innovations to these series are ω₁₀ = 3.107 and ω₂₀ = 0.641 respectively.
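The general shape of this procedure, simulating under the null and reading off empirical percentiles, can be sketched as follows. Because maximum likelihood estimation of the Markov switching model is involved, the example below substitutes a deliberately simple illustrative pair of models (a N(0, 1) null against a N(μ, σ²) alternative); the function names and the stand-in statistic are my own, not the thesis's code.

```python
import numpy as np

def empirical_critical_values(simulate_null, statistic, n_reps=500,
                              percentiles=(95, 99), seed=0):
    """Build the empirical null distribution of a test statistic by repeatedly
    simulating data under the null hypothesis, as in the Monte Carlo
    experiment described above, and return selected percentiles."""
    rng = np.random.default_rng(seed)
    stats = np.array([statistic(simulate_null(rng)) for _ in range(n_reps)])
    return {pct: np.percentile(stats, pct) for pct in percentiles}, stats

# Illustrative stand-ins for the single state / Markov switching pair:
def simulate_null(rng, n=87):
    # Artificial data generated under the (illustrative) null model
    return rng.normal(0.0, 1.0, n)

def lr_statistic(x):
    # LR test of N(0, 1) against N(mu, sigma^2); asymptotically chi^2(2)
    n = len(x)
    ll_null = -0.5 * np.sum(x ** 2) - 0.5 * n * np.log(2 * np.pi)
    mu, s2 = x.mean(), x.var()          # maximum likelihood estimates
    ll_alt = -0.5 * n * (np.log(2 * np.pi * s2) + 1.0)
    return 2.0 * (ll_alt - ll_null)
```

In the thesis's application, `simulate_null` would generate data from the estimated single state model and `statistic` would refit both models and return the pseudo likelihood ratio statistic; the bootstrap logic is otherwise identical.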
The second way I tackle the problem is to follow the suggestion of Hamilton [27], [30]. That is, I take the one state model and conduct a test on that model to see whether the two state model is required. I use a Lagrange Multiplier test in which the null hypothesis is the single state model with homoskedastic errors from table 3.1. This is tested against the alternative in which the variance of the residuals from that model depends on the lagged filter output. For comparison I also test this null hypothesis against two other alternatives: one in which the variance of the residuals depends on the lagged level of the series, and one in which it depends on the lagged squared residuals. The latter is a test for ARCH effects (see Engle [19]). In each case the test statistic is

T R² ~ χ²(1),   (3.11)

where T is the sample size and R² is the coefficient of determination from the estimation of

u_{it}² = a + δ z_t,  i = 1, 2,   (3.12)

where u_{it} are the residuals from the equation for variable i in the single state model and z_t is either the lagged level of the series, the lagged squared residuals or the lagged filter output. The test statistics in table 3.2 provide further evidence to support the use of a two state model. The null hypothesis of homoskedastic errors can be rejected in favour of the alternative where the residual variance depends on the lagged output from the filter described in the previous section. This applies to the residuals from the estimation of both series under the single state specification. For the other two alternatives, the lagged level and the ARCH effects, the null hypothesis can only be rejected for one of the series: the yield spread when the alternative is lagged level effects, and the growth rate of real deposits when the alternative is ARCH effects.
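The auxiliary regression behind (3.11) and (3.12) is simple enough to state in a few lines. The sketch below, with an illustrative function name of my own, regresses the squared residuals on a constant and the chosen z_t and returns T × R², to be compared with the χ²(1) critical value of 3.84.

```python
import numpy as np

def lm_heteroskedasticity_test(u, z):
    """LM test of homoskedasticity: regress the squared residuals u_t^2 on a
    constant and z_t, then return T * R^2, which is asymptotically chi^2(1)
    under the null of homoskedastic errors (equations (3.11)-(3.12))."""
    u2 = u ** 2
    X = np.column_stack([np.ones_like(z), z])
    beta, *_ = np.linalg.lstsq(X, u2, rcond=None)
    fitted = X @ beta
    ss_res = np.sum((u2 - fitted) ** 2)
    ss_tot = np.sum((u2 - u2.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    return len(u) * r2      # compare with the chi^2(1) 5% critical value, 3.84
```

Depending on which alternative is of interest, z_t is the lagged level of the series, the lagged squared residual (the ARCH case) or the lagged filter output.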
The estimated Markov transition probabilities give the probabilities of a change in regime from financial calm to financial crisis and vice-versa. If the state at time t was financial calm then there is a 97% chance that the state at time t + 1 was also financial calm. This implies a 3% chance that it was financial crisis. Therefore, the expected length of a period of financial calm is 33⅓ quarters. On the other hand, if the state at time t was financial crisis then there is an 86% chance that the state at time t + 1 was also financial crisis. This implies a 14% chance that it was financial calm. Therefore, the expected length of a financial crisis is just over 7 quarters. The means of the growth rate of real deposits and the yield spread when the economy was in the state of financial calm were 1.491 and 2.446 respectively. When the economy was in the state of financial crisis the mean growth rate of real deposits fell to −1.357 and the mean of the yield spread rose to 3.279. A test that the mean of a series is the same in the two regimes is just a simple test of the null hypothesis that the incremental parameter, a₁₁ or a₂₁, is equal to zero. This test produces a rejection of the null hypothesis at the 1% level for both series. The lower growth rate of real deposits and higher yield spread suggest a state of financial crisis.

Footnote 27: For the growth rate of real deposits to be negative in the financial crisis state it must be the case that a₁₀ + a₁₁ < 0.

One can pass the parameter estimates from table 3.1 through the filter of the previous section to give a time series of conditional probabilities that the economy was in each of the two states. Figure 3.2a shows the conditional probability that the current state was financial crisis at each point in time from the 3rd quarter of 1919 to the end of 1941. This probability is conditional on information up to and including the current period.
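The expected durations quoted above follow mechanically from the transition probabilities: the time spent in a regime with per-period stay probability p is geometrically distributed, so its mean is 1/(1 − p). A minimal check, using the chapter's point estimates:

```python
def expected_duration(stay_prob):
    """Expected number of periods spent in a regime whose per-period
    probability of remaining in that regime is stay_prob.  The duration is
    geometric, so E[duration] = 1 / (1 - stay_prob)."""
    return 1.0 / (1.0 - stay_prob)

calm = expected_duration(0.97)    # about 33.3 quarters of financial calm
crisis = expected_duration(0.86)  # about 7.1 quarters of financial crisis
```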
As mentioned earlier, at time t the econometrician is able to update his or her conditional probabilities over previous states. In this model, as there are two autoregressive parameters, the econometrician can revise his or her conditional probability of the state at time t − 1 and time t − 2. The revised conditional probabilities, P(s_{t−2} | x_t, x_{t−1}, …, x_0), are shown in figure 3.2b. These are probabilities that the state at time t − 2 was financial crisis, conditional on information up to and including period t. Appendix three gives the full time series of all the estimated probabilities. These diagrams show that a shift away from the financial calm state and into the financial crisis state occurred in late 1930. This coincides with the first banking crisis of October 1930. It is at this point that Friedman and Schwartz argued that a "...contagion of fear spread among depositors." It is interesting to note that there is no change in regime with the stock market crash in late 1929. Immediately after the stock market crash the economy does not switch to the state of financial crisis, but remains in the state of financial calm. This is consistent with the point emphasized by Mishkin [42] that a stock market crash alone does not necessarily imply a financial crisis. Moving on to the end of the financial crisis, if Wigmore is correct and the exchange controls of the Spring of 1933 were more important than the introduction of deposit

Footnote 27 (continued): The test of the null hypothesis that a₁₀ + a₁₁ ≥ 0 against the alternative that the sum of these two coefficients is negative gives a statistic of −1.263. This one-sided test statistic implies a p-value of 0.105. The variance of both series is higher in the financial crisis state and both of these differences are statistically significant.

Footnote 28: According to figure 3.2a, the conditional probability assigned to the financial crisis state was reasonably high in 1920.
Examination of figure 3.1a shows that this coincides with a decline in real deposits. However, the updated conditional probabilities in figure 3.2b show that it is unlikely that the economy was in the financial crisis state at this time.

Footnote 29: Friedman and Schwartz [24], p. 308.

insurance, then one would expect to see a sharp fall in the probability of the financial crisis state in either the second or third quarter of 1933. Figure 3.2a shows that the conditional probability of financial crisis only falls to 0.776 in the third quarter of 1933. In the fourth quarter this probability is back to one. This suggests that there was no change in regime immediately following the introduction of exchange controls in the Spring of 1933. This point is emphasized by the updated probabilities in figure 3.2b. The updated probability of the financial crisis state for the third quarter of 1933 only falls to 0.936. For the rest of 1933 this probability is one. This result is inconsistent with Wigmore's view that the exchange controls were the more important reform for bringing stability to the financial sector. The conditional probabilities suggest that the financial crisis ended in the first quarter of 1934. This is shown much more clearly by the updated conditional probabilities of figure 3.2b than by the conditional probabilities of figure 3.2a. The probabilities of figure 3.2a suggest that the probability of financial crisis was 0.812 in the first quarter of 1934, 0.260 in the second quarter and 0.631 in the third quarter. After this they quickly drop to almost zero. However, the updated probabilities of figure 3.2b show a probability of 0.459 that the economy was in a state of financial crisis in the first quarter of 1934. This probability falls to almost zero in the second quarter of 1934 and then remains at zero for the remainder of 1934, and until 1937.
This result is consistent with Friedman and Schwartz's view that the introduction of the FDIC was the crucial reform to the financial sector. During the first quarter of 1934 the introduction of the FDIC was not the only major policy change. On 31st January 1934, Roosevelt announced that the dollar price of gold was to be fixed at $35 per fine ounce. This returned the United States to a de facto gold standard. Although this did not represent any devaluation of the dollar from its floating level, the Treasury was able to revalue its gold holdings. Therefore, fixing the gold price of the dollar represented the potential for a significant expansion of high powered money. Friedman and Schwartz argue that, during the Great Depression, the Federal Reserve should have provided a significant expansion of high powered money to protect the banking sector. This might lead one to argue that monetary expansion in early 1934 was the most important factor in ending the financial crisis. However, the data on high powered money show that the increase in the first quarter of 1934 was only sufficient to restore the stock of high powered money to approximately its February 1933 level. Therefore, this was not the monetary expansion that the Federal Reserve failed to provide during the early 1930s. This suggests that it was not monetary expansion in early 1934 that ended the financial crisis. There is also a brief period of financial crisis in late 1937 and early 1938. Between 1936 and 1937 the Federal Reserve more than doubled the required reserve ratio for member banks, from 6.2% of total assets to 12.6% of total assets.

Footnote 30: As Wigmore [56] points out, the dollar had already devalued 60% by the end of July 1933.
If banks are not holding excess reserves, or do not wish their level of excess reserves to fall, then an increase in required reserves can lead to a fall in the amount of loans that they will make. As the banking system reduces its role as a credit intermediary, the cost of credit intermediation will rise. As a result, raising required reserves can lead to a fall in inside money. Therefore, it is possible that by raising reserve requirements the Federal Reserve caused the growth rate of real deposits to fall and the cost of credit intermediation to rise. That is, these series moved in a fashion similar to that in which they would move during a financial crisis. For this to happen it must have been the case that banks were either holding no excess reserves or that they wished to keep their level of excess reserves constant after the change in required reserves. In 1937 and 1938 the first of these scenarios did not hold. However, Friedman and Schwartz suggest that the second of these scenarios did hold: "When the rise in reserve requirements immobilized the accumulated cash, they (member banks) proceeded rather promptly to accumulate additional cash for liquidity purposes." (Friedman and Schwartz [24], p. 458.)

Footnote 31: In February 1933 the stock of high powered money was $8.807bn; in March 1934 it was $8.998bn (Friedman and Schwartz [24], appendix B). These are nominal figures. The real stock of high powered money does not regain its level of the first quarter of 1933 until 1935. See the figures for M0 and the GNP deflator in Balke and Gordon [2].

Footnote 32: Friedman and Schwartz [24], table 19.

3.6 Financial Crisis and Output Growth

In this section, I use the estimated conditional probability of financial crisis as an explanatory variable for output growth in the interwar period. I show that including the estimated conditional probability of financial crisis in a reduced form equation provides additional explanatory power for output.
This is explanatory power in addition to that provided by lagged values of output, current and lagged values of the money stock, current and lagged values of the growth rate of real deposits, and current and lagged values of the yield spread. I estimate the following reduced form equation by two stage least squares using quarterly data from 1920 to 1941:

Δy_t = a + Σ_{i=1}^{I} φ_i Δy_{t−i} + Σ_{j=0}^{J} γ_j Δm_{t−j} + Σ_{j=0}^{J} δ_{1j} x_{1,t−j} + Σ_{j=0}^{J} δ_{2j} x_{2,t−j}
     + λ₀ P(S_t = 1 | x_t, x_{t−1}, …, x_0) + λ₁ P(S_{t−1} = 1 | x_t, x_{t−1}, …, x_0)
     + λ₂ P(S_{t−2} = 1 | x_t, x_{t−1}, …, x_0) + e_t.   (3.13)

Here Δy_t is the change in the natural logarithm of real GNP, Δm_t is the change in the natural logarithm of the money stock, x_{1,t}, x_{2,t} and x_t are as described earlier, and e_t ~ i.i.d. N(0, σ²). I use two stage least squares estimation because the conditional probabilities on the right hand side of equation (3.13) are potentially correlated with the error term, e_t: the conditional probabilities are not predetermined at time t, since they depend on x_{1,t} and x_{2,t}. The results reported below are for i = j = 1. Under the null hypothesis that the state of the financial sector has no additional in-sample explanatory power for output growth, λ₀ = λ₁ = λ₂ = 0. The test statistic is distributed F(3, 77). I estimate this equation twice: once using M2 to represent the money stock and once using M0. The result is similar for both measures of the money stock. When M2 represents money the test statistic is 3.61, and when M0 represents money the test statistic is 2.79.

Footnote 33: Both the real GNP series and the money stock series are taken from Balke and Gordon [2]. As deposits form a significant proportion of M2 there is the possibility of collinearity between the growth rate of real deposits and the growth rate of M2. Therefore, when using M2 as the monetary aggregate I estimate equation (3.13) with and without the growth rate of real deposits. The results are unchanged.
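The two stage least squares procedure behind equation (3.13) can be sketched generically: each right-hand-side variable is first projected onto the instrument set, and the dependent variable is then regressed on the fitted values. The helper below is an illustrative sketch of my own (not the thesis's code); the exogenous regressors are assumed to be included among the instruments, as is standard.

```python
import numpy as np

def two_stage_least_squares(y, X, Z):
    """Generic 2SLS.  X holds all right-hand-side variables and Z holds the
    instruments together with the exogenous regressors; both are assumed to
    include a constant column."""
    # First stage: projection of X onto the column space of Z
    gamma, *_ = np.linalg.lstsq(Z, X, rcond=None)
    X_hat = Z @ gamma
    # Second stage: OLS of y on the fitted regressors
    beta, *_ = np.linalg.lstsq(X_hat, y, rcond=None)
    return beta
```

In the application above, X would contain the predetermined regressors of (3.13) together with the three conditional probabilities, and Z would contain the predetermined regressors plus lagged values of the probabilities.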
These statistics have p-values of 0.0170 and 0.0459 respectively, and so both imply a rejection of the null hypothesis at the 5% level. In both cases the sum of the point estimates, λ₀ + λ₁ + λ₂, is approximately −0.20, suggesting that the growth rate of output is lower during a financial crisis. Therefore, treating the financial crisis as a change in regime provides additional, in-sample, explanatory power for output fluctuations during the interwar period. Staiger and Stock [50] note that the two-stage least squares estimator may be biased when the instruments are weakly correlated with the regressors. Therefore, I perform a small Monte Carlo experiment to investigate whether my rejection of the null hypothesis, λ₀ = λ₁ = λ₂ = 0, is a result of a bias in the two-stage least squares estimator. First, I estimate equation (3.13) by ordinary least squares with the null hypothesis λ₀ = λ₁ = λ₂ = 0 imposed. Then I generate artificial data for output under the null hypothesis using these parameter estimates, the sample data on Δm_t, x_{1,t} and x_{2,t}, an initial value for Δy_t, and random innovations u_t ~ i.i.d. N(0, s²). Here s² is the estimate of the variance of the residuals from the OLS estimation of the restricted version of equation (3.13). Next, I use the artificial time series for output together with the data on Δm_t, x_{1,t}, x_{2,t} and the conditional probabilities to estimate equation (3.13) by two-stage least squares. Finally,

Footnote 34: In addition to the predetermined variables in equation (3.13), namely Δy_{t−1}, Δm_t, Δm_{t−1}, x_{1,t}, x_{1,t−1}, x_{2,t}, x_{2,t−1} and a constant, I use lagged values of each of the conditional probabilities as instruments. The F-statistic from each of the first stage regressions is above 25 and the R² is always above 0.70. I experiment with different instruments and the results remain unchanged.

Footnote 35: I also consider values of i and j up to and including i = j = 4. The results are unaffected by the choice of i and j.
Footnote 35 (continued): I also estimate equation (3.13) with only the current values of Δm_t, x_{1,t} and x_{2,t} and no lagged values of Δy_t; again the results are unaffected.

I calculate the F-statistic for λ₀ = λ₁ = λ₂ = 0. This procedure is repeated 10,000 times. Table 3.3 contains the type I error rates, the proportion of times the artificial test statistic is greater than the sample test statistic, and the 90th, 95th and 99th percentiles of the simulated distribution of the test statistic. The results from the Monte Carlo experiment show that my rejection of the null hypothesis is not the result of bias in the two-stage least squares estimator. Based on an asymptotic 5% critical value, the type I error rate is 0.0513 when money is represented by M2. When money is represented by M0 this error rate is 0.0488. These figures suggest that there is no size distortion in the test statistic. When money is M2, the proportion of times that the artificial test statistic is above the statistic from the sample data is 0.0174. When money is M0, this proportion is 0.0428. In both cases these figures are close to the p-values of the sample test statistics.

3.7 Conclusions

In this chapter, I present evidence to support the hypothesis that the financial crisis of the early 1930s in the United States should be modelled as a change in regime. I argue that the financial reforms contained in the New Deal played a role in ending the financial crisis. My results are inconsistent with the view of Wigmore, who argues that the financial crisis ended with the decision to abandon the gold standard in the Spring of 1933. My results suggest that the financial crisis did not end until early 1934. This is consistent with the view of Friedman and Schwartz, who argue that the introduction of the FDIC in January 1934 was crucial to ending the financial crisis.
Temin and Wigmore [53] argue that the New Deal represented a change in policy regime that played a crucial role in ending the Great Depression. I present evidence to show that the state of the financial sector contains significant explanatory power for output fluctuations during the interwar period. This is consistent with the view that the financial crisis played a role in the Great Depression and that the end of the financial crisis was important for the recovery from the Great Depression. It is also consistent with Bernanke's view that the financial crisis had an adverse effect on output above and beyond the monetary effects.

Table 3.1: Estimates of Markov Switching Model and Linear Model
(standard errors in parentheses)

                 Markov Switching        Linear Alternative
Parameter        Estimate                Estimate
p                 0.861 (0.097)
q                 0.971 (0.021)
a₁₀               1.491 (0.292)           0.935 (0.416)
a₁₁              −2.848 (1.034)
a₂₀               2.446 (0.152)           2.903 (0.461)
a₂₁               0.833 (0.123)
ρ₁₁               0.565 (0.104)           0.650 (0.103)
ρ₁₂              −0.300 (0.101)          −0.208 (0.103)
ρ₂₁               0.953 (0.112)           0.625 (0.102)
ρ₂₂              −0.078 (0.109)           0.271 (0.102)
ω₁₀               2.578 (0.213)           3.107 (0.232)
ω₁₁               1.482 (0.792)
ω₂₀               0.211 (0.018)           0.641 (0.048)
ω₂₁               1.216 (0.263)
log-likelihood  −253.44                 −317.34

Table 3.2: LM Tests for Conditional Heteroskedasticity of Residuals

                                                    Yield Spread   Growth of Real Deposits
z_t is the lagged level of the series                   15.16              0.29
z_t is the lagged squared residuals (ARCH)               1.069             4.04
z_t is the lagged filter output (regime changes)        13.43              5.97

5% critical value: 3.84; sample size: T = 87.
The test statistic is T × R² from the estimation of u_{it}² = a + δ z_t, where T is the sample size and z_t is as described above.
The test statistic is distributed χ²(1).

Table 3.3: Results from Monte Carlo Experiment on Bias in the 2SLS Estimator

(a) Money is M2
Type I Error Rate                                            0.0513
Proportion of Artificial Statistics Above Sample Statistic   0.0179
90th Percentile                                              2.117
95th Percentile                                              2.736
99th Percentile                                              4.064

(b) Money is M0
Type I Error Rate                                            0.0488
Proportion of Artificial Statistics Above Sample Statistic   0.0428
90th Percentile                                              2.152
95th Percentile                                              2.693
99th Percentile                                              3.919

These results are based on 10,000 replications of the experiment described in the text.

Figure 3.1: Real Deposits and Bond Yield Spread, 1919-1941. (Fig 3.1a: Quarterly Growth Rate of Real Deposits; Fig 3.1b: Baa Yield minus Yield on Long-Term Government Bonds.)

Figure 3.2: Conditional Probabilities, 1919-1941. (Fig 3.2a: Conditional Probability of Financial Crisis; Fig 3.2b: Updated Conditional Probability of Financial Crisis.)

Chapter 4: The Markov Switching Model and Linear Time Series

4.1 Introduction

The most common approach to studying the business cycle has been to consider fluctuations around a single long-run rate of growth. Recently, however, it has been suggested that the trend rate of growth itself is subject to stochastic shifts; see for example Hamilton [28] or Potter [48]. One interpretation of this latter approach is that the economy moves between states of high and low growth. However, testing between a model with a single long-run growth rate and one with two states is problematic. Since the parameters which define the second state are undefined under the null hypothesis that the economy has only one state, the likelihood ratio test statistic does not possess the standard χ² distribution.
Also, in addition to the standard problem of model mis-specification bias, there is a further source of bias that arises if one estimates a two state model when the true data generating process possesses only one state. The source of this bias is Jensen's inequality: the expectation of a non-linear function of a parameter vector, θ, is not equal to the function of the expectation of that same parameter vector. Under certain parameter restrictions a non-linear model, such as the Markov switching model, may be a reparameterization of a linear model. However, the maximum likelihood estimator will not be invariant to a reparameterization that involves a non-linear function.

Footnote 1: This problem also applies more generally. When testing a model with k states against an alternative with k + j states (j ≥ 1), the parameters which define the last j states will not be defined under the null.

In this chapter, I investigate these issues using Hamilton's Markov switching model for Canadian real per-capita GDP. I calculate the size and power properties of the pseudo likelihood ratio test statistic for the null hypothesis of a one state model against the alternative of a two state model, under the non-standard distribution. I find that there is no size distortion in the test statistic. At the 5% significance level, depending on sample size, the probability of type I error lies between 0.027 and 0.036. However, I find that power is poor in smaller samples. It is only when the sample size reaches 400 that the size adjusted type II error rate falls below 0.05. I also evaluate the degree of bias in parameter estimates due to model mis-specification and Jensen's inequality. This is done using out-of-sample predictive accuracy. I find that when the true process is a single state AR(1) process, there is no loss in predictive accuracy from using the two state Markov switching model.
That is, the predictions from the two state model are as accurate as those generated by a single state AR(1) model. I also compare the predictive accuracy of the single state and two state models when the true process has two states. I find that, for horizons of one year and under, there is a slight loss in predictive accuracy when using the single state model. These results are consistent with the assertion that there is little or no bias induced by using the two state model when the true process has only one state, or vice-versa. This is supported by examining the estimated long-run growth rates from the artificial samples. In each case the mean of the estimated long-run growth rate from the mis-specified model is as close to the true value as that from the correctly specified model. Estimating the mis-specified model does, however, induce bias in the estimated autoregressive parameter. This bias is more severe when the true model is the two state Markov switching model and the mis-specified model is the single state model.

4.2 A Two State and a One State Model of GDP

In this section I present a simple, two state version of Hamilton's [28] Markov switching model and a linear, single state counterpart, and discuss the problem of testing between the two models. I also consider under what circumstances the estimates from the Markov switching model imply a linear model, and compare the processes for producing out-of-sample predictions from both models. Consider first the two state model. A single time series, y, has the stochastic process

(y_t − μ(S_t)) = ρ (y_{t−1} − μ(S_{t−1})) + u_t,   (4.1)

where u_t ~ i.i.d. N(0, σ²) and |ρ| < 1. Note that this model varies from the usual, linear AR(1) model in that it allows for discrete shifts in the mean of the growth rate of GDP: the mean, μ(S_t), can vary with the underlying regime, S_t. This regime is not observed by the econometrician. I consider a model with two regimes, identified by S_t = 0 and S_t = 1.

Footnote 2: More complicated versions of the model can allow for shifts in the residual variance and the autoregressive parameters.
The mean of the series in these two regimes is given by

μ(S_t) = α + γ S_t.   (4.2)

Therefore, in regime zero the mean of y is α and in regime one the mean of y is α + γ. The regime is modelled as the outcome of an unobserved, discrete time, discrete state, first order Markov process. Each period there is a fixed probability of switching between the states. These transition probabilities are

P(S_t = 1 | S_{t−1} = 1) = p;  P(S_t = 0 | S_{t−1} = 1) = 1 − p;
P(S_t = 1 | S_{t−1} = 0) = 1 − q;  P(S_t = 0 | S_{t−1} = 0) = q.   (4.3)

As mentioned before, the econometrician does not directly observe the current regime, or indeed any past regimes. Instead he or she draws inferences about these regimes based on what has been observed, in this case {y_t}. This inference is based on a non-linear filter. At time t, the filter takes as its input the conditional probability

P(S_{t−1} = 1 | y_{t−1}, y_{t−2}, …, y_1)   (4.4)

and gives as output the conditional probability

P(S_t = 1 | y_t, y_{t−1}, …, y_1).   (4.5)

This filter is similar to that in the previous chapter and is described in appendix four. Step three of the filter gives as a by-product the conditional likelihood, which allows the econometrician to estimate the parameters of the model by maximum likelihood. To begin the filter, I assume the probability that the economy is in state one at time zero to be the unconditional probability

π = P(S_0 = 1) = (1 − q) / (2 − p − q).   (4.6)

Certain parameter restrictions in this two state model imply a one state model. If either of the transition probabilities p or q is equal to one then a single state model is implied. Recall p = P(S_t = 1 | S_{t−1} = 1). Therefore, p = 1 implies that the economy begins in state one (see equation (4.6)) and never switches to state zero. In this case the single long-run rate of growth of the economy is given by α + γ.
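The transition probabilities in (4.3) and the unconditional probability in (4.6) are easy to check numerically: (4.6) is the ergodic distribution of the two-state chain, i.e. a left eigenvector of the transition matrix. A minimal sketch, using the chapter-three point estimates purely as illustrative numbers:

```python
import numpy as np

def unconditional_probability(p, q):
    """Ergodic probability of state one for the two-state chain with
    P(S_t = 1 | S_{t-1} = 1) = p and P(S_t = 0 | S_{t-1} = 0) = q,
    as in equation (4.6)."""
    return (1.0 - q) / (2.0 - p - q)

# Illustrative values (the chapter-three estimates of p and q)
p, q = 0.861, 0.971
P = np.array([[q, 1.0 - q],
              [1.0 - p, p]])      # rows: today's state; columns: tomorrow's
pi1 = unconditional_probability(p, q)
stationary = np.array([1.0 - pi1, pi1])   # ergodic distribution of the chain
```

Note that p = 1 gives an unconditional probability of one for state one, matching the observation in the text that the economy then begins in state one and never leaves it.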
Alternatively, q = 1 implies that the economy begins in state zero (again see equation (4.6)) and never switches to state one. In this case the single long-run growth rate is given by α. Finally, if γ = 0 then there is no difference in the growth rate of GDP between the two states. A single state counterpart to the model described by equations (4.1) to (4.3) is given by

(y_t − α) = ρ (y_{t−1} − α) + v_t,   (4.7)

again with v_t ~ i.i.d. N(0, σ²) and |ρ| < 1. Here, instead of the trend rate of growth shifting stochastically with the underlying state, it is constant. In other words, there is only a single state for GDP growth. Note that the parameters γ, p and q are not defined for this model. Both of these models can be estimated by maximum likelihood. Let l(y, θ) be the log-likelihood for time series y and parameter vector θ. The vector y contains n elements, where n is the sample size, and the vector θ, k elements. For the Markov switching model k = 6 and for the linear model k = 3. Now define the matrix of contributions to the gradient as G(y, θ), where

G_{ti}(y, θ) ≡ ∂ l_t(y, θ) / ∂ θ_i.   (4.8)

As mentioned above, under the null hypothesis of a single state model, the parameters which describe the second state, γ, p and q, are not defined. Therefore, the elements of G(y, θ) which describe the contribution of those three parameters to the gradient will be zeros under the null hypothesis. Define the information matrix as

I(θ) = Σ_t I_t(θ),   (4.9)

where element i, j of the k × k matrix I_t(θ) is given by

(I_t(θ))_{ij} = E_θ ( G_{ti}(y, θ) G_{tj}(y, θ) ).   (4.10)

Here E_θ refers to the expectation calculated using the data generating process characterized by the particular parameter vector θ. Hamilton [30] points out that one of the regularity conditions for the likelihood ratio test statistic to have a standard, asymptotic χ² distribution is that the information matrix be non-singular.

Footnote 3: The notation in this paragraph follows that of Davidson and MacKinnon [16].
Clearly this condition will fail to hold under the null hypothesis of a one state model. Putting aside the problem of testing for the number of states, there remains the potential problem of model mis-specification. When estimating a model that incorporates changes in regime, there is the possibility that a change in regime may be used to explain 'outliers' from a single regime. For example, suppose the true data generating process for the economy has a single long-run rate of growth, but an econometrician estimates a two state Markov switching model using the data. By explaining an observed growth rate which is far from the single long-run growth rate as a change in regime, this non-linear model may be able to produce a significant improvement in fit over the linear model. As a result, the econometrician may reject the true, linear model in favour of the mis-specified non-linear alternative. If the econometrician then uses this non-linear model for out-of-sample prediction, he or she may generate less accurate predictions than would have been obtained from the true linear model. Therefore, one desirable feature of models that incorporate the possibility of a shift in the trend rate of growth is that, when estimated using observations generated from a linear time series, they return parameter estimates which are consistent with a single long-run rate of growth. As mentioned earlier, in Hamilton's Markov switching model this can happen in one of two ways. One way is that the estimates of the parameters which describe the movement between the different regimes imply that no such change in regime takes place. The other is that there is no difference in the growth rate of the economy between the two regimes. However, mis-specification bias is not the only potential source of bias from using a two state model when the true data generating process has only one state.

Footnote 4: The probability of switching to state i conditional on being in state j ≠ i is estimated as being zero.
One of the desirable properties of maximum likelihood estimators is their invariance to reparameterizations of the model. Let θ̂ be the unbiased maximum likelihood estimator of the parameter vector θ, that is, E θ̂ = θ. Now consider a reparameterization of the model such that λ = g(θ). Invariance to reparameterizations implies that λ̂ = g(θ̂). However, if g is a non-linear function, λ̂ will generally not be unbiased, since E g(θ̂) ≠ g(E θ̂) = λ. This is a consequence of Jensen's inequality.⁵ Therefore the choice of a non-linear reparameterization can lead to biased parameter estimates. As a result, even if the estimation of the two state model returns parameter estimates which imply a one state model, the parameters which describe the single state model may still be biased.

In what follows, I examine the degree of bias on the basis of out-of-sample predictive accuracy. Suppose that the extent of bias induced by fitting the two-state model when the true data generating process has only one state is small. In this case the out-of-sample predictive performance of the non-linear model should be similar to that of the linear model. The greater the extent of the bias, the poorer the predictive performance of the non-linear model. Similarly, if the degree of bias induced by fitting a one state model when the true data generating process has two states is small, the predictive accuracy of the linear model should be similar to that of the Markov switching model.

Prediction using the one state model is straightforward. Let E[y_{t+h} | y_t] be the h step ahead prediction of y conditional on the observation of y up to time t. Clearly, from equation (4.7),

E[y_{t+1} | y_t] = α + ρ (y_t - α),    (4.11)

and it is easily shown that in general

E[y_{t+h} | y_t] = α + ρ^h (y_t - α).    (4.12)

Therefore, given |ρ| < 1, lim_{h→∞} E[y_{t+h} | y_t] = α.

⁴The probability of switching to state i conditional on being in state j ≠ i is estimated as being zero.
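Equation (4.12) is straightforward to implement; the sketch below uses illustrative parameter values (they are assumptions, not estimates from the thesis):

```python
def ar1_forecast(y_t, alpha, rho, h):
    """h-step-ahead AR(1) prediction, equation (4.12):
    E[y_{t+h} | y_t] = alpha + rho**h * (y_t - alpha)."""
    return alpha + rho ** h * (y_t - alpha)

# Illustrative values only: with |rho| < 1 the forecast decays
# geometrically toward the long-run mean alpha.
alpha, rho, y_t = 2.2, 0.24, 6.0
preds = [ar1_forecast(y_t, alpha, rho, h) for h in (1, 2, 8, 32)]
```

With |ρ| < 1 the sequence of forecasts shrinks monotonically toward α as the horizon lengthens, which is the limit stated above.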
Prediction using the Markov switching model is less straightforward, as it also requires the prediction of future states. Using equations (4.1) and (4.2) we can see that the conditional one-step ahead prediction of y will now depend on the conditional probabilities assigned to the two states in that period. This probability is conditional on the observation of y until the current period. Therefore the one-step ahead prediction can be written as

E[y_{t+1} | y_t] = α + P(S_{t+1} = 1 | y_t, ..., y_0) × γ + ρ (y_t - α - P(S_t = 1 | y_t, ..., y_0) × γ).    (4.13)

As with the linear model, using forward iteration and the law of iterated projections, it can be shown that

E[y_{t+h} | y_t] = α + P(S_{t+h} = 1 | y_t, ..., y_0) × γ + ρ^h (y_t - α - P(S_t = 1 | y_t, ..., y_0) × γ).    (4.14)

Hamilton [28] shows that the prediction of future states conditional on current information is governed by the following equation:

P(S_{t+h} = 1 | y_t, ..., y_0) = π + λ^h (P(S_t = 1 | y_t, ..., y_0) - π),    (4.15)

where λ = p + q - 1. From equation (4.15) it can be seen that as the prediction horizon tends to infinity the conditional prediction of the future state tends to the unconditional probability π.⁶ Therefore, from equation (4.14), given that |ρ| < 1, lim_{h→∞} E[y_{t+h} | y_t] = α + πγ. That is, as the prediction horizon tends to infinity the prediction of y tends to a constant which is a weighted sum of the means in the two states.

⁵More specifically, E g(θ̂) > g(E θ̂) if g is strictly convex and E g(θ̂) < g(E θ̂) if g is strictly concave.

⁶Assuming that either p or q is strictly less than unity implies that λ < 1.

4.3 Estimates Using Canadian GDP

In this section I estimate the Markov switching model and the linear AR(1) model of the previous section using quarterly Canadian real per capita GDP (Y_t) from 1960 to 1995.⁷ I estimate the model using annualized growth rates of this series, which are shown in figure 4.1a.⁸ The mean growth rate over this period is 2.24% and the standard deviation is 4.05%.
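The prediction rules (4.14) and (4.15) derived above can be sketched as follows, assuming the transition convention p = P(S_t = 1 | S_{t-1} = 1) and q = P(S_t = 0 | S_{t-1} = 0); the numerical values are the panel (a) estimates of table 4.1, used here purely for illustration:

```python
def ms_forecast(y_t, p1_t, alpha, gamma, rho, p, q, h):
    """h-step-ahead Markov switching prediction, equations (4.14)-(4.15).
    p1_t is the filtered probability P(S_t = 1 | y_t, ..., y_0)."""
    pi = (1.0 - q) / (2.0 - p - q)        # unconditional P(S = 1)
    lam = p + q - 1.0
    p1_h = pi + lam ** h * (p1_t - pi)    # (4.15): predicted state probability
    return alpha + p1_h * gamma + rho ** h * (y_t - alpha - p1_t * gamma)  # (4.14)

# Panel (a) estimates of table 4.1, used only as an example.
alpha, gamma, rho, p, q = -2.255, 5.521, 0.019, 0.962, 0.866
f32 = ms_forecast(y_t=3.0, p1_t=0.9, alpha=alpha, gamma=gamma,
                  rho=rho, p=p, q=q, h=32)
```

As h grows, λ^h and ρ^h vanish, so the forecast settles at α + πγ, roughly 2.05 for these parameter values.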
Panel (a) of table 4.1 presents the maximum likelihood estimates of the parameters of the Markov switching model and panel (b) of the same table presents maximum likelihood estimates of the linear model. The estimates from the Markov switching model suggest two different states. State zero is a low growth state with an average annual growth rate of -2.26% and state one is a high growth state with an average annual growth rate of 3.07%. In comparison, the linear model implies a single long-run growth rate of 2.20%. The Markov switching probabilities imply that the expected duration of recessions is about 7.5 quarters and the expected length of booms is about 26 quarters. Figure 4.1b shows the time series of estimated probabilities that the economy was in the high growth state conditional on what has been observed to date, that is, P(S_t = 1 | y_t, ..., y_0). This diagram shows two main periods of low growth, the recessions of the early 1980s and the early 1990s.⁹

Note that the Markov switching model contains three more parameters than the linear model. It also gives an improvement in the log-likelihood of 5.15 relative to the linear model. Under standard conditions this would imply a likelihood ratio test statistic of 10.30. Using the standard χ²(3) distribution, this would lead to a rejection of the linear model in favor of the Markov switching model at the 5% significance level. However, as mentioned earlier, this test statistic does not possess a standard χ² distribution, and so the rejection of the linear model on this basis may be erroneous.

⁷An Augmented Dickey-Fuller (ADF) test of the null hypothesis that this series is I(2) against the alternative that it is I(1) with a trend gives a test statistic of -3.9879. Using Stock's [51] procedure for calculating the confidence interval of the largest autoregressive root gives an upper bound on this root of 0.952. The ADF test of the null that the series is I(1) against the alternative that it is I(0) with a trend gives a test statistic of -0.6521. This translates into a confidence interval for the largest autoregressive root of (1.002, 1.036). These figures imply that the series is I(1).

⁸More specifically, y_t = 400 × ln(Y_t / Y_{t-1}). The data are from CANSIM: series D10373 (real GDP) divided by series D1 (population).

⁹These results suggest less frequent periods of low growth for Canada than Hamilton's [28] results imply for the United States. The periods of low growth in the early 1960s and early 1980s that Hamilton identifies for the United States also appear in Canadian per capita real GDP. Between these dates Hamilton identifies two other periods of low growth for the United States, in the early and mid 1970s. However, for Canada, only a probability of about 0.5 is attached to the low growth state in the early 1970s, and even less in the mid 1970s.

4.4 Monte Carlo Experiment

In this section, I generate artificial data under both the null hypothesis that the data generating process has only one state and the alternative hypothesis that it follows a two state Markov switching process. In each case I then estimate both the one state and the two state model and compare their predictive accuracy. I also calculate the size and power properties of the pseudo likelihood ratio test under the non-standard conditions discussed earlier.

Generating data under the null hypothesis that there is only one state is straightforward. Using equation (4.7) and the parameter estimates in panel (b) of table 4.1, I generate T + H + 200 observations for y, where T is the sample size of interest and H is the maximum length of the prediction horizon. The first 200 observations are then discarded to minimize the influence of initial values. Here the artificial data for y_t are generated according to

y_t = α + ρ (y_{t-1} - α) + e_t,    (4.16)

where e_t ~ i.i.d. N(0, σ²). Generating data under the alternative hypothesis is more complex.
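The null-hypothesis generation scheme in equation (4.16), including the 200-observation burn-in, can be sketched as follows (a minimal implementation; the seed and the recursive loop are my own choices):

```python
import numpy as np

def simulate_null(alpha, rho, sigma, T, H, burn=200, seed=0):
    """Draw T + H observations from the linear AR(1) null, equation (4.16),
    after discarding `burn` initial draws to minimize the influence of
    starting values."""
    rng = np.random.default_rng(seed)
    n = T + H + burn
    y = np.empty(n)
    y[0] = alpha                      # start the recursion at the mean
    for t in range(1, n):
        y[t] = alpha + rho * (y[t - 1] - alpha) + rng.normal(0.0, sigma)
    return y[burn:]

# Panel (b) estimates of table 4.1; T = 100 with H = 32 leaves 132 draws.
y = simulate_null(alpha=2.204, rho=0.238, sigma=3.940, T=100, H=32)
```

The retained series has exactly T + H observations: the first T are used for estimation and the remaining H as prediction targets.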
In Hamilton's model the unobserved state of the economy is one of many factors that determine the dynamic process for output. As a result, it is possible that one could observe a high rate of growth for output even when the economy is in the low growth state.¹⁰ Here I use an approach based on Gallant et al. [26]. Using s_{t-1} and the estimates of the Markov transition probabilities, p and q, from panel (a) of table 4.1, I generate the conditional distribution for s_t. If s_{t-1} = 1 then proportion p of the elements of the conditional distribution for s_t will be ones and proportion 1 - p will be zeros. Similarly, if s_{t-1} = 0 then proportion 1 - q of the elements of the conditional distribution will be ones and proportion q will be zeros. Having generated this conditional distribution for s_t, I then randomly draw a value of s_t from that distribution. For the initial value of the state at time one I draw s_1 from an unconditional distribution with proportion π of its elements being ones and proportion 1 - π being zeros. A similar method is used to generate the distribution for y_t conditional on s_{t-1}. If s_{t-1} = 1 then proportion p of the elements of the conditional distribution for y_t are equal to

α + γ + ρ (y_{t-1} - α - γ) + w_it    (4.17)

and proportion 1 - p are equal to

α + ρ (y_{t-1} - α - γ) + w_it,    (4.18)

where α, γ and ρ are the parameter estimates in table 4.1, and the w_it are drawn from a mean-zero normal distribution with standard deviation σ_u, which is also reported in table 4.1. The subscript i refers to the individual element of the distribution.

¹⁰This is in contrast to the threshold models proposed by Tong [54], in which the current state is directly observable. Often, as is the case in self-exciting threshold autoregressive models, the state is a function of the level of output. In these models, if output was below the fixed threshold at time t then the economy was in the low growth state at time t.
Similarly, if s_{t-1} = 0 then proportion q are

α + ρ (y_{t-1} - α) + w_it    (4.19)

and proportion 1 - q are

α + γ + ρ (y_{t-1} - α) + w_it.    (4.20)

As before, having generated the conditional distribution for y_t, I then randomly draw the artificial observation y_t from that distribution. To generate y_1, I draw from an unconditional distribution with proportion π of its elements given by α + γ + w_i1 and proportion 1 - π given by α + w_i1. As with the generation of the linear time series, I generate T + H + 200 observations for both the state and y_t and discard the first 200. I use the first T observations of the remaining artificial time series for y_t to estimate the one state and the two state models described in section 4.2. I then calculate the pseudo likelihood ratio test statistic under the null and the alternative hypotheses. Finally, I use the two estimated models to predict y_{t+1}, y_{t+2}, ..., y_{t+H}. I consider sample sizes of 100, 200, 300 and 400. H is set at 32, thus representing predictions of up to eight years ahead. The experiment is repeated 1000 times for each sample size.

4.4.1 Distribution of Test Statistic

Table 4.2 gives the small sample properties of the pseudo likelihood ratio test statistic. The type I error rate, based on a nominal size of 0.05, is below 0.05 for all four sample sizes. This suggests that there is no size distortion. The next three rows of table 4.2 show the 90%, 95% and 99% critical values from the empirical distribution. Typically these figures lie between the critical values from the asymptotic χ²(2) and χ²(3) distributions. This is not surprising given the type I error rates. Using linear interpolation between the critical values for the sample sizes of 100 and 200 gives a 95% critical value of 6.848 and a 99% critical value of 10.16 for the sample size of 144. This is the length of the sample used in the previous section to estimate the two models using Canadian data on per capita real GDP.
The pseudo likelihood ratio test statistic calculated in the previous section is 10.30. Therefore, using these empirical distributions suggests that the linear AR(1) model for Canadian real, per capita GDP growth can be rejected in favor of the Markov switching alternative at the 1% level. These results are similar to the results of Lam [35] and Cecchetti, Lam and Mark [15]. Lam generates the empirical distribution of the pseudo likelihood ratio test statistic under the null hypothesis of an ARIMA model against the alternative of Hamilton's [28] original model. Based on 100 replications and using a sample size of 131 observations, he calculates a 90% critical value of 6.65 and a 95% critical value of 9.14. Cecchetti, Lam and Mark generate data under the null of a normal distribution and compare this with the alternative of a Markov switching model. They perform this experiment for United States annual data. They use sample sizes of 96 observations for consumption and 116 observations for both real dividends and GNP. They find that from 1000 replications 99.2% are below their pseudo likelihood ratio test statistic for consumption of 11.39. The 99% critical value for the asymptotic χ²(3) distribution is 11.34. The p-values reported for their test statistics for real dividends and GNP also suggest that there is no size distortion.¹¹

As discussed earlier, in addition to generating data under the null hypothesis of a linear AR(1), I also generate data under the alternative hypothesis of a two state Markov switching model. This allows me to calculate the power of the pseudo likelihood ratio test statistic. Size adjusted power is given in the final row of table 4.2. These figures show that the power of the pseudo likelihood ratio test statistic is poor in small samples.
In this experiment, using a 5% significance level, when the sample size is 100 the false null hypothesis is rejected in favor of the true alternative in only 40.6% of the simulations. When the sample size is 200 this figure rises to 78.6%. However, it is only when the sample size reaches 400 that size adjusted power reaches 95%. Note that these size adjusted power figures are calculated using the empirical 95% critical values from table 4.2. These are all below their asymptotic χ²(3) counterparts. When asymptotic critical values are used, power is even lower for each sample size. For a sample size of 100, power calculated using the asymptotic 95% critical value is 0.353. When the sample size rises to 200 this figure rises to 0.69; at T = 300 it is 0.841; and at T = 400 it is 0.939.

Most quarterly macroeconomic time series currently consist of somewhere between 100 and 200 observations. Based on these figures, with a sample size of 100, there is a much higher chance of failing to reject the single state model in favor of the two state model than of rejecting it, even when the two state model is the true model. When the sample size is 200 there is still a 30% chance of failing to reject the false null. These results show that, at the 5% significance level, in order for the type II error rate of the pseudo likelihood ratio test statistic to fall to 5% we require a sample size of 400. In other words, in order for this pseudo likelihood ratio test statistic to have the correct power against the single state null hypothesis, we require a quarterly time series of more than twice the length of that currently available for most macroeconomic aggregates.

¹¹These test statistics are 17.27 and 27.87 and have p-values of 0.000 and 0.001 respectively. Therefore, while this is consistent with no size distortion, these test statistics are too far above the asymptotic critical values to allow us to completely rule it out.
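The empirical critical values and size-adjusted power figures of this subsection are computed from the simulated test statistics in the way sketched below; the chi-squared draws merely stand in for the simulated pseudo likelihood ratio statistics, which the thesis obtains from the full Monte Carlo estimation:

```python
import numpy as np

def size_and_power(lr_null, lr_alt, asy_crit=7.81, level=0.95):
    """Type I error rate at the asymptotic chi2(3) critical value (7.81),
    empirical critical value, and size-adjusted power, computed from
    simulated pseudo-LR statistics under the null and the alternative."""
    lr_null = np.asarray(lr_null)
    type1 = float(np.mean(lr_null > asy_crit))       # rejection rate under the null
    emp_crit = float(np.quantile(lr_null, level))    # empirical critical value
    power = float(np.mean(np.asarray(lr_alt) > emp_crit))  # size-adjusted power
    return type1, emp_crit, power

# Hypothetical statistics for illustration only; the thesis uses 1000
# Monte Carlo replications of the full two-model estimation.
rng = np.random.default_rng(1)
lr_null = rng.chisquare(2.5, size=1000)
lr_alt = lr_null + 4.0
type1, emp_crit, power = size_and_power(lr_null, lr_alt)
```

Evaluating power at the empirical rather than the asymptotic critical value is what makes the figure "size adjusted": both models face a critical value with the same true rejection rate under the null.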
4.4.2 Predictive Accuracy

To compare the predictive accuracy of the single state and the two state models I use two statistics: the mean squared prediction error and the mean absolute prediction error. For each repetition of the experiment I obtain a series of predictions, E[y_{t+h} | y_t] for h = 1, 2, ..., 32, using the single state model. I also obtain a similar series of predictions for the two state model. The means of the predictions across the 1000 repetitions are shown in figure 4.2. Panel (a) contains the means of the predictions when the one state model is the true model and panel (b) the means when the two state model is the true model. In each case m_1 refers to the mean prediction from the one state model and m_2 the mean prediction from the two state model; s_1 and s_2 are the respective standard deviations. This figure shows that in each case the means of the predictions from the true model and the mis-specified model are almost identical.

Using these predictions and the artificial data, I calculate prediction errors at each horizon for both models. I then average across repetitions to calculate the mean squared prediction errors and mean absolute prediction errors. The mean squared prediction error at forecast horizon h for the model with k states is denoted MSPE_{k,h}. Similarly, the mean absolute prediction error at forecast horizon h for the model with k states is denoted MAPE_{k,h}. I report these statistics as the ratio of the statistic for the two state model to the statistic for the one state model. For a given forecast horizon, if the average of the prediction errors from the single state model is less than that from the Markov switching model, this ratio will be greater than one. If, on average, it is greater, this figure will be less than one. Table 4.3 contains these ratios for the case when the true model is the single state AR(1) model.
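The ratio statistics just defined can be computed as follows (a sketch with fabricated illustration data; the array names are mine):

```python
import numpy as np

def accuracy_ratios(pred1, pred2, actual):
    """Ratios MSPE_{2,h}/MSPE_{1,h} and MAPE_{2,h}/MAPE_{1,h} at each
    horizon h. Inputs are (replications x horizons) arrays of predictions
    from the one state model, the two state model, and the realized data."""
    e1 = pred1 - actual
    e2 = pred2 - actual
    mspe_ratio = np.mean(e2 ** 2, axis=0) / np.mean(e1 ** 2, axis=0)
    mape_ratio = np.mean(np.abs(e2), axis=0) / np.mean(np.abs(e1), axis=0)
    return mspe_ratio, mape_ratio

# Fabricated arrays purely to exercise the function: identical predictions
# from the two models give ratios of exactly one at every horizon.
rng = np.random.default_rng(2)
actual = rng.normal(2.2, 4.0, size=(1000, 32))
pred = np.full((1000, 32), 2.2)
mspe_r, mape_r = accuracy_ratios(pred, pred, actual)
```

Averaging over replications before taking the ratio, as here, matches the definition above: a ratio above one at horizon h favors the one state model, below one the two state model.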
Here I report the results for forecast horizons of h = 1, 2, 4, 8, 16 and 32 and for each sample size. Panel (a) contains the ratios of mean squared prediction errors and panel (b) the ratios of mean absolute prediction errors. These results show that when the true model has only one state there is almost no loss in predictive accuracy from using the over-parameterized two state Markov switching model. The ratio of MSPE_{2,h} to MSPE_{1,h} is above 1.01 on only one occasion, for T = 300 and h = 2. On every other occasion there is less than a 1% increase in mean squared prediction error when using the two state model. The results are similar when looking at mean absolute prediction error. As discussed earlier, there are two sources of bias that can arise from fitting a two state model when the true process has only a single state: model mis-specification and Jensen's inequality. The results in table 4.3 suggest that the degree of bias arising from these two sources is small.¹²

This is supported by looking at the long-run average growth rate and the autoregressive parameter. As the forecast horizon increases, the forecasts from both the one and the two state model converge to the long-run growth rate implied by their parameter estimates. For the one state model this is just equal to α; for the two state model it is equal to α + πγ. The true long-run growth rate when the data are generated under the null hypothesis of a single state for GDP growth is 2.204%. For the smallest sample size of 100, the average estimated long-run growth rate for the single state model is 2.198%. For the two state model this figure is 2.201%. The results are similar for the larger sample sizes.¹³ This suggests that there is very little bias resulting from fitting the over-parameterized two state model.
When the process has only one state, the true value of the autoregressive parameter is 0.238. There is some downward bias in the estimate of this parameter when using the two state model. When the sample size is 100 the average estimate is 0.185. This estimate rises with the sample size, to reach 0.212 when T = 400.

Table 4.4 contains the ratios of the mean squared prediction errors and the mean absolute prediction errors when the true model is the two state Markov switching model. This table shows that there is a slight loss in predictive accuracy if one uses a single state model when the true process has two states. However, this is confined to horizons of up to one year. Even then, the loss in predictive accuracy is never more than 3%. The average estimates of the long-run growth rates for the two models are both very close to the true value of 2.046. For the sample size of T = 100, the average estimates are 2.011 and 2.012 for the one state and the two state model respectively.¹⁴ Both of these results suggest that there is little bias induced through fitting a one state model when the true process has two states. As is the case when the true process has one state, estimating the mis-specified model does induce bias in the autoregressive parameter. In this case, there is a large degree of upward bias in the parameter estimate. The true parameter value is 0.019.

¹²However, these results are also consistent with the interpretation that the degree of bias arising from these two sources is large, and the two sources are offsetting. While there is no reason to suppose that this is the case, it cannot be determined whether or not this is so using the MSPE and MAPE.

¹³For T = 200 the average estimated long-run growth rate for the single state model is 2.207 and for the two state model it is 2.208. For T = 300, these figures are 2.211 and 2.210; for T = 400 they are 2.210 and 2.210.
The average estimates from the single state model are 0.166, 0.180, 0.180 and 0.186 for sample sizes of T = 100, 200, 300 and 400 respectively.

4.5 Conclusions

The likelihood ratio test statistic for the test of a linear AR(1) model against the alternative of a Markov switching model does not have the standard χ² distribution. This is because some of the parameters of the Markov switching model are not defined under the null hypothesis of a linear model. In this chapter I examine the size and power properties of the pseudo likelihood ratio test under these non-standard conditions. I find no size distortions arising from using this test statistic under non-standard conditions. However, I find that the power properties of the test statistic are poor in small samples. Size adjusted power does not reach 95% until the sample size reaches 400. Clearly 400 is larger than the number of quarterly observations that is typically available for most macroeconomic aggregates. Therefore, the econometrician who fails to reject the null of a linear model in favor of the Markov switching model may do so not because the null model is true, but because he or she has insufficient observations for the test statistic to have the correct power.

I also investigate the bias resulting from estimating a two state model when the true model has only one state, and from estimating a single state model when the true model has two states. In both cases I find that there is little loss of predictive accuracy from using the mis-specified model. In the former case, this suggests that the degree of bias resulting from using an over-parameterized model and the non-linear reparameterization is small.

¹⁴When the sample size is T = 200 these figures are 1.975 and 1.975; for T = 300 they are 1.980 and 1.980; and for T = 400, 1.980 and 1.979.
The latter case suggests, though to a slightly lesser extent, that the degree of bias from using the under-parameterized model is also small.

Table 4.1: Markov Switching and AR(1) Model

(a) Markov Switching Model
Parameter        Estimate   Standard Error
α                -2.255     0.952
γ                 5.521     0.907
ρ                 0.019     0.092
p                 0.962     0.025
q                 0.866     0.084
σ_u               3.412     0.227
log-likelihood  -393.40

(b) Linear AR(1) Model
Parameter        Estimate   Standard Error
α                 2.204     0.431
ρ                 0.238     0.081
σ_v               3.940     0.232
log-likelihood  -398.55

Both of these models are estimated using quarterly data on the annualized growth rate of Canadian per-capita, real GDP. The beginning of the sample period is 1960Q1 and the end is 1995Q4. This gives 144 observations.

Table 4.2: Small Sample Properties of the Pseudo Likelihood Ratio Test Statistic

                              T = 100   T = 200   T = 300   T = 400
Type I Error Rate             0.036     0.035     0.029     0.027
Empirical 90% Critical Value  5.0       4.4       4.8       4.8
Empirical 95% Critical Value  7.2       6.4       6.6       6.4
Empirical 99% Critical Value  10.6      9.6       10.8      9.6
Size Adjusted Power           0.406     0.786     0.887     0.960

The error rates are based on the 95% critical value from the χ²(3) distribution. This asymptotic critical value is 7.81. The 90% and 99% asymptotic critical values are 6.25 and 11.34 respectively. Under the null hypothesis the data are generated by a linear AR(1) model. Under the alternative the data are generated by the Markov switching model with one autoregressive parameter. These error rates and empirical critical values are based on 1000 replications of the Monte Carlo study described in the text.
Table 4.3: Relative Forecasting Performance: True Model has 1 State

(a) MSPE_{2,h} / MSPE_{1,h}
h     T = 100   T = 200   T = 300   T = 400
1     1.009     1.001     0.998     1.001
2     1.005     1.003     1.012     1.000
4     1.004     1.002     1.004     1.002
8     1.006     1.001     0.998     0.999
16    0.998     1.002     1.002     1.000
32    1.000     0.999     1.001     1.001

(b) MAPE_{2,h} / MAPE_{1,h}
h     T = 100   T = 200   T = 300   T = 400
1     1.000     1.000     1.000     1.000
2     1.006     1.000     1.003     1.000
4     1.003     1.000     1.003     1.003
8     1.003     1.000     1.000     1.000
16    1.000     1.000     1.003     1.000
32    1.000     1.000     1.000     1.000

Relative forecast performance is measured as the ratio of the mean squared prediction error from the two state Markov switching model at forecast horizon h, denoted MSPE_{2,h}, to the mean squared prediction error from the single state, linear AR(1) model, denoted MSPE_{1,h}. This ratio is also calculated using the mean absolute prediction error.

Table 4.4: Relative Forecasting Performance: True Model has 2 States

(a) MSPE_{2,h} / MSPE_{1,h}
h     T = 100   T = 200   T = 300   T = 400
1     0.970     0.977     0.973     0.979
2     0.973     0.975     0.975     0.983
4     1.011     0.977     0.988     0.968
8     1.006     1.014     0.997     0.992
16    1.005     1.002     1.002     1.002
32    1.000     0.998     1.001     1.001

(b) MAPE_{2,h} / MAPE_{1,h}
h     T = 100   T = 200   T = 300   T = 400
1     0.976     0.991     0.981     0.981
2     0.984     0.986     0.988     0.988
4     1.009     0.991     0.994     0.981
8     1.009     1.003     1.000     0.994
16    1.003     1.003     1.000     1.003
32    0.997     1.000     1.000     1.000

See notes at the foot of table 4.3.

[Figure 4.1: Canadian Real Per-Capita GDP 1960-1995. Fig 4.1a: Annualized Growth Rate; Fig 4.1b: Conditional Probability of High Growth State.]

[Figure 4.2: Mean and Standard Deviations of Forecasts (T = 100). Fig 4.2a: One State Model is the True Model.]
[Fig 4.2b: Two State Model is the True Model. In each panel, m1 and m2 denote the mean forecasts from the one state and two state models, with bands m1 ± s1 and m2 ± s2.]

Bibliography

Donald W.K. Andrews. Power in econometric applications. Econometrica, 57:1059-1090, 1989.
Nathan S. Balke and Robert J. Gordon. Appendix B: Historical data. In Robert J. Gordon, editor, The American Business Cycle. University of Chicago Press, 1986.
Ben S. Bernanke. Nonmonetary effects of the financial crisis in the propagation of the Great Depression. American Economic Review, 73:257-276, 1983.
Board of Governors of the Federal Reserve. Banking and Monetary Statistics. National Capital Press, 1943.
Board of Governors of the Federal Reserve System. Federal Reserve Bulletin. National Capital Press, 1937.
Michael D. Bordo, Claudia Goldin, and Eugene N. White. The defining moment hypothesis: The editors' introduction. In Michael D. Bordo, Claudia Goldin, and Eugene N. White, editors, The Defining Moment: The Great Depression and the American Economy in the Twentieth Century. University of Chicago Press, 1998.
John F. Boschen and Christopher M. Otrok. Long-run neutrality in an ARIMA framework: Comment. American Economic Review, 84:1470-1473, 1994.
John M. Boschen and Leonard O. Mills. Tests of long-run neutrality using permanent monetary and real shocks. Journal of Monetary Economics, 35:25-44, 1995.
James L. Butkiewicz. The impact of a lender of last resort during the Great Depression: The case of the Reconstruction Finance Corporation. Explorations in Economic History, 32:197-216, 1995.
John Y. Campbell. Why long horizons: A study of power against persistent alternatives. Technical Working Paper 142, National Bureau of Economic Research, 1993.
John Y. Campbell and Pierre Perron.
Pitfalls and opportunities: What macroeconomists should know about unit roots. National Bureau of Economic Research Macroeconomics Annual, pages 141-219, 1991.
John Y. Campbell and Robert J. Shiller. Yield spreads and interest rate movements. Review of Economic Studies, 58:495-514, 1991.
Forrest Capie and Michael Collins. The Interwar British Economy: A Statistical Abstract. Manchester University Press, 1983.
Forrest Capie and Alan Webber. A Monetary History of the United Kingdom, 1870-1982. Volume 1: Data, Sources and Methods. George Allen and Unwin Ltd, 1985.
Stephen G. Cecchetti, Pok-Sang Lam, and Nelson C. Mark. Mean reversion in equilibrium asset prices. American Economic Review, 80:398-418, 1990.
Russell Davidson and James G. MacKinnon. Estimation and Inference in Econometrics. Oxford University Press, 1993.
Economic Report of the President. Appendix B: Statistical tables relating to output, employment and production. Technical report, 1997.
Graham Elliott, Thomas J. Rothenberg, and James H. Stock. Efficient tests for an autoregressive unit root. Econometrica, 64:813-836, 1996.
Robert F. Engle. Autoregressive conditional heteroscedasticity with estimates of the variance of United Kingdom inflation. Econometrica, 50:987-1008, 1982.
Eugene F. Fama and Kenneth R. French. Dividend yields and expected stock returns. Journal of Financial Economics, 22:3-25, 1988.
Eugene F. Fama and Kenneth R. French. Permanent and temporary components of stock prices. Journal of Political Economy, 96:246-273, 1988.
Mark E. Fisher and John J. Seater. Long-run neutrality and superneutrality in an ARIMA framework. American Economic Review, 83:402-415, 1993.
Benjamin M. Friedman and Kenneth N. Kuttner. Money, income, prices and interest rates. American Economic Review, 82:472-492, 1992.
Milton Friedman and Anna J. Schwartz. A Monetary History of the United States. Princeton University Press, 1963.
Milton Friedman and Anna J. Schwartz.
Monetary Trends in the United States and the United Kingdom. University of Chicago Press, 1982.
A. Ronald Gallant, Peter E. Rossi, and George Tauchen. Nonlinear dynamic structures. Econometrica, 61:871-907, 1993.
James D. Hamilton. Rational expectations econometric analysis of changes in regime: An investigation of the term structure of interest rates. Journal of Economic Dynamics and Control, 12:385-423, 1988.
James D. Hamilton. A new approach to the economic analysis of nonstationary time series and the business cycle. Econometrica, 57:357-384, 1989.
James D. Hamilton. Analysis of time series subject to a change in regime. Journal of Econometrics, 45:39-70, 1990.
James D. Hamilton. Time Series Analysis. Princeton University Press, 1994.
Lars Peter Hansen and Robert J. Hodrick. Forward exchange rates as optimal predictors of future spot rates: An econometric analysis. Journal of Political Economy, 88:829-853, 1980.
Alfred A. Haug and Robert F. Lucas. Long-run neutrality and superneutrality in an ARIMA framework: Comment. American Economic Review, 87:756-759, 1997.
Lutz Kilian. Exchange rates and monetary fundamentals: What do we learn from long-horizon regressions? Mimeo, University of Michigan, 1997.
Robert King and Mark W. Watson. Testing long-run neutrality. Working Paper 4156, National Bureau of Economic Research, 1992.
Pok-Sang Lam. The Hamilton model with a general autoregressive component. Journal of Monetary Economics, 26:409-432, 1990.
Robert E. Lucas Jr. Econometric testing of the natural rate hypothesis. In Otto Eckstein, editor, The Economics of Price Determination. Board of Governors of the Federal Reserve System, 1972.
Robert E. Lucas Jr. Econometric policy evaluation: A critique. Carnegie-Rochester Conference Series on Public Policy, 1:19-46, 1976.
Helmut Lutkepohl. Non-causality due to omitted variables. Journal of Econometrics, 19:367-378, 1982.
Joseph R. Mason.
The Determinants and Effects of Reconstruction Finance Corporation Assistance to Banks During the Great Depression. PhD thesis, University of Illinois at Urbana-Champaign, 1996.
Frederic S. Mishkin. The information in the longer maturity term structure about future inflation. Quarterly Journal of Economics, 105:815-828, 1990.
Frederic S. Mishkin. What does the term structure tell us about future inflation? Journal of Monetary Economics, 25:77-95, 1990.
Frederic S. Mishkin. Asymmetric information and financial crises: A historical perspective. In R. Glenn Hubbard, editor, Financial Markets and Financial Crises. University of Chicago Press, 1991.
Brian R. Mitchell. International Historical Statistics: Europe. Stockton Press, 1992.
Moody's Investor Service. Moody's investor manual, 1919.
Charles R. Nelson and Charles I. Plosser. Trends and random walks in macroeconomic time series: Some evidence and implications. Journal of Monetary Economics, 10:163-190, 1982.
Whitney K. Newey and Kenneth D. West. A simple, positive semi-definite, heteroskedasticity and autocorrelation consistent covariance matrix. Econometrica, 55:703-708, 1987.
Richard G. Pierse and Andy J. Snell. Temporal aggregation and the power of tests for a unit root. Journal of Econometrics, 65:333-345, 1995.
Simon M. Potter. A nonlinear approach to US GNP. Journal of Applied Econometrics, 10:109-125, 1995.
Thomas J. Sargent. A note on the 'accelerationist' controversy. Journal of Money, Credit and Banking, 4:721-725, 1971.
Douglas Staiger and James H. Stock. Instrumental variables regression with weak instruments. Econometrica, 65:557-586, 1997.
James H. Stock. Confidence intervals for the largest autoregressive root in US macroeconomic time series. Journal of Monetary Economics, 28:435-459, 1991.
Peter Temin. Did Monetary Forces Cause the Great Depression? W.W. Norton Co., 1976.
Peter Temin and Barrie A. Wigmore. The end of one big deflation. Explorations in Economic History, 27:483-502, 1990.
Howell Tong. Non-linear Time Series: A Dynamical Systems Approach. Oxford University Press, 1990.
Elmus Wicker. The Banking Panics of the Great Depression. Cambridge University Press, 1996.
Barrie A. Wigmore. Was the bank holiday of 1933 caused by a run on the dollar? Journal of Economic History, 47:739-755, 1987.

Appendix One - Data for Chapter Two

This appendix describes the data used in chapter two and the sources from which it was compiled.

(1) United States

The monetary base is currency held by the public plus bank vault cash, plus, after October 1914, member bank deposits and non-member clearing accounts at the Federal Reserve. The money stock (M2) is currency held by the public plus adjusted deposits at all commercial banks. Both are measured in billions of current dollars. Real income is measured by real GNP in billions of 1972 dollars. The source for the monetary base, the money stock and real income from 1869 to 1983 is Balke and Gordon [2]. For all series, the source from 1984 to 1995 is the Economic Report of the President [17].

(2) United Kingdom

The monetary base is currency held by the public plus bank reserves. The money supply (M3) is non-bank holdings of notes and coins plus the deposits of all residents. Both are measured in millions of current pounds sterling. Output is measured by real GDP in millions of 1958 pounds sterling. For the UK the source of both monetary series is Capie and Webber [14]. The GDP data is from Mitchell [43].

Appendix 2: Estimation of VAR Under Null and Alternative

Assuming that both y_t and m_t are stationary in first differences, the VAR of equation (2.5) can be written as:

a(L)Δm_t = b(L)Δy_t + u_t
d(L)Δy_t = c(L)Δm_t + v_t.    (A2.1)

For simplicity assume a first order VAR, so that equation (A2.1) becomes:

Δm_t = b_0 Δy_t + b_1 Δy_{t-1} + a_1 Δm_{t-1} + u_t
Δy_t = c_0 Δm_t + c_1 Δm_{t-1} + d_1 Δy_{t-1} + v_t.
(A2.2)

King and Watson show that the top line of this system can be rewritten as

Δm_t − γΔy_t = a_1 (Δm_{t-1} − γΔy_{t-1}) + b Δ²y_t + u_t    (A2.3)

where γ = b(1)/(1 − a_1) = b(1)/a(1) is the long-run multiplier. Under Fisher and Seater's identification scheme b(1) = 0 and therefore γ = 0. Thus equation (A2.3) becomes

Δm_t = a_1 Δm_{t-1} + b Δ²y_t + u_t.    (A2.4)

Potentially there is correlation between Δ²y_t and u_t. Therefore I estimate this equation by instrumental variables, with the instruments being {Δm_{t-1}, Δy_{t-1}}. Under the identification scheme that cov(u_t, v_t) = 0 the estimated residuals from this equation are a valid instrument in the estimation of the second equation in (A2.1). The second equation of (A2.1) is then estimated by instrumental variables with the instruments being {Δm_{t-1}, Δy_{t-1}, û_t}. Using these parameter estimates I generate the data under the alternative hypothesis.

To generate data under the null hypothesis, I estimate the money equation in exactly the same way. King and Watson show that the second line of equation (A2.1) can be written as

Δy_t − δΔm_t = d_1 (Δy_{t-1} − δΔm_{t-1}) + b Δ²m_t + v_t    (A2.5)

where δ is the long-run multiplier of income with respect to money. The long-run neutrality restriction is that δ = 0. Thus, this equation becomes

Δy_t = d_1 Δy_{t-1} + b Δ²m_t + v_t.    (A2.6)

I estimate this equation by instrumental variables, with instruments {Δy_{t-1}, Δm_{t-1}, û_t}. Using these parameter estimates I generate data under the null hypothesis.
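The two-step instrumental variables procedure above can be sketched in NumPy. This is a minimal illustration, not the thesis code: the two growth series are simulated stand-ins for the Balke and Gordon data, and `tsls` is a generic two-stage least squares helper written for this example (exactly identified IV falls out as a special case).

```python
import numpy as np

def tsls(y, X, Z):
    """Two-stage least squares: project the regressors X on the
    instruments Z, then regress y on the fitted values. Works for
    both the exactly identified and over-identified cases."""
    X_hat = Z @ np.linalg.lstsq(Z, X, rcond=None)[0]   # first stage
    beta = np.linalg.lstsq(X_hat, y, rcond=None)[0]    # second stage
    resid = y - X @ beta                               # structural residuals
    return beta, resid

rng = np.random.default_rng(0)
T = 200
dm = rng.normal(size=T)             # stand-in for money growth Δm_t
dy = rng.normal(size=T)             # stand-in for output growth Δy_t
d2m, d2y = np.diff(dm), np.diff(dy) # second differences Δ²m_t, Δ²y_t

# Money equation (A2.4): Δm_t = a1 Δm_{t-1} + b Δ²y_t + u_t,
# instrumented by {Δm_{t-1}, Δy_{t-1}} since Δ²y_t may be correlated with u_t.
y1 = dm[1:]
X1 = np.column_stack([dm[:-1], d2y])
Z1 = np.column_stack([dm[:-1], dy[:-1]])
beta_m, u_hat = tsls(y1, X1, Z1)

# Output equation (A2.6): Δy_t = d1 Δy_{t-1} + b Δ²m_t + v_t,
# with the money-equation residuals û_t added to the instrument set.
y2 = dy[1:]
X2 = np.column_stack([dy[:-1], d2m])
Z2 = np.column_stack([dy[:-1], dm[:-1], u_hat])
beta_y, v_hat = tsls(y2, X2, Z2)
```

The estimated coefficient vectors `beta_m` and `beta_y` would then parameterise the data-generating processes for the Monte Carlo draws under the alternative and null hypotheses respectively.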
Appendix 3: Estimated Conditional Probabilities 1919Q3 - 1941Q4

Conditional Probabilities of Financial Crisis

Quarter   P(s_t = 1 | x_t, ..., x_0)   P(s_{t-1} = 1 | x_t, ..., x_0)   P(s_{t-2} = 1 | x_t, ..., x_0)
1919Q3    0.011    0.003    0.000
1919Q4    0.067    0.008    0.002
1920Q1    0.786    0.164    0.029
1920Q2    0.380    0.368    0.080
1920Q3    0.041    0.036    0.035
1920Q4    0.252    0.113    0.085
1921Q1    0.109    0.088    0.033
1921Q2    0.036    0.018    0.015
1921Q3    0.008    0.003    0.001
1921Q4    0.009    0.001    0.000
1922Q1    0.005    0.001    0.000
1922Q2    0.019    0.002    0.000
1922Q3    0.008    0.002    0.000
1922Q4    0.005    0.001    0.000
1923Q1    0.028    0.001    0.000
1923Q2    0.014    0.004    0.000
1923Q3    0.033    0.004    0.001
1923Q4    0.011    0.003    0.000
1924Q1    0.011    0.001    0.000
1924Q2    0.006    0.001    0.000
1924Q3    0.005    0.000    0.000
1924Q4    0.009    0.001    0.000
1925Q1    0.013    0.001    0.000
1925Q2    0.005    0.001    0.000
1925Q3    0.008    0.000    0.000
1925Q4    0.006    0.001    0.000
1926Q1    0.006    0.000    0.000
1926Q2    0.010    0.001    0.000
1926Q3    0.010    0.001    0.000
1926Q4    0.011    0.001    0.000
1927Q1    0.005    0.001    0.000
1927Q2    0.007    0.000    0.000
1927Q3    0.014    0.001    0.000
1927Q4    0.007    0.001    0.000
1928Q1    0.005    0.000    0.000
1928Q2    0.018    0.001    0.000
1928Q3    0.021    0.003    0.000
1928Q4    0.006    0.002    0.000
1929Q1    0.052    0.002    0.001
1929Q2    0.016    0.006    0.000
1929Q3    0.015    0.002    0.001
1929Q4    0.024    0.003    0.000
1930Q1    0.032    0.006    0.001
1930Q2    0.009    0.003    0.000
1930Q3    0.006    0.001    0.000
1930Q4    0.993    0.032    0.003
1931Q1    0.926    0.926    0.026
1931Q2    1.000    0.986    0.985
1931Q3    1.000    1.000    0.988
1931Q4    1.000    1.000    1.000
1932Q1    0.852    1.000    1.000
1932Q2    1.000    0.857    1.000
1932Q3    1.000    1.000    0.874
1932Q4    1.000    1.000    1.000
1933Q1    1.000    1.000    1.000
1933Q2    1.000    1.000    1.000
1933Q3    0.776    1.000    1.000
1933Q4    1.000    0.952    1.000
1934Q1    0.812    1.000    0.936
1934Q2    0.260    0.256    1.000
1934Q3    0.631    0.466    0.459
1934Q4    0.101    0.098    0.064
1935Q1    0.012    0.007    0.001
1935Q2    0.007    0.001    0.000
1935Q3    0.026    0.004    0.000
1935Q4    0.013    0.003    0.000
1936Q1    0.188    0.037    0.009
1936Q2    0.057    0.048    0.006
1936Q3    0.049    0.019    0.016
1936Q4    0.015    0.007    0.002
1937Q1    0.022    0.003    0.001
1937Q2    0.012    0.002    0.000
1937Q3    0.026    0.002    0.000
1937Q4    0.998    0.144    0.012
1938Q1    1.000    0.999    0.125
1938Q2    1.000    1.000    1.000
1938Q3    0.764    1.000    1.000
1938Q4    0.383    0.378    1.000
1939Q1    0.084    0.073    0.072
1939Q2    0.053    0.026    0.022
1939Q3    0.029    0.015    0.006
1939Q4    0.013    0.003    0.002
1940Q1    0.008    0.001    0.000
1940Q2    0.007    0.001    0.000
1940Q3    0.005    0.001    0.000
1940Q4    0.007    0.000    0.000
1941Q1    0.007    0.001    0.000
1941Q2    0.038    0.002    0.000
1941Q3    0.016    0.005    0.000
1941Q4    0.021    0.003    0.001

Appendix 4: Filter for Two State Model of Chapter Four

Throughout the five steps of this filter, the parameter values (p, q, α_0, α_1, γ, σ_u) are assumed by the econometrician to be known constants. Using these, she then calculates a time series of probabilities over the current regime and past regimes. The five steps of the filter are given below. In order to keep notation simple, P(z) denotes P(Z = z) when z is a discrete valued variable and the density function f(z) when z is a continuous variable. Iteration t of the filter takes as an input the conditional probability P(s_{t-1} | y_{t-1}, ..., y_0) and gives as output P(s_t | y_t, y_{t-1}, ..., y_0). That is, each iteration of the filter updates the conditional probabilities over the current state one period.

Step One: Using the Markov transition probabilities in equation (4.3) calculate

P(s_t, s_{t-1} | y_{t-1}, ..., y_0) = P(s_t | s_{t-1}) × P(s_{t-1} | y_{t-1}, ..., y_0),    (A4.1)

where the second term on the right hand side is the output from the previous iteration of the filter.

Step Two: The joint conditional density of y_t and (s_t, s_{t-1}) is given by

P(y_t, s_t, s_{t-1} | y_{t-1}, ..., y_0) = P(y_t | s_t, s_{t-1}, y_{t-1}, ..., y_0) × P(s_t, s_{t-1} | y_{t-1}, ..., y_0),    (A4.2)

where

P(y_t | s_t, s_{t-1}, y_{t-1}, ..., y_0) = (2πσ_u²)^(-1/2) exp{ −[(y_t − α_0 − γs_t) − α_1(y_{t-1} − α_0 − γs_{t-1})]² / (2σ_u²) }.    (A4.3)

Step Three: The branch of the likelihood function for observation t is

P(y_t | y_{t-1}, ..., y_0) = Σ_{s_t=0,1} Σ_{s_{t-1}=0,1} P(y_t, s_t, s_{t-1} | y_{t-1}, ..., y_0).    (A4.4)

Step Four: Steps two and three combined give

P(s_t, s_{t-1} | y_t, y_{t-1}, ..., y_0) = P(y_t, s_t, s_{t-1} | y_{t-1}, ..., y_0) / P(y_t | y_{t-1}, ..., y_0).    (A4.5)

Step Five: The output of iteration t of the filter is

P(s_t | y_t, y_{t-1}, ..., y_0) = Σ_{s_{t-1}=0,1} P(s_t, s_{t-1} | y_t, y_{t-1}, ..., y_0).    (A4.6)
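The five steps above map directly into code. The sketch below is a hypothetical NumPy rendering of one filter iteration for the two-state model with the scalar AR(1) specification of (A4.3). Parameter names follow the appendix; the transition convention p = P(s_t = 1 | s_{t-1} = 1), q = P(s_t = 0 | s_{t-1} = 0), the initial state probabilities, and all example parameter values are assumptions for illustration only.

```python
import numpy as np

def filter_step(p_prev, y_t, y_lag, p, q, alpha0, alpha1, gamma, sigma_u):
    """One iteration: P(s_{t-1} | y_{t-1},...,y_0) -> P(s_t | y_t,...,y_0).

    p_prev[j] = P(s_{t-1} = j | past data); state 1 is financial crisis.
    Assumed convention: p = P(s_t=1 | s_{t-1}=1), q = P(s_t=0 | s_{t-1}=0).
    """
    # Step one (A4.1): joint prior over (s_{t-1}, s_t) given past data.
    trans = np.array([[q, 1.0 - q],     # row s_{t-1}=0: cols s_t = 0, 1
                      [1.0 - p, p]])    # row s_{t-1}=1
    joint_prior = p_prev[:, None] * trans

    # Step two (A4.2)-(A4.3): Gaussian density of y_t for each state pair.
    dens = np.empty((2, 2))
    for s_lag in (0, 1):
        for s in (0, 1):
            e = (y_t - alpha0 - gamma * s) - alpha1 * (y_lag - alpha0 - gamma * s_lag)
            dens[s_lag, s] = np.exp(-e**2 / (2 * sigma_u**2)) / np.sqrt(2 * np.pi * sigma_u**2)

    # Step three (A4.4): likelihood branch for observation t.
    joint = joint_prior * dens
    lik_t = joint.sum()

    # Step four (A4.5): posterior over (s_t, s_{t-1}).
    post = joint / lik_t

    # Step five (A4.6): marginalise out s_{t-1}.
    return post.sum(axis=0), lik_t

# Run the filter over a short made-up series, accumulating the log likelihood.
y = np.array([0.5, 0.3, -2.0, -1.8, 0.4])
p_state = np.array([0.9, 0.1])          # assumed initial P(s_0)
loglik = 0.0
for t in range(1, len(y)):
    p_state, lik = filter_step(p_state, y[t], y[t - 1],
                               p=0.8, q=0.95, alpha0=0.2, alpha1=0.5,
                               gamma=-1.5, sigma_u=0.5)
    loglik += np.log(lik)
```

Summing the log of each step-three branch over the sample gives the log likelihood that maximum likelihood estimation of the switching model would maximise over the parameter vector.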