AN EXAMINATION OF A MODEL OF THE TERM STRUCTURE OF INTEREST RATES: KENNEDY'S MODEL

by

KONSTANTINOS SAKELLARIS
B.Sc., The University of Athens, 1998

A THESIS SUBMITTED IN PARTIAL FULFILMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF SCIENCE in THE FACULTY OF GRADUATE STUDIES, Department of Mathematics, Mathematical Finance Programme

We accept this thesis as conforming to the required standard

THE UNIVERSITY OF BRITISH COLUMBIA
April 2000
Copyright (c) Konstantinos Sakellaris, 2000

In presenting this thesis in partial fulfilment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the head of my department or by his or her representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

Department of Mathematics, The University of British Columbia, Vancouver, Canada

Abstract

We examine the term structure model proposed by Kennedy (1994). The model assumes that the interest rates can be described as a Gaussian random field. We find conditions that the drift and covariance function of the forward rates have to satisfy. We then try to calibrate the model using a number of approaches. Finally, the covariance function proposed by Kennedy (1997) is tested.

Table of Contents

Abstract
List of Tables
List of Figures
Acknowledgements
SECTION 1  Introduction
SECTION 2  Kennedy's Approach
SECTION 3  Comments on Kennedy's Model
SECTION 4  Data
SECTION 5  Working with Kennedy's Model
  5.1  Estimating the Drift Function
    5.1.a  Formulating the Problem
    5.1.b  Estimating the Drift Function from Equation (5.1.3)
  5.2  Estimating the Covariance Function
    5.2.a  Estimating Cov(p_k^{T_i}, p_s^{T_j})
    5.2.b  Formulating the Problem
    5.2.c  Estimating the Covariance Function from Equation (5.2.6)
    5.2.d  Approach A
    5.2.e  Approach B
    5.2.f  Approach B assuming that the covariance function has the form mentioned by Kennedy (1997)
SECTION 6  Conclusion
References
Appendix A  Graphs of the Treasury Yield data
Appendix B  Estimation of the Drift Function
Appendix C  Indicative plots of A_{m,i,j} for certain i, j
Appendix D  Indicative plots of h_{i,j}(m) and A_{m,i,j} for certain i, j
Appendix E  Statistical properties of the fitting error of h_{i,j}(m)
Appendix F  Indicative plots of A_{m,i,j} for certain m
Appendix G  Calculation of f_{k,m,i,j} for various forms of the covariance function
Appendix H  Plots of G_1(m, i, j) when using c_1(k, u, z) as the covariance function
Appendix I  Plots of G_2(m, i, j) when using c_2(k, u, z) as the covariance function
Appendix J  Plots of G(m, i, j) when using c_3(k, u, z) as the covariance function
Appendix K  Definition of Gaussian Random Field

List of Tables

Table 1
Table 2

List of Figures

Graphs of the Treasury Yield data
Figure 1
Figure 2
Figure 3
Figure 4
Figure 5
Indicative plots of A_{m,i,j} for certain i, j
Indicative plots of h_{i,j}(m) and A_{m,i,j} for certain i, j
Statistical properties of the fitting error of h_{i,j}(m)
Indicative plots of A_{m,i,j} for certain m
Plots of G_1(m, i, j) when using c_1(k, u, z) as the covariance function
Plots of G_2(m, i, j) when using c_2(k, u, z) as the covariance function
Plots of G(m, i, j) when using c_3(k, u, z) as the covariance function

Acknowledgements

I'd like to thank my supervisor, Dr. Ulrich Haussmann, for his useful comments on my thesis and for all his help and support in the past one and a half years I've been in Vancouver. I'd like to dedicate this work to my parents, who have always supported me, to my sister and to Smagdalena.

1. Introduction

Interest rates and their dynamics are probably the most computationally difficult part of modern financial theory. The modern fixed income market includes not only bonds but all kinds of derivative securities, since interest rates are used in time discounting. Interest rates are also important at the corporate level, since most investment decisions are based on some expectations regarding alternative opportunities and the cost of capital, both of which depend on interest rates.

The classical approach to modelling interest rates and their associated bond prices is to specify the dynamics of a finite number of underlying processes, typically just the spot rate, or the spot rate together with bond prices or interest rates of various durations. Then, using no-arbitrage conditions or equilibrium considerations, bond prices of all maturities are computed in terms of these basic processes.

Interest rate models can be divided into two classes. The first involves modelling the short rate by assuming that it follows a stochastic differential equation. Here we can distinguish three categories:

(i) Simple Gaussian models: In this category we include the models of Vasicek (1977) and Merton (1973) and their generalisations, such as the Hull & White (1990) model. These models can be considered "simple", and the log-Gaussian distribution of bond prices makes it very easy to price derivative securities. The Hull & White model in particular is very flexible, in that any initial yield curve and any initial term structure of volatility can be fitted. However, the estimation of these models from data is not practical. The yield curve is not a nice smooth curve known at all positive real points; in practice it is only known at a limited set of maturities, with dubious accuracy of measurement. Any procedure which requires repeated differentiation of this "curve" (like the procedure followed for these models' calibration) cannot be expected to work. Even if one could obtain estimates of the models' functions from the data, there is no reason why we should get consistent estimates if we performed the same analysis of the term structure as it appears one week later. An additional undesirable property of these Gaussian models is that the interest rates may become negative.
(ii) Squared Gaussian models: These models escape the negative interest rates by modifying the variance structure. Cox, Ingersoll & Ross (1981) introduced such processes as models for the spot rate. Again a number of generalisations exist like the one proposed by Hull & White (1990). These models are particularly tractable because, like the Vasicek model, the yield curve is affine in the spot rate. A l l of the objections raised to the estimation of the Gaussian models apply equally here. (iii) Multi-factor models: The single-factor models discussed so far are often criticised on the grounds that the long rate is a deterministic function of the spot rate, and that the prices of bonds of different maturities are perfectly correlated. These assumptions can't be verified by empirical evidence which on the contrary suggest that multi-factor models do significantly better than single-factor models. Models belonging in this class were proposed by Longstaff & Schwartz (1991), Duffie k. Kan (1996), Constantinides (1992), Jamshidian (1988). By passing to multi-factor models, one should get an improved fit, but there is a heavy price to pay; if one wants to calculate prices of, say, options on bonds, the P D E to be solved is higher-dimensional, and will thus take much longer to be solved. Perhaps even more importantly, the factors used have to correspond to some observable variables. Longstaff & 1  Schwartz use the spot rate together with the volatility of the spot rate for example. Perhaps the best known model in this category is the model introduced by Brennan k. Schwartz (1979), which takes as the variables of the (two-factor) model the spot rate and the long rate. Their pricing equation has to be solved numerically but their analysis of the Canadian Government Bonds gives impressive results. In this context, it is worth mentioning a result of Dybvig, Ingersoll & Ross (1996) who prove that the long rate is non-decreasing with respect to time. This makes one a little wary about a model which supposes that the long rate moves as a diffusion. We would like to conclude the review of this class by saying that empirical studies tend to reject the hypothesis that the short rate is Markovian, let alone a diffusion [Ait-Sahalia 1997]. This is another objection to short-rate based models. The second class of models, initiated by Ho & Lee (1986) and extended by Heath, Jarrow & Morton (1992) (HJM), involves modelling directly the forward rate processes for each maturity T. Specifically the H J M approach has as a starting point the observed yield curve and leads to the imposing of a relationship between the drifts and volatilities of forward rates on one hand, and the price volatility functions of discount bond prices on the other. In other words there is no such thing as the H J M model; rather there exists a whole class of models, each characterised by a specific functional form for the volatilities. One important point about the H J M process is that it is intrinsically non-Markovian. From the implementation point of view this in turn means that a non-trivial H J M process cannot be mapped onto a recombining tree; therefore bushy trees or Monte Carlo paths are the tools available to the practitioner for pricing and hedging options. Generally speaking, the complexity of this approach makes it very difficult to obtain simultaneously non-negativity of interest rates and a simple formula for bond pricing, properties desired for a term structure model. 
On the other hand the ability to input the initial yield curve directly and to vary the volatilities of the forward rates are attractions. A generalisation of this approach was proposed by Kennedy, whose model allowed the incorporation of an infinite number of sources of randomness in the evolution of the term structure. Finally a number of models exist which do not belong in any of the two above classes, like the one proposed by Heath & Jara (1998) which is based on Futures Prices. These models though are relatively new and haven't been examined thoroughly enough so we are not going to review them. The organisation of the thesis is as follows. Section 2 presents Kennedy's approach. In Section 3 we make some comments on Kennedy's model. Section 4 presents the data used for the empirical estimation of the model. In Section 5 we try to estimate the drift and covariance function and we derive a condition that the covariance function should satisfy. Section 6 concludes the examination of this model by sumarizing our results.  2. Kennedy's approach Heath, Jarrow & Morton (1992) approached the problem of modelling the term structure of interest rates by specifying the dynamics of all instantaneous forward rates {F : 0 < s < t} where: s>t  (2.1)  P  5 i t  =  e  -J>.^ 2  represents the price at time s of a bond paying one unit at time t > s. Their model assumes that the F satisfy stochastic differential equations of the form s>t  (2.2)  dF  = a{s, t)ds + JZT=i  Stt  t)dW  l s  where W ,..., W are independent Brownian motions and a(s,t) and o~i(s,t) are processes adapted to the natural filtrations of the Brownian motions. Kennedy (1994) while following the approach of modelling the instantaneous forward rates, considered the case where {F : 0 < s < t} is a Gaussian random field , so that all finite dimensional distributions are normal. He dealt with the situation where, under the equilibrium or martingale measure, the random field has independent increments in the s-direction which the direction of the evolution of "real" time. This framework includes the Heath, Jarrow and Morton model in the case when the coefficients a(s,t) and 0"i(s,£) in the underlying stochastic differential equations (2.2) are not random and so the rates {F } are Gaussian. Kennedy assumes that the instantaneous forward interest rate F for date t at time s, 0 < s <t, are given by: 1  m  1  s>t  s>t  s>t  (2.3)  F,, = /JL,, t  +  t  X  8it  with 0 < s < t where X is a centered continuous Gaussian random field and p deterministic and continuous drift function. He assumes that the initial term structure {fio t,t > 0} is specified. The covariance of X is specified by : Stt  Sjt  is the  t  Sjt  (2.4)  Cov(X ,X ) Sl>tl  S2M  = c(si  As ,t t ) 2  u 2  where 0 < < U and i = 1,2. The function c is given and satisfies 0(0,^1,^2) = 0. It is implicit that c(si A 52,^1,^2) is symmetric in t\ and t and is nonnegative definite in ( s i , i i ) and (si, t\). 2  3. Comments on Kennedy's model Kennedy's model includes infinite-factor models, such as those based on the Brownian sheet, but also finite-factor models such as that of Jamshidian (1989) (where bond prices of all maturities are just functions of the spot rate). It also includes the H J M model when the drift and volatilities are deterministic. Kennedy's model allows the possibility of infinite-dimensional cases, when, for example, the forward rate surface is generated by a Brownian sheet. This fact has significant empirical implications. 
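Kennedy's specification (2.3)-(2.4) is easy to simulate on a finite grid, which gives a concrete picture of a forward-rate surface with independent increments in the s-direction. The following Python sketch (the thesis itself contains no code) draws F_{s,t} = mu_{s,t} + X_{s,t} using one admissible covariance, c(s, t_1, t_2) = sigma^2 * s * exp(-lambda * |t_1 - t_2|); the choice of covariance, the flat initial curve and all parameter values are illustrative assumptions, not quantities taken from the thesis.

```python
import numpy as np

# Illustrative parameters (assumptions, not thesis estimates).
sigma, lam = 0.01, 0.5                       # volatility scale, maturity decorrelation rate
s_grid = np.arange(1, 11) / 252.0            # calendar times s (years), s > 0
t_grid = np.array([0.25, 0.5, 1, 2, 3, 5, 7, 10, 20, 30])  # maturity dates t (years)

def c(s, t1, t2):
    """One admissible covariance c(s, t1, t2), evaluated at s = min(s1, s2)."""
    return sigma**2 * s * np.exp(-lam * abs(t1 - t2))

# Full covariance matrix Cov(X_{s1,t1}, X_{s2,t2}) = c(min(s1, s2), t1, t2) over the grid;
# the 'min' in the s-argument is what gives independent increments in the s-direction.
pts = [(s, t) for s in s_grid for t in t_grid]
n = len(pts)
C = np.empty((n, n))
for a, (s1, t1) in enumerate(pts):
    for b, (s2, t2) in enumerate(pts):
        C[a, b] = c(min(s1, s2), t1, t2)

# One draw of the centred Gaussian field X (small jitter keeps the Cholesky factor stable).
rng = np.random.default_rng(0)
L = np.linalg.cholesky(C + 1e-12 * np.eye(n))
X = L @ rng.standard_normal(n)

# Add a deterministic drift mu_{s,t}; here simply a flat 5% initial forward curve.
mu0 = 0.05
F = mu0 + X.reshape(len(s_grid), len(t_grid))   # F[a, b] = F_{s_a, t_b}
print(F.round(4))
```

The covariance above factors into a Brownian-motion kernel in s and an exponential kernel in t, so the resulting matrix is positive semidefinite by construction; any other choice of c would need the same check before sampling.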
The Gaussian models can be described by their drift and covariance functions. If we combine this with Kennedy's Theorem 1.1, then we just need to estimate the covariance function in order to fully describe the model and price derivatives. We don't need to specify from before the number of factors that are driving the term structure, as in most of the other multi-factor models (like Brennan & Schwartz, Longstaff & Schwartz, etc). Once the 1  We define Gaussian Random Field in Appendix K. 3  covariance function has been estimated we can easily find the number of factor correlations that are significantly different from unity. The Gaussian model with independent increments has the important advantage that it is quite tractable and yields a simple characterisation of the martingale measure. In addition, prices of derivatives, such as interest rate caps, have an intuitively appealing form. On the other hand though, the Gaussian model can't explain the heavier (than log-normal) tail behaviour that is exhibited by the observed distributions of market prices. The existence of these "fat tails" is related to the fact that the coefficients of the model, which are taken to be constant, are in fact random. This could be considered as an important drawback of the Kennedy model. For a generalisation of the Kennedy model to include non-Gaussian random fields we refer to Goldstein (1997). Finally a drawback of the random field interest rate models is that they allow (but not necessarily impose) a multi-humped yield curve. This is rarely found empirically.  Data  4.  The first, and most important step, in examining this model is to try and estimate the drift and the covariance function. In order to estimate them we are going to use interest rate data. Choosing the data though is not as simple a task as it may seem. This is perhaps not surprising in view of the nature and quality of data on term structure. Prices of coupon bearing and zero coupon bonds, L I B O R rates, index linked bonds together with options and futures on such things all provide information on interest rates. The data that we used were provided by D A T A S T R E A M . As we mentioned above, in the following we are going to try to calibrate the model using interest rate data. The best choice for interest rate data would be the US Treasury data. The existence though of a large number of methods to strip the Treasury bonds of their coupons (in order to calculate the zero curve), providing us with an equally large number of possible zero curves, made us choose another time series. So we worked with the "Treasury constant maturity" series. The description of this series, as provided by the Federal Reserve, is : "Yields on Treasury securities at "constant maturities" are interpolated by the U.S. Treasury from the daily yield curve. This curve is based on the closing market bid yields on actively traded Treasury securities in the over-the-counter market. These market yields are calculated from composites of quotations obtained by the Federal Reserve bank. The constant maturity yield values are read from the yield curve at fixed maturities, currently 3 and 6 months and 1, 2, 3, 5, 7, 10, 20 and 30 years". This series offers the advantage of not being dependant on which method we will use to calculate the yield curve. The only possible objection then is with the method that the Federal Reserve uses to calculate this zero curve! The above series begins in 01/01/1991 but for the calibration we used data from 01/01/1995 up until 12/10/1999. 
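The data handling that follows (converting the quoted yields into log bond prices and running Dickey-Fuller tests on the levels and on the first differences) was carried out with SHAZAM in the thesis; a comparable check is sketched below in Python. The conversion uses the yield definition Y_{s,s+T} = -log(P_{s,s+T}) / T from Section 4, treating the quoted yield as an annual rate. The synthetic random-walk yield series stands in for the DATASTREAM data, and the function names are ours.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import adfuller

def log_price_from_yield(y: pd.Series, tau_years: float) -> pd.Series:
    """log P_{s,s+T} = -Y_{s,s+T} * T, with Y quoted as a decimal annual yield."""
    return -y * tau_years

def adf_report(x: pd.Series, label: str) -> None:
    """ADF test with a constant and no trend, as assumed for Table 1."""
    stat, pval, _, _, crit, _ = adfuller(x.dropna(), regression="c", autolag="AIC")
    print(f"{label:30s} stat={stat:7.3f}  5% crit={crit['5%']:6.2f}  "
          f"reject unit root: {stat < crit['5%']}")

# Stand-in data: a random-walk 3-month yield series (in per cent), ~5 years of daily data.
rng = np.random.default_rng(1)
y3m = pd.Series(5.0 + np.cumsum(rng.normal(0, 0.03, 1250))) / 100.0

logP = log_price_from_yield(y3m, tau_years=63 / 252)   # 3 months = 63 working days
adf_report(logP, "log prices (levels)")
adf_report(logP.diff(), "log prices (1st differences)")
```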
You can find the graphs of the treasuries in Appendix A. (In the following we will assume that T_1 = 63; in other words, 3 months include 63 working days.)

In the following we are going to work with logs of Treasury bond prices and not yields. We should therefore transform the yields into prices. This can be done easily through the definition of the yield:

Y_{t,T} = - log(P_{t,T}) / (T - t)

where T is the maturity of the bond and t the time that has passed since its issuing. Since the yields we have are Y_{s,s+T} at each date s for some maturities T, we can easily get the corresponding prices P_{s,s+T}. (We treated the yield as annual, although we weren't able to confirm this with DATASTREAM, who didn't have this information.)

Treasury price logs   Test statistic for data   Unit root rejected   Test stat. for differenced data   Unit root rejected
3-month               -1.6673                   No                   -6.6969                           Yes
6-month               -1.7918                   No                   -6.7725                           Yes
1-year                -1.6122                   No                   -5.9662                           Yes
2-year                -1.9245                   No                   -5.8454                           Yes
3-year                -1.8749                   No                   -5.7466                           Yes
5-year                -1.6318                   No                   -5.3576                           Yes
7-year                -1.5013                   No                   -5.5792                           Yes
10-year               -1.3089                   No                   -5.6666                           Yes
20-year               -1.2108                   No                   -5.6015                           Yes
30-year               -1.1183                   No                   -5.6382                           Yes

Table 1: The critical values for the test are -2.57 at the 10% level and -2.86 at the 5% level.

Examining the prices of the treasuries we notice two things. First, their rates have a big drop around October 1998, mainly because of the Asian crisis. Second, the bond price series (and their logs) may be non-stationary. Applying the Dickey-Fuller unit root test to the data, the unit root hypothesis can't be rejected and therefore the series cannot be assumed stationary. Applying the Dickey-Fuller test to the first differences of the series at the 5% and 10% levels, we find that we can reject the hypothesis that the first differences have a unit root with 95% confidence. (We remind the reader that the finding of a unit root in a time series indicates non-stationarity. The unit root hypothesis can be rejected if the test statistic is smaller than the critical value. We used SHAZAM to do the unit root tests.) Therefore, in the rest of the paper we can safely assume that the first differences of the bond price series (and their logs) are stationary. The reader can find the results of the tests in Table 1 (we got the same results performing the Phillips-Perron test; in the above results we have assumed that there is no trend in the prices, but we get similar results if we assume a trend).

5. Working with Kennedy's model

From here on we are going to work in discrete time and we are going to try to find a formula which will help us estimate the drift and covariance functions. So the time s will take one
Then, if we apply equation (2.3) in (5.1.1) we get: StS  Ps,s+T  and equivalently (5.1.2)  log(P , ) 8 s+T  -f: p , du-f x , du  •-=  s+T  +T  s  s u  s u  Taking expectations: rs+T  E[log(P )]  E[-  s>s+T  rs+T  p ,udu -  /  /  s  fS+T  E[log(P )}  <^  s>u  fS + T  = - \  s>s+T  X du]  Js  Js  E[n ]du - / s>u  Js  E{X ]du s>u  <^  Js  rs+T E[log(P  )]  SiS+T  = -  / Js  fJL , du s u  The last equivelancy comes from Kennedy's assumption that X is a centered Gaussian random field (so it has zero mean) and p is deterministic. As we mentioned above the logs of the bond data are not stationary, so the value of E[log(P )} will depend on the time s. Therefore we can't estimate this expectation (for each s) by a sample mean. The first differences, though, are stationary and therefore we can work with these: s>t  s>t  StS+T  E[log(P  )  s+hs+l+T  - log(P , )}  = E[log(P i )} rS+l+T  s s+T  -  s+ tS+l+T  -  f Js+l  E[log(P rs+T  )}  SiS+T  /J, i du+  / Js  s+ tU  Hs, du u  Working recursively we have : rs+l+T rs+l+T / p +i,udu ls+1 Js+l s  =  rs+T fS+T / p , du Js s u  - E[log(P  i )  s+hs+ +T  6  -  log(P  )}  SiS+T  rs+l+T rs+i+i  rs+2+T p +k,udu  /  s  =  Js+2  fJ- +i,udu  /  - E[log(P  a  i  s + l t S +  + T  )  -  log(P  )\  StS+T  Js+1 rs+T  and therefore we get the general = / formula fi du  -  2 •E [ l o g ( P  -  k •E[log(P  a>u  Js rs+T s+T = / /J. , du Js  rs+k+T / fJ. +k,udu Js+k Is+k  s + h s + l + T  )  -  )  -  log{P  )}  SiS+T  r  s  s u  i,  s +  where s = 0, 1, 2, ... , M and k = 0, 1, 2, ... , M — s. We can then estimate E[log(P i i T) — log(Ps,s+T)]  IS+  )}  = „  s>s+T  M  ^[%(P +  1  i + M  =  + i + T ) - ^ ( P ^ + r ) ] + ex  By replacing the last equation in  i>i+T  s  s s+T  i=0  where is the unbiased random error of the estimate. Let A ( T ) = j^Y$Lo[lo9{Pi+\,i+\+T) ~ ^og{P )}. the previous one we get : rs+k+T / ri +k,udu Js+k  log{P , )\  from :  s+ tS+ +  £[ZO0(P,+I I+T) - log{P  s + 1 + T  rs+T / Hs, du Js u  -  k • A ( T ) 4- k • e  T  and for s = 0 we have : rk+T  / Jk  rT  p ,udu k  = / no, du - k • A ( T ) + k • e Jo u  We know that fj, , = -fo.u and therefore drift function must satisfy: 0 u  (5.1.3)  ft+ p. , du  T  p du  = -log(Po ).  0tU  = -log(P , )  T  k u  So we conclude that the  !T  - k • A(T) + k • e  0 T  T  where k = 0, 1, 2, ... , M . Notice in (5.1.3) that the right part of the equation is comprised of known parameters, except of course for the error e . We can estimate as follows: Let pi = log(P , +i+T) - log(P ). Then T  k>k+T  k+1 k  i  So  M  M  (M + l ) M  (M+l)  2  1  2  •E .1=0  M  —  ( M + l>)  ijl  i  7  -,2  M  1  .<J2- £ 1  m=0  M - m  E i=0  Cov(pJ,pf J +  M  (Po,Po)  Cov  (  M  +  M  2 • E (  1 )  -m  M  + 1)- Cov(plpl)  +  m=0  We can easily estimate Cov(pJ,pJ )  1  M + 1 COV(PI,PI)  (we refer the reader to section 5.2.a for that) and  +m  therefore get an estimate of the order of the error by calculating ^E[e\\: 6m  3m  Maturity  o(io- ) o(io- ) 1.32 • lfr  E[4]  o(io- )  n  O(10" ) 2.71 • IO"  6  \A(T)\  Maturity  6  o(io- ) o(io- )  10y O(10" )  O(10- ) 5.7- IO"  O(10- ) 5.2-10-  9  3.6 • 10~  A(T)  5  Table 2 :  1.67-10~  2.11 • IO"  o(io- ) o(io- ) 8 4  1.2 • I O  5  5  9 5  5  30y  20y  4  5  o(io- ) o(io- )  5  6  8  5  5  T  .5.72 • 10~  o(io- ) o(io- ) 10  5  6  7y O(10- )  9  jE[e ]  10  6  5y  E[4)  o(io- ) o(io- )  n  3y  2y  iy  - 4  O(10" ) 8  O(10- ) 8.2 • I O 4  - 5  Estimate of the order of  We notice that the order of the value of and A ( T ) are close. 
One reason could be that the error in the estimation of Cov(pf ,pj ) is not small, as we have assumed (this can actually partially explain the problems we had in estimating the covariance function of the model), although we can't be sure about that. Nevertheless in the following we will ignore e. +m  T  5.1.b Estimating the drift function from equation (5.1.3) We are going to work as following: • We will fit functions to log(P ) tively. 0iT  and A ( T ) , as presented in figures B l and B2 respec-  • We will then differentiate (5.1.3) with respect to the maturity T and get the drift function. Fitting a function to Zop(Po,r) proves to be a simple task. Examining the goodness of fit of polynomials of degrees 1 to 6, we find that the cubic polynomial O J T + (5T + + 5, presented in figure B3, is our optimal choice . If we use polynomials of degrees 4 or higher the fitting doesn't improve significantly. Before moving on to the fitting of A ( T ) , notice that: 3  2  5  A(T)  1 — M +  M  Y\ °9{Pi+\,i+i+T)  - log{P )}  l  iii+T  =  1 , ^—-  / PM+I,M+I+T\  and therefore A(0) = 0. 5  W e found a = 1.27 • l C T , /? = - 1 . 5 5 • I O " , 7 = -1.92 • I O " and S = - 2 . 4 • 10" 1 2  8  4  8  Examining the plot of A ( T ) for the 10 maturities we have available, testing a big number of functions and keeping in mind the above comment we concluded that A ( T ) is best described by the function A ( T ) = a ( e - e ), as shown in figure B 4 . We can then rewrite (5.1.3) in the following way: 6 ( r _ c ) 2  li du ktU  = -aT  - (5T -  3  2  7  bc2  T -6-k-  6  a(e b{T  c)2  - e )+e +k• e bc2  l  2  T  T  where e\ is the fitting error of log(P ^), &r is the fitting error of A ( T ) and where k = 0, 1, 2, ... , M . Differentiating this equation with respect to T, and ignoring the errors, we can get an estimate of the drift function: 0  3aT - 2f5T - 7 - ft • 2ab(T - c ) e 2  6 ( r  -  c ) 2  with k = 0, 1, 2, ... , M. We should note here that p has the form we expected, since it can be easily shown that stationarity implies /J,k,k+T — f(T) + k • g(T) for some functions / and g. We plotted the graphs of Pn , /J4OO,4OO+T d 7^IOOO,IOOO+T figure B5. We can see that the maturity T defines most of the drifts' behaviour, something perfectly logical, while the date k changes slightly the value of the drift, depending how far in the future we are looking ( i.e. how big k is) . Finally, going back to Kennedy (1994) and looking at his Theorem 1.1 we can find a second equation that the drift function p must satisfy in order for the discounted bond process to be a martingale : k>k+T  a n  m  +T  7  s>t  fj, = Po,t + Jo c(s A v, v, t)dv  (5.1.4)  Sit  for all 0 < s < t . If (5.1.4) holds for all s, t then the real measure is the risk neutral measure. If it does not hold for all s, t then in an arbitrage free market there exists another measure under which the discounted bond-price process is a martingale. 5.2 Estimating the covariance function 5.2.a Estimating  Cov{p ,p ) {  k  J  s  We are now going to try^ to estimate the covariance function c. In order to do that we will need to calculate Cov{p ,p ), where pi = log(P i i )-log(P ) with k = 0,1,2,M—l. Cov(p \p ) are the partial autocovariances of all the series corresponding to each pair of maturities of our differenced-log-bond data. ri  k  J  s  k+ ik+ +T  ktk+T  3  k  s  Remark 1 : k, s can take any values between 0 and M — l . 
But the covariance func/ T T \  / T ' T \  tion is symmetric and therefore Cov(p ,p ) = Covyps jP^) so we should calculate Cov(pl ,pJ ) for s > k. It would then be easier if we wrote s in the form s = k + m i  3  k  {  1  s  J  W e found a = - 1 . 3 • 1 0 " , b = -1.08 • l O and c = 5019. T h e reader shouldn't be troubled with the value T = 5019 which defines the drifts behaviour, because we believe it just reflects information extracted from our sample. 6  4  - 7  7  9  with k = 0, 1, 2, ..., M - 1 and m = 0, 1, 2, ..., Af - 1 - k. Then we want to estimate , Cov(p \p ) for k = 0, 1, 2, ..., M - 1 and m = 0, 1, 2, ..., M - 1 - /c. We should point out that stationarity implies that Cov{p ^p ) is independent of the value of k. T  j  k  k +m  T  T i  k +m  R e m a r k 2 : We are going to estimate Cov(p \p r  C° (Pk'iPk+m)  ~ Ci,3,m +  v  ) using the (unbiased) estimator C  1 j  k  k +rn  i j ] m  :  i,j,m  e  where C- •  A  -  1 M _i  a,6  Ti  _  m  y  1  o-a  M-l-m V (n 2^  T  -  A°> - M  1  m  V . fr, * ) KPl+m 7  A  A j  m  '  M  - ^ J  ;  +1  with A; = 0, 1, 2, ..., AT, m = 0, 1, 2, ..., N — k, N < M — \ and e j j which we assume to be small.  >m  a random error  We should make some comments here regarding the estimation: (a)  Cij,m is independent of k since the series are stationary.  (b)  We placed a restriction on m by not allowing it to become bigger than N — k, where N < M — 1 (this restriction immediately imposes another restriction on k to be less or equal than N or m may become negative). The reason is that the closer m gets to M — 1 — k, the smaller the sample we have for estimating Covipfc' ,Ph+ ) is. Since we want the errors to be small we need a relatively "big" sample . Usually by "big" statisticians mean more than 50 — 60 data points, although this number is relevant to the application. In the following we assumed N = M — 1 — 100 (and therefore the smallest sample we are going to use is 100). m  (c)  The way we estimate the covariance, the sample size will differ depending on m. In one sense someone could argue that this is not consistent, but practically what we do is exploit as much information as we have available. Therefore the smaller m and k are the better our estimate is and consequently the smaller eij.m will be.  (d)  The indices i, j just specify with which series we are working.  10  5.2.b Formulating the problem For k = 0, 1, 2, ... , N and m = 0, 1, 2, ... , N — k we have - using the definition of equation (5.1.2) and the fact that the drift function p is deterministic - the following: Cov{p \p% ) = Syt  T  k  m  rk+Ti + l  fk+Ti  = Cov(  X  rk+m+Tj + l  du-  X  k>u  Jk+l  Jk+l  k+ltU  )dzdu+  k+m+hz  N  .  ,  rk+m+Tj+l  -/  Cov(X ,X +m+i, )dzdu ku  k  Jk  Jk+m  = rk+Ti Jk  c(kA(k+m+l),u,z)dzduJk+l  rk+m+Tj+l  =  fk+Ti  / Jk+l  rk+Ti + l  Jk+m  /  Jk  rk+m+Tj+l  c((k+l) A(k+m), u, z)dzdu  rk+m+Tj  c(k + 1, u, z)dzdu + /  Jk+m+l  rk+Ti  z)dzdu  rk+m+Tj  /  Jk+m+l rk+Ti+l  c(kA(k+m),u,  Jk+m  rk+Ti+l  /  k+miZ  rk+m+Tj  / Jk  rk+m+Tj+l  - /  )dzdu  k  c((k + l)A(k+m+l),u,z)dzdu+  Jk+m+l  Cov(X +i,u,X  k+miZ  Jk+m rk+Ti  / Jk+l  )dzdu  k!U  / Jk+l  rk+m+Tj + l  Cov(X ,X  rk+m+Tj  -  z  Jk+m+l  rk+Ti+l  / fk+m+Tj  rk+Ti+l  / Jk  rK+m+ij  / fk+ i T  Jk+m+l  rk+Ti  du)  k+rritU  Jk+m rK+±i  Cov(X ,X _  +l  r  X  Jk+m+l  rk+m+Tj + l  /k+m+Tj  du-  k+m+l7ll  Jk  rk+Ti+l  = /rk+Ti+l  rk+m+Tj  X du,  k+1<u  c(k,u, z)dzdu  Jk+m rk+m+Tj  with k = I0, 1, 2, ...c(k,u, , N, mz)dzdu-I = 0, 1, 2, ... , NI - k. 
So wec(k+(lAm),u,z)dzdu end up with the formula : -I Jk  Jk+m+l  Jk+l  Jk+m  (5.2.1) rk+m+Tj+1  Cov{pi\p ) 3  k +m  (  = I  rk+Ti + l  /  Jk+m+l  rk+m+Tj  —  rk+Ti  c(k + l,u,z)duJk  rk+Ti+l  / m0  /  Jk  (c(k + 1, u, z) — c(k, u, z))dudz  Jk+l  • K' (m) J  =  c(k + 1, u, z)du - f > c(k, u, z)du  1  k+T  k  I tT k  k  Tl  Ki(z)dz  c(k,u, z)du\ dz Jk  with k = 0, 1, 2, ... , N, m = 0, 1, 2, ..., N - k. Let • Ki(z) = J ^  \  t  c(k + l,u, z)du —  /  j  rk+T  Jk+m rk+Tj \Jk+lrk+Ti+l  + S  c(k,u,z)du) dz  \Jk+l  /  \  if m > 1 11  J  •  J  ^ "(0) = J  {f :?  k+Ti  k  k  c(K «, z)du - f  +l  k  k+Ti k  c(k, u, z)du) dz  We can then rewrite (5.2.1) as :  Cov(pV,p%J = Fi (m J  + 1) - F^{m)  Working recursively we have : (5.2.2)  F^(m + 1) =  Let G{{z) = J  o Covffl J )  c(k,u,z)du  +Ti k  + i^(0)  k+a  . Then  Kl(z) = Gl (z) +1  - G\{z)  G\ {z) = £ K l ( z ) + Gj(z) +l  6=0  But G^z) = 0 since c(0, u, z) = 0 V u , z and therefore we have : (5.2.3)  Gi(z) =  E Z Kl(z) k  l  b  0  with k = 1, 2, . . . , TV. So for these fc we have : rk+Ti rk+Ti  rk+m+l+Tj rk+m+l+Ti  /  /  Jk  c(k,u,z)dzdu=  Jk+m+l  E  Kl(z)dz  L .n Jk+m+1 6=0 + + m  rk+m+l+Tj rK+m+i+ij  /  "  l  '  = E  ~}  k  G\(z)dz = /  Jk+m+1  / Jk  rk+m+l+Tj rK+m+l+lj  K  J k+m+l  b  T  k  =  z  d z  Q  +E ^ ( 0 )  + 1) = k- E Cov{p \p% )  L _ n b=0  Y K )  a  ^—n 6=0  „—n a=0  Remark 3 : Above we used the fact that the series is stationary and therefore A;—1 m  rn  6=0 a=0  a=0  So far we have: (5.2.4). rk+Ti  for  rk+m+l+Tj  /  /  Jk  Jk+m+l  "L  c(k, u, z)dzdu  „  *lz}  T  = / c - E Ow(P?,P&«) + E a  =  0  b  =  :  ^6 (°) J  Q  k = 1, 2, ... , AT and m = 0, 1, 2, ..., N - k.  Remark 4 : We shouldn't be concerned with the fact that (5.2.4) holds for k = 1, 2, ..., N and not for k = 0. We didn't lose any information in the calculations. It is just that the left hand side of (5.2.4) contains c(k, u, z), which is equal to 0 for k = 0. Let 12  •  /fc.m.ij = Ik ^  C  Sktm  +±i  ( > > Z)dzdu fc  U  • V  =  • B j  = E t o b (Q) = ( ~ hi) • Y%ZlUb,i,j,i - fb,o,j,i)  mthJ  kti  E^ Cov(p ',p !, ) T  0  1  F  T  k  J  k  a  1  for k = 1, 2, ... , AT, m = 0, 1, 2, ... , N — k and i, j = 1, 2, ... , 10. In the above we kept in mind that FQ' (0) = 0. Notice that V i j does not depend on k since the series are stationary. Then by replacing the above variables in (5.2.4) and keeping in mind Remark 4 we get: 3  m>  (5.2.5)  fk,m,i,j  =  t  k • Vm,i,j + Bk,i,j  k = 0, 1, 2, ..., N and m = 0, 1, 2, ..., N — k + 1. The very interesting thing about (5.2.5) is that m, k are separated. Now working recursively with (5.2.5) we have : fl,m,i,j ~~ Vm,i,j f2,m,ij = 2 • V j + B ,ij  = 2 • Vmjj + (Vi j - Vo,i,j) 3 • V ^ i j + (V j - V i j) + 2 • ( V j - Vo,ij) = 3 • V j + 3 • {V j - V j) m>it  /3,m,ij =  2  =2•  lti  fk,m,i,j — k • V ij m>  0t  V j + (fl,l,j,i ~ fl,0,j,i) m>it  M  t  ti  m>ii  Xti  0>ii  ^2 ^ • ( V i ^ j  H  with k = 0, 1, 2, ... , AT and m = 0, 1, 2, ... , N — k + 1. Let:  Then we have the main result of this paragraph, the equation: (5.2.6) fk,m,i,j ~ i,j ' k r (V ij — Vij) • k for k = 0, 1, 2, . . . , N and m = 0, 1, 2, ... , N - k. Equation (5.2.6) gives us a very interesting result. For each pair of maturities we can see exactly how fk, ,i,j is dependant on k and on m. Moreover, through V ,i,ji we have an explicit formula for fk,m,i,j which we can use to estimate the covariance function c(k, u, z). We should remind the reader here that V ij can be estimated by A ^j = J2T=o Ci,j,a f ° m = 0, 1, 2, ... , N — k; we refer the reader to section 5.2.a. 
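Every quantity entering (5.2.6) is a simple transformation of the differenced log-price panel, so the right-hand side can be tabulated directly. The sketch below computes the differences p_k^{T_i}, the cross-autocovariance estimates of Cov(p_k^{T_i}, p_{k+m}^{T_j}), their partial sums A_{m,i,j} (the estimator of V_{m,i,j}), and the implied value f_{k,m,i,j} = A_{m,i,j}*k + (1/2)*k*(k-1)*(A_{1,i,j} - A_{0,i,j}) that follows from the recursion above. The synthetic panel and the plain sample-covariance normalisation are assumptions for illustration; the thesis works with the DATASTREAM panel and the estimator of Section 5.2.a.

```python
import numpy as np

def diff_log_prices(logP: np.ndarray) -> np.ndarray:
    """p[k, i] = log P_{k+1, k+1+T_i} - log P_{k, k+T_i}; shape (M, n_maturities)."""
    return np.diff(logP, axis=0)

def cross_autocov(p: np.ndarray, i: int, j: int, m: int) -> float:
    """Sample estimate of Cov(p_k^{T_i}, p_{k+m}^{T_j}), assumed independent of k."""
    xi, xj = p[: p.shape[0] - m, i], p[m:, j]
    return float(np.mean((xi - p[:, i].mean()) * (xj - p[:, j].mean())))

def A_mij(p: np.ndarray, i: int, j: int, m_max: int) -> np.ndarray:
    """A[m] = sum_{a=0}^{m} Chat_{i,j,a}, the estimator of V_{m,i,j}."""
    return np.cumsum([cross_autocov(p, i, j, m) for m in range(m_max + 1)])

def f_hat(A: np.ndarray, k: int, m: int) -> float:
    """Right-hand side of (5.2.6): A_{m,i,j}*k + k(k-1)/2 * (A_{1,i,j} - A_{0,i,j})."""
    return A[m] * k + 0.5 * k * (k - 1) * (A[1] - A[0])

# Stand-in panel: M+1 = 987 dates by 10 maturities of log bond prices (random walk).
rng = np.random.default_rng(2)
logP = np.cumsum(rng.normal(0.0, 1e-3, size=(987, 10)), axis=0) - 0.05
p = diff_log_prices(logP)

A = A_mij(p, i=0, j=3, m_max=300)        # e.g. the 3-month and 2-year series
print(f_hat(A, k=100, m=50))             # one entry of f_hat_{k,m,i,j}
```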
v  2,J  mi  m  m  mt  m  13  r  5.2.C Estimating the covariance function from equation (5.2.6) At first glance equation (5.2.6) gives us a clear way of how to estimate the covariance function c(k, u, z). The unknown covariance function is on the left hand side while a certain transformation of the observed bond prices lies on the right hand side. If we examine the problem closer though we can see that it can't be solved that easily. The limits of the double integral are such that most numerical schemes will never work or even if they work the estimation errors will be huge. The main problem is that we have to look at the problem for each k separately. B y picking specific k, i, j we get to solve a system where the number of unknowns and equations varies greatly depending on the choice of k, z, j, but with the unknowns being always more than the equations. We can increase the number of equations we have in our disposal by keeping constant two of the three parameters k, i, j and changing the third one but even then we won't get enough equations to solve our system. We should add here that this system is really hard to manipulate because of it's specific form. Moreover all the numerical schemes and optimization algorithms we used failed. After seeing that we couldn't attack the problem directly, we thought of combining this problem with the drift function problem, that is with equations (5.1.3) and (5.1.4). This way we get a few more equations from some restrictions we can impose to our system. Unfortunately on one hand the problem becomes much more complicated and on the other hand the size of it becomes "too big" (at least for the computers we were using). Moreover the equations we had still weren't enough for a big number of k so we had to stop. Since we didn't manage to solve the problem using a direct numerical approach we tried two different approaches: (A)  One approach is to fit a function to V ij, or more precisely to A n . i j - Then the right hand part of equation (5.2.6), namely fk,m,i,j, will become a continuous function of k, m for each i, j. At that point we will have two options on how to continue. Either work numerically (now it will be easier to do so than before) or work backwards the algebra and try to find which function's double integral can give us the function fitted to fk,m,i,j-  (B)  The other approach consists of making assumptions about the form of c(k, u, z) and working through the algebra to find fk,m,i,j- We can then compare the left with the right hand side of (5.2.6) and try to calibrate the parameters so that fk,m,i,j will have a form like (5.2.6) and -at the same time - give the best possible fit to the data.  mi  The better approach of the two is (A) using some kind of numerical algorithm after fitting a function to fk,m,i,j- If don't want to do that, but want to work some more with the algebra then both approaches have their advantages and disadvantages. The algebraic (A) will most probably lead us to a covariance function that doesn't satisfy the necessary conditions for c(k,u,z) (like the symmetry condition). Approach (B) on the other hand will lead us to a game of guesses regarding the form of c(k,u,z) without knowing anything about it and hoping that we will get an equation like (5.2.6). 
On the other hand the advantage of these two algebraic approaches over the numerical (A) is that if we manage to solve the problem w  e  14  using one of them, it's solution will probably be a more accurate one since it won't have an approximation error (or if it has it should be relatively small). 5.2.d Approach A The reader can find graphs of A j in Appendix C for m = 0, 1, 2, ... , N and for various values of i, j.The reader can notice in the presented graphs that A ^j can have different forms for the various values of i, j. At the same time most of these graphs have some specific patterns: mii  m  • When i ^ 1 we have a sudden drop around k = 650. • There is an upward spike around k = 550 for all i, j. We can also get some interesting insight by examining the changes of the values of A j in one or two of the three indices while keeping the rest constant. For example we notice that if we keep i or j constant and examine the graphs of A jj for various values of m and j , we can see similar patterns. This could help in understanding the behaviour of ^4 ,i,j and therefore to realize which function would best fit A i j. Unfortunately the dimensionality of the problem combined with the unusual behaviour of the A jj curves for the different i, j ' s make the problem really difficult. For the estimation we used a number of algorithms, mainly least-square based. The final results were derived though using the "lsqcurvefit" script of M A T L A B 5.3 (which is based on a non-linear optimisation algorithm). We used this script despite the fact that during the course of the estimation we wrote some routines which performed better than "lsqcurvefit", because the difficulties we had in the estimation weren't so much in the algorithms used for it but more in the nature of the data. We tried to fit a large number of functions to the data A ij, with the ones giving the best results being of the form: mti  m  m  m> t  m  mt  g {m) itj  = otij • (m + A,j) ' + S  itj  Pi,j{m) - ctij • m + = Oij •  a i e  ^  { m  i:j  •m + y  4  hij(m)  • cos(uj  7l,J  i ) 2  • m + ip ) + r}  2  itj  P i  id  • m + 6 • m + 6 • cos(u  3  - '  • m + <p ) + riij itj  itj  + 7ij • e * ^ " * ^ + 9 7  1-  2  itj  • cos(u  itj  itj  itj  itj  • m + (p ) + rj itj  itj  Of the above functions only hij with Ci j,Tij < 0 can give a realistic covariance function. The covariance function should give smaller values as m increases (i.e. when you examine the covariance of bonds priced at dates further apart) which is a property that the polynomials won't have. Therefore ptj is rejected since this polynomial suggests that the covariance function is a second degree polynomial in m. The first function, g j, will be rejected for the same reason if 7^ is bigger than two, which is the case for most (i, j). Therefore g is rejected too. The reader can find in Appendix D, for comparison reasons, the graphs of hij for the same values of i , j as the ones plotted in Appendix C. It can be seen that h j can in general t  8  it  id  it  8  P i c k i n g a higher order polynomial improves only slightly our results.  15  replicate most of the patterns that A j follows. Unfortunately we weren't able to achieve a better fit than the one presented. We should also not that in some cases the errors are very big. The fitting could possibly be improved by adding some more terms in h j like another trigonometric function or an exponential, but it is our view that this would lead on one hand to a much more complex problem and on the other hand to over fitting. 
Generally speaking we noticed that we had the biggest errors for m > 650 while the errors for small rn were quite satisfactory - although almost never approaching (except in very few cases) the 5% (or even the 10%) statistical benchmark. The errors actually tended to be bigger than 30% of the actual values which suggests that the fitting - although being the best we managed to get - was still bad. The only relatively satisfactory fit was made for 1 < m < 300 where the number of data points with errors less than 15% is "small". In Appendix E the reader can find the number of data points whose error exceeded the 5%, 10%, 15%, 20%, 25% and 30% benchmarks. The reader can also find there two statistical values we examined. These numbers, although they don't mean much in a non-linear optimisation procedure, can help us compare the goodness of the fit across the various values of i, j. After having fitted the function hij(m) to the data we can use (5.2.6) to get an estimate mti  it  of fk,m,i,j- Recall that fk, ,i,j — Ik Ik+m^ (^> i z)dzdu. So now we have to figure a way to estimate the covariance function from this equation. One way would be to work as we did with the drift function, by differentiating with respect to Tj and Tj. Unfortunately we can't do that here because we don't know how fk,m,i,j depends on Tj and 7*. This information is built in the coefficients of hij. Another way would be to work numericaly. Once again though we can't proceed any further in the estimation of the covariance function since in whatever numerical scheme we apply, we have to work for each k separately, which reduces greatly the amount of information we have (notice that we don't necessarily have c(k,u,z) = c(l,u,z) if k ^ I). For this to work numerically we need more maturities, i.e. not 10 mauturities but N maturities! After seeing that we can't go any further in the estimation of the covariance function if we try to fit functions h j(m) to the A j data, we decided to work the other way i.e. fitting functions h (Ti,Tj) to A j. This method though proves to be even more difficult. Plotting A i j for various values of m we found it very difficult to find a function that could describe A^j's behaviour. The plots A j are very similar for m up to 400, but for m bigger than that there are continuous changes in their form. The reader can find some of these graphs in Appendix F . Even if we assume though that we can fit a function h (Ti,Tj) to A ij, it would still be difficult to get the covariance function. Specifically we would have: +Ti  c  u  m  it  m  mti  mtii  mt t  mtit  m  rk+Ti  / Jk  rk+m+Tj  / Jk+m  c(k,u,z)dzdu~h (T T )-k m  h  +  j  mt  k(k — I)  (hi(T Tj) u  2  -  ho(T Tj)) i}  and after taking the partial derivatives with respect to T; and Tf c{k, k + T ,k + m + Tj) ~ DjDihrniTi,Tj) i  •k +  M^zil .  D  .  D  . ( ( .) hl  TuT  _  ( ,  ho  Ti  j v ) )  From this equation we will get the values of c(u A v, u, v) for u, v = 0, 1, 2, N + T\Q. So fitting a function to this surface will give us the covariance function. A l l this depends on the 16  fitting of h (Ti,Tj), i.e. if the fitting errors are smalls and h (Ti,Tj) being smooth (since we have to estimate DjDih (Ti,Tj)). Unfortunately it is questionable how well we can fit h (Ti,Tj) to Ajnjj when we have just 100 sample points for each m (which basically means that from 100 points we will get estimates for 886 points) and especially by examining the plots of Amjj, which despite their smoothness for m up to 400, for larger m they become very coarse. 
Moreover the reader should keep in mind that the fitting should result in a symmetric covariance function, something that is not guaranteed by the above procedure. m  m  m  m  2  5.2.e Approach B The results of Approach A weren't satisfying so we decided to proceed with Approach B . The first step with this approach is to make an assumption on the form of the covariance function and then compare it with equation (5.2.6) to see if this assumption can hold. We list five of the functions we used in Appendix G , along with the calculation of their double integrals. As it can be seen none of these function satisfy (5.2.6). The next thing we did was to see if any of those functions can approximately satisfy (5.2.6). Indeed we can find conditions so that two of the aforementioned functions can, approximately, satisfy (5.2.6). These are: a(k,u,z)  = kae~^ ~ \ u  c (k, u,z) = k-a-  z  , B> 0  [ -0(ru-z+5f  2  e  -p( - )^  +  e  TZ  u+S  +  .  k  ^ p  C  >  0  This can be seen in the following way (we are going to show that for C\(k,u,z) although it can be similarly shown for the various approximations of c (k,u, z)). We show in Appendix G that: 2  rk+Ti  rk+m+Ti  /  /  Jk  Jk+m  itm>Ti  /  n  = k^S  c(k,u,z)dzdu  rk+Ti  Jk  - \ l - e" ][l m  e~^\  w  p  , rk+m+Ti  /  ,  n  c(k, u, z)dzdu = k^  Jk+m  if Ti-Tj  T i  1  {2/5 • ( T - m ) - [ l - e - ^ ( l - e ^ ) ] - e -  Z  <m<  rk+Ti rk+m+T, Jk Jk+m  ^  d u=  k  a  < f>  ^  _ _ (1  e  _  p T ] )  e  _  0  m +  e  _  m  {  1  _ ^  1  if m > Ti , 2  . * e  Ti and  GiK«,j) = ^ ^ 1 l - e ^ « l - e ^ ]  = j  +  1  if m < Ti — Tj. Now let:  Gi(m,  / J m  i  p  {20 • (Ti - m) - [1 - c " ^ ( l 17  e™)] •  + e " ™ • e*»}  T  j  )  e  , j m  if Ti-Tj  <m< Ti and  Gi(m,i,j) if m<  = j  {2(5 • Tj - (1 - - W ) • - * » + e " ^ ( l - e " ^ ) • e ^ }  2  C  c  T-Tj.  Then fk,m,i,j = A; • Gi(m,i,j). we need: fk,m,i,j — i,j ' ft + iYm,i,j v  2  Let Tj > Tj. For equation (5.2.6) to hold approximately,  ~ i,j) ' * ~  ~ 1) ' ^ i j  v  ^ '  KJ,JJ » (A; - 1) • W j j or equivelantly f ,m,i,j > A;(A: - 1) • v .  i-e.  k  id  .  fk,m,i,j ^ A:(A: — 1) • J(/I,I,J,J — /i,o,ij)  i-e.  A; • G ^ m . i , j ) > jA:(fc - 1) • ( G ^ M , j ) - G^O,*, j)) Gi{m,i,j) » |(A; - 1) • ^ {-2/3 + [1 - - ^ ( l - e ^ ) ] • (1 - e ) + e " ^ ' • (e' - 1)} If Tj < T we will use the third form of G\{m, i, j) (i.e. Gi(m, i, j) for m < Tj — Tj) and working similarly we get: G I K M ) » i f * - 1) • ^ ( 1 - e-^)(l + - ^ - D ) • (1 - e ^ ) _/J  3  e  e  Now we have to calibrate the coefficients a, 6 in a such a way so that G ^ m , i, j) ~ A j j (where A ^ is the estimator of V iJ) and at the same time the above equation to hold for k — 1, 2, ... , N. If we can find such coefficients we will have estimated the covariance function. So now we are going to try to estimate the coefficients of G\ (m, i, j) for c (A:, u, z) (for all cases) and G (m,i,j) for c (A;,u,2;) (for all approximations) and check if we can fit them to A ,i,j- The functions Gi(m,i,j) and G (m, i,j) can be easily found by taking in mind the results in Appendix G (just go to the double integral of the appropriate covariance function and take k = 1). Now as far as the fitting is concerned: m  m  ]  m>  x  2  2  m  2  • Ci(k,u,z) : The plots of G\(m,i, j)can be found in Appendix H . If we look at the second plot in Appendix H , and specifically G*i(m, 3,1), we can see the three different behaviours of G\(m,i,j) (in general). 
When m < Tj — Tj (so in our case when m < Ti - T = 189) the element 2/3 • Tj - (1 - e " ^ ) • e^™ =• 2(5 • Tj - e~ dominates. Then for T-Tj <m< Tj (in our graph 189 < m < 252) 2/3 • (Tj - m) takes over and finally, when m > Tj the values of Gi(m, i,j) are too small compared to it's values for smaller m, so it appears as if Gi(m, i,j) = 0. It is obvious that G\(m,i,j) can not be used to fit Amjj. prn  3  • c (k,u,z), approx. (i) : While calibrating this function we notice that we get the best results when 3 take values i n the interval [5 * 1 0 , 8 * 10~ ]. These values however don't satisfy the restriction 3 < 5 • 10~ so this approximation has to be rejected. The plots of G (m, i, j)using this approximation can be found in Appendix I. 2  - 7  6  8  2  • c (k,u,z), approx. (ii)/(iv) : These approximations are not going to work since they will be either exponentially decaying or exponentially increasing in m. 2  18  • c (k,u,z), approx. (iii) : This approximation won't work either since in both IQ and IQ the two highest powers of the polynomial do not depend on the parameters we pick so the solution will soon blow up. 2  Theoretically speaking c (k, u, z) has the desired property that at each date k the covariance between two bonds maturing at dates u and z will decay exponentially. It is obvious that the covariance function is not stationary. One other property of the above covariance function is that as time passes and k increases the covariance between bonds of different maturities will also increase. We don't find this very logical in general, but if we take two specific maturity dates u and z then c (k, u, z) tells us that as we get closer to the maturity of the bonds the covariances between their prices (of those two specific bonds) increases, which sounds fairly logical. As the reader can see, we weren't successful using this approach either. It seems that (5.2.6) is very hard to be solved. Although it does provide a useful condition that the covariance function of this model should satisfy, the actual estimation of the covariance function is still very hard. Something encouraging is that in both Approach A and B we found out that the best functions are of a similar form. Therefore maybe we should search for the covariance function in a set of functions similar to k • g(u — z, e ~ ^ ^ ). 2  2  u _ z  5.2.f Approach B assuming that the covariance function has the form mentioned by Kennedy (1997). In 1997 Kennedy wrote another paper in the Mathematical Finance journal extending his previous work from 1994. There he demonstrates that if further Markovian and stationarity assumptions are made, there is a unique covariance structure describing the equilibrium measure which depends on just three parameters, namely : •  C (k,U,z)  2 . A(*-«A*)-,i.|«-*|  =  3  CT  e  for some constants a, X > 0 and fx > | . We tested this function and found out that it neither gives good empirical results nor satisfies equation (5.2.6) (notice that it's double integral has no k dependence). Specifically if we take it's double integral we get: rk+Ti  /  /  Jk  rk+m+Ti  rj  ,  2  c (k, u, z)dzdu =  f-  3  - -» (e^m  e  \X • {IX —  Jk+m  m  - l ) ( e " ^ - 1)  x)Ti  X)  if m > Ti , /  rk+Ti  Jk  /  rk+m+Tj  Jk+m  c (k, u, z)dzdu =  CT  r  2  fl • (fi  —e  —  -Aim  ^  • {(e"  3  Am  ,  - e-» ) (1 -  ) + e^  m  m  A)  ) — e ^ ' • (e^~ )  X \II-X  -  19  A  /J,J  Ti  — e^ ^ ) • e _A  m  _ A i m  |  • (e^  Ti  ii m < Ti < m + Tj and rk+Ti rk+m+Ti  /  /  Jk  Jk+m  rp-  ,  c (k,u,z)dzdu = — i - — - . 
{ e - ^ U - W 3  fJ. • ((J, — X)  a ( 2  |  A  - 1) - e^  -.  • e^  m  Ti  • (e^~ ^ - 1)) x  1  1  J  1  \fj, — X  n  Am  (l-e-  A T  0  if m + Tj < Ti. As we said c (k,u,z) can't even approximately satisfy (5.2.6). But even empirically, if we examine it's graphs, some of which are presented in Appendix J , we can see that this function doesn't perform well. This can be seen clearly in the last graph of Appendix J. We should remind the reader that the only assumption we made in the derivation of (5.2.6) was that the first differences of the log-bond prices are stationary. Therefore, combining the above with Kennedy's (1997) Theorem 3.3, leads us to the following result: 3  If the first differences of the (log) bond prices are stationary, the forward rates won't satisfy both the Markov and stationarity conditions. This leads us to a very practical result. If somebody is using either Kennedy's model or a model which is included in Kennedy's, he should first check if the first differences of (log) bond prices he is using are stationary. If they are this means that the forward rates won't satisfy both the Markov and stationarity conditions. Therefore a certain number of these models will not perform well with this dataset, either because they may assume the forward rates being Markov and stationary or because they may lead to such forward rates after the calibration. In order to calibrate the model then, one suggestion would be to use Kennedy's model and estimate the unknown coefficients from the formulas presented in the previous sections.  6. Conclusion In our days, finding a model which can describe the movements of interest rates satisfactory is a neccesity for a big number of financial institutions and corporations. The model, apart from being able to reproduce the observed patterns, should be easy and fast to calibrate. Although Kennedy's model is very general, and therefore it should be able to reproduce the observed data, it's calibration seems to be more than a time consuming exercise. Our efforts to calibrate the model, presented in section 5, weren't successful. Assuming that the logs of the bond prices are a stationary series we showed that the covariance can be found as the solution of the following system of equations: fk,m,i,j —  ' k "t" (-<4m,ij  ' ^  c(k, u, z) = c(k, z, u) for k = 0, 1, 2, ... , N, m = 0, 1, 2, ... , N - k, i, j = 1, 2, ... , 10 where: 20  rk+Ti rk+m+Tj  fk,m,i,j =  / Jk  c(k,u,z)dzdu  Jk+m  m  m,i,j =  }Z (pl^P k+a)  A  C0V  T  a-Q  P = log(Pk+i,k+i+T) - log(P ,k+T) T  k  k  i,i ~ 2 ' ^ ' >J ~ -^•o.i.i)  a  l i  where c(k,u,z) is the covariance function of the forward rates and P t is the the price at time s of a bond paying one unit at time t > s. This was the main result of the thesis. We believe that the solution to the above problem will belong in a set of functions of the form k • g(u — z, e ^ " * ^ ). This is where all our estimation efforts seem to point at. Apart from the above a second result was derived from ours and Kennedy's (1997) results regarding how appropriate is the choice of a model included in Kennedy's framework, based on the stationarity of the first differences of bond data. k>  -  -  21  References [1] A D L E R , R. (1981): "The Geometry of Random Fields", Wiley series in Probability and Mathematical Statistics [2] A I T - S A H A L I A , Y . (1997): "Do Interest Rates Really Follow Continuous-Time Markov Diffusions?", working paper, University of Chicago. [3] B R E N N A N M . J . and S C H W A R T Z E.S. 
(1979): " A Continuous Time Approach to the Pricing of Bonds" J. Banking Fin., 3, 133-155. [4] C O N S T A N T I N I D E S , G . M . (1992): " A Theory of the Nominal Term Structure of Interest Rates" Rev. Fin. Studies, 5, 531-552. [5] C O N T , R. (1999): "Modelling term structure dynamics: an infinite dimensional approach.", working paper, Ecole Polytechnique, Palaiseau, France. [6] C O X J.C., I N G E R S O L L J.E. and ROSS S.A. (1981): " A Theory of the Term Structure of Interest Rates." Econometrica, 53, 385-408. [7] D U F F I E D. and K A N R. (1996): " A Yield Factor Model of Interest Rates." Math. Finance, 6, 379-406 [8] D Y B V I G P.H., I N G E R S O L L J.E. and ROSS S.A. (1996): "Long Forward and Zero Coupon Rates Can Never Fall" Journal of Business, 69, 1-26. [9] E N G E L N - M U L L G E S , G . and U H L I G F. (1996): "Numerical Algorithms with Fortran." , Springer. [10] G O L D S T E I N , R. (1997): "The Term Structure of Interest Rates as a Random Field", working paper, Ohio State University. [11] H E A T H , D. and J A R A , D. (1998): "Term Structure Models Based on Future Prices" working paper, Carnegie Mellon University. [12] H E A T H , D., J A R R O W , R . A . and M O R T O N , A . (1992): "Bond Pricing and the Term Structure of Interest Rates: A New Methodology for Contingent Claims Valuation." Econometrica, 60, 77-105. [13] H O , T.S.Y. and L E E S. (1986): "Term Structure Movements and the Pricing of Interest Rate Contingent Claims" Journal of Finance, 51, 1011-1029. [14] H U L L J. and W H I T E A . (1990): "Pricing Interest Rate Derivative Securities." Rev. Fin. Studies, 3, 573-592. [15] J A M S H I D I A N F . (1988): "The one factor Gaussian Interest Rate Model: Theory and implementation." working paper, Merrill Lynch Capital Markets. [16] J A M S H I D I A N F. (1989): " A n Exact Bond Option Formula" J. Finance, 44, 205-209.  22  [17] K E N N E D Y , D.P. (1994): "The Term Structure of interest rates as a gaussian random field." Math. Finance, 4, 247-258. [18] K E N N E D Y , D.P. (1997): "Characterising Gaussian Models of the Term Structure of Interest Rates." Math. Finance, 7, 107-118. [19] L O N G S T A F F F . A . and S C H W A R T Z E.S. (1991): "Interest Rate Volatility and the Term Structure: a Two Factor General Equilibrium Model." J. Finance,. 47, 1259-1282. [20] M E R T O N , R . C . (1973): "Theory of Rational Option Pricing." Bell J. Econ. Man. Sci., 4, 141-183. [21] R O G E R S , L . C . G . (1995): "Which Model for Term-Structure of Interest Rates Should One Use?" Math. Finance, 93-116. [22] SQUIRE, W . (1970): "Modern Analytical Computational Methods in Science and Mathematics." , American Elsevier , New York. [23] V A S I C E K , O.A. (1977): " A n Equilibrium Characterisation of the Term Structure." J. Fin. Econ., 5, 177-188.  23  Appendix A • 3-month Treasury yields 5.6  4.2 h  3.8 3.6  — Jan96  Jan97  Jan98  Jan97  Jan98  Jan99  • 6-month Treasury yields  3.5  = Jan96  24  Jan99  • 1-year Treasury yields 6.5  I  3.5 Jan96  Jan97  Jan98  Jan99  Jan97  Jan98  Jan99  • 2-year Treasury yields 7  3.5  1  Jan96  25  3-year Treasury yields  Jan96  • 5-year Treasury yields 7  Jan97  Jan98  Jan99  • 7-year Treasury yields  Jan96  Jan97  Jan98  Jan99  Jan97  Jan98  Jan99  • 10-year Treasury yields 7.5,  Jan96  27  28  Appendix B Estimation of the drift function. 
• Figure B1: Log-prices of bonds on date 0 for various maturities (log-price against days till maturity, out to roughly 8000 days).

• Figure B2: (plotted against days till maturity, out to roughly 8000 days).

• Figure B3: Polynomial fitting to the log-prices of bonds: the log-price data together with the fitted cubic polynomial, against days till maturity.

• Figure B4: Fitting a function to delta(T): the values of delta(T) and the fitted function, against days till maturity.

• Figure B5: (plotted against days till maturity).

Appendix C Indicative plots of A_{m,i,j} for certain i, j. On the x-axis we present the values of m, while on the y-axis the values of A_{m,i,j} for those m.

[Plots of A(m,1,1), A(m,1,4) and further maturity pairs, against m.]

Appendix D Indicative plots of h_{i,j}(m) and A_{m,i,j} for certain i, j. On the x-axis we present the values of m, while on the y-axis the values of h_{i,j}(m) and A_{m,i,j} for those m. You can easily tell the graph of h_{i,j}(m) apart by its smoothness.

[Panels: h_{1,1}(m) and A_{m,1,1}; h_{1,9}(m) and A_{m,1,9}; h_{2,3}(m) and A_{m,2,3}; h_{2,9}(m) and A_{m,2,9}; h_{4,10}(m) and A_{m,4,10}; h_{5,3}(m) and A_{m,5,3}; h_{9,1}(m) and A_{m,9,1}; h_{9,3}(m) and A_{m,9,3}; h_{9,9}(m) and A_{m,9,9}, each plotted against m.]

Appendix E In this Appendix we present some statistical properties of the fitting error of h_{i,j}(m).

• Let N(p, i, j) be a function which measures the number of data points whose fitting error is less than p %, where as fitting error we consider the quotient (A_{m,i,j} - h_{i,j}(m)) / A_{m,i,j}. Then we can plot the following surfaces in order to see the levels at which the fitting errors fluctuate (we remind the reader that the total number of data points is 886). We can see from the plots that we have a "good" fit for j ≥ 8, for 5 ≤ i ≤ 7 and for i = 10.

[Surface plots of N(p, i, j) over the maturity pairs (i, j).]

• Let St1(i,j) = Σ_m |A_{m,i,j} - h_{i,j}(m)| / Σ_m |A_{m,i,j}| be the quotient of the sum of the absolute values of the fitting errors over the sum of the absolute values of A_{m,i,j}, for each pair of maturities i, j. Let also St2(i,j) = Σ_m (A_{m,i,j} - h_{i,j}(m))² / Σ_m A_{m,i,j}² be the quotient of the sum of the squared fitting errors over the sum of the squared values of A_{m,i,j}, for each pair of maturities i, j (this statistic is similar to the R² statistic of the least-squares methodology).

• St1 and St2 can be used to give us an idea of how big the fitting errors are and for which pair of maturities we have the best and worst fitting.
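These quantities are straightforward to compute once A_{m,i,j} and h_{i,j}(m) are available on a common grid of m values. A sketch (an assumed implementation, not the thesis code) follows, with A and H holding A_{m,i,j} and h_{i,j}(m) as arrays of shape (number of m values, 10, 10):

    import numpy as np

    def fit_statistics(A, H, p=5.0):
        err = A - H
        rel = err / A                                        # fitting error (A - h) / A
        N_p = np.sum(np.abs(rel) * 100.0 < p, axis=0)        # N(p, i, j): points with error below p %
        St1 = np.abs(err).sum(axis=0) / np.abs(A).sum(axis=0)
        St2 = (err ** 2).sum(axis=0) / (A ** 2).sum(axis=0)  # squared-error analogue, cf. the R^2 of least squares
        return N_p, St1, St2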
Plotting these two statistics we can get the following graphs:

[Surface plots of St1(i,j) and St2(i,j) over the maturity pairs (i, j).]

Appendix F

[Indicative plots of A_{m,i,j} over the maturity pairs (i, j) for certain fixed values of m.]

Appendix G Calculation of the integral ∫_k^{k+T_i} ∫_{k+m}^{k+m+T_j} c(k,u,z) dz du = f_{k,m,i,j} for various forms of the covariance function c(k,u,z).

Let:

(A)  c(k,u,z) = a · (u + z + β)²

Then

    ∫_k^{k+T_i} ∫_{k+m}^{k+m+T_j} c(k,u,z) dz du
      = a · T_i · T_j · { 4k² + (4m + 4β + 2T_i + 2T_j) · k
        + (m + β)² + (T_i + T_j)(m + β) + T_i²/3 + T_j²/3 + T_i·T_j/2 },

which definitely does not satisfy (5.2.6): the terms in k do not have the structure we are supposed to obtain, and there is also a large number of terms that are not multiplied by k at all.

(B)  c(k,u,z) = k · a · e^{-β|u-z|}

Then:

    ∫∫ c(k,u,z) dz du = k · (a/β²) · e^{-βm} (e^{βT_i} - 1)(1 - e^{-βT_j}),                        if m ≥ T_i,

    ∫∫ c(k,u,z) dz du = k · (a/β²) · [ 2β(T_i - m) - e^{-βm} + e^{-β(T_i-m)}
                          - e^{-β(m+T_j-T_i)} + e^{-β(m+T_j)} ],                                   if T_i - T_j < m < T_i,

    ∫∫ c(k,u,z) dz du = k · (a/β²) · [ 2βT_j - (1 - e^{-βT_j}) e^{-βm}
                          - (e^{βT_j} - 1) e^{-β(T_i-m)} ],                                        if m ≤ T_i - T_j.

Again this covariance function does not satisfy (5.2.6), because the k² term does not appear on the right-hand side. Notice also that the double integral is continuous at the points m = T_i - T_j and m = T_i.

(C)  c(k,u,z) = k · a · (e^{-βu} + e^{-βz})

Then

    ∫∫ c(k,u,z) dz du = k · (a/β) · [ T_j (e^{-βk} - e^{-β(k+T_i)}) + T_i (e^{-β(k+m)} - e^{-β(k+m+T_j)}) ],

which again does not satisfy (5.2.6), since k appears in the exponent of the exponential.
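Symbolic checks of this kind of computation are easy to automate. As an illustration only (the thesis does not do this), the k-dependence of the double integral for candidate (A) can be confirmed with sympy:

    import sympy as sp

    k, m, u, z, a, beta, Ti, Tj = sp.symbols('k m u z a beta T_i T_j', positive=True)

    c_A = a * (u + z + beta) ** 2                            # candidate (A)
    f_A = sp.expand(sp.integrate(c_A, (z, k + m, k + m + Tj), (u, k, k + Ti)))

    print(f_A.coeff(k, 2))    # 4*T_i*T_j*a, the coefficient of k^2
    print(f_A.coeff(k, 0))    # the terms not multiplied by k

The non-zero output of the last line shows exactly the constant terms that rule candidate (A) out.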
(D)  c(k,u,z) = k · a · [ e^{-β(γu-z+δ)²} + e^{-β(γz-u+δ)²} ] + k · C,   β > 0

We have

    ∫_k^{k+T_i} ∫_{k+m}^{k+m+T_j} c(k,u,z) dz du = k · a · (*) + k · C · T_i · T_j,

where

    (*) = ∫_k^{k+T_i} ∫_{k+m}^{k+m+T_j} [ e^{-β(γu-z+δ)²} + e^{-β(γz-u+δ)²} ] dz du
        = (1/√β) ∫_k^{k+T_i} { ∫_{a_2}^{a_1} e^{-x²} dx } du + (1/(γ√β)) ∫_k^{k+T_i} { ∫_{a_3}^{a_4} e^{-y²} dy } du,

after the substitutions x = (γu - z + δ)√β and y = (γz - u + δ)√β. The limits of the inner integrals are then

    a_1 = (γu - k - m + δ)√β,          a_3 = (γk + γm - u + δ)√β,
    a_2 = (γu - k - m - T_j + δ)√β,    a_4 = (γk + γm + γT_j - u + δ)√β.

In order to calculate (*) we have to use some approximation of the integral ∫_a^b e^{-x²} dx. We used four different approximations of this integral, taken from Squire (1970):

(i)   ∫_a^b e^{-x²} dx ≈ e^{-b²} [ b + (2/3)b³ + (4/15)b⁵ + ... ] - e^{-a²} [ a + (2/3)a³ + (4/15)a⁵ + ... ];

(ii)  an asymptotic (large-argument) expansion whose terms are of the form e^{-b²}/(2b), e^{-b²}/(4b³), ..., with the corresponding terms in a subtracted;

(iii) the termwise-integrated Maclaurin series, ∫_a^b e^{-x²} dx ≈ [ b - b³/3 + b⁵/10 - ... ] - [ a - a³/3 + a⁵/10 - ... ];

(iv)  a fourth approximation from Squire (1970), similar in form to (ii).

approximation (i)

Carrying out the u-integration of the truncated series term by term, (*) becomes a combination of integrals of the form ∫ e^{-α²} [ α + (2/3)α³ + (4/15)α⁵ + ... ] dα taken between the quantities b_1, ..., b_8 obtained by evaluating a_1, ..., a_4 at the endpoints u = k and u = k + T_i; for example b_1 = [(γ-1)k - m - T_j + δ]√β. Notice that unless γ = 1, k remains inside these limits and hence in the exponent of an exponential, so that equation (5.2.6) cannot be satisfied. Therefore we must have γ = 1. The limits then become:

    b_1 = (-m - T_j + δ)√β,        b_5 = (m + T_j + δ)√β,
    b_2 = (-m + T_i - T_j + δ)√β,  b_6 = (m + T_j - T_i + δ)√β,
    b_3 = (-m + δ)√β,              b_7 = (m + δ)√β,
    b_4 = (-m + T_i + δ)√β,        b_8 = (m - T_i + δ)√β.

Moreover, from the symmetry of the integrals and the restriction γ = 1 just derived, we can place the restriction δ > 0. A final remark is that if we assume T_1 = 63 (T_1 ranges between roughly 60 and 65 days, depending on the number of non-working days in the period), we find

    min_i b_i = b_1 = [ δ + min(-m) + min(-T_j) ] √β ≈ (δ - 8.5·10³) √β,
    max_i b_i = b_5 = [ δ + max(m) + max(T_j) ] √β ≈ (δ + 8.5·10³) √β.

Now let us make the assumption

(D1)   |a + (2/3)a³| ≫ |(4/15)a⁵|   over the range of arguments involved,

so that the series in (i) can be truncated after its second term. The remaining integrals, ∫ α e^{-α²} dα and ∫ α³ e^{-α²} dα, are elementary, and after collecting terms we obtain an expression of the form

    I_A = k · ( a · I + C · T_i · T_j ),

where I, the approximate value of (*), is a combination of differences of the form e^{-b_i²} - e^{-b_j²} (with coefficients involving β) and contains no k.

A couple of points about approximation (D1):

• The infinite series a + (2/3)a³ + (4/15)a⁵ + ... is absolutely convergent, since the ratio of successive terms is 2a²/(2n+3), which tends to 0 < 1.

• Since δ > 0 we have a > -8.5·10³·√β. Moreover, the inequality (D1) can be written |a| + (2/3)|a|³ > (4/15)|a|⁵, since a is raised only to odd powers. Solving

    |a| ( 1 + (2/3)a² - (4/15)a⁴ ) > 0,

we find that we must have |a| < 1.88544 ≈ 1.9. Since we also have a > -8.5·10³·√β, we should then have β < 5·10⁻⁸.

• The above solution satisfies the inequality |a| + (2/3)|a|³ > (4/15)|a|⁵, but not necessarily the one we are actually interested in, |a| + (2/3)|a|³ ≫ (4/15)|a|⁵. For this we should solve an inequality of the form |a| + (2/3)|a|³ > (4ε/15)|a|⁵, where ε is a number specifying the accuracy of our approximation. If we pick ε = 20 (which can be interpreted as allowing roughly a 5% error in the approximation), we get |a| < 0.7 and therefore β < 7·10⁻⁹.

• We can now proceed to the estimation of the parameters, keeping in mind the restrictions

    0 < β < 5·10⁻⁸,   γ = 1,   δ > 0.

Note: the above restriction on β does not ensure small errors, but only that the first two terms of the series are larger than the third. For small errors we should either make sure that β satisfies the stricter β < 7·10⁻⁹, or add more terms to our approximation (which greatly increases the difficulty of the problem).
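Before turning to approximations (ii)-(iv), note that the accuracy of the truncated series in (i) is easy to examine numerically. The sketch below (an illustration only, not part of the thesis) compares the three-term truncation against the exact value of ∫_0^b e^{-x²} dx obtained from the error function; the loss of accuracy as b approaches 1.9 is clearly visible in the output.

    import math

    def squire_i(x, terms=3):
        # e^{-x^2} * (x + 2x^3/3 + 4x^5/15 + ...), truncated after `terms` terms
        coeff, total, power = 1.0, 0.0, x
        for n in range(terms):
            total += coeff * power
            coeff *= 2.0 / (2 * n + 3)
            power *= x * x
        return math.exp(-x * x) * total

    def integral_exact(a, b):
        # int_a^b e^{-x^2} dx = sqrt(pi)/2 * (erf(b) - erf(a))
        return 0.5 * math.sqrt(math.pi) * (math.erf(b) - math.erf(a))

    for b in (0.3, 0.7, 1.0, 1.9):
        exact = integral_exact(0.0, b)
        approx = squire_i(b) - squire_i(0.0)
        print(f"b={b:<4} exact={exact:.6f} approx={approx:.6f} rel.err={(approx - exact) / exact:+.2%}")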
approximation (ii)

Working in the same way with the asymptotic expansion, (*) becomes a combination of terms of the form e^{-b_i²}/b_i and e^{-b_i²}/b_i³, i = 1, ..., 8, again multiplied by k, where we used the standard large-argument expansion of ∫ e^{-α²} dα and made the assumptions

(D2)   b_i ≠ 0,  i = 1, ..., 8,
(D3)   the leading term of the asymptotic expansion dominates the following ones.

In order for (D3) to be valid (in the 5% error sense used above) we must have |a| > 2.985, and therefore β > 1.234·10⁻⁷. We should note that the sum of the approximating series again converges. In order for (D2) to hold we need

    δ > max m + max_j {T_j} = 8446.

If δ < 8446 then we can always find values (m, i, j) for which one of the denominators equals 0. For the number above we used m = 1, ..., 886 and assumed, as before, that T_1 = 63. Keeping these restrictions in mind, all but one of the terms become negligible and the expression collapses to

    I_B ≈ k · (a/β) · (a single term of the form e^{-b²}/b) + k · C · T_i · T_j,

the surviving term depending on whether the corresponding b is larger or smaller than 1, since the other terms have much smaller values than the one retained.

approximation (iii)

Working in the same way with the termwise-integrated Maclaurin series, we obtain k times a polynomial expression in the b_i together with the term k · C · T_i · T_j, under the assumption

(D4)   |a - a³/3| ≫ |a⁵/10|.

In order for this to be valid (again in the 5% error sense) we must have |a| < 0.8, and therefore β < 9·10⁻⁹. It is easy to see that the sum of the series again converges. For a more accurate result we can also keep the third term of the series, which leads to an expression of the form

    I ≈ k · a · √β · T_i · T_j · [ a polynomial of degree four in m, whose coefficients involve T_i, T_j, δ and β ] + k · C · T_i · T_j.

approximation (iv)

Finally, working with the fourth approximation we end up with a formula very similar to the one for approximation (ii) (again under assumption (D2)), which leads to the same kind of result:

    I ≈ k · (a/β) · (a single term of the form e^{-b²}/b) + k · C · T_i · T_j.

The reader can see that none of the above approximations satisfies (5.2.6), since we never obtain the k² term.

(E)  c(k,u,z) = k · a · [ e^{-β(u-δ)²} + e^{-β(z-δ)²} ],   β > 0

Working similarly to (D), we end up with a solution which does not satisfy (5.2.6), since k appears in the exponent of the exponential: using approximation (i), the result is a combination of the series evaluated at

    a_1 = (k - δ)√β,   a_2 = (k + T_i - δ)√β,   a_3 = (k + m - δ)√β,   a_4 = (k + m + T_j - δ)√β,

all of which contain k. Using the other approximations leads to similar results, all of which are rejected.

Appendix H Plots of the function G_1(m,i,j), the estimator of A_{m,i,j} when using c_1(k,u,z) as the covariance function, where

    c_1(k,u,z) = k · a · e^{-β|u-z|},   β > 0.

Using a non-linear optimization algorithm we found a = 1.32·10⁻¹¹ and β = 0.08 as the optimal values of the coefficients.
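The thesis does not reproduce the optimization code, so the following is only a generic sketch of this calibration step. It assumes the empirical values A_{m,i,j} are stored in an array A_emp of shape (number of m values, 10, 10), and that model_G(m, i, j, a, beta) implements whatever estimator of A_{m,i,j} the candidate covariance implies through (5.2.6); both names, and the starting point, are assumptions of the sketch.

    import numpy as np
    from scipy.optimize import least_squares

    def residuals(params, A_emp, model_G):
        a, beta = params
        M, nI, nJ = A_emp.shape
        return np.array([model_G(m, i, j, a, beta) - A_emp[m, i, j]
                         for m in range(M) for i in range(nI) for j in range(nJ)])

    def calibrate(A_emp, model_G, x0=(1e-11, 0.1)):
        # ordinary non-linear least squares over the two free coefficients
        fit = least_squares(residuals, x0=np.asarray(x0), args=(A_emp, model_G))
        return fit.x          # the fitted (a, beta)

For c_1, model_G would be built from the closed-form double integral computed for case (B) of Appendix G.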
When looking at the plots, keep in mind that G_1(m,i,j) is continuous: the jumps we see are not caused by discontinuities but by rapid changes in the values of G_1(m,i,j). The last three graphs of this section show the behaviour of G_1(m,i,j) for non-optimal values of β (a does not concern us that much, since it is just a scale factor); we added them so that the reader can get a better view of this function's behaviour.

[Plots of G_1(m,i,j) against m for (i,j) = (1,1), (1,2), (1,3); (2,1), (2,2), (2,3); and (3,1), (3,2), (3,3), followed by three plots of G_1(m,i,j), i,j = 1, 2, 3, for non-optimal values of the coefficients.]

Appendix I Plots of the function G_2(m,i,j), the estimator of A_{m,i,j} when using c_2(k,u,z) as the covariance function, where

    c_2(k,u,z) = k · a · [ e^{-β(γu-z+δ)²} + e^{-β(γz-u+δ)²} ] + k · C,   β > 0.

For the calculation of G_2(m,i,j) we used approximation (i). The first two graphs present the form of G_2(m,i,j) when β is given a value close to the constraints derived in Appendix G. In the third graph we show G_2(m,i,j) with the coefficients that make it fit A_{m,i,j} best. In each graph there are 16 functions, the G_2(m,i,j) for i, j = 1, 2, 3, 4. We should note that in the first graph (where β = 10⁻⁹ and δ = 300) each spike appears at one of the (fixed) bond maturities T_i. Moreover, all four functions G_2(m,i,j) which correspond to that specific i have exactly the same behaviour: if G_2(m,i,j) has the coefficients of the first graph, then for each i the functions corresponding to the various j and this fixed i will be almost equal to 0 at all times except at m = T_i, where they all have a spike of the same height.

[Three plots of G_2(m,i,j), i,j = 1,...,4 (the first two for coefficient values close to the constraints, the third for the best-fitting coefficients); a plot of the values we are trying to match, A(m,i,j); and a comparable plot of A(m,3,j) and G_2(m,3,j), j = 1, 2, 3.]
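The "comparable plots" above, and the corresponding ones in Appendix J below, simply overlay the empirical A(m,i,j) and a model-implied G(m,i,j) for a fixed i and several j. A sketch of how such a figure can be produced, assuming both quantities are available as arrays indexed by (m, i, j) (none of this code is from the thesis):

    import numpy as np
    import matplotlib.pyplot as plt

    def compare(A_emp, G_model, i, js=(0, 1, 2)):
        # A_emp, G_model: arrays of shape (M, 10, 10) indexed as [m, i, j]
        m = np.arange(A_emp.shape[0])
        for j in js:
            plt.plot(m, A_emp[:, i, j], lw=0.8, label=f"A(m,{i+1},{j+1})")
            plt.plot(m, G_model[:, i, j], lw=1.6, label=f"G(m,{i+1},{j+1})")
        plt.xlabel("value of m")
        plt.legend()
        plt.show()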
Appendix J Plots of the function G(m,i,j), the estimator of A_{m,i,j} when using c_3(k,u,z) (the covariance function examined in Section 5.2.f) as the covariance function. Using a non-linear optimization algorithm we found a = 1.35·10⁻⁶, μ = 0.0034 and λ = 2.17·10⁻¹¹ as the optimal values of the coefficients. We then got the following plots:

[Plots of G(m,i,j) against m for (i,j) = (1,1), (1,2), (1,3) and (3,1), (3,2), (3,3); a plot of G(m,i,j) for i,j = 1, 2, 3; the data we are trying to match, A(m,i,j) for i,j = 1, 2, 3; and a comparable plot of A(m,3,j) and G(m,3,j), j = 1, 2, 3.]

Appendix K Definition of a Gaussian random field, taken from ADLER (1981).

"Let G^{2,1} denote the set of all ℝ-valued functions on ℝ², and F^{2,1} the σ-field containing all sets of the form { g ∈ G^{2,1} : g(t_j) ∈ B_j, j = 1, ..., m }, where m is an arbitrary integer, the t_j are points of ℝ², and the B_j are open sets in ℝ.

Then we define a 2-dimensional random field to be a measurable mapping X from (Ω, F) into (G^{2,1}, F^{2,1}). We use the notation X(t, ω) to denote the value that the function in G^{2,1} corresponding to ω takes at the point t; for convenience we usually suppress the ω.

Given the existence of the probability measure P on F, we can immediately obtain from the definition a collection of measures F_{t_1,...,t_n} on B^{n,1}, defined by

    F_{t_1,...,t_n}{B} = P{ (X(t_1), ..., X(t_n)) ∈ B }

for any B ∈ B^{n,1}. The collection of all such measures, or, equivalently, the corresponding distribution functions, is known as the family of finite-dimensional distributions of the field X.

We define a Gaussian random field to be a random field possessing finite-dimensional distributions all of which are multivariate Gaussian. From this definition and the above comments it is clear that all the finite-dimensional distributions of a real-valued Gaussian process, and hence the measures they induce on F^{2,1}, are completely determined once we specify the following two functions, known, respectively, as the mean and covariance functions:

    μ(t) = E[X(t)],
    R(s,t) = E[ (X(s) - μ(s)) (X(t) - μ(t))ᵀ ].

It can easily be seen from the form of the multivariate normal density that if a real-valued Gaussian field has a constant mean and a covariance function that depends on s - t only, then the field is homogeneous."
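Since the finite-dimensional distributions of a Gaussian field are multivariate normal and fully determined by the mean and covariance functions, sampling the field at finitely many points is immediate. A small illustrative sketch (not from Adler or the thesis; the example covariance is an arbitrary admissible choice):

    import numpy as np

    def sample_field(points, mean_fn, cov_fn, n_draws=1, rng=None):
        # points: list of (s, t) pairs; returns an array of shape (n_draws, len(points))
        rng = np.random.default_rng() if rng is None else rng
        mu = np.array([mean_fn(p) for p in points])
        C = np.array([[cov_fn(p, q) for q in points] for p in points])
        return rng.multivariate_normal(mu, C, size=n_draws)

    # Example: a homogeneous field with constant mean and a covariance depending
    # on the difference of the arguments only.
    # pts = [(s, t) for s in np.linspace(0, 1, 5) for t in np.linspace(0, 2, 5)]
    # draws = sample_field(pts, lambda p: 0.0,
    #                      lambda p, q: np.exp(-abs(p[0] - q[0]) - abs(p[1] - q[1])))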
