Extensions of the VaR Approach to Portfolio Selection with Non-normal Returns

by Wai Tse

B.Sc. (Mathematics), Hong Kong University of Science and Technology, 1995
M.Sc. (Economics), Hong Kong University of Science and Technology, 1997

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF Master of Science in THE FACULTY OF GRADUATE STUDIES (Department of Statistics)

We accept this thesis as conforming to the required standard

The University of British Columbia
May 1999
© Wai Tse, 1999

In presenting this thesis in partial fulfilment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the head of my department or by his or her representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

Department of Statistics
The University of British Columbia
Vancouver, Canada

Abstract

The well-known mean-variance approach to the portfolio selection problem proposed by Markowitz (1952) is often criticized for its use of variance as a measure of risk exposure. Recently, Value at Risk (VaR) has become a popular alternative for measuring risk in many firms. Using the idea of VaR, we formulate a chance constrained programming problem for portfolio selection. Until recently, most real-life applications relied on the assumption of normally distributed asset returns, which appears inconsistent with empirical return distributions. To relax this assumption, our study focuses on extensions of the VaR approach to portfolio selection to the class of Elliptically Contoured distributed returns and to time-varying distributed returns. For the latter case, we propose a new solution via empirical distributions.
Moreover, a profile map of returns versus risks is proposed so that the optimal portfolio can be identified for various time-window sizes. The performance of various portfolios over different time periods is evaluated by means of off-sample cumulative returns and a new return-to-risk measure.

Contents

Abstract ii
Contents iii
List of Tables v
List of Figures vi
Acknowledgements viii
Dedication ix
1 Introduction 1
2 Portfolio Selection Theory 4
  2.1 Introduction 4
  2.2 Elliptically Contoured (EC) Distributions 8
  2.3 Portfolio Selection Model with Deterministic Constraints 16
  2.4 Empirical Distributions 18
  2.5 Cutting Plane Method 20
3 Empirical Results 27
  3.1 Introduction 27
  3.2 Data 29
  3.3 Comparison of Portfolio Schemes 32
  3.4 Empirical Results and Discussion 36
4 Further Research and Discussion 53
  4.1 Introduction 53
  4.2 Computational Aspects 54
  4.3 Beyond the Model 55
  4.4 Discussion on the Use of the VaR Approach 58
5 Conclusion 60
Appendix A Proof of Quasi-concavity of Constraint Functions 62
Appendix B Returns Data 64
Appendix C Portfolio Lines 72
Bibliography 94

List of Tables

3.1 Sample averages of the daily returns for each index 41
3.2 Sample correlation matrix of the daily returns 42
3.3 Summary of maximum cumulative returns for each disaster level (d) in the period Oct 28, 1996 to Oct 27, 1997 43
3.4 Summary of maximum cumulative returns for each disaster level of return (d) in the period Oct 28, 1997 to Oct 27, 1998 44
3.5 Summary of maximum cumulative returns for each disaster level of return (d) in the period Oct 28, 1996 to Oct 27, 1998 45
3.6 Summary of return-to-risk ratio (r/d ratio) for each disaster level of return (d) in the period Oct 28, 1996 to Oct 27, 1997 46
3.7 Summary of return-to-risk ratio (r/d ratio) for each disaster level of return (d) in the period Oct 28, 1997 to Oct 27, 1998
47
3.8 Summary of return-to-risk ratio (r/d ratio) for each disaster level of return (d) in the period Oct 28, 1996 to Oct 27, 1998 48

List of Figures

3.1 Portfolio frontier curve for the period 28/10/96 to 27/10/98 49
3.2 Optimal portfolio scheme curves from 28/10/96 to 27/10/98 50
B.1 Time Series Plots of returns data 65
B.2 QQ Plots with normal probabilities 67
B.3 Histograms of returns data 68
B.4 QQ Plots with t(5) probabilities 69
B.5 QQ Plots of t(5) probabilities - Linear Combinations 70
B.6 QQ Plots of normal probabilities - Linear Combinations 71
C.1 Portfolio lines for various distribution assumptions, using 6-year window 73
C.2 Portfolio lines for various distribution assumptions, using 5-year window 76
C.3 Portfolio lines for various distribution assumptions, using 4-year window 79
C.4 Portfolio lines for various distribution assumptions, using 3-year window 82
C.5 Portfolio lines for various distribution assumptions, using 2-year window 85
C.6 Portfolio lines for various distribution assumptions, using 1-year window 88
C.7 Portfolio lines for various distribution assumptions, using all data 91

Acknowledgements

I would like to acknowledge the valuable support and comments of my supervisor, Professor Jian Liu. I am also indebted to Professor Harry Joe for much of his helpful advice. I wish to thank Fred and Kelly for always helping me and making my stay in Vancouver more enjoyable. I also have to say that I am lucky to have many friends in Vancouver, especially Don, Maggie, Alice, Phoebe, Ida, Katy, Tiffany, Leonard, Wayne, Dieter, Marc, Jochen, Alex, Matias and Kenneth. Furthermore, I would like to thank all the graduate students, faculty members and staff in the department for their support and help. Finally, I would like to thank my family for all the support and love they always give me.

RYAN W.
TSE

The University of British Columbia
April 1999

To my family

Chapter 1

Introduction

In the development of portfolio theory, a well-known approach is the mean-variance approach proposed by Markowitz (1952). His idea was to choose a portfolio that maximizes the expected return under certain risk control. In his paper, variance was used for controlling the risk. However, variance takes into account the fluctuations of returns in both directions, high and low. It may not be a good measure of risk, since the upside is not something investors worry about. In the following, we will define a portfolio selection problem consisting of the maximization of expected returns subject to a different risk control.

Having this concern in mind, researchers have tried to find other measures of risk exposure. These measures include the mean absolute deviation and the frequency of "down sides". Recently, a popular measure called Value at Risk (VaR) has become widely used in risk management by the industry. Since VaR is an appropriate measure of risk, it may be applied to the portfolio selection problem as well. In fact, a similar idea was suggested by Roy (1952), who called it the safety-first principle. Different forms of this principle were suggested for various objectives. Together with the development of chance constrained programming, first proposed by Charnes and Cooper (1959), portfolio selection with VaR as the risk measure has become an alternative to the mean-variance portfolio selection model.

In the process of actual portfolio selection, the issue of the distributional assumption arises. So far, most existing practice has used the normality assumption. Certainly, when this assumption is violated, the resulting choice of portfolio will not be the best.
In chapter 2, we will extend the VaR approach (or the chance constrained approach) to our portfolio selection problem, with specific attention to a more general class of distributions, the Elliptically Contoured distributions. An introduction to this distribution class will be presented, and we will discuss how it can be used in practice. Furthermore, we will allow the distribution to be dependent on time and will make use of the empirical distributions. In chapter 3, we will illustrate the results of an empirical study on a data set of international stock exchange indices. We will attempt to find the best portfolio selection scheme among the traditional VaR approach and our extended VaR approaches by using some new comparison criteria. In addition, we will also examine the effect of using different window sizes of data in forming portfolios. By window size we mean the number of daily values used in constructing the portfolio. We will also test the risk management ability of the models by the performance of various portfolio selection schemes over the period including the Asian financial crisis beginning in late October 1997. Although we have extended the VaR approach to the portfolio selection problem to a much more general situation, there are still many areas where further research can be done. We will discuss them in chapter 4, and suggest some possible directions for future research. Finally, chapter 5 concludes with a summary of our main ideas and findings.

Chapter 2

Portfolio Selection Theory

2.1 Introduction

Consider an investor who has a certain amount of capital to invest in k different risky assets and one riskless asset. Suppose the continuously compounded rates of return of these k risky assets are the random variables r_1, r_2, ..., r_k, respectively, and the riskless asset has a fixed rate of return r_0. The investor would like to know how to optimally allocate his capital among these assets.
Let a_1, a_2, ..., a_k be the portions of the capital allocated to the k risky assets respectively, and a_0 be the portion invested in the riskless asset. The well-known mean-variance approach of portfolio selection due to Markowitz (1952) suggests that the investor solve the following problem:

Max_{a^(0)} E(Σ_{i=0}^k a_i r_i)
s.t. Var(Σ_{i=0}^k a_i r_i) = V_0,
Σ_{i=0}^k a_i = 1,
and a_i ≥ 0, i = 0, ..., k,

where a^(0) = (a_0, a_1, ..., a_k)', and V_0 is the risk (or variance here) the investor is exposed to. The resulting expected portfolio return and V_0 will be on the efficient portfolio frontier. Markowitz (1959, p.22) defined a portfolio to be efficient if "it is impossible to obtain a greater average [expected] return without incurring greater standard deviation; it is impossible to obtain smaller standard deviation without giving up return on the average."

As mentioned in the previous chapter, variance or standard deviation does not seem to be a good measure of risk because it treats both up-and-down sides equally as risk. Thus, we will form our portfolio selection problem on the following basis: maximize the portfolio's expected return subject to a certain risk control. In our study, we will take the measure of risk to be Value at Risk (VaR). Intuitively, VaR is the amount of loss from the original capital to a disaster level at a certain probability. Disasters are events like market crashes and bankruptcies. In fact, some early use of the VaR idea in portfolio selection can be traced back to Roy (1952), who proposed the so-called safety-first principle. Among the early work, there are several forms of the safety-first principle:

• Min Pr{Σ_{i=0}^k a_i r_i < d};
• Max d s.t. Pr{Σ_{i=0}^k a_i r_i < d} ≤ α;
• Max E(Σ_{i=0}^k a_i r_i) s.t. Pr{Σ_{i=0}^k a_i r_i < d} ≤ α;

where d is the disaster level of returns and α is the probability that a disaster comes true. The first form was used by Roy (1952). The second one was suggested by Kataoka (1963) and the last form was proposed by Telser (1955-56).
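The mean-variance problem above can be illustrated numerically. The following sketch is our own illustration, not part of the thesis, and all numbers (returns, covariances, the variance budget) are hypothetical: it traces the trade-off for two risky assets plus cash by brute-force grid search, picking, among the no-short-selling allocations whose portfolio variance does not exceed a budget V_0, the one with the highest expected return.

```python
import itertools

# Hypothetical inputs: two risky assets plus cash with riskless return r0.
r0 = 0.0002                      # riskless daily return
mu = [0.0010, 0.0006]            # expected daily returns of the risky assets
cov = [[4e-4, 1e-4],             # covariance matrix of the risky returns
       [1e-4, 1e-4]]
V0 = 1e-4                        # variance budget (the risk control)

def moments(a0, a):
    """Expected return and variance of the portfolio (a0, a1, a2)."""
    mean = a0 * r0 + sum(ai * mi for ai, mi in zip(a, mu))
    var = sum(a[i] * a[j] * cov[i][j] for i in range(2) for j in range(2))
    return mean, var

best = None
grid = [i / 100 for i in range(101)]
for a1, a2 in itertools.product(grid, grid):
    a0 = 1.0 - a1 - a2
    if a0 < 0:                   # no short-selling: all weights non-negative
        continue
    mean, var = moments(a0, (a1, a2))
    if var <= V0 and (best is None or mean > best[0]):
        best = (mean, var, (a0, a1, a2))

best_mean, best_var, best_alloc = best
```

A grid search is of course only illustrative; the thesis's chance-constrained problem is solved with the cutting plane method of section 2.5.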
Pyle and Turnovsky (1970) illustrated these three forms graphically in the context of finding their solutions in the (mean, standard deviation), or (μ, σ), plane with the use of the efficient portfolio frontier. They also described the relationships between the solutions of the three forms of safety-first approaches and the mean-variance approach. Baumol (1963) used a slightly different approach than Telser's. Under the assumption of normally distributed returns, he considered the (μ, d) plane instead of the (μ, σ) plane. The conclusion is that not all efficient portfolios in the (μ, σ) plane are reasonable. He then defined efficient portfolios in his (μ, d) plane in the way that a portfolio is efficient if it is impossible to obtain a greater expected return without incurring a greater disaster level (smaller d) with the same given probability, and it is impossible to obtain a lesser disaster level (larger d) with the same given probability without giving up some expected return. Moreover, the efficient portfolios in the (μ, d) plane will be a subset of the efficient portfolios in the (μ, σ) plane which contains the reasonably efficient portfolios. This gives another reason for us to use the safety-first approach instead of the mean-variance approach.

In this study, our portfolio selection model via VaR coincides with the form suggested by Telser (1955-56), which in turn is a special case of the chance constrained programming model proposed by Charnes and Cooper (1959). It can be seen that our model is very comparable with the mean-variance approach, and is commonly used in practice.¹ Since we have a constraint which is a probabilistic statement, we need to transform it into an equivalent deterministic one in order to solve the optimization problem. To do so, many authors (e.g.
Pyle and Turnovsky (1970)) have noted that we can handle the cases where the portfolio return follows some distribution containing only two parameters, mean and variance. However, most papers assumed normally distributed returns when real data were used.

It has been noted that many return series tend to have heavier tails in their distributions, and hence the multivariate normal distribution model for the returns is not appropriate. In section 2.2, we will consider the class of Elliptically Contoured (EC) distributions, which includes both the multivariate normal and multivariate-t families. With the assumption that the returns have a joint distribution in this class, we can simplify the portfolio selection problem, which will be stated formally in section 2.3, into a solvable one. In addition to the generalization to the class of EC distributions, we consider relaxing the assumption of a time-invariant distributional form for the returns. The motivation is that the distribution may change when the outside environment changes. An example is the distribution of stock returns before and after market crashes. To allow such a generalization, in section 2.4, we extend the distributional model-based VaR criterion to one based on the empirical distribution of returns, and consequently solve the portfolio optimization problem via the extended VaR approach. In section 2.5, we will implement the cutting plane method, suggested by Kelley (1960), in the context of the chance-constrained problem to find the numerical solution to our portfolio selection problem. The actual computer algorithm used corresponds to the supporting hyperplane algorithm of Veinott (1967).

¹ The paper by Agnew, Agnew, Rasmussen and Smith (1969) is a good reference for the application of Telser's model to portfolio selection in a casualty insurance firm. Similar to Baumol's paper, they also assumed normality.
2.2 Elliptically Contoured (EC) Distributions

In classical multivariate analysis, a commonly used basic distributional assumption is the multivariate normal distribution. For more general cases, statisticians have studied a class of distributions, namely the Elliptically Contoured (EC) distributions, which can be considered as an extension of the multivariate normal distribution. Two excellent references for this class of distributions are the books by Fang, Kotz and Ng (1989), and Fang and Zhang (1989). In this section, we will only highlight some properties of this class of distributions which are useful in our portfolio selection problem. The proofs of the theorems and detailed descriptions of EC distributions can be found in the above references.

It is well known that a k-dimensional random vector x which follows N(μ, Σ) has the same distribution as μ + A'y, where y follows the k-dimensional multivariate standard normal distribution N(0, I) and Σ = A'A. Therefore, N(μ, Σ) is a generalization of N(0, I), and many properties of N(μ, Σ) are parallel to those of N(0, I). As mentioned before, the class of EC distributions is the extension of N(μ, Σ). The corresponding extension of N(0, I) is the class of spherical distributions, a sub-class of the EC distributions. Similar to N(0, I) and N(μ, Σ), these two classes of distributions also share many parallel properties. Thus, we will first focus on the discussion of spherical distributions.

Definition 2.1: (Fang, Kotz and Ng (1989, p.27)) A k × 1 random vector x has a spherical distribution if for every T ∈ O(k), Tx has the same distribution as x, where O(k) denotes the set of k × k orthogonal matrices.

The above definition may not be helpful in checking whether a distribution is in the class of spherical distributions. However, we can make use of the following theorem.
Theorem 2.1: (Fang, Kotz and Ng (1989, p.27)) A k × 1 random vector x has a spherical distribution if and only if its characteristic function (c.f.) ψ(t) satisfies one of the following equivalent conditions:

1. ψ(T't) = ψ(t), ∀ T ∈ O(k);
2. there exists a scalar function φ(·) such that ψ(t) = φ(t't).

The function φ(·) is called the characteristic generator of the spherical distribution. Using notation similar to Fang, Kotz and Ng (1989), the family of all possible characteristic generators for all k × 1 random vectors is

Φ_k = {φ(·) : φ(t_1² + ··· + t_k²) is a k-dimensional c.f.}.

The second condition is very helpful when we want to know whether a k × 1 random vector x has a spherical distribution. If the condition is satisfied, we write x ~ S_k(φ). Let us illustrate the idea with the following examples.

Example 2.1: (Fang, Kotz and Ng (1989, p.28)) Suppose x ~ N_k(0, I), i.e., multivariate normal. The c.f. of x is

ψ(t) = exp{-½(t_1² + ··· + t_k²)} = exp{-½ t't}.

We can see that ψ(t) = φ*(t't), where φ*(u) = exp(-u/2). Therefore, x ~ S_k(φ*) with the characteristic generator φ*(·).

Example 2.2: (Fang, Kotz and Ng (1989, pp. 86-87)) Suppose x ~ Mt_k(m, 0, I), i.e., the multivariate t-distribution with m degrees of freedom.² Its c.f. ψ(t; m) takes a different closed form according to whether m is odd, even, or fractional; the three (lengthy) explicit expressions, given in Fang, Kotz and Ng (1989, pp. 86-87), each depend on t only through the quadratic form t't.
Thus, in each case we can write ψ(t; m) = φ*(t't) for a suitable characteristic generator φ*(·), so that x ~ S_k(φ*).

² Stochastic representation of Mt_k(m, 0, I): if z ~ N_k(0, I), s ~ χ²_m, and z is independent of s, then y = (m/s)^{1/2} z has a multivariate t-distribution with m degrees of freedom.

As a result, we have just shown that the multivariate t-distribution belongs to the class of spherical distributions for any degrees of freedom m. Notice that a k × 1 random vector x ~ S_k(φ), in general, does not necessarily have a density. If it does, the density must be of the form g(x'x) for some nonnegative function g(·) of a scalar variable. Moreover, a nonnegative function g(·) can be used to define a density of a spherical distribution if and only if

0 < ∫_0^∞ u^{k/2 - 1} g(u) du < ∞.

We call g(·) a density generator or p.d.f. generator of the spherical distribution.

Now we will discuss the family of EC distributions, starting with the following definition.

Definition 2.2: (Fang, Kotz and Ng (1989, p.31)) A k × 1 random vector x has an EC distribution with parameters μ = (μ_1, ..., μ_k)' and Σ if x has the same distribution as μ + A'y, where y ~ S_l(φ) and A is an l × k matrix such that A'A = Σ with rank(Σ) = l. We write x ~ EC_k(μ, Σ, φ).

From the above definition, it can easily be verified that if x ~ EC_k(μ, Σ, φ) with rank(Σ) = l, we can find some scalar function φ(·) such that the c.f. of x, ψ(t) = E(e^{it'x}), is of the following form:

ψ(t) = e^{it'μ} φ(t'Σt).

Among the theorems and properties of the family of EC distributions, the following one will be very useful for our investor's problem.

Theorem 2.2: (Fang, Kotz and Ng (1989, p.43)) Suppose x ~ EC_k(μ, Σ, φ) with rank(Σ) = l, B is a k × m matrix and v is an m × 1 vector. Then v + B'x ~ EC_m(v + B'μ, B'ΣB, φ).

A special case of this theorem is that any linear combination of the components of x, a'x where a = (a_1, ..., a_k)', follows EC_1(a'μ, a'Σa, φ).
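The special case of Theorem 2.2 can be checked by simulation. The sketch below is our own illustration (not from the thesis): it draws from Mt_k(m, 0, I) via the stochastic representation in the footnote above and confirms that a fixed linear combination a'x, rescaled by √(a'a), behaves like a univariate t with the same degrees of freedom — in particular, its sample variance is close to m/(m - 2).

```python
import math
import random

random.seed(0)
k, m, n = 3, 5, 40000            # dimension, degrees of freedom, sample size
a = [0.5, 0.3, 0.2]              # an arbitrary weight vector
norm_a = math.sqrt(sum(x * x for x in a))

samples = []
for _ in range(n):
    z = [random.gauss(0.0, 1.0) for _ in range(k)]           # z ~ N_k(0, I)
    s = sum(random.gauss(0.0, 1.0) ** 2 for _ in range(m))   # s ~ chi-square(m)
    x = [math.sqrt(m / s) * zi for zi in z]                  # x ~ Mt_k(m, 0, I)
    samples.append(sum(ai * xi for ai, xi in zip(a, x)) / norm_a)

# The location is zero by symmetry, so the raw second moment estimates the
# variance, which for a t distribution with m > 2 d.f. is m/(m-2).
sample_var = sum(v * v for v in samples) / n
theory_var = m / (m - 2)
```

The tolerance in any such check has to be loose, since t(5) has heavy tails and its sample variance converges slowly.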
In addition, the following theorem will be useful in the later sections. This theorem is a generalized version of the one presented in Weintraub and Vera (1991).

Theorem 2.3: Let x = (x_1, ..., x_k)' ~ EC_k(μ, Σ, φ). If a vector (a_0⁰, ..., a_k⁰), where a_i⁰ ≥ 0 for i = 0, 1, ..., k, is such that

a_0⁰ x_0 + Σ_{i=1}^k a_i⁰ x_i⁰ ≥ d,   (2.1)

where x_0 is a constant, and x_i⁰ satisfies Pr(x_i ≥ x_i⁰) ≥ 1 - α for i = 1, ..., k, then the vector (a_0⁰, ..., a_k⁰) will also satisfy the following inequality:

a_0⁰ x_0 + a⁰'μ + F^{-1}(α) √(a⁰'Σa⁰) ≥ d,   (2.2)

where a⁰ = (a_1⁰, ..., a_k⁰)', F(·) is the cumulative distribution function of the distribution EC_1(0, 1, φ) or S_1(φ), and α is chosen such that F^{-1}(α) < 0.

Proof. Let us first denote the (i, l) element of the variance-covariance matrix Σ as s_{il}, for i = 1, ..., k and l = 1, ..., k. Then we have s_{il} = ρ_{il} s_i s_l, where ρ_{il} is the correlation between x_i and x_l, and s_i and s_l are the standard deviations of x_i and x_l respectively. Applying Theorem 2.2 with v = 0 and B being a k × 1 vector with value 1 in the ith element and values 0 elsewhere, we have x_i ~ EC_1(μ_i, s_i², φ) for i = 1, ..., k. For each x_i⁰ satisfying Pr(x_i ≥ x_i⁰) = 1 - α, we have

Pr((x_i - μ_i)/s_i ≥ (x_i⁰ - μ_i)/s_i) ≥ 1 - α,

where (x_i - μ_i)/s_i ~ EC_1(0, 1, φ) = S_1(φ). So we can rewrite the above probability statement as

(x_i⁰ - μ_i)/s_i ≤ F^{-1}(α)  ⟹  x_i⁰ ≤ μ_i + F^{-1}(α) s_i.

Therefore, if a_0⁰ x_0 + Σ_{i=1}^k a_i⁰ x_i⁰ ≥ d, then

a_0⁰ x_0 + Σ_{i=1}^k a_i⁰ μ_i + F^{-1}(α) Σ_{i=1}^k a_i⁰ s_i ≥ d.   (2.3)

On the other hand, from the facts that ρ_{il} ≤ 1, s_i > 0, and a_i⁰ ≥ 0, we also have

a⁰'Σa⁰ = Σ_{i=1}^k Σ_{l=1}^k a_i⁰ a_l⁰ ρ_{il} s_i s_l ≤ (Σ_{i=1}^k a_i⁰ s_i)²,

so that

Σ_{i=1}^k a_i⁰ s_i ≥ √(a⁰'Σa⁰).

Hence, with F^{-1}(α) < 0, we have

F^{-1}(α) Σ_{i=1}^k a_i⁰ s_i ≤ F^{-1}(α) √(a⁰'Σa⁰).   (2.4)

From (2.3) and (2.4), we get

a_0⁰ x_0 + Σ_{i=1}^k a_i⁰ μ_i + F^{-1}(α) √(a⁰'Σa⁰) ≥ d.

Q.E.D.

Theorem 2.3 states how the inequality (2.2) can be approximated by a linear one (2.1) when the random variables have an EC distribution. We will see how it is applied when we describe the cutting plane method.
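The key bound in the proof — that √(a⁰'Σa⁰) never exceeds Σ a_i⁰ s_i when the weights are non-negative, because all correlations are at most 1 — can be spot-checked numerically. This sketch is our own check with made-up numbers: it builds valid covariance matrices from random factor loadings and verifies the bound for random non-negative weight vectors.

```python
import math
import random

random.seed(1)
k = 4

def random_cov(k):
    """A positive semi-definite covariance matrix Sigma = L L'."""
    L = [[random.uniform(-1, 1) for _ in range(k)] for _ in range(k)]
    return [[sum(L[i][t] * L[j][t] for t in range(k)) for j in range(k)]
            for i in range(k)]

ok = True
for _ in range(200):
    sigma = random_cov(k)
    s = [math.sqrt(sigma[i][i]) for i in range(k)]   # standard deviations
    a = [random.uniform(0, 1) for _ in range(k)]     # non-negative weights
    quad = sum(a[i] * a[j] * sigma[i][j] for i in range(k) for j in range(k))
    bound = sum(a[i] * s[i] for i in range(k))       # sum of a_i * s_i
    # sqrt(a' Sigma a) <= sum a_i s_i must hold for every trial
    if math.sqrt(quad) > bound + 1e-9:
        ok = False
```

Since the inequality holds for every valid covariance matrix and non-negative weight vector, `ok` remains true regardless of the random draws.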
In the next section, we will illustrate how to handle the portfolio selection problem when the returns follow an EC distribution.

2.3 Portfolio Selection Model with Deterministic Constraints

Recall that our portfolio selection problem is:

Max_{a^(0)} E(Σ_{i=0}^k a_i r_i)
s.t. Pr{Σ_{i=0}^k a_i r_i ≥ d_j} ≥ 1 - α_j, j = 1, ..., m,
Σ_{i=0}^k a_i = 1,
and a_i ≥ 0, i = 0, ..., k,

where a^(0) = (a_0, a_1, ..., a_k)'. The non-negativity constraints on the a_i's mean that no short-selling is allowed. Moreover, we can see that there are m probabilistic constraints in the model. Constraint j specifies a certain disaster level of return d_j an investor can tolerate given a certain probability of occurrence α_j. This can capture the situation of multiple levels of risk.

In order to solve the above maximization problem, we first rewrite the probabilistic constraints into their deterministic forms by assuming that the returns r_i, for i = 1, ..., k, follow some multivariate distribution. Here, we will assume that they have an EC distribution, i.e.,

r ~ EC_k(μ, Σ, φ),

where r = (r_1, r_2, ..., r_k)'. Using Theorem 2.2, we have

a'r ~ EC_1(a'μ, a'Σa, φ),

where a = (a_1, a_2, ..., a_k)'. Hence, each probabilistic constraint j can be written as

Pr{y ≥ (d_j - a_0 r_0 - a'μ)/√(a'Σa)} ≥ 1 - α_j,

where y = (a'r - a'μ)/√(a'Σa) ~ EC_1(0, 1, φ) = S_1(φ). Assuming F(·) is the cumulative distribution function of y, we have the following equivalent deterministic form:

(d_j - a_0 r_0 - a'μ)/√(a'Σa) ≤ F^{-1}(α_j)  ⟺  a_0 r_0 + a'μ + F^{-1}(α_j) √(a'Σa) ≥ d_j.

Therefore, the deterministic version of the portfolio selection problem will be:

Max_{a^(0)} a_0 r_0 + Σ_{i=1}^k a_i μ_i
s.t. a_0 r_0 + a'μ + F^{-1}(α_j) √(a'Σa) ≥ d_j, j = 1, ..., m,
Σ_{i=0}^k a_i = 1,
and a_i ≥ 0, i = 0, ..., k.

With the cutting plane algorithm, which will be presented in section 2.5, we will be able to solve the above problem. However, before we get into the algorithm, we will first discuss in detail how the empirical distribution is used in the next section.
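For the normal member of the EC class, the equivalence above can be verified directly with the closed-form cdf: the chance constraint Pr{portfolio return ≥ d} ≥ 1 - α holds exactly when a_0 r_0 + a'μ + F^{-1}(α)√(a'Σa) ≥ d. A minimal sketch, with hypothetical two-asset numbers of our own choosing:

```python
import math
from statistics import NormalDist

# Hypothetical inputs (daily scale): disaster level d, tolerance alpha.
r0, d, alpha = 0.0, -0.02, 0.05
mu = [0.001, 0.0005]
sigma = [[4e-4, 5e-5], [5e-5, 1e-4]]
a0, a = 0.2, [0.5, 0.3]          # weights (cash, asset 1, asset 2) sum to 1

port_mean = a0 * r0 + sum(ai * mi for ai, mi in zip(a, mu))
port_var = sum(a[i] * a[j] * sigma[i][j] for i in range(2) for j in range(2))
port_sd = math.sqrt(port_var)

# Chance constraint, evaluated with the normal cdf directly:
prob_ok = 1 - NormalDist(port_mean, port_sd).cdf(d) >= 1 - alpha

# Deterministic equivalent: F^{-1}(alpha) is the standard normal 5% quantile.
z_alpha = NormalDist().inv_cdf(alpha)
det_ok = port_mean + z_alpha * port_sd >= d
```

The two flags must always agree; for these particular numbers both constraints are satisfied.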
2.4 Empirical Distributions

In section 2.3, although we allowed for a more general class of distributions to model the returns, which can capture heavier tails to a certain degree, it should be noted that choosing an appropriate distribution for a data set is non-trivial. Also, the distributional form may change as a result of some influential market movements such as market crashes and rallies. This section mainly focuses on using the historical data to construct empirical distributions, so that the portfolio selection problem can still be solved.

First of all, suppose we have a total of k series of returns data under consideration. We will have k different marginal empirical distributions. The empirical distribution function of the ith series is defined as

H_i(r) = (1/(t_2 - t_1 + 1)) Σ_{j=t_1}^{t_2} I(r_{ij} ≤ r),

where r_{ij} is the return of the ith index at date j, with j running from t_1 to t_2, and r is a value of returns ranging from -∞ to ∞. The function I(r_{ij} ≤ r) is an indicator function which equals 1 when r_{ij} ≤ r, and 0 otherwise.

However, without any assumption on the distribution of the data, we cannot proceed further, as we need to deal with the distribution of the portfolio, i.e., some linear combination of the returns. This is because before we optimize the portfolio holding, we need to know the distribution of the optimal portfolio. Recall that in the previous section, when we transformed the probabilistic constraint into the deterministic form, we needed to know the distribution of the linear combination of returns. However, we do not know what the optimal portfolio will be before optimization.
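The empirical distribution function H_i is straightforward to implement. A minimal sketch of our own, with made-up returns:

```python
def ecdf(returns):
    """Empirical distribution function of one return series.

    Returns the function H with H(r) = (1/n) * #{observations <= r},
    matching the definition above with n = t2 - t1 + 1.
    """
    n = len(returns)
    def H(r):
        return sum(1 for x in returns if x <= r) / n
    return H

# Five hypothetical daily returns of one index
H = ecdf([-0.02, 0.01, 0.0, 0.03, -0.01])
```

For example, H(0.0) counts the three observations at or below zero out of five.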
Therefore, we need the following assumption:

Assumption: The returns follow a joint distribution (possibly time-varying) with the property that

y = (a'r - a'μ)/√(a'Σa)

follows the same distribution for any vector a at any specified time point.³

With the above assumption, it is easy to verify that the result of Theorem 2.3 still holds, and the deterministic version of the portfolio selection problem is the same as that presented in the previous section. The allowance of a time-varying distributional form is actually a generalization of the portfolio selection model. The use of the empirical distribution is appealing since we can obtain different approximations of the true distribution by using different portions of the historical data at the different time points we are interested in. Thus the changes in the distributional form of the returns can be captured.

³ It is noted that all the distribution families in the class of EC distributions satisfy this assumption.

When we solve the portfolio selection problem, we may notice that the only place the distribution of returns is used is in the deterministic equivalent of each of the probabilistic constraints (e.g. constraint j):

a_0 r_0 + a'μ + F^{-1}(α_j) √(a'Σa) ≥ d_j.

Further, we actually only need to know F^{-1}(α_j), the critical value of the standardized distribution (with zero mean and unit variance) of the linear combinations of the returns. When we are using well-known distributions such as the normal or t, the critical value can easily be found from a statistical table or software. However, when we use the empirical distribution as an approximation, the problem becomes how to obtain an approximation for this critical value. To solve this problem, we use the assumption that all linear combinations of the returns have the same distribution up to a scale change. We thus randomly generate 100 linear combinations. For each of them, we have an empirical distribution.
Then we standardize them so that the means and variances of the empirical distributions are 0 and 1 respectively. Finally, we obtain the 100α_j-th percentile point for each linear combination. Under the assumption, these percentile points should be the same for all the linear combinations. Certainly, with a real data set this will never happen exactly. Therefore, we use the average of the 100α_j-th percentile points over all the linear combinations as the estimate of F^{-1}(α_j).

2.5 Cutting Plane Method

In this section, we will describe the cutting plane method for solving the portfolio selection problem. To be more specific, we mainly concentrate on the supporting hyperplane algorithm of Veinott (1967), Zangwill (1969), and Weintraub and Vera (1991). Moreover, we will discuss its convergence properties. In general, Veinott's algorithm can be applied to the following non-linear problem:

Max_{a^(0)} a_0 r_0 + Σ_{i=1}^k a_i μ_i
s.t. a^(0) ∈ G = {a^(0) | g_j(a^(0)) ≥ d_j, j = 1, ..., m},

where the g_j(·), for j = 1, ..., m, are quasi-concave⁴ and continuously differentiable. This means that the feasible set G is a convex set. We can see that the above non-linear problem has exactly the same objective as that in the portfolio selection problem. In fact, the portfolio selection problem is just a special case. We will discuss it in more detail after we have presented Veinott's algorithm. We will first start with the assumptions of the algorithm:

1. There exists a compact set U such that the feasible set G is contained in it.
2. There exists an interior point b^(0) such that g_j(b^(0)) > d_j, j = 1, ..., m.
3. For any a^(0) such that g_j(a^(0)) = d_j, we have ∇g_j(a^(0)) ≠ 0.

⁴ A function g(·) is quasi-concave if and only if the set G_γ = {a | g(a) ≥ γ} is convex for any scalar γ.

When these assumptions are satisfied, we can perform the following algorithm:

1. Set Z¹ = U, l = 1.
2. Solve the following linear programming problem for a solution u^(0)_l:

   Max_{a^(0)} a_0 r_0 + Σ_{i=1}^k a_i μ_i
   s.t.
   a^(0) ∈ Z^l.

3. Solution test: if u^(0)_l ∈ G, we can stop and u^(0)_l will be our solution. Otherwise, we go to the next step.
4. Let I_l = {1, ..., m_l} be the index set such that g_j(u^(0)_l) < d_j for j ∈ I_l. We can find a scalar θ^l ∈ [0, 1] such that v^(0)_l = b^(0) + θ^l(u^(0)_l - b^(0)), where g_j(v^(0)_l) ≥ d_j for j ∈ I_l, and g_{j0}(v^(0)_l) = d_{j0} for some j0 ∈ I_l. Here, v^(0)_l is a boundary point of G.
5. Create the set H^l = {a^(0) | ∇g_j(v^(0)_l)'(a^(0) - v^(0)_l) ≥ 0, for j ∈ I_l}.
6. Let Z^{l+1} = Z^l ∩ H^l. This will be the new constraint set.
7. Set l ← l + 1, and go back to step 2.

With all the conditions satisfied, the above algorithm was proven to be convergent, i.e.,

u^(0)_l → u^(0)_∞ as l → ∞,

where u^(0)_∞ passes the solution test. The details of the convergence property can be found in Zangwill (1969).

In the portfolio selection problem, the feasible set is

G = {a^(0) | a_0 r_0 + a'μ + F^{-1}(α_j) √(a'Σa) ≥ d_j, j = 1, ..., m; Σ_{i=0}^k a_i = 1; and a_i ≥ 0, i = 0, ..., k},

and we will let

g_j(a^(0)) = a_0 r_0 + a'μ + F^{-1}(α_j) √(a'Σa), j = 1, ..., m,

which are quasi-concave (Appendix A) and continuously differentiable. Hence, the feasible set of our investor's problem is convex.⁵

⁵ Geometrically, as noted in Agnew, Agnew, Rasmussen and Smith (1969), the feasible set represents one nappe of a hyperboloid and its interior.

Before proceeding with the supporting hyperplane algorithm, we need to find the set U and the interior point b^(0).

Existence of U: To find a compact set U that contains G, we consider the following compact set:

U = {a^(0) | a_0 r_0 + a'μ ≥ d_j, j = 1, ..., m; Σ_{i=0}^k a_i = 1; and a_i ≥ 0, i = 0, ..., k}.

Since F^{-1}(α_j) < 0 and √(a'Σa) ≥ 0, we have

a_0 r_0 + a'μ ≥ a_0 r_0 + a'μ + F^{-1}(α_j) √(a'Σa), ∀ a^(0).

Hence, G ⊆ U.

Existence of b^(0): Applying Theorem 2.3, we can see that any a^(0) satisfying

a_0 r_0 + Σ_{i=1}^k a_i r_i⁰ ≥ d_j,

where r_i⁰ is such that Pr(r_i ≥ r_i⁰) ≥ 1 - α_j, or equivalently r_i⁰ ≤ μ_i + F^{-1}(α_j) s_i, for i = 1, ..., k, also satisfies

a_0 r_0 + a'μ + F^{-1}(α_j) √(a'Σa) ≥ d_j,

which holds for j = 1, ..., m.
If we further consider r̃_i = μ_i + F̃_i^{-1} s_i < μ_i + F^{-1}(α_j) s_i = r*_i, where F̃_i^{-1} is chosen to be strictly smaller than F^{-1}(α_j), j = 1, ..., m, we can construct the following convex set:

    L = {a_(0) | a_0 r_0 + Σ_{i=1}^k a_i r̃_i ≥ d_j + ε, j = 1, ..., m;
         Σ_{i=0}^k a_i = 1, and a_i ≥ 0, i = 0, ..., k},

where ε > 0 is a very small value. Weintraub and Vera showed that L ⊂ G and that the solution obtained from solving

    Max_{a_(0)}  a_0 r_0 + Σ_{i=1}^k a_i μ_i
    s.t.  a_(0) ∈ L

is an interior point of the feasible set G.

One more technical issue in the algorithm we would like to discuss is how the boundary point v_(0)^l is determined after we find the solution u_(0)^l in the l-th iteration. Recall that we need v_(0)^l = b_(0) + θ^l (u_(0)^l − b_(0)) such that g_j(v_(0)^l) ≥ d_j for j ∈ I_l, and g_{j_0}(v_(0)^l) = d_{j_0} for some j_0 ∈ I_l. Now, for each j ∈ I_l, we can find at least one θ_j ∈ [0, 1] such that

    g_j[b_(0) + θ_j (u_(0)^l − b_(0))] = d_j.

Note that if more than one θ_j ∈ [0, 1] satisfies the above equation, we take the smaller value as our θ_j. There are many methods available for finding such a θ_j; here, we follow the one used by Weintraub and Vera. After some algebra, it is not hard to see that the above equation can be written as a quadratic equation in θ_j:

    A_j θ_j² + B_j θ_j + C_j = 0,

where

    A_j = −(e_0 r_0 + Σ_{i=1}^k e_i μ_i)² + [F^{-1}(α_j)]² Σ_{i=1}^k Σ_{t=1}^k s_{it} e_i e_t,
    B_j = 2 (d_j − b_0 r_0 − Σ_{i=1}^k b_i μ_i)(e_0 r_0 + Σ_{i=1}^k e_i μ_i) + 2 [F^{-1}(α_j)]² Σ_{i=1}^k Σ_{t=1}^k s_{it} b_i e_t,
    C_j = [F^{-1}(α_j)]² Σ_{i=1}^k Σ_{t=1}^k s_{it} b_i b_t − (d_j − b_0 r_0 − Σ_{i=1}^k b_i μ_i)²,
    e_i = u_i^l − b_i, for i = 0, ..., k.

Then we define θ^l = min{θ_j | j ∈ I_l}, and hence obtain the boundary point v_(0)^l. In addition to the fact that G is convex, v_(0)^l has the following property:

    g_j(v_(0)^l) ≥ d_j for j = 1, ..., m, and g_j(v_(0)^l) = d_j for some j.

In the next chapter, we will start looking at some data and make use of the theory in this chapter. An empirical study will be presented and the results will be discussed in detail.
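The Weintraub-Vera boundary-point computation described above (solve the quadratic in θ_j and keep the smallest root in [0, 1]) can be sketched as follows. This is an illustrative sketch, not the thesis's C implementation: the function and argument names are ours, and `Finv` stands for the negative percentile F^{-1}(α_j).

```python
import numpy as np

def boundary_theta(b, u, mu, Sigma, r0, Finv, d):
    """Smallest root in [0, 1] of g(b + theta*(u - b)) = d, where
    g(a) = a[0]*r0 + a[1:]'mu + Finv*sqrt(a[1:]' Sigma a[1:]).
    b, u are (k+1)-vectors whose element 0 is the riskless weight."""
    e = u - b
    lin_b = b[0] * r0 + b[1:] @ mu          # b_0 r_0 + b'mu
    lin_e = e[0] * r0 + e[1:] @ mu          # e_0 r_0 + e'mu
    # Quadratic coefficients A, B, C from squaring the constraint equation.
    A = -lin_e**2 + Finv**2 * (e[1:] @ Sigma @ e[1:])
    B = 2 * (d - lin_b) * lin_e + 2 * Finv**2 * (b[1:] @ Sigma @ e[1:])
    C = Finv**2 * (b[1:] @ Sigma @ b[1:]) - (d - lin_b)**2
    roots = np.roots([A, B, C])
    real = roots[np.isreal(roots)].real
    cand = [t for t in real if 0.0 <= t <= 1.0]
    return min(cand) if cand else None
```

Because the quadratic arises from squaring, a spurious root can appear; in practice one would verify g at the returned θ before accepting it.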
Chapter 3

Empirical Results

3.1 Introduction

In our study, we mainly consider an investor who is interested in investing a certain amount of capital in the international markets. We have chosen 11 stock indices from different international markets, using daily price data (October 25, 1990 to October 27, 1998)¹ from different stock markets in the world. In section 3.2, some preliminary analysis of these data is done in order to give us some idea of their distributional behavior. Next, we will use an example of the portfolio selection model described in chapter 2 for illustration and apply the cutting plane method to obtain the optimal allocation of capital. In this thesis, we consider the following simple model:

    Max_{a_(0)}  a_0 r_0 + Σ_{i=1}^k a_i μ_i
    s.t.  a_0 r_0 + a'μ + F^{-1}(α) √(a'Σa) ≥ d,
          Σ_{i=0}^k a_i = 1, and a_i ≥ 0, i = 0, ..., k.

¹ We obtained the data from DataStream, provided in the David Lam Library at the University of British Columbia. Many thanks to Christina, the librarian in the David Lam Library, for her kind help when we were gathering the data.

Now, we have k = 11, and the values of α and r_0 are chosen to be 5% and 0 (assuming the riskless asset is just cash held in hand) respectively. Moreover, we only have one constraint related to the risk exposure. The only disaster level of return, d, will be treated as an input variable chosen by the investor. As stated before, many people assume the returns follow a multivariate normal distribution, MVN_k(μ, Σ), when they use the chance constrained model; we will also use it, but only as a benchmark for comparison. Then, we assume that the returns have an EC distribution. In our study, we will consider the well-known multivariate-t distribution, Mt_k(m; μ, Σ), where m is the degrees of freedom. In addition, we further consider the case where we do not know which distribution family the returns belong to, and their joint distribution may be time-varying.
Hence, we will use the empirical distributions described in section 2.4. Furthermore, we also consider the choice of d and the window size in selecting portfolios. In section 3.3, we will clearly specify these factors and explain how we compare different portfolio schemes. Lastly, all empirical results, detailed analysis and interpretations will be presented in section 3.4, where we attempt to find the "best" portfolio scheme using a new approach. The steps of this approach can be summarized as follows:

1. For each portfolio scheme, we create a "portfolio line" which illustrates the relationship between portfolio returns and disaster levels of returns.
2. By connecting the envelope of these portfolio lines, we obtain the "portfolio frontier".
3. Using a new return-to-risk ratio, we can find the "best" portfolio scheme, in terms of the rate of return and risk management ability, from the portfolio frontier.

3.2 Data

The returns data we are using here are calculated as the log-difference of the index prices, i.e.,

    r_{i,t} = log P_t − log P_{t−1},

for index i at time t. The following is a summary of the data set and the notation used:

1. r_1: S&P Composite returns
2. r_2: Dow Jones Industrial Average returns
3. r_3: NASDAQ Composite returns
4. r_4: Hong Kong Hang Seng Index returns
5. r_5: Tokyo Nikkei 225 Composite returns
6. r_6: Seoul Composite returns
7. r_7: Australia All Ordinaries returns
8. r_8: London FTSE 100 Index returns
9. r_9: Frankfurt DAX Composite returns
10. r_10: Paris CAC 40 Composite returns
11. r_11: Toronto TSE 300 Composite returns

From the time series plots of these returns data, Figures B.1a - B.1k, we find that the fluctuations of the returns vary both among different markets and across different time periods. The most obvious observation is the exceptionally high volatility in the Hong Kong and Seoul markets after the 1997 Asian financial crisis. Some statistics for the whole data set are summarized in Tables 3.1 and 3.2.
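The log-difference returns above, and the fact that they sum to the period's continuously compounded return, can be sketched as follows (the function names are ours, not from the thesis):

```python
import numpy as np

def log_returns(prices):
    """Daily log returns r_t = log(P_t) - log(P_{t-1}) of a price series."""
    p = np.asarray(prices, dtype=float)
    return np.diff(np.log(p))

def cumulative_return(daily_log_returns):
    """Continuously compounded return over the whole period:
    the sum of the daily log returns."""
    return float(np.sum(daily_log_returns))
```

Because the returns are log differences, summing them telescopes: the cumulative return over any window equals log(P_end / P_start).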
In order to assess the validity of the normality assumption for the returns, we simply look at the quantile-quantile plots (QQ plots) of our daily return data. Not surprisingly, we find serious violations of the normality assumption for each index return in Figures B.2a - B.2k. In particular, the QQ plots suggest that the data have fatter tails than a normal distribution. Based on the histograms in Figures B.3a - B.3k, it is not obvious that any series of the returns data is far from symmetric. Therefore, it is reasonable to assume that the returns data follow some EC distribution.

Since the multivariate-t is the most well-known distribution in the EC distribution family other than the multivariate normal, and it is known for its fat-tail property, we will consider it as one possible candidate for the distribution of the returns. By looking at QQ plots of the returns data against t-distribution quantiles with different degrees of freedom, we found that the t-distribution with 5 degrees of freedom is a pretty good fit for the data. This can be justified by Figures B.4a - B.4k. To verify whether their joint distribution is multivariate-t with 5 degrees of freedom, or Mt_11(5; μ, Σ), we must at least have a look at some linear combinations of the returns series. By Theorem 2.2, any linear combination of the r_i's should follow a univariate-t distribution with 5 degrees of freedom if the returns are distributed as Mt_11(5; μ, Σ). Figures B.5a - B.5i show the QQ plots of 9 linear combinations of the r_i's against the t-distribution quantiles with 5 degrees of freedom. We chose the first principal component and 8 randomly selected linear combinations of the r_i's; all of these are possible choices of portfolio. The plots show that these linear combinations are quite close to the t-distribution with 5 degrees of freedom. For a clearer picture, we can compare the QQ plots of the same linear combinations against normal quantiles in Figures B.6a - B.6i.
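The percentile-averaging estimator of section 2.4, applied to random linear combinations like those examined here, can be sketched as follows. This is our own illustrative sketch: the thesis does not specify how the random combination weights were drawn, so the uniform weighting below is an assumption, and all names are hypothetical.

```python
import numpy as np

def estimate_Finv(returns, alpha, n_combos=100, seed=0):
    """Estimate F^{-1}(alpha) by averaging the empirical 100*alpha-th
    percentiles of standardized random linear combinations of the returns.
    returns: (T, k) array of daily returns."""
    rng = np.random.default_rng(seed)
    T, k = returns.shape
    pts = []
    for _ in range(n_combos):
        w = rng.uniform(size=k)
        w /= w.sum()                      # a random portfolio weight vector (assumed uniform)
        y = returns @ w
        y = (y - y.mean()) / y.std()      # standardize to mean 0, variance 1
        pts.append(np.quantile(y, alpha))
    return float(np.mean(pts))
```

Under an EC assumption every standardized linear combination has the same marginal distribution, so the sampled percentiles should scatter tightly around the common value, as the thesis observes (variance around 0.2% of the average).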
Therefore, assuming our data follow the Mt_11(5; μ, Σ) distribution is reasonable. It should be noted that we by no means claim that Mt_11(5; μ, Σ) is the best choice for fitting the data. In fact, other members of the EC distribution class are possible. Our main point is to investigate the difference in the performance of portfolio schemes when we have a better-fitting distribution for describing the behavior of the data.

3.3 Comparison of Portfolio Schemes

Before we actually carry out an empirical analysis using our data set, we have to define clearly what we would like to analyze. In this section, we describe in detail how we compare different portfolio schemes. In our study, we calculate the performances of portfolio schemes:

• with different assumptions on the distribution of the returns data, i.e., MVN_11(μ, Σ), Mt_11(5; μ, Σ), and the empirical distribution;
• before and after the Asian financial crisis in late October of 1997;
• using different disaster levels of returns, d (or VaR values)²; and
• using different window sizes of historical data.

The most important thing we are investigating is whether a non-normal distributional assumption can lead us to a much better portfolio scheme, in terms of returns and risk management ability.

² A disaster level of return with the value d means the daily return of the portfolio is d. The VaR will then be current capital × |d|.

Other than the distribution issue, from the time series plots of the returns data we find that, for all the indices, the fluctuations before and after the Asian financial crisis are very different. As crises do not happen very often, we can treat the period before the crisis as the "normal" period. On the other hand, the period after the crisis can be thought of as the "abnormal" period. In the "abnormal" period, obviously, investors are in a much riskier position.
By looking at the portfolio returns after the crisis, we can assess the risk management ability of different portfolio schemes in the "abnormal" period. To be more specific, we are going to look at the cumulative returns of each scheme for the year before the crisis, October 28, 1996 to October 27, 1997, and for the year after the crisis, October 28, 1997 to October 27, 1998.

We also look at different values of d, which can be chosen by an investor. We know that the greater the disaster level (a smaller d), or equivalently the larger the value of VaR, the less risk averse the investor is behaving, because it means the investor can tolerate losing more capital with the same probability. Thus we can tell how the investor's risk attitude affects the portfolio returns. Moreover, we can investigate whether some choices of d lead to an inefficient portfolio in Baumol's sense: riskier but with a lower expected return. The disaster levels of returns we have considered in this thesis range from -0.0001 to -0.035.

The last factor we will look at is the use of different window sizes of historical data in the portfolio optimization. It is sometimes argued that using all the past data in forming a portfolio is inappropriate. The main argument is that very old data do not have much influence on the behavior of the recent data. There are two common ways to adjust for the smaller influence of the older data. The first is assigning different weights to the data at different time points: the really old data receive smaller weights and the recent data larger weights when we calculate the sample means and variance-covariance matrix, and when we construct the empirical distribution. However, there are many possible weighting schemes and it is not obvious which one is best. The second method is simply truncating the really old data in our series. That means we always use only the most recent data, say 3 years.
This is what we call a window. So a 3-year window means that when we are finding the optimal portfolio for tomorrow, we only make use of the past data from today back to 3 years ago. This method is easier to implement than the first one, and thus we decided to use it. The only uncertain thing is the choice of the window size. Therefore, we will use different window sizes and compare them. In this thesis, we use 1-year, 2-year, up to 6-year window sizes. In addition, we will also construct the portfolios on the basis of using all the past data.

At this point, we can see that there are many possible portfolio schemes. When we compare them, we will look at their cumulative returns over a specific period of time. The following example will give us a clearer picture.

Example 3.1 Suppose we are considering the following portfolio scheme:

• distribution assumption: empirical distribution;
• disaster level of return: -0.001; and
• window size: 3 years.

Then we will find the optimal portfolio holding for October 28, 1996 based on the data in the period October 28, 1993 to October 25, 1996.³ Let the optimal portfolio be

    (a_{0,28/10/1996}, a_{1,28/10/1996}, ..., a_{11,28/10/1996}).

Using the realized daily returns of the indices on October 28, 1996, we can calculate the daily return of the portfolio, r_{p,28/10/1996}, on October 28, 1996:

    a_{0,28/10/1996} r_0 + a_{1,28/10/1996} r_{1,28/10/1996} + ... + a_{11,28/10/1996} r_{11,28/10/1996},

where r_{i,28/10/1996} is the daily return of index i, for i = 1, ..., 11, and r_0 = 0. Similarly, we can calculate the daily return of the portfolio on October 29, 1996, r_{p,29/10/1996}, by using the data in the period October 29, 1993 to October 28, 1996.
This process is repeated until we obtain the daily return of the portfolio on October 27, 1998, r_{p,27/10/1998}. Now, the cumulative return of this portfolio scheme for the year before the Asian financial crisis, i.e., for the period October 28, 1996 to October 27, 1997, is⁴

    r_{p,28/10/1996} + r_{p,29/10/1996} + ... + r_{p,27/10/1997}.

In a similar way, the cumulative return of this portfolio scheme for the year after the Asian financial crisis, i.e., for the period October 28, 1997 to October 27, 1998, is

    r_{p,28/10/1997} + r_{p,29/10/1997} + ... + r_{p,27/10/1998}.

Of course, the above example is only one of many possible portfolio schemes. In the next section, we will present the performance of all the portfolio schemes we have considered. With the help of some graphs, we can visualize the comparisons of different portfolio schemes. More importantly, we will try to find the "best portfolio scheme".

³ The exchange markets were closed on October 26, 1996 and October 27, 1996.

⁴ Note that the r_p's are continuously compounded rates of return per day. Therefore, suppose the investor had an amount of capital M_0 invested in the indices and the riskless asset before October 28, 1996; he will then have M_0 e^{r_{p,28/10/1996}} on October 28, 1996, M_0 e^{r_{p,28/10/1996}} e^{r_{p,29/10/1996}} on October 29, 1996, and so on. Eventually, on October 27, 1997, he will have

    M_0 e^{r_{p,28/10/1996}} e^{r_{p,29/10/1996}} ... e^{r_{p,27/10/1997}} = M_0 e^{(r_{p,28/10/1996} + r_{p,29/10/1996} + ... + r_{p,27/10/1997})}.

Therefore, the continuously compounded rate of return for this period of time (or the cumulative return) is r_{p,28/10/1996} + r_{p,29/10/1996} + ... + r_{p,27/10/1997}.

3.4 Empirical Results and Discussion

Several factors were considered in the last section when constructing a portfolio scheme, so choosing the best portfolio scheme is not an easy task. Firstly, we will present the results for the performances of the different portfolio schemes.
Secondly, we will use the idea of Baumol's "efficient portfolio" to find the "efficient frontier". Lastly, by defining a new return-to-risk ratio, we attempt to find the best portfolio scheme.

The way we visualize the data is by plotting cumulative returns against the disaster levels of returns. The cumulative returns are calculated in the way described in Example 3.1. For each window size, we first consider the cumulative returns for the year before the Asian financial crisis; then we look at those for the year after the crisis. In addition, we are also interested in comparing the cumulative returns of these two years for each portfolio scheme. Thus, there are 3 graphs for each window size. Also, in each graph we have drawn three lines representing the (disaster level of return, return) pairs for the 3 distribution assumptions. We call these lines portfolio lines. Since we have 6 different window sizes, plus the one where we always use all past data, we have 21 plots (Figures C.1a - C.7c) in total. In each of the plots, the y-axis is the cumulative return, the realized performance of the different portfolio schemes, and the x-axis is the disaster level of return, d, ranging from some negative value to zero. It can be seen that the portfolio lines are very different among window sizes, before and after the Asian crisis, and across the distribution assumptions. However, based only on these plots, it is difficult to draw a conclusion. Hence, we will use the idea of Baumol's "efficient portfolio". Recall that a rational investor should never choose a disaster level of return when he can get a higher return with a larger value of d. To find the "portfolio frontier", we simply look for the maximum cumulative return an investor can get at each disaster level of return. Then we can construct the portfolio frontier curve.
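The frontier construction just described (at each disaster level d, take the maximum cumulative return over all schemes) can be sketched as follows. The data layout is hypothetical; the thesis stores these results in Tables 3.3 - 3.5.

```python
def portfolio_frontier(schemes):
    """schemes: dict mapping scheme name -> dict {d: cumulative_return},
    all schemes evaluated on a common grid of disaster levels d.
    Returns {d: (best_scheme, max_return)}."""
    d_grid = sorted(next(iter(schemes.values())).keys())
    frontier = {}
    for d in d_grid:
        best = max(schemes, key=lambda s: schemes[s][d])  # scheme with highest return at this d
        frontier[d] = (best, schemes[best][d])
    return frontier
```

The efficient part of the frontier is then the region where the maximum return is still increasing as d decreases, matching Baumol's notion that a rational investor never accepts more risk for a lower return.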
Tables 3.3 - 3.5 summarize the optimal window size, distribution assumption, and the corresponding cumulative returns for different disaster levels of returns in different periods of time. One thing we may notice is that before the Asian financial crisis, from Table 3.3, the multivariate normal distribution assumption gives the maximum cumulative returns for most of the disaster levels of returns. Moreover, when d is between -0.0001 and -0.0120, the window size of 3 years is the best choice, while the 1-year window is best for the other disaster levels of returns. However, after the crisis, from Table 3.4, the empirical distribution assumption always gives the maximum returns, and the optimal window sizes vary with the disaster levels of returns.

For an investor, it may be more interesting to look at the cumulative returns for the whole period October 28, 1996 to October 27, 1998, since he can never know when a crash will come. This captures the performances of the different portfolio schemes in both the "normal" and the "abnormal" year. From Table 3.5, it is noted that the empirical distribution assumption gives the best returns for most of the disaster levels of returns, especially for the efficient ones. In our case, the disaster levels of returns in the range -0.0001 to -0.0120 (except for -0.0110) are considered to be efficient, since the cumulative returns are increasing. In Figure 3.1, the portfolio frontier curve is created by considering the returns and disaster levels for the period October 28, 1996 to October 27, 1998. The efficient portfolios are those on the negatively sloping part of the portfolio frontier curve. Therefore, an investor is only advised to choose a disaster level of return in the range -0.0001 to -0.0120, other than -0.0110. If the disaster level is chosen between -0.0001 and -0.0100, a 3-year window size and the empirical distribution should be used.
A 5-year window and the multivariate normal distribution are recommended for the disaster level of return -0.0120. One observation that should be noted is that the use of the multivariate-t distribution with 5 degrees of freedom does not lead to any portfolio scheme on the efficient frontier. This can also be seen from Figures C.1a - C.7c, where we find that the Mt_11(5; μ, Σ) assumption only leads to higher returns when the disaster level of return is smaller than the efficient ones.

Before we attempt to find the best performing portfolio scheme, we define a measure for the performances of different portfolio schemes. The following definition gives the new return-to-risk ratio we are going to use:

Definition 3.1: Suppose a portfolio scheme gives a cumulative return r at a disaster level of return d. Its return-to-risk ratio (r/d ratio) is defined as |r/d|.

The meaning of this ratio is the cumulative return per unit of absolute disaster level of return. Obviously, a larger value of this ratio represents a better portfolio scheme when both return and risk are taken into account. For the portfolio schemes on the portfolio frontier curve, we calculate their r/d ratios and summarize them in Tables 3.6 - 3.8. Again, we only consider the period October 28, 1996 to October 27, 1998 so that both the "normal" and the "abnormal" year are included. From Table 3.8, we find that the r/d ratio is greatest when the disaster level of return is -0.0090. Therefore, we conclude that by using a 3-year window, the empirical distribution assumption, and -0.0090 as the disaster level of return, we obtain the best portfolio scheme for these 11 stock indices and the riskless asset. Since this result has already taken the "abnormal" year into account, the scheme also has the ability to manage risk when unexpected negative market movements occur. Therefore, this scheme should be recommended to an investor interested in investing in these assets.
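Definition 3.1, and the selection of the best scheme on the frontier by maximizing the r/d ratio, can be sketched as follows (a minimal illustration with made-up numbers, not the thesis data):

```python
def rd_ratio(r, d):
    """Return-to-risk ratio |r/d| of Definition 3.1."""
    return abs(r / d)

def best_on_frontier(frontier):
    """frontier: dict {d: cumulative_return} along the portfolio frontier.
    Returns the disaster level d whose scheme maximizes |r/d|."""
    return max(frontier, key=lambda d: rd_ratio(frontier[d], d))
```

The ratio rewards schemes that earn more return per unit of tolerated loss, so a scheme with a slightly lower return but a much smaller |d| can still win, exactly the effect seen at d = -0.0090 in Table 3.8.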
For the period October 28, 1996 to October 27, 1998, the portfolio holdings of this scheme are shown in Figures 3.2a - 3.2l.

Table 3.1: Sample averages of the daily returns for each index.

    r_1:  0.000591    r_2:  0.000582    r_3:  0.000776    r_4:  0.000565
    r_5: -0.000291    r_6: -0.000362    r_7:  0.000302    r_8:  0.000448
    r_9:  0.000546    r_10: 0.000368    r_11: 0.000311

Table 3.2: Sample correlation matrix of the daily returns.

           r_1     r_2     r_3     r_4     r_5     r_6
    r_1   1.0000  0.9510  0.8165  0.1277  0.1115  0.0872
    r_2   0.9510  1.0000  0.7405  0.1468  0.1110  0.0872
    r_3   0.8165  0.7405  1.0000  0.1492  0.1274  0.0940
    r_4   0.1277  0.1468  0.1492  1.0000  0.2666  0.1286
    r_5   0.1115  0.1110  0.1274  0.2666  1.0000  0.0876
    r_6   0.0872  0.0872  0.0940  0.1286  0.0876  1.0000
    r_7   0.0872  0.1098  0.1204  0.4470  0.3195  0.1556
    r_8   0.3505  0.3562  0.3479  0.3031  0.2737  0.1202
    r_9   0.2731  0.2862  0.2805  0.4037  0.2861  0.1115
    r_10  0.3413  0.3538  0.3240  0.2779  0.2485  0.0784
    r_11  0.6502  0.6488  0.6172  0.2385  0.1795  0.0877

           r_7     r_8     r_9     r_10    r_11
    r_1   0.0872  0.3505  0.2731  0.3413  0.6502
    r_2   0.1098  0.3562  0.2862  0.3538  0.6488
    r_3   0.1204  0.3479  0.2805  0.3240  0.6172
    r_4   0.4470  0.3031  0.4037  0.2779  0.2385
    r_5   0.3195  0.2737  0.2861  0.2485  0.1795
    r_6   0.1556  0.1202  0.1115  0.0784  0.0877
    r_7   1.0000  0.2982  0.4103  0.2624  0.2257
    r_8   0.2982  1.0000  0.5493  0.6768  0.3777
    r_9   0.4103  0.5493  1.0000  0.6239  0.3459
    r_10  0.2624  0.6768  0.6239  1.0000  0.3649
    r_11  0.2257  0.3777  0.3459  0.3649  1.0000

Table 3.3: Summary of maximum cumulative returns for each disaster level (d) in the period Oct 28, 1996 to Oct 27, 1997.

    d         Window size   Distribution*   Maximum returns
    -0.0001       3           MVN              0.0025
    -0.0005       3           MVN              0.0126
    -0.0010       3           MVN              0.0253
    -0.0030       3           MVN              0.0759
    -0.0050       3           MVN              0.1265
    -0.0070       3           MVN              0.1771
    -0.0090       3           MVN              0.2227
    -0.0100       3           Emp              0.2227
    -0.0110       3           Mt5              0.2259
    -0.0120       3           Mt5              0.2236
    -0.0130       1           MVN              0.2216
    -0.0140       1           MVN              0.2394
    -0.0150       1           MVN              0.2452
    -0.0160       1           Emp              0.2478
    -0.0170       1           Emp              0.2496
    -0.0180       1           MVN              0.2531
    -0.0200       1           MVN/Emp          0.2567

* MVN: Multivariate normal distribution, Mt5: Multivariate-t distribution, Emp: Empirical distribution.
Table 3.4: Summary of maximum cumulative returns for each disaster level of return (d) in the period Oct 28, 1997 to Oct 27, 1998.

    d         Window size   Distribution   Maximum returns
    -0.0001       4           Emp              0.0012
    -0.0005       4           Emp              0.0062
    -0.0010       4           Emp              0.0123
    -0.0030       4           Emp              0.0370
    -0.0050       4           Emp              0.0619
    -0.0070       4           Emp              0.0860
    -0.0090       3           Emp              0.1116
    -0.0100       5           Emp              0.1305
    -0.0110       5           Emp              0.1399
    -0.0120       5           Emp              0.1715
    -0.0130       5           Emp              0.1654
    -0.0140       2           Emp              0.1745
    -0.0150       2           Emp              0.1794
    -0.0160       2           Emp              0.1926
    -0.0170       2           Emp              0.1944
    -0.0180       2           Emp              0.1968
    -0.0200       2           Emp              0.1739

Table 3.5: Summary of maximum cumulative returns for each disaster level of return (d) in the period Oct 28, 1996 to Oct 27, 1998.

    d         Window size   Distribution   Maximum returns
    -0.0001       3           Emp              0.0036
    -0.0005       3           Emp              0.0179
    -0.0010       3           Emp              0.0358
    -0.0030       3           Emp              0.1075
    -0.0050       3           Emp              0.1799
    -0.0070       3           Emp              0.2494
    -0.0090       3           Emp              0.3327
    -0.0100       3           Emp              0.3399
    -0.0110       5           Emp              0.3389
    -0.0120       5           MVN              0.3721
    -0.0130       2           Emp              0.3680
    -0.0140       5           Emp              0.3668
    -0.0150       5           Mt5              0.3665
    -0.0160       2           Emp              0.3572
    -0.0170       2           Emp              0.3545
    -0.0180       2           Emp              0.3560
    -0.0200       5           Mt5              0.3383

Table 3.6: Summary of return-to-risk ratio (r/d ratio) for each disaster level of return (d) in the period Oct 28, 1996 to Oct 27, 1997.

    d         Window size   Distribution   r/d ratio
    -0.0001       3           MVN             25.2968
    -0.0005       3           MVN             25.2964
    -0.0010       3           MVN             25.3014
    -0.0030       3           MVN             25.2994
    -0.0050       3           MVN             25.2923
    -0.0070       3           MVN             25.2976
    -0.0090       3           MVN             24.7484
    -0.0100       3           Emp             22.2659
    -0.0110       3           Mt5             20.5364
    -0.0120       3           Mt5             18.6309
    -0.0130       1           MVN             17.0448
    -0.0140       1           MVN             17.1023
    -0.0150       1           MVN             16.3499
    -0.0160       1           Emp             15.4901
    -0.0170       1           Emp             14.6837
    -0.0180       1           MVN             14.0627
    -0.0200       1           MVN/Emp         12.8360

Table 3.7: Summary of return-to-risk ratio (r/d ratio) for each disaster level of return (d) in the period Oct 28, 1997 to Oct 27, 1998.
    d         Window size   Distribution   r/d ratio
    -0.0001       4           Emp             12.3326
    -0.0005       4           Emp             12.3420
    -0.0010       4           Emp             12.3021
    -0.0030       4           Emp             12.3201
    -0.0050       4           Emp             12.3861
    -0.0070       4           Emp             12.2920
    -0.0090       3           Emp             12.4054
    -0.0100       5           Emp             13.0524
    -0.0110       5           Emp             12.7203
    -0.0120       5           Emp             14.2933
    -0.0130       5           Emp             12.7253
    -0.0140       2           Emp             12.4651
    -0.0150       2           Emp             11.9611
    -0.0160       2           Emp             12.0400
    -0.0170       2           Emp             11.4367
    -0.0180       2           Emp             10.9316
    -0.0200       2           Emp              8.6942

Table 3.8: Summary of return-to-risk ratio (r/d ratio) for each disaster level of return (d) in the period Oct 28, 1996 to Oct 27, 1998.

    d         Window size   Distribution   r/d ratio
    -0.0001       3           Emp             35.6745
    -0.0005       3           Emp             35.8006
    -0.0010       3           Emp             35.7771
    -0.0030       3           Emp             35.8360
    -0.0050       3           Emp             35.9815
    -0.0070       3           Emp             35.6246
    -0.0090       3           Emp             36.9694
    -0.0100       3           Emp             33.9917
    -0.0110       5           Emp             30.8121
    -0.0120       5           MVN             31.0108
    -0.0130       2           Emp             28.3107
    -0.0140       5           Emp             26.1991
    -0.0150       5           Mt5             24.4358
    -0.0160       2           Emp             22.3242
    -0.0170       2           Emp             20.8531
    -0.0180       2           Emp             19.7753
    -0.0200       5           Mt5             16.9146

Figure 3.1: Portfolio frontier curve for the period 28/10/96 to 27/10/98.

Figure 3.2: Optimal portfolio scheme curves from 28/10/96 to 27/10/98. (Panels 3.2a - 3.2j plot the optimal weight of each asset against the time index: 3.2a Riskless Asset; 3.2b S&P Composite Index; 3.2c Dow Jones Industrial Average Index; 3.2d NASDAQ Composite Index; 3.2e Hong Kong Hang Seng Index; 3.2f Tokyo Nikkei 225 Composite Index; 3.2g Seoul Composite Index; 3.2h Australia All Ordinaries Index; 3.2i London FTSE 100 Index; 3.2j Frankfurt DAX Composite Index.)
(Panels 3.2k and 3.2l plot the optimal weights of the Paris CAC 40 Composite Index and the Toronto TSE 300 Composite Index against the time index.)

Chapter 4

Further Research and Discussion

4.1 Introduction

The main advantage of the method of choosing a portfolio developed in the previous chapters is that it is simple and easy to implement in reality. In addition, the cutting plane method is computationally efficient: in our experience, the computer time for finding the optimal portfolio is within seconds. The details of the computational aspects are given in section 4.2. However, if we really want to use the suggested model in practice, it may seem too simple, and we may want to consider some more technical issues when looking for the optimal portfolio scheme. We discuss all of these in section 4.3, along with some directions for applying our study to the real world. Although the VaR approach (or the chance constrained programming approach) is popular in risk management for many firms, there are criticisms of it. We will have a brief discussion in section 4.4.

4.2 Computational Aspects

In the algorithm described in Chapter 2, basically, we have to solve a linear programming problem in each iteration. In the first iteration, we have a linear objective function and two linear constraints. After that, in each additional iteration we include one more linear constraint in the linear programming problem. To solve these linear programming problems, we use the simplex method.

The approach used in this thesis has been implemented in the C programming language. In the more than one thousand lines of code we have written, six files of source code from Numerical Recipes (Press et al. (1992)) are used. One of them is the C subroutine for the simplex method, which is in the file "simplx.c".
The other five files contain the supporting functions used in "simplx.c". They are "simp1.c", "simp2.c", "simp3.c", "nrutil.c" and "nrutil.h". Among them, "nrutil.c" and "nrutil.h" are the ANSI C versions of the Numerical Recipes utility files.

Our program was written so that when the disaster level of return, the distribution assumption, and the window size are input, the optimal portfolio holdings and the realized returns for the period October 28, 1996 to October 27, 1998 are calculated. In our experience, the computer time needed for this calculation is around 30 to 60 minutes on a Sun Enterprise 450 computer running Solaris 2.6. Therefore, to find the optimal portfolio holding for a single date, the computer time required is within seconds.

4.3 Beyond the Model

Transaction Cost

One very important thing we have not considered in our model is the transaction cost incurred when the portfolio holdings are changed. In our study, since the optimization is done every day and the portfolio holdings are revised daily, transaction costs may be a concern. Even though, in recent years, electronic trading through the Internet has greatly reduced transaction costs,¹ we may still want to consider them.

¹ Based on the commissions charged by online brokers nowadays, even in our case where portfolio holdings are revised daily, the total transaction cost involved is still small compared to the rate of return obtained by the optimal portfolio scheme. For the two-year horizon in our example, the net return rate is at least 29% while the gross return rate is 33%.

To include the transaction cost in the process of selecting the optimal portfolio, two modifications of our model can be made. One is simply to incorporate the transaction cost into the objective function. For our case in the last chapter, suppose the transaction costs for a unit change in the holdings of the assets are c_0, c_1, ..., c_k respectively.
If our current portfolio has the weight vector (a_0^0, a_1^0, ..., a_k^0), then to find the optimal portfolio in the next time period we solve the following maximization problem:

    Max_{a_(0)}  a_0 r_0 + Σ_{i=1}^k a_i μ_i − Σ_{i=0}^k c_i |a_i − a_i^0|
    s.t.  a_0 r_0 + a'μ + F^{-1}(α) √(a'Σa) ≥ d,
          Σ_{i=0}^k a_i = 1, and a_i ≥ 0, i = 0, ..., k.

The difficulty in solving the above problem is that the objective function is no longer a simple linear function; in fact, it is piecewise linear. We can perform a 2-stage optimization by first maximizing over a partition of the domain on which the objective function is locally linear in each sub-domain; this part can utilize the cutting plane method. Then we can optimize over all the sub-domains to find the optimal portfolio. The trade-off is that this requires much more computer time.

The second modification we can make to our model is for the case where an investor has a limited amount of transaction cost to spend on changing the asset holdings. This can be done by adding the following constraint, which restricts the transaction cost to be less than an amount C:

    Σ_{i=0}^k c_i |a_i − a_i^0| ≤ C.

Some remarks on the use of the empirical distribution

In the last chapter, we used the empirical distribution to approximate the theoretical critical value of the distribution of linear combinations of the returns data. Recall that we randomly selected 100 linear combinations of the returns data and constructed their empirical distributions. Then we used the simple average of the critical points of these 100 generated empirical distributions as an estimate of the theoretical critical value. These sample critical points are supposed to be the same theoretically. We found that the variance of these sample critical values is around 0.2% (or less) of their average. Therefore, it is reasonable to believe that all linear combinations of the returns data have approximately the same distributional form.
Certainly, further testing can be done to verify its appropriateness. Another point to note is the use of the sample mean as an estimate of the theoretical critical point. We used it because it is simple and reasonable; other estimation methods are also possible.

4.4 Discussion on the Use of the VaR Approach

The main criticism of using VaR as a risk measure in the context of the chance constrained programming model discussed in the previous chapter is that it gives the same penalty to different levels of violation of the chance constraint. In our model, the original probabilistic constraint is

$$\Pr\left\{\sum_{i=0}^{k} a_i r_i \geq d\right\} \;\geq\; 1 - \alpha.$$

Such a constraint, in fact, only requires the inequality $\sum_{i=0}^{k} a_i r_i \geq d$ to hold $100 \times (1 - \alpha)$ times out of 100. However, it cannot distinguish between a serious violation of the inequality (i.e., $\sum_{i=0}^{k} a_i r_i \ll d$) and a slight violation (i.e., $d - \sum_{i=0}^{k} a_i r_i$ is a very small positive number). This may not be an important issue if the violation is not serious, but in the case of a serious violation, a huge loss will occur. We admit that this is a disadvantage of using VaR as the risk measure: it treats huge and small losses equally, which does not seem to make economic sense. However, we are not choosing the optimal portfolio at a single time point, in which case we could indeed end up losing a large portion of the original capital. Instead, we repeat the optimization process in selecting our portfolio over a period of time,² and the optimal portfolio scheme is chosen over several different values of the disaster level of return.³ Therefore, the disaster level of return (or the VaR value) is carefully chosen, and it is extremely unlikely that a large portion of the original capital will be lost. For further study, we can also vary the value of the disaster probability, $\alpha$, when choosing the optimal portfolio scheme.
In addition, we would like to point out that in our generalized VaR approach to portfolio selection, the distributional assumption on the data is crucial. An inappropriate distributional assumption may underestimate our risk exposure (or the VaR value), which may lead to a huge loss. On the other hand, it may also overestimate our risk exposure, in which case our choice of portfolio may be too conservative and tend to give a low portfolio return. The way we use the empirical distribution in our optimal portfolio scheme suggests a method of addressing the distributional assumption problem through historical data. In summary, we believe that the use of VaR with various disaster levels of return in our model is reasonable and appropriate.

² There are 600 optimizations in the two-year period.
³ In fact, what we did is a sensitivity analysis over the values of d.

Chapter 5

Conclusion

The traditional VaR approach to the portfolio selection problem often assumes that the joint distribution of the returns data is multivariate normal. This assumption is mainly adopted for handling linear combinations of the returns data: the class of multivariate normal distributions has the desirable property that any linear combination of its components has a univariate normal distribution. A similar property holds for the class of EC distributions, which contains the multivariate normal distribution as a subclass.

In our generalized VaR approach, we first extended the distributional assumption to the class of EC distributions. The second generalization we made to our portfolio selection model was to allow the distributional form of the data to vary over time. We were therefore able to capture changes in the behavior of the data that might be due to unexpected environmental changes. For this extension, the empirical distribution was used.
In Chapter 3, we analyzed some stock index returns data in the context of selecting the optimal portfolio over a period of time. The results showed that the empirical distribution had good overall performance compared to the multivariate normal and multivariate-t distributions. Moreover, it performed best in the "abnormal" year, which means its ability to manage risk was higher. This was mainly because the empirical distribution can capture the time-dependent character of our returns data. In addition, we found that the number of historical data values used (the window size) was an important factor in solving our investor's problem.

Finally, we would like to emphasize that our study gives an idea of how to approach a portfolio selection problem with VaR as the risk measure. To apply our model and method in practice, we can consider other factors, such as transaction costs, in order to obtain a better and more accurate solution.

Appendix A

Proof of Quasi-concavity of Constraint Functions

Lemma A.1: $g(\mathbf{a}) = \sqrt{\mathbf{a}'\Sigma\mathbf{a}}$ is a convex function.

Proof. Let $\lambda \in (0,1)$ and $\bar\lambda = 1 - \lambda$. Showing the convexity of $g(\mathbf{a})$ is the same as showing

$$\sqrt{(\lambda\mathbf{a} + \bar\lambda\mathbf{b})'\Sigma(\lambda\mathbf{a} + \bar\lambda\mathbf{b})} \;\leq\; \lambda\sqrt{\mathbf{a}'\Sigma\mathbf{a}} + \bar\lambda\sqrt{\mathbf{b}'\Sigma\mathbf{b}}$$
$$\Leftrightarrow\; \lambda^2\mathbf{a}'\Sigma\mathbf{a} + \bar\lambda^2\mathbf{b}'\Sigma\mathbf{b} + 2\lambda\bar\lambda\,\mathbf{a}'\Sigma\mathbf{b} \;\leq\; \lambda^2\mathbf{a}'\Sigma\mathbf{a} + \bar\lambda^2\mathbf{b}'\Sigma\mathbf{b} + 2\lambda\bar\lambda\sqrt{\mathbf{a}'\Sigma\mathbf{a}}\sqrt{\mathbf{b}'\Sigma\mathbf{b}}$$
$$\Leftrightarrow\; \mathbf{a}'\Sigma\mathbf{b} \;\leq\; \sqrt{\mathbf{a}'\Sigma\mathbf{a}}\sqrt{\mathbf{b}'\Sigma\mathbf{b}}.$$

Since $\Sigma$ is positive definite, we can write $\Sigma = \Sigma^{1/2}\Sigma^{1/2}$. By the Cauchy-Schwarz inequality,

$$\mathbf{a}'\Sigma\mathbf{b} = (\Sigma^{1/2}\mathbf{a})'(\Sigma^{1/2}\mathbf{b}) \;\leq\; \sqrt{\mathbf{a}'\Sigma\mathbf{a}}\sqrt{\mathbf{b}'\Sigma\mathbf{b}}. \quad \text{Q.E.D.}$$

Recall

$$g_j(\mathbf{a}_{(0)}) = a_0 r_0 + \mathbf{a}'\boldsymbol{\mu} + F^{-1}(\alpha_j)\sqrt{\mathbf{a}'\Sigma\mathbf{a}}, \qquad j = 1, \ldots, m,$$

where the $F^{-1}(\alpha_j)$'s are all negative. Applying Lemma A.1, $g_j(\mathbf{a}_{(0)})$ can be expressed in the form (linear function) $-$ (convex function), which is a concave function. Hence $g_j(\mathbf{a}_{(0)})$ is also a quasi-concave function for $j = 1, \ldots, m$.

Appendix B

Returns Data

[Figure B.1: Time series plots of the returns data.]
[Figure B.2: QQ plots with normal probabilities.]
[Figure B.3: Histograms of the returns data (S&P Composite, DJ Industrial Average, NASDAQ Composite, HK Hang Seng Index, Tokyo Nikkei 225 Composite, Seoul Composite, Australia All Ordinaries, London FTSE 100 Index, Frankfurt DAX Composite, Paris CAC 40 Composite, Toronto TSE 300 Composite).]
[Figure B.4: QQ plots with t(5) probabilities.]
[Figure B.5: QQ plots of t(5) probabilities - linear combinations.]
[Figure B.6: QQ plots of normal probabilities - linear combinations.]

Appendix C

Portfolio Lines

Each figure plots cumulative return against the disaster level of return (d) for the three distribution assumptions (multivariate normal, multivariate-t with d.f. 5, and empirical distribution), with panels (a), (b) and (c) covering the periods 28/10/96 to 27/10/97, 28/10/97 to 27/10/98, and 28/10/96 to 27/10/98, respectively.

[Figure C.1: Portfolio lines for various distribution assumptions, using a 6-year window.]
[Figure C.2: Portfolio lines for various distribution assumptions, using a 5-year window.]
[Figure C.3: Portfolio lines for various distribution assumptions, using a 4-year window.]
[Figure C.4: Portfolio lines for various distribution assumptions, using a 3-year window.]
[Figure C.5: Portfolio lines for various distribution assumptions, using a 2-year window.]
[Figure C.6: Portfolio lines for various distribution assumptions, using a 1-year window.]
[Figure C.7: Portfolio lines for various distribution assumptions, using all data.]

Bibliography

[1] Agnew, N. H., R. A. Agnew, J. Rasmussen and K. R. Smith (1969). An Application of Chance Constrained Programming to Portfolio Selection in a Casualty Insurance Firm. Management Science, 15, B512-B520.
[2] Avriel, M., W. E. Diewert, S. Schaible and W. T. Ziemba (1981). Introduction to Concave and Generalized Concave Functions, in: Schaible, S. and W. T. Ziemba (eds.), Generalized Concavity in Optimization and Economics, New York, London: Academic Press, 21-50.
[3] Baumol, W. J. (1963). An Expected Gain-confidence Limit Criterion for Portfolio Selection. Management Science, 10, 174-182.
[4] Brockett, P. L., A. Charnes, W. W. Cooper, K. H. Kwon and T. W. Ruefli (1992). Chance Constrained Programming Approach to Empirical Analyses of Mutual Fund Investment Strategies. Decision Sciences, 23, 385-408.
[5] Charnes, A. and W. W. Cooper (1959). Chance-Constrained Programming. Management Science, 6, 73-79.
[6] Charnes, A. and W. W. Cooper (1963). Deterministic Equivalents for Optimizing and Satisficing under Chance Constraints. Operations Research, 2, 18-39.
[7] Constantinides, G. M. and A. G. Malliaris (1995). Portfolio Theory, in: Jarrow, R. A., V. Maksimovic and W. T. Ziemba (eds.), Handbooks in Operations Research and Management Science, Vol. 9, Amsterdam, New York: Elsevier, 1-30.
[8] Fang, K. T., S. Kotz and K. W. Ng (1989). Symmetric Multivariate and Related Distributions. London, New York: Chapman and Hall.
[9] Fang, K. T. and Y. Zhang (1990). Generalized Multivariate Analysis. New York: Springer-Verlag.
[10] Gupta, A. K. and T. Varga (1993). Elliptically Contoured Models in Statistics. Dordrecht, Boston, London: Kluwer Academic Publishers.
[11] Hillier, F. S. and G. J. Lieberman (1990). Introduction to Operations Research (Fifth Edition). New York: McGraw-Hill.
[12] J. P. Morgan and Reuters (1996). RiskMetrics - Technical Document (Fourth Edition). J. P. Morgan's Web page on the Internet.
[13] Kallberg, J. G. and W. T. Ziemba (1981). Generalized Concave Functions in Stochastic Programming and Portfolio Theory, in: Schaible, S. and W. T. Ziemba (eds.), Generalized Concavity in Optimization and Economics, New York, London: Academic Press, 719-767.
[14] Kataoka, S. (1963). A Stochastic Programming Model. Econometrica, 31, 181-196.
[15] Kelley, J. E. (1960). The Cutting-Plane Method for Solving Convex Programs. Journal of the Society for Industrial and Applied Mathematics, 8, 703-712.
[16] Lintner, J. (1965). The Valuation of Risk Assets and the Selection of Risky Investments in Stock Portfolios and Capital Budgets. The Review of Economics and Statistics, 47, 13-37.
[17] Markowitz, H. (1952). Portfolio Selection. Journal of Finance, 7, 77-91.
[18] Markowitz, H. (1959). Portfolio Selection. New Haven, London: Yale University Press.
[19] Press, W. H., S. A. Teukolsky, W. T. Vetterling and B. P. Flannery (1992). Numerical Recipes in C: The Art of Scientific Computing (Second Edition). Cambridge University Press.
[20] Pyle, D. H. and S. J. Turnovsky (1970). Safety-First and Expected Utility Maximization in Mean-Standard Deviation Portfolio Analysis. The Review of Economics and Statistics, 52, 75-81.
[21] Roy, A. D. (1952). Safety First and the Holding of Assets. Econometrica, 20, 431-449.
[22] Sharpe, W. F. (1964). Capital Asset Prices: A Theory of Market Equilibrium Under Conditions of Risk. The Journal of Finance, 19, 425-442.
[23] Telser, L. (1955-6). Safety First and Hedging. Review of Economic Studies, 23, 1-16.
[24] Veinott, A. F. (1967). The Supporting Hyperplane Method for Unimodal Programming. Operations Research, 15, 147-152.
[25] Weintraub, A. and J. Vera (1991). A Cutting Plane Approach for Chance Constrained Linear Programs. Operations Research, 39, 776-785.
[26] Zangwill, W. I. (1969). Nonlinear Programming: A Unified Approach. Englewood Cliffs, N.J.: Prentice Hall.