Matrix Methods in Queueing and Dynamic Programming

by

Bernard Fernand Lamond

B.Sc., Université de Montréal, 1978
M.Sc., Université de Montréal, 1980

A Thesis Submitted in Partial Fulfillment of the Requirements for the Degree of DOCTOR OF PHILOSOPHY in The Faculty of Graduate Studies, Faculty of Commerce and Business Administration

We accept this thesis as conforming to the required standard

The University of British Columbia
December 1985
© Bernard Fernand Lamond, 1985

In presenting this thesis in partial fulfilment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the head of my department or by his or her representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

Department of
The University of British Columbia
1956 Main Mall
Vancouver, Canada V6T 1Y3

Abstract

We investigate some modern matrix methods for the solution of finite state stochastic models with an infinite time horizon. Markov and semi-Markov decision processes and finite queues in tandem with exponential service times are considered. The methods are based on the Drazin generalized inverse and use matrix decomposition. Unlike the related Jordan canonical form, the decompositions considered are numerically tractable and use real arithmetic when the original matrix has real entries. The spectral structure of the transition matrix of a Markov chain, deduced from nonnegative matrix theory, provides a decomposition from which the limiting and deviation matrices are directly obtained.
The matrix decomposition approach to the solution of Markov reward processes provides a new, simple derivation of the Laurent expansion of the resolvent. Many other basic results of dynamic programming are easily derived in a similar fashion, and the extension to semi-Markov decision processes is straightforward. Further, numerical algorithms for matrix decomposition can be used efficiently in the policy iteration method, for evaluating the terms of the Laurent series.

The problem of finding the stationary distribution of a system with two finite queues in tandem, when the service times have an exponential distribution, can also be expressed in matrix form. Two numerical methods, one iterative and one using matrix decomposition, are reviewed for computing the stationary probabilities. Job-local-balance is used to derive some bounds on the call congestion. A numerical investigation of the bounds is included. It suggests that the bounds are insensitive to the distribution of the service times.

Table of Contents

Abstract ii
List of Tables v
List of Figures vi
Acknowledgements viii
Introduction 1
I. Matrix decomposition and Drazin generalized inverse 3
1.1. Review of linear algebra 4
1.2. The Drazin generalized inverse 14
1.3. Matrix decomposition algorithms 20
1.4. Solution of singular systems of equations 36
1.5. Other algorithms for Drazin inversion 42
II. Nonnegative matrix theory 44
2.1. Matrices and directed graphs 45
2.2. Eigenvalue structure of a stochastic matrix 49
2.3. Limiting and fundamental matrices 53
2.4. Geometric convergence in the aperiodic case 59
III. Discrete time Markov decision processes 63
3.1. Markov reward processes 64
3.2. Singular systems of equations 68
3.3. Computation of gain, bias and other terms 77
3.4. Iterative computation of gain and bias 78
3.5. Markov decision processes 82
IV. Continuous time Markov decision processes 91
4.1. Continuous time Markov reward processes 91
4.2. Algebraic structure of the resolvent 94
4.3.
Laurent expansion of the discounted rewards 96
4.4. Equivalent Markov decision processes 98
V. Semi-Markov decision processes 102
5.1. Semi-Markov reward processes 103
5.2. Decomposition of the first moment 106
5.3. Series expansion of the discounted rewards 112
5.4. Semi-Markov decision processes 116
5.5. Three special cases 118
VI. Stationary characteristics of tandem queues 124
6.1. Global balance equations 126
6.2. Computation of the stationary probabilities 128
6.3. The matrix decomposition method 134
6.4. Job local balance and bounds 138
6.5. Performance results 144
Conclusion 153
Bibliography 154

List of Tables

6.1.1. Transition rates out of state (n_1, n_2) 127
6.5.1. Numerical values with λ = 1, varying μ_1 and μ_2 145
6.5.2. Numerical values with λ = μ_1 = μ_2 = 1 146
6.5.3. Comparison with hyperexponential service times 151

List of Figures

1.1.1. Directed graphs of J_k(Λ) and L_2k(λ) 12
1.3.1. Compute x = A⁻¹b where A = PLU 21
1.3.2. Operations counts for Gaussian elimination 23
1.3.3. The slow decomposition algorithm 26
1.3.4. Operations count for the slow decomposition algorithm 28
1.3.5. The fast decomposition algorithm 33
1.3.6. Storage considerations for the fast algorithm 34
1.4.1. The slow solution algorithm 37
1.4.2. Algorithm to compute u_1 = B⁻¹x_1 40
1.4.3. The fast solution algorithm 40
2.1.1. The directed graph G(A) 45
2.1.2. Directed graph of a periodic matrix 48
3.2.1. Directed graph of C(2) 75
3.4.1. Power method for gain and bias 79
3.5.1. Policy iteration method for ρ-optimal policy 89
3.5.2. Policy iteration method for k-discount optimal policy 89
6.2.1. The power method for stationary probabilities 132
6.2.2. Pascal procedures for transition rates 133
6.3.1. Unstable algorithm for stationary probabilities 136
6.3.2. Modified algorithm for stationary probabilities 138
6.4.1. Failure of job local balance 140
6.5.1. Blocking probability vs service rates 147
6.5.2.
Blocking probability contour curves 149

Acknowledgements

Some of the material in Chapters II and III is adapted from Lamond and Puterman (1985), and some of the material in Chapter VI is adapted from van Dijk and Lamond (1985). I am grateful to my research advisor, Martin L. Puterman, for his encouragement and patience in reading numerous drafts of all parts of this thesis. My work on numerical algorithms has benefited from discussions with A. Laurie Johnston at the beginning, and more recently from comments by Uri Ascher. My interest in queueing has been stimulated by my joint work with Nico van Dijk, and by frequent discussions with Shelby L. Brumelle. Recent comments by Uriel G. Rothblum and Arthur F. Veinott, Jr. have helped clarify some important ideas concerning the use of generalized inverses in Markov decision processes.

The numerical algorithms discussed in this thesis have been coded in Pascal. The text of the thesis, as well as the tables and most figures, were produced with TeX. The figures involving curves were produced with Tell-a-Graf. Part of this research was done while the author was receiving an NSERC postgraduate scholarship, and later an SSHRC doctoral fellowship. Generous support from NSERC Grant A-5527 is also gratefully acknowledged.

To my parents

Introduction

The six chapters of this thesis can be grouped into three distinct parts. The first part, which consists of Chapters I and II, deals with matrix decomposition. The second part contains Chapters III, IV and V. It explores the applications of the matrix decomposition approach to Markov and semi-Markov decision processes. Finally, Chapter VI is concerned with tandem queueing systems.

In Chapter I, a brief review of matrix algebra is given, concentrating on the Jordan canonical form. For a singular matrix, the generalized inverse of Drazin (1958) is presented.
A significant improvement of the algorithm of Wilkinson (1982) is obtained, for computing the Drazin inverse of a matrix of index 1. Our fast algorithm uses only one third as many arithmetic operations as the recent shuffle algorithm of Anstreicher and Rothblum (1985).

In Chapter II, the structure of the Jordan canonical form of a stochastic matrix is derived from nonnegative matrix theory, using results of Varga (1962). The matrix decomposition approach is then applied to study the limiting and fundamental matrices of Kemeny and Snell (1960), and the deviation matrix of Veinott (1969) and (1974). The approach used is inspired by Campbell and Meyer (1979).

In Chapter III, the matrix decomposition results are applied to discrete time Markov decision processes. A new proof of many results of Blackwell (1962) and Veinott (1969) and (1974) is given. In particular, a direct derivation of the Laurent expansion of the resolvent, for a small interest rate, is given. The matrix decomposition algorithm of Chapter I is used in a new policy evaluation method.

In Chapter IV, the formulation of a continuous time decision process is reviewed, based on the continuous time Markov chain of Çinlar (1975). Matrix decomposition is used to show the similarity between continuous time and discrete time processes. An equivalence considered by Howard (1960), Veinott (1969), Schweitzer (1971) and Serfozo (1979) is reviewed.

In Chapter V, semi-Markov decision processes are considered. Matrix decomposition is used to obtain a new derivation of many results of Denardo (1971). In particular, the decomposition of the first moment of the transition probability function is derived. A new solution algorithm is also obtained. The models of Chapters III and IV are formulated as special cases. Also, an equivalence used by Ross (1970) is discussed.

In Chapter VI, a queueing system with two service stations in series is formulated, using the notation of Disney and König (1985).
The numerical computation of the stationary probabilities is investigated. The iterative power method and the matrix decomposition method of Wong et al. (1977) are reviewed. Bounds are introduced, based on results in Hordijk and Van Dijk (1981) and the job-local-balance property. A numerical investigation of the bounds is included.

This research suggests that matrix decomposition is a powerful, as well as versatile, approach to the study and solution of finite state stochastic systems. Not only does it provide simple derivations and efficient algorithms, it also appears to unify the treatment of many models.

Chapter I
Matrix decomposition and Drazin generalized inverse

In this chapter, we present the mathematical basis for the subsequent chapters. The notions of Jordan canonical form and Drazin generalized inverse are reviewed, and some efficient algorithms are developed to compute a matrix decomposition and use it as a data transformation to produce the solution of a singular system of equations.

In Section 1.1, we review the Jordan canonical form and related well known results. Our approach is similar to that of Golub and Wilkinson (1976), and reflects the general approach that will be followed throughout the thesis, although we will not be concerned with actually computing the Jordan canonical form. Further, as this is to be applied to matrices with real entries, we also introduce a modified Jordan canonical form which involves only real arithmetic, but still retains the basic properties of the original Jordan canonical form. This leads to a new, direct proof of the existence of a real decomposition of a matrix, in a form that provides a suitable representation of the Drazin inverse. The idea of replacing a pair of complex conjugate eigenvalues by a 2 × 2 matrix has been exploited in numerical analysis for computing the eigenvalues of a real matrix by the QR method [see e.g., Golub and Van Loan (1983), Section 7.4].
In Section 1.2, we review the definition of the generalized inverse of Drazin (1958). We derive its representation using a matrix decomposition based on the modified Jordan canonical form. This derivation follows very much that of Wilkinson (1982).

In Section 1.3, we review the Gaussian elimination algorithm for computing the PLU decomposition of a matrix [see e.g., Wilkinson (1965)]. Then we specialize the method of Wilkinson (1982) for computing the Drazin inverse of a general matrix, to the special case when the matrix has an index of 1, which is the case of interest in Markov decision processes. The main original contribution here is the fast algorithm, which produces a representation of the Drazin inverse using essentially the same amount of arithmetic as for the PLU decomposition of a nonsingular matrix. In fact, its operations count is one third that of the recent shuffle algorithm of Anstreicher and Rothblum (1985) and, in the context of dynamic programming, it is essentially the same as for the method of Howard (1960). Our algorithm, however, is well suited for use in the lexicographic policy iteration method of Miller and Veinott (1969), as we will see in Chapter III.

In Section 1.4, it is shown how to use the matrix decompositions to efficiently compute the solution vectors. We call these vectors gain and bias, in anticipation of the dynamic programming applications of Chapter III. Finally, Section 1.5 reviews some other algorithms which have been studied in the literature.

1.1. Review of linear algebra

Let x_1, x_2, ..., x_m be n-vectors with complex entries. If the system of equations

α_1 x_1 + α_2 x_2 + ... + α_m x_m = 0

has the unique solution α_1 = ... = α_m = 0, then the vectors are said to be linearly independent. Otherwise, they are linearly dependent and those with α_i ≠ 0 can be expressed as a linear combination of the others. A complex n × n matrix A is said to be singular if its columns are linearly dependent.
Otherwise, A is said to be nonsingular. It is well known that A is singular iff its determinant, denoted det(A), is equal to zero. Let b be a given n-vector, called the right-hand side. Then the system of linear equations

Ax = b    (1.1.1)

has a unique solution x = A⁻¹b if the matrix A is nonsingular. The matrix A⁻¹ is uniquely determined by A and is called its inverse. Hence a nonsingular matrix is also said to be invertible. On the other hand, if A is singular, the system (1.1.1) does not necessarily have a solution for every right-hand side vector b. A vector b for which there is a solution is said to be compatible with A. A system of equations with singular A and compatible b has infinitely many solutions.

The algebraic structure of a complex n × n matrix A is described by its eigenvalues and its eigenvectors. A right-hand eigenvector of A is a nonzero column vector x such that Ax = λx, for some complex number λ. This number λ is the eigenvalue of A associated with x. Similarly, a left-hand eigenvector of A is a nonzero row vector y such that yA = λy. Let A' be the transpose of the matrix A. Then y' is a right-hand eigenvector of A'. For simplicity, we will use the term eigenvector (alone) to denote a right-hand eigenvector.

The above definition implies that an eigenvector x is any nonzero solution of the system of linear equations

(λI - A)x = 0,    (1.1.2)

where I denotes the n × n identity matrix. This system has a nonzero solution x iff the matrix (λI - A) is singular, that is, iff det(λI - A) = 0. Since det(λI - A) is a polynomial of degree n in λ, A has at least one and at most n distinct eigenvalues. Hence Equation (1.1.2) always has a nonzero solution.

Let S be a nonsingular matrix and S⁻¹ its inverse. Then a matrix B such that

B = S⁻¹AS    (1.1.3)

is said to be similar to A, and equation (1.1.3) is a similarity transformation.
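As an illustration of (1.1.3), a similarity transformation acts as a change of variables on a linear system: with B = S⁻¹AS, y = S⁻¹x and c = S⁻¹b, the system Ax = b becomes By = c. The following pure-Python sketch (the 2 × 2 matrices A and S are hypothetical, chosen only for illustration) verifies this.

```python
# Change of variables under a similarity transformation B = S^{-1} A S:
# A x = b holds iff B y = c, where y = S^{-1} x and c = S^{-1} b.

def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

A    = [[2.0, 1.0], [0.0, 3.0]]   # a hypothetical matrix
S    = [[1.0, 1.0], [0.0, 1.0]]   # a hypothetical transformation
Sinv = [[1.0, -1.0], [0.0, 1.0]]  # its inverse

B = matmul(Sinv, matmul(A, S))    # the similar matrix

x = [[1.0], [2.0]]                # a solution vector (as a column)
b = matmul(A, x)                  # so A x = b by construction
y = matmul(Sinv, x)
c = matmul(Sinv, b)
assert matmul(B, y) == c          # the transformed system holds as well
```

Here the transformation happens to diagonalize A (B = diag(2, 3)), which previews the role of similarity transformations in the rest of the chapter.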
Much of the material in this thesis will be concerned with similarity transformations which transform a given matrix A into a new matrix B with a more convenient structure. An important property of similarity transformations is that they preserve the eigenvalues of a matrix. Indeed, we have λI - B = λS⁻¹S - S⁻¹AS = S⁻¹(λI - A)S, so that det(λI - B) = 0 iff det(λI - A) = 0.

In the context of linear systems of equations, a similarity transformation can be seen as a change of variables. Let y = S⁻¹x and c = S⁻¹b. Then Ax = b iff By = c. This last property is important. It says that A and b are compatible iff B and c are compatible. When a solution exists, then x is a solution of Ax = b iff y is a solution of By = c. This fact will be used extensively in our discussion of Markov decision processes.

One structure of much interest is that of a diagonal matrix. We denote by D = diag(d_i) a matrix with diagonal entries d_i, i = 1, ..., n. The following well known lemma indicates when a matrix A is similar to a diagonal matrix.

Lemma 1.1.1. An n × n matrix A is similar to a diagonal matrix iff it has n linearly independent eigenvectors.

Proof. Let x_1, ..., x_n be n linearly independent eigenvectors, associated each with an eigenvalue λ_i, i = 1, ..., n. Then the matrix X = (x_1, ..., x_n) has an inverse X⁻¹. Hence we have AX = XΛ, where Λ = diag(λ_i), so that X⁻¹AX = Λ. This proves the "if" part. For the "only if" part, suppose that S⁻¹AS = D, for some matrices S and D = diag(d_i). Then the n columns of S are linearly independent, and AS = SD implies that the i-th column of S is an eigenvector of A associated with the eigenvalue d_i, i = 1, ..., n. □

For example, the matrix

A = (  3 -1 )
    ( -1  3 )

has two linearly independent eigenvectors x_1 = (1, 1)' and x_2 = (1, -1)' with eigenvalues λ_1 = 2 and λ_2 = 4, respectively, so that

(  3 -1 ) = ( 1  1 ) ( 2 0 ) ( 0.5  0.5 )
( -1  3 )   ( 1 -1 ) ( 0 4 ) ( 0.5 -0.5 )

Observe that a matrix satisfying Lemma 1.1.1 may have an eigenvalue with multiplicity greater than 1, that is, λ_i = λ_j for some i ≠ j.
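The 2 × 2 diagonalization example can be checked mechanically. The entries below (A = (3 -1; -1 3) with eigenvector columns (1, 1) and (1, -1)) are a reconstruction of the example and should be read as an assumption; the point is the mechanics of A = XDX⁻¹.

```python
# Check of the 2x2 eigendecomposition example: A = X D X^{-1}, where the
# columns of X are eigenvectors and D holds the eigenvalues.

def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

A    = [[3.0, -1.0], [-1.0, 3.0]]
X    = [[1.0, 1.0], [1.0, -1.0]]      # eigenvectors as columns
Xinv = [[0.5, 0.5], [0.5, -0.5]]
D    = [[2.0, 0.0], [0.0, 4.0]]       # eigenvalues on the diagonal

reconstructed = matmul(matmul(X, D), Xinv)
assert reconstructed == A             # A = X D X^{-1}

# Column-by-column: A x_1 = 2 x_1 and A x_2 = 4 x_2.
assert matmul(A, [[1.0], [1.0]]) == [[2.0], [2.0]]
assert matmul(A, [[1.0], [-1.0]]) == [[4.0], [-4.0]]
```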
Some matrices have fewer than n linearly independent eigenvectors. Some have only one. Such a matrix is bound to have at least one eigenvalue with multiplicity greater than 1. An example is the Jordan submatrix of order n, denoted by J_n(λ), with all diagonal entries equal to λ and n - 1 subdiagonal entries equal to 1. All other entries are zero. For example, J_1(λ) = (λ) is diagonal and

J_4(λ) = ( λ 0 0 0 )
         ( 1 λ 0 0 )
         ( 0 1 λ 0 )
         ( 0 0 1 λ )

It is easy to verify that any eigenvector of J_n(λ) is a nonzero multiple of e_n, the n-th unit vector. The following results illustrate how the Jordan submatrix provides the building blocks for a general matrix decomposition.

Lemma 1.1.2. Suppose that A is a singular, complex n × n matrix. Suppose also that there are k nonzero vectors x_1, ..., x_k such that Ax_1 = 0 (with x_1 ≠ 0) and

Ax_i = x_{i-1},  i = 2, ..., k,    (1.1.4)

where 1 ≤ k ≤ n. Then x_1, ..., x_k are linearly independent.

Proof. By induction on k. The result is trivially true for k = 1 (because such an x_1 exists since A is singular). Now suppose x_1, ..., x_{k-1} are linearly independent and that Ax_k = x_{k-1} for some nonzero vector x_k. If x_k = α_1 x_1 + ... + α_{k-1} x_{k-1}, then

Ax_k = α_1 Ax_1 + α_2 Ax_2 + ... + α_{k-1} Ax_{k-1} = 0 + α_2 x_1 + ... + α_{k-1} x_{k-2} ≠ x_{k-1},

because x_1, ..., x_{k-1} are linearly independent. Hence x_k may not be a linear combination of x_1, ..., x_{k-1} in order to be a solution of Equation (1.1.4), and the result follows. □

Lemma 1.1.3. Suppose A is a complex n × n matrix, and let S = (x_1, ..., x_n), where x_1, ..., x_n are linearly independent n-vectors. Then

A = S J_n(λ) S⁻¹    (1.1.5)

iff the vectors x_i satisfy

(A - λI)x_n = 0  and  (A - λI)x_i = x_{i+1},  i = 1, ..., n - 1.    (1.1.6)

Proof. This can be seen directly by writing Equation (1.1.5) as AS = S J_n(λ) and expanding J_n(λ) and S from their definition. For example, with n = 3 we have

A(x_1, x_2, x_3) = (x_1, x_2, x_3) ( λ 0 0 ) = (λx_1 + x_2, λx_2 + x_3, λx_3),
                                   ( 1 λ 0 )
                                   ( 0 1 λ )

which is as required. □

Lemma 1.1.4.
If A satisfies Equation (1.1.5), then A has exactly one linearly independent eigenvector.

Proof. The vector x_n is an eigenvector of A because it satisfies Ax_n = λx_n. Suppose y is another eigenvector of A, with eigenvalue γ. Then Ay = γy implies that S J_n(λ) S⁻¹ y = γy. But this implies that J_n(λ)z = γz, where z = S⁻¹y, so that γ = λ and z = αe_n, for some nonzero scalar α, so that y = Sz = αSe_n = αx_n. Hence x_n and y are linearly dependent. □

Lemma 1.1.5. Suppose there is a nonsingular matrix S such that

A = S ( J_k(λ) 0 ) S⁻¹
      ( 0      B )

where 1 ≤ k ≤ n - 1, with B an (n - k) × (n - k) matrix. Then A has at least two linearly independent eigenvectors.

Proof. The matrix B has at least one eigenvector. Let z be an arbitrary eigenvector of B. Then the vectors

S ( e_k )    and    S ( 0 )
  (  0  )             ( z )

are linearly independent, and both are eigenvectors of A. □

The next two results are given without proof. See Gantmacher (1959) for a complete derivation of the Jordan canonical form.

Lemma 1.1.6. A complex n × n matrix A is similar to J_n(λ) iff it has exactly one linearly independent eigenvector.

Theorem 1.1.7. (Jordan canonical form). An n × n complex matrix A has exactly k linearly independent eigenvectors (1 ≤ k ≤ n) iff it is similar to a block-diagonal matrix B = diag(B_i) such that B_i = J_{n_i}(λ_i), for i = 1, ..., k, and n_1 + ... + n_k = n.

For example,

A = ( 1.5 -0.5 -0.5 )
    ( 0.5  1.5 -0.5 )
    ( -1    0    2  )

has two linearly independent eigenvectors

y_1 = ( 1 )    and    y_2 = (  0 )
      ( 0 )                 (  1 )
      ( 1 )                 ( -1 )

with eigenvalues λ_1 = 1 and λ_2 = 2, respectively. Solving Equation (1.1.6) with x_3 = y_2 and λ = 2, we get

x_2 = (  1 )
      (  0 )
      ( -1 )

so that λ_2 = 2 has multiplicity 2. Letting x_1 = y_1 and S = (x_1, x_2, x_3), we get the Jordan canonical form of A as

A = ( 1  1  0 ) ( 1 0 0 ) ( 0.5  0.5  0.5 )
    ( 0  0  1 ) ( 0 2 0 ) ( 0.5 -0.5 -0.5 )
    ( 1 -1 -1 ) ( 0 1 2 ) ( 0    1    0   )

Hence we have B_1 = J_1(1) and B_2 = J_2(2).

Some observations are in order here. First, Lemma 1.1.1 is a special case of the above theorem with k = n, while Lemma 1.1.6 is the special case with k = 1.
Second, our formulation of Theorem 1.1.7 emphasizes the fact that each Jordan submatrix is associated with exactly one eigenvector of the matrix A. Third, even when all the entries of a matrix A are real, it is possible that some of its eigenvalues be complex, in which case the transformation matrix S of Theorem 1.1.7 has some complex entries too. For example, the matrix

( 0 -1 )
( 1  0 )

has two imaginary eigenvalues λ_1 = i and λ_2 = -i, where i² = -1.

It should be clear, from our derivation of the Jordan canonical form, that if A is real and if all the eigenvalues are real, then S is real as well, because then the columns of S are solutions of linear systems of equations with real coefficients. Also, since all the coefficients of the characteristic polynomial det(λI - A) are real, the eigenvalues of the real matrix A which are not real are arranged in complex conjugate pairs. Hence if λ = α + iβ is an eigenvalue of A, so is its complex conjugate λ* = α - iβ. We use this property to derive a modified Jordan canonical form which has all entries real.

First, observe that if x⁺ = u + iv is an eigenvector of A with eigenvalue λ = α + iβ, then x⁻ = u - iv is also an eigenvector of A, but with eigenvalue λ* = α - iβ. Then suppose that A has k_1 real linearly independent eigenvectors x_i with eigenvalues λ_i, i = 1, ..., k_1, and k_2 pairs of complex conjugate linearly independent eigenvectors (x_i⁺, x_i⁻), i = k_1 + 1, ..., k_1 + k_2. By Theorem 1.1.7, for every complex conjugate pair (x_i⁺, x_i⁻) there is a corresponding pair of Jordan blocks (J_{n_i}(λ_i), J_{n_i}(λ_i*)). We describe how such a pair of complex conjugate Jordan blocks can be replaced by a completely real submatrix.

Consider a complex eigenvalue λ = α + iβ. We define the 2 × 2 matrices K and Λ such that

K = ( α -β )    (1.1.8)
    ( β  α )

and

Λ = ( λ 0  )    (1.1.9)
    ( 0 λ* )

Lemma 1.1.8. K is similar to Λ.

Proof. Because K = XΛX⁻¹, where

X = (  1 1 )    and    X⁻¹ = (1/2) ( 1  i )    (1.1.10)
    ( -i i )                       ( 1 -i )

□
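Lemma 1.1.8 can be checked numerically. The sketch below (pure Python, complex arithmetic) verifies XΛX⁻¹ = K for the hypothetical values α = 2, β = 3; the particular eigenvector matrix X is one valid choice.

```python
# Check of Lemma 1.1.8: the real 2x2 block K = (alpha -beta; beta alpha)
# is similar to Lambda = diag(lambda, conj(lambda)) with lambda = alpha + i beta.

def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

alpha, beta = 2.0, 3.0              # hypothetical values
lam = complex(alpha, beta)

K    = [[alpha, -beta], [beta, alpha]]
Lam  = [[lam, 0.0], [0.0, lam.conjugate()]]
X    = [[1.0, 1.0], [-1j, 1j]]      # eigenvectors of K as columns
Xinv = [[0.5, 0.5j], [0.5, -0.5j]]

assert matmul(X, Xinv) == [[1.0, 0.0], [0.0, 1.0]]   # X^{-1} is correct
assert matmul(matmul(X, Lam), Xinv) == K             # K = X Lambda X^{-1}
```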
Now we define the block Jordan submatrix J_k(A), where A is a square matrix, so that its diagonal blocks are equal to A and its subdiagonal blocks are equal to I, the identity. Then, for example, J_1(A) = A and

J_3(A) = ( A 0 0 )
         ( I A 0 )
         ( 0 I A )

Lemma 1.1.9. Let K be defined as in Equation (1.1.8). Then J_k(K) is similar to L_2k(λ), where

L_2k(λ) = ( J_k(λ)  0       )
          ( 0       J_k(λ*) )

Proof. First observe that J_k(K) is similar to J_k(Λ), by Lemma 1.1.8. To see this, take S = diag(X, ..., X), with X as in Equation (1.1.10). Then J_k(K) = S J_k(Λ) S⁻¹. Next we show that J_k(Λ) is similar to L_2k(λ). In fact, J_k(Λ) = T L_2k(λ) T⁻¹, where T is a permutation matrix. To see this, let us associate to each matrix a directed graph as follows. There are 2k nodes, one per row, labelled 1 to 2k, and for every nonzero entry, there is a corresponding arc labelled by the value of that entry. The graphs of J_k(Λ) and L_2k(λ) are shown in Figure 1.1.1, in which λ* denotes the complex conjugate of λ. One can convince oneself that the graphs are correct by checking them with k = 2.

Figure 1.1.1. Directed graphs of J_k(Λ) and L_2k(λ).

We have

J_2(Λ) = ( λ  0  0  0  )    and    L_4(λ) = ( λ  0  0  0  )
         ( 0  λ* 0  0  )                    ( 1  λ  0  0  )
         ( 1  0  λ  0  )                    ( 0  0  λ* 0  )
         ( 0  1  0  λ* )                    ( 0  0  1  λ* )

Now both graphs in Figure 1.1.1 are identical, except for the node labels. Hence J_k(Λ) is obtained by permuting the rows and columns of L_2k(λ) accordingly, which completes the proof. □

Theorem 1.1.10. (Modified Jordan canonical form). Suppose A is a real n × n matrix. Then A has exactly k_1 linearly independent real eigenvectors and k_2 linearly independent complex conjugate pairs of eigenvectors iff it is similar to a block diagonal matrix B = diag(B_i) such that B_i = J_{n_i}(λ_i), for i = 1, ..., k_1, and B_i = J_{n_i}(K_i) for i = k_1 + 1, ..., k_1 + k_2, with

n_1 + ... + n_{k_1} + 2(n_{k_1+1} + ... + n_{k_1+k_2}) = n.

Proof. First apply Theorem 1.1.7 to A. We have the real Jordan blocks B_i = J_{n_i}(λ_i), for i = 1, ..., k_1, directly.
Then organize the complex conjugate pairs as L_{2n_i}(λ_i), for i = k_1 + 1, ..., k_1 + k_2. The result follows because, by Lemma 1.1.9, B_i = J_{n_i}(K_i) is similar to L_{2n_i}(λ_i). □

For example, the matrix

A = ( 2 -1  0  2 )
    ( 0  0  3  1 )
    ( 2  2 -1  2 )
    ( 0  0 -1  0 )

has only one pair of complex conjugate eigenvectors, with eigenvalues 1 + i and 1 - i, each with multiplicity 2. Then A is similar to

( 1+i  0    0    0   )
( 1    1+i  0    0   )
( 0    0    1-i  0   )
( 0    0    1    1-i )

We have the modified Jordan canonical form A = S J_2(K) S⁻¹, where

K = ( 1 -1 )    and    J_2(K) = ( K 0 ) = ( 1 -1  0  0 )
    ( 1  1 )                    ( I K )   ( 1  1  0  0 )
                                          ( 1  0  1 -1 )
                                          ( 0  1  1  1 )

with J_2(K) = X̂ J_2(Λ) X̂⁻¹, where X̂ = diag(X, X) and X is as in Equation (1.1.10). Hence the original Jordan canonical form of A is

A = S X̂ T L_4(1+i) T⁻¹ X̂⁻¹ S⁻¹,

where T is the permutation matrix that interchanges rows 2 and 3.

The modified Jordan canonical form effectively contains all the information of the original canonical form, but it is defined with real arithmetic only. For, suppose A = S B S⁻¹ where B is as in Theorem 1.1.10; then S is a solution of the equation AS = SB, which has real coefficients only. Hence Theorem 1.1.10 will be useful in establishing the existence of a matrix decomposition suitable for the definition and representation of the Drazin inverse of a real matrix. Moreover, Theorem 1.1.10 suggests an easy way to generate test matrices for evaluating the performance of computer algorithms to compute the Drazin solution of singular equations. [This method has its limitations, however. See Golub and Wilkinson (1976), Page 583.]

1.2. The Drazin generalized inverse

Let A be a real n × n matrix such that A^k = 0, for some positive integer k. Then A is said to be nilpotent. The smallest such integer k ≥ 1 is called the index of nilpotency of A [see e.g., Gantmacher (1959)]. For example, the index of nilpotency of the Jordan submatrix J_n(0) is n, because J_n(0)^n = 0 while J_n(0)^{n-1} ≠ 0. However, J_n(λ) is not nilpotent if λ ≠ 0.

Lemma 1.2.1. Suppose A is a real n × n matrix. Then A is nilpotent iff zero is its only eigenvalue.
Moreover, the index of nilpotency of a nilpotent matrix A is the dimension of the largest block of its Jordan canonical form.

Proof. First, suppose A is similar to some matrix B. Observe that A is nilpotent iff B is nilpotent. Moreover, if A is nilpotent then it has the same index of nilpotency as B. This is because A^k = S B^k S⁻¹, for any k ≥ 0, so that A^k = 0 iff B^k = 0.

Next, suppose A has m linearly independent eigenvectors, and consider the block-diagonal matrix B of the Jordan canonical form of A of Theorem 1.1.7. If m = 1, then A is similar to J_n(λ) and hence A is nilpotent iff λ = 0. Moreover, if A is nilpotent, its index of nilpotency is n. If m > 1, the Jordan blocks are J_{n_1}(λ_1), ..., J_{n_m}(λ_m). Again, A is nilpotent iff λ_1 = ... = λ_m = 0, and if A is nilpotent, its index of nilpotency is max{n_i : i = 1, ..., m}. □

Now consider a general real n × n matrix A. The index of A, denoted ind(A), is the smallest nonnegative integer k such that rank(A^{k+1}) = rank(A^k) [see e.g., Campbell and Meyer (1979) and Berman and Plemmons (1979)]. This definition is an extension of the above index of nilpotency. For, if A is nilpotent, then ind(A) is the index of nilpotency of A. If A is nonsingular, then ind(A) = 0. If A is singular but not nilpotent, we have the following decomposition.

Theorem 1.2.2. Suppose A is a real n × n matrix which is singular but not nilpotent. Then there is a nonsingular matrix S such that

A = S⁻¹ ( B 0 ) S,    (1.2.1)
        ( 0 N )

where B is a nonsingular, real (n - m) × (n - m) matrix and N is a nilpotent m × m matrix, with 1 ≤ m ≤ n - 1. Moreover, the index of A equals the index of nilpotency of N.

Proof. One such decomposition is the modified Jordan canonical form of Theorem 1.1.10. The matrix B contains all the modified blocks with nonzero eigenvalues, and N has all the Jordan blocks with eigenvalue zero. The result about the index of A follows because rank(A^{k+1}) = rank(A^k) iff N^k = 0. □
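The rank characterization of ind(A) translates directly into a naive algorithm: compare rank(A^k) and rank(A^{k+1}) for k = 0, 1, 2, .... The Python sketch below does this for a small matrix whose nilpotent part is J_2(0), so ind(A) = 2; the helper functions are illustrative, not the algorithms developed in this thesis.

```python
# ind(A) is the smallest k >= 0 with rank(A^{k+1}) = rank(A^k).

def rank(M, tol=1e-9):
    """Rank by Gaussian elimination with partial pivoting (on a copy of M)."""
    M = [row[:] for row in M]
    rows, cols, r = len(M), len(M[0]), 0
    for c in range(cols):
        pivot = max(range(r, rows), key=lambda i: abs(M[i][c]), default=None)
        if pivot is None or abs(M[pivot][c]) < tol:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        for i in range(r + 1, rows):
            f = M[i][c] / M[r][c]
            M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def index(A):
    """Compare rank(A^k) with rank(A^{k+1}) for k = 0, 1, 2, ..."""
    n = len(A)
    P = [[float(i == j) for j in range(n)] for i in range(n)]  # A^0 = I
    k = 0
    while True:
        Q = matmul(P, A)                                       # A^{k+1}
        if rank(Q) == rank(P):
            return k
        P, k = Q, k + 1

# diag(B, N) with B = (1) nonsingular and N = J_2(0), which has index 2:
A = [[1.0, 0.0, 0.0],
     [0.0, 0.0, 0.0],
     [0.0, 1.0, 0.0]]
assert index(A) == 2
```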
Hence the index of a singular matrix A is the index of nilpotency of its nilpotent component N, which is equal to the dimension of the largest Jordan block with eigenvalue zero. Note that if ind(A) = 1, then N = 0, the zero matrix. This special case will be of particular interest in the analysis of Markov processes.

Following Drazin (1958), we say that a real n × n matrix A^D is a Drazin generalized inverse, or simply a Drazin inverse, of the n × n matrix A if it satisfies the three following conditions:

A^D A = A A^D    (1.2.2)

A^{m+1} A^D = A^m  for some m ≥ 0    (1.2.3)

A^D = (A^D)² A.    (1.2.4)

Drazin's definition, however, is valid in a much more general context (i.e., A may be an element of an arbitrary associative ring or semigroup, not necessarily a finite square matrix). While in a general algebraic structure there may be some element A for which no such A^D exists, it is the case that every real square matrix has a Drazin inverse. We prove this as in Wilkinson (1982), after some preliminary lemmas.

Lemma 1.2.3. Suppose A = S⁻¹BS and that B^D exists. Then

A^D = S⁻¹B^D S.    (1.2.5)

Proof. It is easy to verify that if B and B^D satisfy (1.2.2), (1.2.3) and (1.2.4), then so do A and A^D given by (1.2.5). We can see (1.2.2) as follows. We have

A^D A = S⁻¹B^D S S⁻¹B S = S⁻¹B^D B S    and    A A^D = S⁻¹B S S⁻¹B^D S = S⁻¹B B^D S,

so that B^D B = B B^D implies S⁻¹B^D B S = S⁻¹B B^D S, and the result follows. The other conditions are left as an exercise to the reader. □

Lemma 1.2.4. If A is nonsingular, then A^D = A⁻¹ is the unique matrix that satisfies (1.2.2), (1.2.3) and (1.2.4).

Proof. Because X = A⁻¹ is the unique solution of AX = I, which is Equation (1.2.3) with m = 0. The other conditions are trivial. □

Lemma 1.2.5. If A is nilpotent, then A^D = 0 is the unique solution of (1.2.2), (1.2.3) and (1.2.4).
The other conditions are trivial. We have T = T N = T(T)N = 2 = T (T)N' = ... = T (T)N = 0, 2 2 k k T(T N)N 2 where k = ind(i\T). • Lemma 1.2.6. Suppose where B is nonsingular and N is nilpotent. Then A •(V.S) D is the unique solution of (1.2.2), (1.2.3) and (1.2.4). Proof. Suppose A ° {z r ) ' = and let k = ind(A). Then (1.2.3) with m = k gives (B k 0\ (B X o) \ k+1 o 0\ oy' so that X = B~ is uniquely determined. Next (1.2.2) gives l / / \ZB YN\ _( TN)~\NZ 17 I BY \ NT J ' so that YN = BY (1.2.6) ZB = NZ. (1.2.7) and We use (1.2.6) to show that Y = 0. We have B(YN - ) k so that YN 1 = {BY)^- = (YN)N ~ 1 k l = Y(N ) k = Y{0) = 0, = 0 because B is nonsingular. Continuing this way, we get YN ~ k-1 k 2 = 0 , . . . , YN = 0 and finally, Y = 0. Similarly, we can use (1.2.7) to show that Z = 0. Hence Y and Z are uniquely determined by Equation (1.2.2). Finally, Equation (1.2.4) gives / B~ l V o 0 \ _ (B~ x TJ ~V o 0 \ r ^/' 2 from which T = 0 is uniquely determined as in Lemma 1.2.5. • Theorem 1.2.7. Suppose A is a real n x n matrix. Then its Drazin inverse A D is unique and is given by ' A~ 0 if A is non-singular, if A is nilpotent, x 5 - 1 ^"^o o) ^ otherwise, where B is the nonsingular matrix of Equation (1.2.1). Proof. Immediate from Lemmas 1.2.3, 1.2.4, 1.2.5 and 1.2.6. Uniqueness for the nonsingular and nilpotent cases has been shown already. There is an apparent difficulty in the other case, because the decomposition (1.2.1) is not unique. Observe, however, that for a given similarity transformation 5, there is a one-to-one correspondance between A and C = ( B £). Hence the uniqueness of C D implies that of A , by Lemma 1.2.3. D • As will be seen in Chapter HI, the Drazin inverse A D will be useful for expressing the solution of the resolvent equations which are encountered in dynamic programming. Let the matrix W be a projection onto the null space of A . D 18 L e m m a 1.2.8. If A is decomposed as in Equation (1.2.1), then W Proof. 
= ~ (l S °i) ' 1 Denote the null space of A D (1-2.8) S by xm\l(A ). D By definition, x € nu\l(A ) iff D A x = 0. Also, W is a projection onto nu\l(A ) if Wx = x for x E n\i\l(A ), A Wy D D D D = 0 and WA y = 0 for an arbitrary vector y. D Now write x Then A x = 0 iff D -»- t)- " - - (J 9 , d r 5 , that is, iff uj = 0. Hence Wx=x iff (z r)(«° ) 2 = (£)' so that y = 0 and T = I, the identity matrix. The condition A W y = 0 for arbitrary y is equivalent to A W = 0, that is D D (B\ 0 0 \ (X OJ \Z 1 0\ _ /0 I) ~ \0 0\ OJ ' which implies X = 0. Finally, the condition W M ^ y = 0 is equivalent to WA \z i) \ o oj ~~ D = 0, which implies oj' so that Z = 0. Hence the result. • The following useful representation for W is immediate. Corollary 1.2.9. W = I — AA . D Proof. 19 1.3. Matrix decomposition algorithms Throughout the thesis, we will be concerned mainly with matrices A such that ind(A) = 1. Then by Theorem 1.2.2, we have (1.3.1) In this special case, the Drazin inverse of A is called its group inverse [see Campbell and Meyer (1979) and Berman and Plemmons (1979)], and is denoted by A#. By Theorem 1.2.7, we have (1.3.2) We will consider two algorithms for producing the group inverse of a matrix. Both methods consist essentially of two major steps: first produce a similarity transformation to decompose A as in (1.3.1) and second, invert the matrix B as in (1.3.2). They are specializations of the more general algorithm of Wilkinson (1982) to compute the Drazin inverse A D of a general matrix. Suppose ind(A) = A: > 1. Then Wilkinson's method uses k steps to decompose A as in Equation (1.2.1) and one further step to invert B. Although we will encounter some matrices of index A; > 1 in dynamic programming, we will not need Wilkinson's algorithm in all its generality, because the special structure of the problem will make it possible to break it down into a sequence of subproblems of index 1. Hence there is no need, within the scope of this thesis, for a general algorithm. 
Before actually discussing the algorithms, it is important to realize that, as pointed out by Wilkinson (1982), there is no need to produce the matrix A^# (nor B^{-1} and W = I − AA^#) explicitly, as we are interested in producing the vectors h = A^# b and g = Wb, for a given right hand side vector b. For this purpose, the algorithms will produce a representation of B^{-1}, A^# and W in factorized form. Then a solution can be obtained by applying that sequence of factors to the right hand side b directly.

To illustrate the idea, let us consider a nonsingular matrix A. By applying Gaussian elimination with row interchanges [see e.g., Wilkinson (1965)], we obtain an upper triangular matrix U such that

    A = PLU,                                                   (1.3.3)

where P is a permutation matrix and L is a unit lower triangular matrix (unit means that its diagonal entries are equal to 1). Typically, only U is obtained explicitly. L is represented by a product of elementary row operations, and it suffices to store the n(n−1)/2 multipliers of the Gaussian elimination. P is represented by (n−1) integers giving the rows which were interchanged during the elimination process. Now, to solve the system of equations Ax = b and obtain x = A^{-1}b, we use the factorization (1.3.3) to rewrite our system as (PLU)x = b. We can then solve the latter system in three steps, as in Figure 1.3.1.

    Step 1. Solve Pz = b, giving z = P^{-1}b (permute the entries of b).
    Step 2. Solve Ly = z, giving y = L^{-1}z (apply the multipliers of the Gaussian elimination to z).
    Step 3. Solve Ux = y, giving x = U^{-1}y (use back substitution, producing x_n, x_{n-1}, ..., x_1).

    Figure 1.3.1. Compute x = A^{-1}b, where A = PLU.

Hence we do get x = U^{-1}L^{-1}P^{-1}b = A^{-1}b without producing A^{-1}, U^{-1}, L^{-1} or P^{-1}. In fact, P and L were not produced explicitly either. A simple example will clarify this.

Let us solve Ax = b for a 3 x 3 matrix A. Using Gaussian elimination with partial pivoting [the pivot elements are chosen as big as possible, see e.g., Wilkinson (1965)], a matrix ALU and a vector NPIV are produced. The upper part of ALU is the upper triangular matrix U itself, and the remaining entries are the multipliers that were used to produce the zeroes where they are stored. The vector NPIV tells that the rows of A were interchanged as follows: first row 1 and row 2, then row 2 and row 3 [see e.g., Johnston (1982), Chapter 2]. The solution x can then be computed in the three steps of Figure 1.3.1: first z = P^{-1}b, then y = L^{-1}z, and finally x = U^{-1}y.

Since our algorithms will use the PLU decomposition, it is worthwhile to look at the cost of producing the PLU factorization of a matrix and of computing the solution for a given right hand side. We consider three types of operations. A flop, short for floating point operation, is a pair of arithmetic operations that consists of a multiplication/division and an addition/subtraction (for example, the inner product sum_{i=1}^n a_i b_i can be computed with n flops). A swap is the interchange of two entries of a vector or a matrix (hence interchanging two rows of an n x n matrix requires n swaps). Finally, a comparison is a relational operation (i.e., <, >). The operation counts for Gaussian elimination, as well as for solving a system of linear equations, are given in Figure 1.3.2. (The comparison operations are done when searching for the largest pivot.) For n reasonably large, the terms in n^2 are negligible in the Gaussian elimination procedure. We will use those counts as a reference, to compare with the Drazin inverse algorithms.

    Gaussian elimination of a nonsingular matrix A = PLU:  (1/3)n^3 flops + n^2 swaps + (1/2)n^2 comparisons.
    Permutations Px or P^{-1}x:  n swaps.
    Triangular products Lx, L^{-1}x, Ux, U^{-1}x:  (1/2)n^2 flops each.
    Solution of a RHS, x = U^{-1}L^{-1}P^{-1}b:  n^2 flops + n swaps.

    Figure 1.3.2. Operation counts for Gaussian elimination.
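The three-step solve of Figure 1.3.1 can be sketched as follows (assuming NumPy; the elimination routine below is a plain textbook implementation, not the thesis's storage scheme, and the pivot vector `piv` plays the role of NPIV):

```python
import numpy as np

def plu(A):
    """Gaussian elimination with partial pivoting: returns piv, L, U with A[piv] = L U."""
    U = A.astype(float).copy()
    n = U.shape[0]
    L = np.eye(n)
    piv = np.arange(n)
    for k in range(n - 1):
        p = k + np.argmax(np.abs(U[k:, k]))   # comparisons: search the largest pivot
        if p != k:                            # swaps: interchange rows (and stored multipliers)
            U[[k, p], k:] = U[[p, k], k:]
            L[[k, p], :k] = L[[p, k], :k]
            piv[[k, p]] = piv[[p, k]]
        L[k+1:, k] = U[k+1:, k] / U[k, k]     # store the multipliers
        U[k+1:, k:] -= np.outer(L[k+1:, k], U[k, k:])
    return piv, L, U

def solve_plu(piv, L, U, b):
    z = b[piv]                    # Step 1: z = P^{-1} b (permute the entries of b)
    y = np.linalg.solve(L, z)     # Step 2: y = L^{-1} z
    return np.linalg.solve(U, y)  # Step 3: x = U^{-1} y

A = np.array([[2., 1., 1.], [4., 3., 3.], [8., 7., 9.]])
b = np.array([1., 2., 5.])
piv, L, U = plu(A)
x = solve_plu(piv, L, U, b)
assert np.allclose(A @ x, b)
```

Note that neither P^{-1}, L^{-1} nor U^{-1} is ever formed: the factors are applied to the right hand side directly, which is exactly the strategy the Drazin inverse algorithms below will follow.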
Before developing the algorithms, however, it will be convenient to derive the following results.

Lemma 1.3.1. Suppose A is an n x n real matrix such that rank(A) = n − m, with 1 <= m < n, and

    A = S [ B11  B12 ] S^{-1}                                  (1.3.4)
          [ 0    0   ]

for some nonsingular S, where B11 is an (n−m) x (n−m) matrix. Then B11 is nonsingular iff ind(A) = 1.

Proof. Assume first that B11 is nonsingular. Then

    A = (SV) [ B11  0 ] (SV)^{-1},
             [ 0    0 ]

where

    V = [ I  −B11^{-1}B12 ]
        [ 0   I           ].

This implies that ind(A) = 1, because rank(A^2) = rank(A), and the "only if" part is proved. For the "if" part, assume that ind(A) = 1. By Equation (1.3.1), there is a nonsingular matrix U and a nonsingular (n−m) x (n−m) matrix C11 such that

    A = U [ C11  0 ] U^{-1}.                                   (1.3.5)
          [ 0    0 ]

Letting V = S^{-1}U and writing

    V = [ X  Y ]
        [ Z  T ],

Equation (1.3.5) becomes

    [ B11  B12 ] [ X  Y ]   [ X  Y ] [ C11  0 ]
    [ 0    0   ] [ Z  T ] = [ Z  T ] [ 0    0 ].

Hence we get

    B11 X + B12 Z = X C11                                      (1.3.6)
    B11 Y + B12 T = 0
    0 = Z C11.                                                 (1.3.7)

Since C11 is nonsingular, Equation (1.3.7) implies that Z = 0. Hence, for V to be nonsingular, both X and T must be nonsingular as well. But then Equation (1.3.6) implies that B11 = X C11 X^{-1}, and thus B11 is nonsingular. •

Theorem 1.3.2. Suppose A is an n x n matrix satisfying Equation (1.3.4) with B11 nonsingular. Then

    A^# = S [ B11^{-1}  B11^{-2}B12 ] S^{-1}                   (1.3.8)
            [ 0         0           ]

and

    W = I − AA^# = S [ 0  −B11^{-1}B12 ] S^{-1}.               (1.3.9)
                    [ 0   I           ]

Proof. As in the first part of the proof of Lemma 1.3.1, let

    V = [ I  −B11^{-1}B12 ]
        [ 0   I           ].

Then

    A = (SV) [ B11  0 ] (SV)^{-1},
             [ 0    0 ]

so that by Theorem 1.2.7 and Equation (1.2.8),

    A^# = (SV) [ B11^{-1}  0 ] (SV)^{-1} = S [ B11^{-1}  B11^{-2}B12 ] S^{-1}
               [ 0         0 ]               [ 0         0           ]

and

    W = (SV) [ 0  0 ] (SV)^{-1} = S [ 0  −B11^{-1}B12 ] S^{-1}. •
             [ 0  I ]               [ 0   I           ]

Both algorithms will use Gaussian elimination to decompose a singular matrix A as in Equation (1.3.4). But rather than transforming the matrix as in Equation (1.3.1), the Drazin inverse will be represented directly using Theorem 1.3.2. Moreover, the case ind(A) > 1 will be flagged automatically when B11 is found to be singular, by Lemma 1.3.1.
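The block formulas (1.3.8) and (1.3.9) can be exercised directly. A small sketch (assuming NumPy; the block data below is illustrative, with S = I for simplicity):

```python
import numpy as np

# Block data with B11 nonsingular, so ind(A) = 1 by Lemma 1.3.1 (S = I here).
B11 = np.array([[1., -1.], [-1., 0.]])
B12 = np.array([[0.], [1.]])
A = np.block([[B11, B12], [np.zeros((1, 3))]])

B11inv = np.linalg.inv(B11)
# Equation (1.3.8): the group inverse; Equation (1.3.9): the projection W.
Ash = np.block([[B11inv, B11inv @ B11inv @ B12], [np.zeros((1, 3))]])
W = np.block([[np.zeros((2, 2)), -B11inv @ B12], [np.zeros((1, 2)), np.eye(1)]])

assert np.allclose(A @ Ash, Ash @ A)         # commutativity
assert np.allclose(A @ Ash @ A, A)           # A A# A = A
assert np.allclose(Ash @ A @ Ash, Ash)       # A# A A# = A#
assert np.allclose(W, np.eye(3) - A @ Ash)   # (1.3.9) agrees with I - A A#
```

The same check goes through for any nonsingular S by conjugating A, A^# and W with S.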
Hence our algorithms will produce a representation of the group inverse A^# of a matrix A if ind(A) = 1, and return an error condition when ind(A) > 1. A 2 x 2 example of the latter situation is

    A = [ 0  1 ]
        [ 0  0 ].

Then ind(A) = 2 and B11 = 0.

Let us call slow algorithm the first, and most obvious, of our algorithms. It is shown in Figure 1.3.3. Steps 1.1 and 1.2 essentially obtain a representation of Equation (1.3.1), and Steps 2.1 and 2.2 obtain a representation of (1.3.2). The reason for calling it the slow algorithm will become clear after we analyze the amount of arithmetic it requires. Before that, let us see what the algorithm does. In Step 1.1, Gaussian elimination is used to compute a PLU factorization of A such that the upper triangular matrix U has m zero rows, with rank(A) = n − m, so that

    U = [ U11  U12 ]
        [ 0    0   ].

Then in Step 1.2, the matrix B = UPL is such that

    B = [ B11  B12 ]
        [ 0    0   ].

Moreover, B is similar to A, because

    A = PLU = (PL)U(PL)(PL)^{-1} = (PL)B(PL)^{-1}.

    Step 1.1. Compute P, L and U such that A = PLU (using Gaussian elimination).
    Step 1.2. Compute B = UPL (applying P and L to the nonzero rows of U).
    Step 2.1. Compute R, S and T such that B11 = RST (using Gaussian elimination).
              (Note: R is a permutation, S unit lower triangular, T upper triangular.)
    Step 2.2. If B11 is singular then "Error: ind(A) > 1" else stop
              (we have all the necessary information).

    Figure 1.3.3. The slow decomposition algorithm.

Now let X = UP, and write

    X = [ X11  X12 ]          L = [ L11  0   ]
        [ 0    0   ],             [ L21  L22 ].

That B = XL has the required form can be seen directly:

    B = [ X11  X12 ] [ L11  0   ]   [ X11 L11 + X12 L21   X12 L22 ]
        [ 0    0   ] [ L21  L22 ] = [ 0                   0       ].

The above argument is well known, and is used by Wilkinson (1982) in particular. It is important to note that while U11 is likely to be singular in general, the matrix B11 is always nonsingular, provided that ind(A) = 1, by Lemma 1.3.1. Hence Step 2.1 simply computes the PLU decomposition of the nonsingular, (n−m) x (n−m) matrix B11, giving a permutation matrix R, a unit lower triangular matrix S and a nonsingular upper triangular matrix T.
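The error flag of Step 2.2 can be seen on the 2 x 2 example above. A sketch (assuming NumPy): the elimination is trivial here since A is already upper triangular, so P = L = I and B = UPL = A, and B11 is the 1 x 1 zero block:

```python
import numpy as np

A = np.array([[0., 1.], [0., 0.]])        # ind(A) = 2
# A is already upper triangular: P = L = I, U = A, so B = U P L = A.
B = A.copy()
n, m = 2, 1                               # rank(A) = n - m = 1
B11 = B[:n - m, :n - m]                   # the leading (n-m) x (n-m) block

# Step 2.2: B11 singular => ind(A) > 1 by Lemma 1.3.1.
assert np.linalg.matrix_rank(B11) < n - m
# Indeed rank(A^2) < rank(A), so the index exceeds 1:
assert np.linalg.matrix_rank(A @ A) < np.linalg.matrix_rank(A)
```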
For example, let us apply the algorithm to a 3 x 3 matrix A such that ind(A) = 1:

    A = [  1  -1  0 ]
        [  0   0  1 ]
        [ -1   1  0 ].

Using Gaussian elimination, we get A = PLU, where P = I, the identity matrix,

    L = [  1  0  0 ]          U = [ 1  -1  0 ]
        [  0  1  0 ]   and        [ 0   0  1 ]
        [ -1  0  1 ]              [ 0   0  0 ].

Observe that

    U11 = [ 1  -1 ]
          [ 0   0 ]

is singular. Then we produce B = UPL = UL:

    B = [  1  -1  0 ]
        [ -1   0  1 ]
        [  0   0  0 ].

Finally, using Gaussian elimination we get B11 = RST, where R = I, and hence

    B11^{-1} = T^{-1}S^{-1}R^{-1} = [  0  -1 ]
                                    [ -1  -1 ].

This is all we need in order to compute A^# = (PL)B^#(PL)^{-1}, using Theorem 1.3.2 to obtain B^#. We get

    B^# = [  0  -1  1 ]              A^# = [  1  -1   1 ]
          [ -1  -1  2 ]   and              [  1  -1   2 ]
          [  0   0  0 ]                    [ -1   1  -1 ].

We should keep in mind, however, that in general we will not be interested in obtaining the numerical values of B^# and A^#. The factorization will be applied directly to the right hand side b to compute a solution, using the algorithms of the next section. The computational effort required by our decomposition algorithm is shown in Figure 1.3.4.

    Step 1.1.  (1/3)(n^3 − m^3) flops + n(n−m) swaps + (1/2)(n^2 − m^2) comparisons
    Step 1.2.  (1/2)(n−m)n^2 flops + (n−m)n swaps
    Step 2.1.  (1/3)(n−m)^3 flops + (n−m)^2 swaps + (1/2)(n−m)^2 comparisons

    Figure 1.3.4. Operation counts for the slow decomposition algorithm.

The count for Step 1.1 is for the worst case, that is, when the matrix U11 happens to be nonsingular. This is the case when all the zero pivots are obtained in the last m stages of the Gaussian elimination process. Then the whole process is the same as for a nonsingular matrix, except that the last m stages do not need to be done. This takes (1/3)m^3 flops off the (1/3)n^3 flops required for a nonsingular matrix. Also, Step 1.2 can be implemented so that P and L are applied to each one of the (n−m) rows of U in turn. That is, the elements of row 1 of U are permuted (by P), and the multipliers are applied (i.e., L), giving the first row of B. Then the process is repeated for row 2, and so forth.
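The 3 x 3 example above can be verified end to end. A sketch (assuming NumPy; the explicit inverses below are for checking only, since the algorithms keep everything in factorized form):

```python
import numpy as np

# The 3x3 example: A = P L U with P = I and ind(A) = 1.
L = np.array([[1., 0., 0.], [0., 1., 0.], [-1., 0., 1.]])
U = np.array([[1., -1., 0.], [0., 0., 1.], [0., 0., 0.]])
A = L @ U

B = U @ L                                    # Step 1.2 (P = I): B = U P L
B11, B12 = B[:2, :2], B[:2, 2:]
B11inv = np.linalg.inv(B11)
Bsh = np.block([[B11inv, B11inv @ B11inv @ B12],
                [np.zeros((1, 3))]])         # Theorem 1.3.2 applied to B (S = I)
Ash = L @ Bsh @ np.linalg.inv(L)             # A# = (PL) B# (PL)^{-1}

assert np.allclose(B11inv, [[0., -1.], [-1., -1.]])
assert np.allclose(Ash, [[1., -1., 1.], [1., -1., 2.], [-1., 1., -1.]])
assert np.allclose(A @ Ash, Ash @ A) and np.allclose(A @ Ash @ A, A)
```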
Finally, Step 2.1 is just the regular Gaussian elimination of a nonsingular (n−m) x (n−m) matrix, and Step 2.2 does not require any arithmetic.

Now let us consider the case when m is much smaller than n. This is the case when m = 0 (nonsingular A) and m = 1 (unichain Markov process), for instance, and also whenever there are several states per recurrent subchain, in a dynamic programming problem. With m = 0, the total number of flops is

    (1/3)n^3 + (1/2)n^3 + (1/3)n^3 = (7/6)n^3 flops.

This is 3.5 times the number of flops required for applying Gaussian elimination to a nonsingular matrix (i.e., (1/3)n^3 flops). This fact justifies the name slow algorithm that we have been using.

Our second algorithm, which we call the fast algorithm, will use only (1/3)n^3 flops when m is small, the same as when applying Gaussian elimination to a nonsingular matrix. It is based on a modified Gaussian elimination procedure in which not only the rows, but also the columns are interchanged. Hence for a singular matrix A such that rank(A) = n − m, the modified procedure obtains the four matrices P, L, U and Q such that

    A = PLUQ,

where P and Q are permutation matrices, L is a unit lower triangular matrix and U11 is a nonsingular upper triangular matrix. Specifically, the Gaussian elimination procedure is modified as follows. Suppose the first zero column is encountered after k elimination steps have been performed. For example, with n = 5 and k = 2, we have a matrix A^(2) with the following pattern:

    A^(2) = [ x  x  x  x  x ]
            [ 0  x  x  x  x ]
            [ 0  0  0  x  x ]
            [ 0  0  0  x  x ]
            [ 0  0  0  x  x ],

where the nonzero entries are denoted by x. Now let Q^(1) be the permutation matrix that exchanges the columns k+1 and n. We get

    A^(2) Q^(1) = [ x  x  x  x  x ]
                  [ 0  x  x  x  x ]
                  [ 0  0  x  x  0 ]
                  [ 0  0  x  x  0 ]
                  [ 0  0  x  x  0 ],

and the elimination procedure is resumed. Suppose that the second zero column is encountered after the l-th elimination step. Then Q^(2) exchanges the columns l+1 and n−1. Similarly, Q^(t) will exchange the t-th zero column with column n−t+1. At the end, we have

    A = PLUQ,   where Q = Q^(m) ... Q^(2) Q^(1).

With n = 5 and m = 2, we have

    U = [ x  x  x  x  x ]
        [ 0  x  x  x  x ]
        [ 0  0  x  x  x ]
        [ 0  0  0  0  0 ]
        [ 0  0  0  0  0 ],

as required.

To illustrate the idea, let us apply modified Gaussian elimination to the 3 x 3 matrix of the last example. We get A = PLUQ, where P = I,

    U = [ 1  0  -1 ]          L = [  1  0  0 ]
        [ 0  1   0 ],             [  0  1  0 ]
        [ 0  0   0 ]              [ -1  0  1 ],

and Q interchanges the columns 2 and 3 of A. Here U11 = I is indeed nonsingular.

This modified Gaussian elimination procedure PLUQ does exactly the same number of flops as the original PLU procedure, and requires only an extra mn swaps to exchange the columns. But the fact that U11 is nonsingular will allow us to find a representation for B11^{-1} which requires the PLU factorization of a small m x m matrix, instead of the large (n−m) x (n−m) matrix B11 itself. The idea is based on the following result of Faddeeva (1959), for inverting a matrix by partitioning. The proof is included for the sake of completeness.

Lemma 1.3.3. Let X be a nonsingular n x n matrix and Y its inverse, partitioned as

    X = [ X11  X12 ]          Y = [ Y11  Y12 ]
        [ X21  X22 ],             [ Y21  Y22 ].

Suppose also that X11 is nonsingular. Then Y22 is also nonsingular and

    X11^{-1} = Y11 − Y12 Y22^{-1} Y21.                         (1.3.11)

Proof. Since X11 is nonsingular, X is nonsingular iff the matrix

    [ I              0 ] [ X11  X12 ]   [ X11  X12                    ]
    [ −X21 X11^{-1}  I ] [ X21  X22 ] = [ 0    X22 − X21 X11^{-1} X12 ]

is nonsingular too, which is the case iff X22 − X21 X11^{-1} X12 is nonsingular. Now writing the condition XY = I, we get

    X11 Y11 + X12 Y21 = I                                      (1.3.12)
    X11 Y12 + X12 Y22 = 0                                      (1.3.13)
    X21 Y11 + X22 Y21 = 0
    X21 Y12 + X22 Y22 = I.                                     (1.3.14)

Using (1.3.13), we have Y12 = −X11^{-1} X12 Y22, so that (1.3.14) becomes

    (X22 − X21 X11^{-1} X12) Y22 = I,

which implies that Y22 is nonsingular. Using (1.3.13) again, we have X12 = −X11 Y12 Y22^{-1}. Finally, Equation (1.3.11) follows by substituting this expression for X12 into Equation (1.3.12). •
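Lemma 1.3.3 states that a leading block of X can be inverted using only blocks of the full inverse Y. A minimal numerical check (assuming NumPy; the test matrix is an arbitrary well-conditioned one):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 5)) + 5 * np.eye(5)   # shifted so X and X11 are nonsingular
Y = np.linalg.inv(X)

X11 = X[:3, :3]
Y11, Y12, Y21, Y22 = Y[:3, :3], Y[:3, 3:], Y[3:, :3], Y[3:, 3:]

# Equation (1.3.11): X11^{-1} recovered from the blocks of the inverse Y.
lhs = np.linalg.inv(X11)
rhs = Y11 - Y12 @ np.linalg.inv(Y22) @ Y21
assert np.allclose(lhs, rhs)
```

This is exactly the identity the fast algorithm exploits: it arranges matters so that Y is cheap to apply, and then reads off B11^{-1} through (1.3.11) at the cost of inverting only the small m x m block Y22.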
For our fast algorithm, we need to construct a matrix X such that X11 = B11 and for which the inverse Y is easily obtained. Then B11^{-1} can be represented using Equation (1.3.11). We proceed as follows. Suppose that A = PLUQ, and let C = QPL. Then

    B = (PL)^{-1} A (PL) = UQPL = UC,

so that

    B = [ U11  U12 ] [ C11  C12 ]   [ B11  B12 ]
        [ 0    0   ] [ C21  C22 ] = [ 0    0   ],

as in Equation (1.3.4), with S = PL. Now define D such that

    D = [ U11  U12 ]
        [ 0    I   ].

Then D is nonsingular, because U11 is nonsingular in the PLUQ decomposition. We finally obtain our matrix X = DC, and it is easy to verify that

    X = [ X11  X12 ]   [ B11  B12 ]
        [ X21  X22 ] = [ C21  C22 ].

Now its inverse Y = X^{-1} is obtained directly, because

    Y = C^{-1} D^{-1} = L^{-1} P^{-1} Q^{-1} D^{-1}.

Theorem 1.3.4. Both B11 and Y22 are nonsingular iff ind(A) = 1. Moreover,

    B11^{-1} = Y11 − Y12 Y22^{-1} Y21.                         (1.3.15)

Proof. Immediate from Lemmas 1.3.1 and 1.3.3. •

We now have all the mathematical tools necessary for constructing our fast algorithm. Still, if we implement Equation (1.3.15) to produce the whole matrix Y, it will cost about (1/2)(n−m)n^2 flops to compute D^{-1} plus (1/2)n^3 to do the product by L^{-1}. The total effort for such an algorithm would then be (4/3)n^3 flops (including the initial PLUQ decomposition and ignoring the terms of degree less than 3 in n). This is even worse than the slow algorithm! The key here is that we need only to produce Y22, which can be done as follows:

    [ Y12 ]                           [ −U11^{-1} U12 ]
    [ Y22 ] = L^{-1} P^{-1} Q^{-1}    [  I            ].       (1.3.16)

The computations are performed as follows. Back substitution is applied to each of the m columns of U12 in turn. Then the permutation matrices Q^{-1} and P^{-1} are applied. Finally, the multipliers of the Gaussian elimination are applied. We get even more than we need: we have to compute Y12, in addition to Y22, because the matrix L^{-1} is represented in product form (so that its blocks are not available separately). Even so, the total cost of computing (1.3.16) is (1/2)m(n−m)^2 + (1/2)mn^2 flops, which is negligible when m is small. Hence for m small, the fast algorithm costs only (1/3)n^3 flops, which is the cost of the initial PLUQ decomposition. The fast algorithm is shown in Figure 1.3.5.

    Step 1.1. Compute P, L, U and Q such that A = PLUQ
              (using modified Gaussian elimination, with U11 nonsingular).
    Step 1.2. Compute Y22 as in Equation (1.3.16).
    Step 2.1. Compute R, S and T such that Y22 = RST (using Gaussian elimination).
              (Note: R is a permutation, S unit lower triangular, T upper triangular.)
    Step 2.2. If Y22 is singular then "Error: ind(A) > 1" else stop
              (we have all the necessary information).

    Figure 1.3.5. The fast decomposition algorithm.

Let us continue our 3 x 3 example. We have

    D = [ 1  0  -1 ]          D^{-1} = [ 1  0  1 ]
        [ 0  1   0 ],                  [ 0  1  0 ]
        [ 0  0   1 ]                   [ 0  0  1 ],

so that

    Y = L^{-1} P^{-1} Q^{-1} D^{-1} = [ 1  0  1 ]
                                      [ 0  0  1 ]
                                      [ 1  1  1 ].

Hence we have

    Y11 = [ 1  0 ]      Y12 = [ 1 ]      Y21 = [ 1  1 ]   and   Y22 = (1).
          [ 0  0 ],           [ 1 ],

Moreover, Y22 = RST trivially, with R = S = T = (1), and we have B11^{-1} = Y11 − Y12 Y22^{-1} Y21. Indeed,

    Y11 − Y12 Y22^{-1} Y21 = [ 1  0 ] − [ 1  1 ] = [  0  -1 ] = B11^{-1}.
                             [ 0  0 ]   [ 1  1 ]   [ -1  -1 ]

The algorithms of the next section, however, will not produce Y11 and Y12 explicitly, applying rather the product representation directly to the right hand side b.

The organization of the computations is worth some discussion. It is possible to perform all the computations of the fast algorithm using one real array ALU of dimension n x n, two integer arrays PIVA and PIVB of dimension n, and one real array WK of dimension n. Figure 1.3.6 shows how the various submatrices can be stored. The PLUQ decomposition of Step 1.1 can be organized to overwrite the original matrix A as in the nonsingular case, and we get an m x m lower right submatrix of zeroes. We can also implement Step 1.2 so that it produces one column of Y at a time, storing it in the real array WK (which is not shown in the figure). The (n−m) components that belong to Y12 are discarded, and we need only to store the m entries corresponding to Y22. Those are conveniently stored in the zero lower right corner of ALU. As we will see in the next section, there is no loss of efficiency in dropping the submatrix Y12, as it will be just as efficient to apply it to a vector using the product representation. Finally, the PLU decomposition of Y22 can be done so that S and T overwrite Y22. The permutation matrices P, Q and R are represented by the integer vectors PIVA and PIVB.

    Step 1.1. A = PLUQ overwrites ALU; the m x m lower right corner is zero.
    Step 1.2. Compute Y22 and store it in the lower right corner of ALU.
    Step 2.1. Y22 = RST, with S and T overwriting Y22.

    Figure 1.3.6. Storage considerations for the fast algorithm.

The slow algorithm, on the other hand, cannot be organized as efficiently. This is because the submatrices B11 and B12 have to be produced explicitly. While B12 may very well overwrite U12, it is not possible for B11 to overwrite L11 and U11. This is because, while U11 and U12 will not be required any more after B is produced, the matrix L will still have to be applied when computing a solution, as will be seen in the next section. Hence not only is the slow algorithm inefficient in terms of computational effort, but it also needs more memory than the fast algorithm.

One aspect of both algorithms which we have not considered yet is the effect of rounding errors on the Gaussian elimination of a singular matrix. The problem is that, due to rounding errors, the entries which would be set to exactly zero in exact arithmetic will in fact be set to some small, nonzero numbers. One must therefore select a tolerance level, say epsilon, such that an entry is considered zero if its absolute value is less than epsilon. Some theoretical bounds are known when the elimination is done using the QR method or the singular value decomposition [see Stewart (1984)].
This device is not very reliable, however, when applying Gaussian elimination to a general singular matrix A. In fact, Gaussian elimination with partial pivoting is considered to be numerically unstable when A is singular [see Golub and Van Loan (1983)]. The question, then, is why we developed our algorithms using Gaussian elimination. The answer is threefold. First, Gaussian elimination is mathematically well defined and is known to the expected audience of this thesis. Second, the formulation of our algorithm in terms of the QR decomposition or the singular value decomposition is straightforward. The QR version of the fast algorithm requires (2/3)n^3 flops, twice that of Gaussian elimination. Hence our results about the operation counts need only be multiplied by a factor of 2 if the QR method is used. Third, the intended application of our algorithms is with a matrix A = I − P, where P is stochastic (see Chapter II). In this case, the rank of A is known in advance of the elimination. Also, Gaussian elimination is then known to be stable even without row interchanges [see Harrod and Plemmons (1984) and Funderlic and Plemmons (1981)]. (Of course, row interchanges are necessary for our application, so that the zeroes end up at the bottom.)

1.4. Solution of singular systems of equations

Consider an n x n matrix A of index 1 and an arbitrary vector b, and let c = Sb, where S is the transformation matrix of Equation (1.3.1). We are interested in the system of equations Ax = b. Writing y = Sx and partitioning

    c = [ c_1 ]          y = [ y_1 ]
        [ c_2 ],             [ y_2 ],

the system becomes

    [ B  0 ] [ y_1 ]   [ c_1 ]
    [ 0  0 ] [ y_2 ] = [ c_2 ].

Hence the system has a solution iff c_2 = 0. Moreover, if a solution exists, it has y_1 = B^{-1}c_1, and y_2 is arbitrary.

For an arbitrary vector b, compatible with A or not, we define the gain as the vector

    g = Wb,

with W as in Equation (1.2.8). Hence a vector b is compatible with A iff g = 0. Moreover, we define the bias as the vector

    h = A^# b,

with A^# as in Equation (1.3.2). In this context, the gain is the projection of the right hand side onto the null space of A, while the bias is the unique solution which belongs to the range of A. That is, of all the possible solutions, h is the only one such that Wh = 0. This property of the Drazin inverse solution is crucial for the dynamic programming application, for which other generalized inverses would not be appropriate. The terms gain and bias are well known in dynamic programming, and their interpretation in this context will be considered in Chapter III.

We will now construct two algorithms for computing the gain and the bias for a given vector b, using the decompositions of the matrix A obtained with the algorithms of the previous section. These decompositions have the form of Equation (1.3.4), so that A^# and W are given by (1.3.8) and (1.3.9), respectively. Hence we have

    g = Wb = S [ −B11^{-1}B12 c_2 ]                            (1.4.1)
               [  c_2             ]

and

    h = A^# b = S [ B11^{-1}(c_1 + B11^{-1}B12 c_2) ]          (1.4.2)
                  [ 0                               ],

where c = S^{-1}b. In both decomposition algorithms, the transformation matrix is S = PL, with P and L obtained at Step 1.1 by Gaussian elimination. The corresponding solution algorithms will have three major steps: transformation of the right hand side, computation of the transformed solution vectors, and finally, back transformation of the solution vectors. The slow solution algorithm is described in Figure 1.4.1.

    Step 1.1. Compute c = L^{-1}P^{-1}b.
    Step 2.1. Compute u_1 = B12 c_2.
    Step 2.2. Compute y_1 = T^{-1}S^{-1}R^{-1} u_1.
    Step 2.3. Compute z_1 = T^{-1}S^{-1}R^{-1} (c_1 + y_1).
    Step 3.1. Compute g = PL [ −y_1 ]
                             [  c_2 ].
    Step 3.2. Compute h = PL [ z_1 ]
                             [ 0   ].

    Figure 1.4.1. The slow solution algorithm.

That it effectively gives the gain and bias vectors as in (1.4.1) and (1.4.2) follows from the fact that B11 = RST, where R, S and T are the matrices produced by Gaussian elimination at Step 2.1 of the slow decomposition algorithm. The total number of floating point operations required to compute the gain and the bias is then

    (1/2)n^2 + m(n−m) + 2(n−m)^2 + n^2.

Neglecting the terms of degree less than 2 in n, we get 3.5n^2 flops when m is small. The standard solution of a nonsingular system, with g = 0 and h = A^{-1}b, would cost only n^2 flops. It is to be emphasized, however, that it is the decomposition which has the dominant cost, with (7/6)n^3 flops for the slow algorithm.

To illustrate the solution process, let us consider again the matrix A of the previous section, with a given right hand side b. The matrices P, R, S, T and B12 were obtained by the slow decomposition algorithm, and the gain and bias are then obtained by applying the steps of Figure 1.4.1 directly.

Let us now proceed to the construction of a solution algorithm that will use the output of the fast decomposition algorithm. This fast solution algorithm will work much like the algorithm of Figure 1.4.1, except that we do not have the matrix B12. Moreover, the fast decomposition is based on the idea that B11^{-1} = Y11 − Y12 Y22^{-1} Y21, but we do not have Y11, Y12 and Y21 [and although Y12 had been produced in Equation (1.3.16), it was then discarded to save memory storage]. We do have Y22 = RST, where R, S and T were obtained by Gaussian elimination at Step 2.1 of the fast decomposition algorithm. But we still need to derive a way to express B12, Y11, Y12 and Y21 in terms of P, L, U and Q, which were obtained at Step 1.1. We do this as follows. First, recall that B = UQPL, with

    U = [ U11  U12 ]          L = [ L11  0   ]
        [ 0    0   ],             [ L21  L22 ].

We do not know the blocks of L explicitly, but we do not need them either; it is the structure of L which is important here. Since B(0; x_2) = (B12 x_2; 0) and B = U(QPL), the product B12 x_2 can be done efficiently by first applying the factors QPL to the padded vector and then multiplying by the nonzero rows of U:

    B12 x_2 = [ U11  U12 ] QPL [ 0   ]
                               [ x_2 ].                        (1.4.3)

Next, the product by Y12 can be done using Equation (1.3.16): we get y_1 = Y12 x_2 by computing

    [ Y12 x_2 ]                           [ −U11^{-1} U12 x_2 ]
    [ Y22 x_2 ] = L^{-1} P^{-1} Q^{-1}    [  x_2              ].   (1.4.4)

We also get Y22 x_2, and since we do not need it, we just discard it. Similarly, we apply Y11 and Y21 to a vector as follows, using the first n−m columns of Y = L^{-1}P^{-1}Q^{-1}D^{-1}:

    [ Y11 x_1 ]                           [ U11^{-1} x_1 ]
    [ Y21 x_1 ] = L^{-1} P^{-1} Q^{-1}    [ 0             ].   (1.4.5)

The fact that Y11 and Y21 must be applied simultaneously to the same vector x_1 is not a problem. It is indeed exactly what we need, because by Theorem 1.3.4 we have

    B11^{-1} x_1 = (Y11 − Y12 Y22^{-1} Y21) x_1 = (Y11 x_1) − Y12 Y22^{-1} (Y21 x_1).
It is indeed exactly what we need because by Theorem 1.3.4, we have &ii i x — (Yn ^i2F ~ 1 22 = (F x )-F F 1 1 1 1 2 39 F i)xi 2 1 2 2 (F x ). 2 1 1 Step 1. Compute Zi = Y\xX\ and 23 = Y21X1 using Equation (1.4.5). Step 2. Compute v = T~ S~ R~ z . l 2 Step 3. 1 1 2 Compute tui = Y i t ; using Equation (1.4.4). 2 2 Step 4. Compute U i = z\ — xo\. Figure 1.4.2. Algorithm to compute M\ = B\~_ x\. l Step 1.1. Compute c = L~ P~ b. l 1 Step 2.1. Compute ui = B\_C2 using Equation (1.4.3). Step 2.2. Compute y\ = B ui as in Figure 1.4.2. l n Step 2.3. Compute z\ = B\~_ [c\ + J/i) as in Figure 1.4.2. l Step 3.1. Compute g = PL("»»). Step 3.2. Compute A = P X ^ ) . 1 Figure 1.4.3. The fast solution algorithm. This leads to the algorithm of Figure 1.4.2 for computing u = B x. 1 t n i The matrices R, S and T of Step 2 were obtained by Gaussian elimination of y , 2 2 at Step 2.1 of the fast decomposition algorithm, in Figure 1.3.5. The fast solution algorithm is shown in Figure 1.4.3. Observe that the algorithm of Figure 1.4.2 for applying B l n to a vector is used twice as a subroutine by the fast solution algorithm. The total number of flops used for this product is [I 2 n + 1( n _ m ) 2 ] + m 2 + Jl n 3 + 1 ( n _ m ) = n + (n — m ) + mn. 3 2 40 2+ m ( n _ m ) j Hence the total number of flops required by the fast solution algorithm is 2mn + 1.5m . 2 When m = 0, the fast solution algorithm takes then 6n flops, which is more work 2 than the slow solution algorithm. We must keep in mind, however, that the fast decomposition algorithm takes only | n 3 flops. Hence, to compute the gain and bias of a singular system by the fast algorithm takes about the same amount of flops as for solving a nonsingular system with 6 right hand sides. It should be clear from Figure 1.4.3 that the only difference between our two solution algorithms is the manner in which the products by 2?i and J3f/ are implemented. 
In 2 the slow algorithm, Bu is used explicitly and the product by B l n is produced from the PLU decomposition of Bn (i.e., R, S and T). In the fast algorithm, however, the product by B i2 is done by Equation (1.4.3), and the product by B^ is done by the 1 algorithm of Figure 1.4.2. Let us apply those products to our numerical example. Recall that ( 1 0 0 1 0 0 -: 0 0 L = P = I and Q interchanges columns (or rows) 2 and 3. Also, Y 22 = (1) and R = S = T = (1). Let us do Step 2.1 of the fast solution algorithm. We have c = (4), so that 2 which is the same value as with the long algorithm. Now we do Step 2.2 to get y\ = 41 t/ = r- 5- ij- ^ = (4) 1 2 = (4) *2 *1 l 1 3 (!) This value of y is again the same as that we obtained with the long algorithm. Finally, x Step 2.3 would work the same way, giving Z\ as before. 1.5. Other algorithms for Drazin inversion Both the long and the fast algorithms of the previous sections are specializations of the general decomposition algorithm of Wilkinson (1982). We implemented them using Gaussian elimination, but we could have used singular value decomposition, or the QR decomposition [see e.g., Stewart (1984)], just as well. In fact, the singular value decomposition version of Wilkinson's method is described in Campbell and Meyer (1979), Algorithm 12.5.1. We used Gaussian elimination because it is more economical than both other decompositions [see e.g., Johnston (1982)], and it is stable when A = I — P, with P stochastic. The shuffle algorithm of Anstreicher and Rothblum (1985) uses a different approach. Instead of producing a factorization of the Drazin inverse in terms of elementary matrices, it produces a full matrix from which the Drazin inverse is obtained. More 42 specifically, for a matrix A of index k, the algorithm performs k steps in which Gaussian elimination and a "shuffle" operation are used, in which a matrix B is produced, such that A = B A, D k+1 k at a cost of n flops (if m is small). 
This is three times the 3 amount of work required for applying Gaussian elimination to a nonsingular matrix. Recall that our slow algorithm is just slightly slower, with a ratio of 3.5, and that our fast algorithm is three times as fast (with a ratio of 1). In the special case of interest, with ind(A) = 1, the gain and bias vectors can be computed as follows, for a given right hand side 6: h = A*b = B Ab 2 g = Wb = (I- AA*)b = b-Ah. Hence the bias can be computed with 3n flops, and the gain with another n flops, for 2 2 a total of 4n flops. Recall that the slow solution algorithm of Section 1.4 takes only 2 3n flops, while the fast algorithm takes 6n flops. 2 2 Other methods are based on sequences of matrix multiplications and are not computationally attractive. See Campbell and Meyer (1979), Greville (1973), Hartwig (1976) and Anstreicher and Rothblum (1985). 43 Chapter II Nonnegative matrix theory In this chapter, we show how the Perron-Frobenius theory of nonnegative matrices can be used to deduce the eigenvalue structure of the transition matrix of a Markov chain. The matrix decomposition so obtained leads to a simple characterization of the limiting and fundamental matrices, using the Drazin inverse notion of Chapter I. Further, the Jordan canonical form is used to derive the geometric convergence of the power method. First, the ideas of reducibility and periodicity of a matrix are reviewed, in Section 2.1, and the correspondence between matrices and directed graphs is established. Next, the theory of nonnegative matrices is reviewed in Section 2.2. Our approach is inspired from Varga (1962), but the derivations we provide are consistent with our exposition of the Jordan canonical form in Chapter I. Lemmas 2.2.1, 2.2.2, 2.2.3 and 2.2.8 are given without proof. Although elementary, their derivation is not directly relevant to the material of the thesis. 
In Section 2.3, the matrix decomposition approach is used in a new derivation of the properties of the limiting matrix, the deviation matrix and the fundamental matrix of a discrete time Markov chain. We show that the probabilistic interpretation of these matrices, using Cesaro limits, requires the boundedness of the powers of the transition matrix. This property is not required for defining the matrices in terms of the Drazin inverse, though. Finally, it is shown, in Section 2.4, that when the transition matrix is aperiodic, its powers converge to the limiting matrix, at a geometric rate. This result is a specialization of a result of Varga (1962).

2.1. Matrices and directed graphs

It is often convenient to associate the entries of a complex n x n matrix A with a directed graph G(A) with n nodes, labelled 1,...,n, and with a directed arc for every nonzero entry of A. Hence G(A) has an arc from node i to node j, labelled a_ij, iff a_ij ≠ 0, for i = 1,...,n and j = 1,...,n. For example, a 3 x 3 matrix with nonzero entries 1 and 0.2 has the directed graph of Figure 2.1.1.

Figure 2.1.1. The directed graph G(A)

In the context of queueing and dynamic programming, the directed graph naturally suggests that the nodes each represent a state of a stochastic system, while the arcs represent a possible transition from one state to another. Hence the entries of A are transition rates or probabilities (depending on the specific application). We have already used the directed graph representation of a matrix in Chapter I (see Figure 1.1.1). The first well known result of interest is as follows.

Lemma 2.1.1. Let A and B be two complex n x n matrices. Then there is a permutation matrix T such that B = T A T^{-1} iff G(B) is the same labelled graph as G(A), with the node labels permuted.

Proof. Left to the reader. •

Let N = {1,...,n} be the set of all the nodes of a directed graph G.
This set N can be partitioned into m equivalence classes S_k, k = 1,...,m, with 1 ≤ m ≤ n, such that two nodes i, j ∈ N are equivalent iff there is a path from i to j and a path from j to i. The equivalence classes S_k, k = 1,...,m, are called strongly connected components [some authors use this term for the subgraphs (S_k, E_k), where E_k is the set of all arcs between the nodes in S_k]. If a graph G has exactly one strongly connected component (i.e., m = 1), then G is said to be strongly connected.

There is a similar concept in matrix theory. A complex n x n matrix A is said to be irreducible if there is no permutation matrix T such that

    A = T B T^{-1},  with  B = ( B_11  B_12 ),    (2.1.1)
                               ( 0     B_22 )

where B_11 and B_22 are square submatrices. If such a matrix T exists, then A is said to be reducible.

Lemma 2.1.2. A matrix A is irreducible iff its directed graph G(A) is strongly connected.

Proof. If A is not irreducible then let S_1 and S_2 be the subsets of N corresponding to B_11 and B_22 of Equation (2.1.1). Let i ∈ S_1 and j ∈ S_2. Since B_21 = 0, there is no path going from j to i and hence G(A) is not strongly connected. To show the converse, suppose there is no path from j to i. Let S_2 be the strongly connected component of G(A) that contains i, and let S_1 = N − S_2. Then a matrix B satisfying (2.1.1) can be constructed, and hence A is not irreducible. •

Suppose a graph G has m > 1 strongly connected components. Suppose also that there is an arc from some state i ∈ S_k to some state j ∈ N − S_k. Then S_k is a transient class and all the nodes in S_k are said to be transient. If no such outgoing arcs originate from S_k, then S_k is a recurrent class and all the nodes in S_k are recurrent.

Theorem 2.1.3. Suppose A is a complex n x n matrix. Suppose also that G(A) has ℓ transient classes and m recurrent classes.
Then there is a permutation matrix T such that

    A = T ( B  C ) T^{-1},    (2.1.2)
          ( 0  D )

with

    B = ( B_11  B_12  ...  B_1ℓ )        D = ( D_11  0     ...  0    )
        ( 0     B_22  ...  B_2ℓ )  and       ( 0     D_22  ...  0    )    (2.1.3)
        ( ...              ...  )            ( ...              ...  )
        ( 0     0     ...  B_ℓℓ )            ( 0     0     ...  D_mm )

where B_11,...,B_ℓℓ and D_11,...,D_mm are irreducible, square submatrices.

Proof. Each strongly connected component of G(A) has an associated submatrix. The ℓ transient classes correspond to the submatrices B_11,...,B_ℓℓ, and the m recurrent classes correspond to D_11,...,D_mm. The submatrix D is block diagonal because there are no outgoing arcs from the recurrent classes. The submatrix B is block upper triangular because if both B_ij and B_ji were nonzero, with 1 ≤ i < j ≤ ℓ, then all the nodes corresponding to the submatrices B_ii and B_jj would belong to the same strongly connected component, and hence G(A) would have only ℓ − 1 transient classes. This implies that at least one of the two blocks must be zero, and we can sort the components so that B_ji = 0. •

This parsing of a graph into strongly connected components can be done with a number of arithmetic operations proportional to the number of arcs in the graph, that is, the number of nonzero entries of the associated matrix. One such algorithm uses the method of depth-first search [see e.g., Aho et al. (1974)]. As we will see in the next section, Theorem 2.1.3 has an important consequence on the eigenvalue structure of a nonnegative matrix.

Also of importance is the periodic (or cyclic) structure of a graph and hence, of a matrix. Suppose A is an irreducible, complex n x n matrix with directed graph G(A). Then G(A), and A itself, is said to be periodic with period k, with 2 ≤ k ≤ n, if the set N of its nodes can be partitioned into k subsets S_1,...,S_k such that all arcs go from S_i to S_{i+1}, for i = 1,...,k−1, or from S_k to S_1. That is, suppose there is an arc from node r to node s. Then either r ∈ S_i and s ∈ S_{i+1}, if 1 ≤ i ≤ k−1, or else r ∈ S_k and s ∈ S_1. The subsets S_1,...,S_k are called periodic classes. If no such partition of N exists, then G(A) is said to be aperiodic.
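The parsing into strongly connected components mentioned above can be sketched in code. The following is a minimal sketch in Python of Kosaraju's two-pass depth-first search (the choice of this particular depth-first scheme is an assumption; the thesis only cites depth-first search in general). The graph is given as adjacency lists with 0-based node labels:

```python
def strongly_connected_components(adj):
    """Strongly connected components of a directed graph.

    adj[i] lists the successors of node i.  By Lemma 2.1.2, the matrix
    with this graph is irreducible iff exactly one component is returned.
    """
    n = len(adj)
    seen = [False] * n

    def dfs(start, graph, out):
        # Iterative depth-first search; appends visited nodes in postorder.
        stack = [(start, iter(graph[start]))]
        seen[start] = True
        while stack:
            node, succ = stack[-1]
            advanced = False
            for nxt in succ:
                if not seen[nxt]:
                    seen[nxt] = True
                    stack.append((nxt, iter(graph[nxt])))
                    advanced = True
                    break
            if not advanced:
                stack.pop()
                out.append(node)

    order = []
    for u in range(n):                 # first pass on the original graph
        if not seen[u]:
            dfs(u, adj, order)

    radj = [[] for _ in range(n)]      # the reversed graph
    for u in range(n):
        for v in adj[u]:
            radj[v].append(u)

    seen = [False] * n
    comps = []
    for u in reversed(order):          # second pass in reverse postorder
        if not seen[u]:
            comp = []
            dfs(u, radj, comp)
            comps.append(comp)
    return comps
```

The running time is proportional to the number of nodes plus the number of arcs, in agreement with the operation count stated above.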
Lemma 2.1.4. An irreducible, complex n x n matrix A is periodic with period k iff k ≥ 2 is the largest integer for which there is a permutation matrix T such that A = T B T^{-1}, with

    B = ( 0     B_12  0     ...  0         )
        ( 0     0     B_23  ...  0         )
        ( ...                    ...       )    (2.1.4)
        ( 0     0     0     ...  B_{k−1,k} )
        ( B_k1  0     0     ...  0         )

Proof. Because, by definition, the directed graph G(A) has the structure shown in Figure 2.1.2 when A is periodic with period k. In this representation, a "node" S_i denotes the set of nodes in S_i, and an "arc" B_ij denotes the set of arcs going from nodes in S_i to nodes in S_j. There are no arcs going from nodes in S_i to other nodes in the same set S_i. •

Another definition of the period k of a graph is as the greatest common divisor of the lengths of all the paths going from any node to itself. We feel that the above definition, in terms of periodic classes, is more immediate. Moreover, it will be used directly in the queueing application of Chapter VI.

The terminology we use is worth some comments. We refer to Aho et al. (1974) for the strongly connected components of a graph and their computation. Varga (1962) defined a matrix as either irreducible or reducible, and observed that an irreducible matrix has a strongly connected graph. When the matrix represents the transition probabilities of a Markov chain, Karlin (1969) used the terms transient and recurrent to describe the irreducible subchains. Kemeny and Snell (1960) also used the term ergodic to denote a recurrent set. Karlin (1969) also used the terms periodic and aperiodic for an irreducible Markov chain. Kemeny and Snell (1960) used the terms cyclic and regular, respectively. The terms cyclic and primitive were used by Varga (1962) when the matrix is nonnegative. The period is then called the cyclic index of a periodic matrix. For a complex, not necessarily nonnegative matrix, Varga (1962) also used the term weakly cyclic.
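The greatest-common-divisor characterization of the period mentioned above yields a simple computation from a single breadth-first search. A minimal sketch, assuming the input graph is strongly connected and node labels are 0-based:

```python
from collections import deque
from math import gcd

def period(adj):
    """Period of a strongly connected directed graph (1 means aperiodic).

    Breadth-first search from node 0 assigns a level to each node; every
    arc (u, v) to an already-visited node closes walks whose lengths
    differ by level[u] + 1 - level[v], and the period is the gcd of
    these differences over all such arcs.
    """
    n = len(adj)
    level = [-1] * n
    level[0] = 0
    queue = deque([0])
    g = 0
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if level[v] < 0:
                level[v] = level[u] + 1   # tree arc: difference is zero
                queue.append(v)
            else:
                g = gcd(g, level[u] + 1 - level[v])
    return g
```

For the two-class periodic structure of Lemma 2.1.4 with k = 2, for instance, `period([[1], [0]])` returns 2, while adding a chord that creates cycles of coprime lengths returns 1.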
2.2. Eigenvalue structure of a stochastic matrix

A real n x n matrix A is said to be nonnegative if a_ij ≥ 0, and positive if a_ij > 0, for i, j = 1,...,n. A nonnegative n x n matrix P is said to be stochastic if Σ_{j=1}^n p_ij = 1, for i = 1,...,n. Moreover, a nonnegative, irreducible n x n matrix Q is said to be substochastic if Σ_{j=1}^n q_ij ≤ 1, for i = 1,...,n, with strict inequality for at least one row. Suppose a stochastic matrix P is reducible, with ℓ transient classes and m recurrent classes. Then the transient submatrices B_11,...,B_ℓℓ of Theorem 2.1.3 are substochastic, while the recurrent submatrices D_11,...,D_mm are stochastic.

We will derive some well known results about the eigenvalue structure of an irreducible, nonnegative matrix, and use them to obtain the eigenvalue structure of a general stochastic matrix. Recall that the spectral radius of a matrix A, denoted σ(A), is defined as σ(A) = |λ|, where λ is a dominant eigenvalue of A. That is, if γ is an eigenvalue of A, then |γ| ≤ σ(A).

We begin by discussing a few basic results from Varga (1962), concerning the Perron-Frobenius theory of irreducible nonnegative matrices. Suppose A is an irreducible, nonnegative n x n matrix and let x ≥ 0 be a nonzero vector. We define the quantity

    r_x = min_{i : x_i > 0} ( Σ_{j=1}^n a_ij x_j ) / x_i.    (2.2.1)

Then r_x = sup{ρ : Ax ≥ ρx, ρ ≥ 0}, and we consider the quantity

    r = sup{r_x : x ≥ 0, x ≠ 0}.    (2.2.2)

Lemma 2.2.1. Suppose A is an irreducible, nonnegative n x n matrix. Then r > 0, and there is a positive eigenvector x such that Ax = rx.

Lemma 2.2.2. Suppose A is an irreducible, nonnegative n x n matrix. Then σ(A) = r, i.e., if λ is an eigenvalue of A, then |λ| ≤ r.

Lemma 2.2.3. Suppose A is an irreducible, nonnegative n x n matrix. If x and y are such that Ax = rx and Ay = ry, then x and y are linearly dependent.

Theorem 2.2.4. Suppose A is an irreducible, nonnegative n x n matrix.
Then there is a nonsingular matrix S such that

    A = S^{-1} ( B  0 ) S,
               ( 0  r )

where B is an (n − 1) x (n − 1) matrix such that σ(B) ≤ r, and r is not an eigenvalue of B.

Proof. Such matrices B and S are defined by Theorem 1.1.10. That σ(B) ≤ r follows from Lemma 2.2.2, and Lemma 2.2.3 implies that there is exactly one Jordan block corresponding to the eigenvalue r. We have to show that it has dimension 1. We do this as in Section 1.1, by showing that Equation (1.1.6) has no solution. Suppose x > 0 is the eigenvector of A of Lemma 2.2.1. We need to show that (rI − A)y = x has no solution y. Let w > 0 be a left-hand eigenvector of A, such that w(rI − A) = 0. Such a vector exists by Lemma 2.2.1, because w' is an eigenvector of A', which is also nonnegative. Hence we have

    0 = w(rI − A)y = wx > 0,

so that the system is not compatible. •

Lemma 2.2.5. Suppose P is an irreducible stochastic n x n matrix. Then r = 1.

Proof. That 1 is an eigenvalue of P follows from the fact that Pe = e, where e is a vector with all entries equal to 1. Now suppose λ is any eigenvalue of P, and let x be an eigenvector associated with λ. Let k be such that |x_k| ≥ |x_i|, for i = 1,...,n. Then Px = λx implies that

    |λ| · |x_k| ≤ Σ_{j=1}^n p_kj |x_j| ≤ ( Σ_{j=1}^n p_kj ) |x_k| = |x_k|,

so that |λ| ≤ 1. Hence r = σ(P) = 1. •

Lemma 2.2.6. Suppose Q is an irreducible substochastic n x n matrix. Then r < 1.

Proof. Suppose λ is an eigenvalue of Q, with eigenvector x. Suppose also that the rows and columns of Q have been permuted in such a way that

    |x_1| = ... = |x_k| > |x_{k+1}| ≥ ... ≥ |x_n|.

If k < n, there is at least one i ≤ k for which q_ij > 0 for some j > k, because Q is irreducible. Then Qx = λx implies

    |λ| · |x_i| ≤ Σ_{j=1}^k q_ij |x_j| + Σ_{j=k+1}^n q_ij |x_j| < ( Σ_{j=1}^n q_ij ) |x_i| ≤ |x_i|,

so that |λ| < 1. If k = n, there is at least one i ≤ n such that Σ_{j=1}^n q_ij < 1. Then Qx = λx implies

    |λ| · |x_i| ≤ Σ_{j=1}^n q_ij |x_j| = ( Σ_{j=1}^n q_ij ) |x_i| < |x_i|,

so that |λ| < 1 as well. •

Theorem 2.2.7.
Suppose P is a stochastic n x n matrix with m recurrent classes. Then there is a nonsingular n x n matrix S such that

    P = S^{-1} ( Q  0 ) S,    (2.2.4)
               ( 0  I )

where I is the m x m identity matrix and Q is an (n − m) x (n − m) matrix such that σ(Q) ≤ 1 and 1 is not an eigenvalue of Q.

Proof. By Theorem 2.1.3, every strongly connected class corresponds to an irreducible submatrix. The eigenvalues of the transient classes all have modulus less than 1, by Lemma 2.2.6, and each of the m recurrent classes has exactly one Jordan block J_1(1), by Theorem 2.2.4 and Lemma 2.2.5. Hence P is similar to the matrix of Equation (2.2.4). •

Finally, we will need some further results which refine Theorems 2.2.4 and 2.2.7.

Lemma 2.2.8. Suppose A is an aperiodic, irreducible, nonnegative n x n matrix, and suppose r = σ(A). If λ is an eigenvalue of A then either λ = r or |λ| < r.

Theorem 2.2.9. Suppose A is a periodic, irreducible nonnegative n x n matrix with period k ≥ 2, and suppose r = σ(A). Then A has exactly k eigenvalues of modulus r, namely

    λ_j = r exp(2πij/k),  for j = 1,...,k,

each with multiplicity 1. Moreover, if λ is any other eigenvalue of A, then |λ| < r.

Proof. Let T be the permutation matrix of Lemma 2.1.4, and let A = T B T^{-1} as in Equation (2.1.4). By Lemma 2.2.1, r is an eigenvalue of B and there is a positive vector x such that Bx = rx. Partition x as in Equation (2.1.4). It is then straightforward to verify that λ_j = r exp(2πij/k) is an eigenvalue of B with eigenvector

    x^(j) = ( exp(2πij/k) x_1  )
            ( exp(4πij/k) x_2  )
            ( ...              )
            ( exp(2πijk/k) x_k )

for j = 1,...,k. We will see that each of these k eigenvalues λ_j has multiplicity 1, as follows. Consider the matrix B of Equation (2.1.4). Then B^k = diag(C_j), j = 1,...,k, where

    C_1 = B_12 B_23 ... B_k1,  ...,  C_k = B_k1 B_12 ... B_{k−1,k}.

Now each C_j, for j = 1,...,k, is an aperiodic, irreducible nonnegative matrix. By Lemma 2.2.8, if λ is an eigenvalue of C_j, then λ = σ(C_j) or else |λ| < σ(C_j). Now if γ is an eigenvalue of B, then γ^k is an eigenvalue of B^k.
This implies that B^k must have r^k as an eigenvalue, with multiplicity at least k, and that any other eigenvalue λ of B^k satisfies |λ| < r^k. We conclude that the multiplicity of r^k is exactly k, because each of the k blocks C_j has exactly one eigenvalue λ = r^k. •

Corollary 2.2.10. Suppose P is a stochastic matrix such that every recurrent class is aperiodic. Then the matrix Q of Theorem 2.2.7 has σ(Q) < 1.

2.3. Limiting and fundamental matrices

In the context of Markov chains and Markov decision processes, Theorem 2.2.7 has some important consequences. Let P be the transition matrix of a Markov chain with n states. Then P is a stochastic n x n matrix, hence it satisfies Equation (2.2.4). We use this fact to provide an alternate proof of the following result of Kemeny and Snell (1960) that was fundamental in Blackwell (1962). Our approach is also different from that of Campbell and Meyer (1979), because our argument is entirely based on the decomposition property.

Consider a stochastic n x n matrix P with m recurrent classes. By Theorem 2.1.3, we can write

    P = T ( P_00  P_01  P_02  ...  P_0m ) T^{-1},
          ( 0     P_11  0     ...  0    )
          ( 0     0     P_22  ...  0    )
          ( ...                    ...  )
          ( 0     0     0     ...  P_mm )

where T is a permutation matrix. Here, the matrices B and C of Equation (2.1.2) are given by

    B = P_00  and  C = ( P_01  ...  P_0m ),

and the matrix D of Equation (2.1.3) is such that D_kk = P_kk, for k = 1,...,m. In the following theorem, a matrix P* is defined. Then the same permutation T is applied to the matrix P*, giving the blocks P*_ij for i, j = 0, 1,...,m.

Theorem 2.3.1. If P is a stochastic matrix with m recurrent classes, then there is a unique matrix P* such that

    P P* = P* P = P* P* = P*    (2.3.1)

and P*_kk > 0 for every recurrent class, for k = 1,...,m. Moreover,

    P* = S^{-1} ( 0  0 ) S    (2.3.2)
                ( 0  I )

and Z = (I − P + P*)^{-1} exists.

Proof. Decompose P as in Equation (2.2.4), and apply the same similarity transformation S to P*, to obtain

    P* = S^{-1} ( R_11  R_12 ) S,
                ( R_21  R_22 )

where R_11 is (n − m) x (n − m), R_22 is m x m, and R_12 and R_21 have the appropriate dimensions.
If m = n, then P = I = P* and we are done. Otherwise, using (2.2.4) and (2.3.1), we have (because B = I − Q is non-singular):

    P P* = P*    ⟹  Q R_11 = R_11    ⟹  R_11 = 0
                    Q R_12 = R_12    ⟹  R_12 = 0
    P* P = P*    ⟹  R_21 Q = R_21    ⟹  R_21 = 0
    P* P* = P*   ⟹  R_22 R_22 = R_22 ⟹  R_22 is a projection.

Now P*_kk > 0 for all recurrent classes implies that rank(P*) = m. This implies that rank(R_22) = m, and hence R_22 = I, since the only projection of full rank is the identity. So we have (2.3.2). Also, it trivially follows that

    I − P + P* = S^{-1} ( B  0 ) S    (2.3.3)
                        ( 0  I )

is indeed non-singular. •

We recall that P* is called the limiting matrix of P, while Z is called the fundamental matrix [Kemeny and Snell (1960)]. Let A = I − P. By Theorem 2.2.7, the matrix A satisfies Equation (1.3.1) with B = I − Q, and hence its group inverse A^# exists and is given by Equation (1.3.2). Then A^# is the matrix H of Blackwell (1962), who derived the three following properties using a partial Laurent expansion of the resolvent of P (see Chapter III). They follow immediately from our representation.

Corollary 2.3.2. Let A^# and P* be defined as above. Then

    (i)   (I − P) A^# = A^# (I − P) = I − P*;
    (ii)  A^# P* = P* A^# = 0; and
    (iii) (I − P*) A^# = A^# (I − P*) = A^#.

Proof. We show only (i); the rest follow in a similar manner and are omitted. •

A^# is also called the deviation matrix by Veinott (1974), while P* is called the eigenprojection of P at λ = 1, by Rothblum (1981). The following representations for P* and Z are useful and are easily derived using (2.2.4) and (2.3.2).

Corollary 2.3.3. P* = I − A A^# and Z = (A + P*)^{-1} = A^# + P*.

Proof. We have

    P* = I − A A^#

by (2.3.2), and

    A^# + P* = S^{-1} ( B^{-1}  0 ) S = (A + P*)^{-1},
                      ( 0       I )

by (2.3.3). •

Corollary 2.3.4. Z P* = P*.

Proof. Immediate from Corollary 2.3.3, since Z P* = (A^# + P*) P* = P*. •

The probabilistic interpretation of the limiting and deviation matrices is obtained using Cesaro limits. We give a new derivation of some results of Veinott (1974). Consider a sequence {A_i : i = 0,1,2,...} of n x n matrices, and let

    B_N = Σ_{i=0}^{N−1} A_i,  for N = 0,1,2,...
If lim_{N→∞} N^{-1} B_N = A exists, then A is called the Cesaro limit of order one of {A_i}, and we write

    lim_{N→∞} A_N = A (C, 1).

Further, if lim_{N→∞} B_N = B (C, 1), then we write

    Σ_{N=0}^∞ A_N = B (C, 1).

We investigate the limits of sums of powers of a matrix.

Lemma 2.3.5. Suppose I − Q is nonsingular. Then

    Σ_{i=0}^N Q^i = (I − Q)^{-1} (I − Q^{N+1}),  for N ≥ 0.

Proof. Because

    (I − Q) Σ_{i=0}^N Q^i = I − Q^{N+1}.    (2.3.4)
•

Lemma 2.3.6. Suppose lim_{N→∞} Q^N = 0. Then I − Q is nonsingular and

    Σ_{N=0}^∞ Q^N = (I − Q)^{-1}.

Proof. For N large enough, the matrix I − Q^{N+1} of Equation (2.3.4) is nonsingular because the entries of Q^{N+1} are small. This implies that I − Q is nonsingular. The result follows by taking limits. •

Lemma 2.3.7. Suppose Q^N is bounded, for N = 0,1,2,..., and I − Q is nonsingular. Then

    lim_{N→∞} Q^N = 0 (C, 1).

Proof. By Lemma 2.3.5, we have

    N^{-1} Σ_{i=0}^{N−1} Q^i = N^{-1} (I − Q)^{-1} (I − Q^N).

The result follows because Q^N is bounded. •

Lemma 2.3.8. Suppose Q^N is bounded, for N = 0,1,2,..., and I − Q is nonsingular. Then

    Σ_{N=0}^∞ Q^N = (I − Q)^{-1} (C, 1).

Proof. Let

    X = lim_{N→∞} N^{-1} Σ_{i=0}^{N−1} Σ_{j=0}^{i} Q^j.

By Lemma 2.3.5, we have

    X = lim_{N→∞} N^{-1} Σ_{i=0}^{N−1} (I − Q)^{-1} (I − Q^{i+1}).

Define

    Y = lim_{N→∞} N^{-1} Σ_{i=0}^{N−1} I  and  Z = lim_{N→∞} N^{-1} Σ_{i=0}^{N−1} Q^i.

Then Y = I, and Lemma 2.3.7 implies that Z = 0. Hence

    X = (I − Q)^{-1} (Y − Q Z) = (I − Q)^{-1},

and the result follows. •

Theorem 2.3.9. Suppose P is a stochastic matrix, and let P* be defined as in Theorem 2.3.1. Then

    lim_{N→∞} P^N = P* (C, 1).

Further, if P is aperiodic, then the ordinary limit can be used instead of (C, 1).

Proof. Decompose P as in Equation (2.2.4). Then

    P^N = S^{-1} ( Q^N  0 ) S.    (2.3.6)
                 ( 0    I )

Since P^N is a stochastic matrix, we have that P^N is bounded, and hence so is Q^N. Moreover, I − Q is nonsingular, by Theorem 2.2.7. The result follows from Lemma 2.3.7 and Equation (2.3.2). If P is aperiodic, then σ(Q) < 1, by Corollary 2.2.10. This implies that

    lim_{N→∞} Q^N = 0,

by Theorem 2.4.2. •

Theorem 2.3.10. Suppose P is a stochastic matrix, and let A = I − P. Then

    Σ_{N=0}^∞ (P^N − P*) = A^# (C, 1).
Further, if P is aperiodic, then the ordinary limit can be used instead of (C, 1).

Proof. Using Equations (2.2.4) and (2.3.2), we have

    P^N − P* = S^{-1} ( Q^N  0 ) S.    (2.3.7)
                      ( 0    0 )

The result follows from Lemma 2.3.8, with A^# defined by Equation (1.3.2). If P is aperiodic, the result follows from Lemma 2.3.6. •

The consequence of Theorem 2.3.9 is that, for a Markov chain with stochastic transition matrix P, the entry π_ij of the limiting matrix P* is the expected fraction of the time spent in state j, given that the system started in state i. By Theorem 2.3.10, the entry a_ij of the deviation matrix A^# is the difference between the expected number of visits to state j when the system starts in state i, and the expected number of visits to state j when the system starts with the stationary distribution.

We conclude this section with a simple numerical example, provided by the transition matrix

    P = ( 0  1 ),
        ( 1  0 )

which has no transient states and only one recurrent class (with period 2). We have that

    P* = (  1  1 ) ( 0  0 ) ( 1/2  −1/2 ) = ( 1/2  1/2 ).
         ( −1  1 ) ( 0  1 ) ( 1/2   1/2 )   ( 1/2  1/2 )

2.4. Geometric convergence in the aperiodic case

In the previous section, we expressed P* and A^# as Cesaro limits using sequences of powers of the transition matrix P. The rate of convergence of those limits is only harmonic, however, as the proofs of Theorems 2.3.9 and 2.3.10 indicate. When the transition matrix P is aperiodic, ordinary limits converge. In this section, we show that the rate of convergence is geometric in this case.

Consider the limiting matrix P*. By Theorem 2.3.9, if P is aperiodic then

    lim_{N→∞} P^N = P*.

Define the error matrix of the N-th power as

    E^(N) = P^N − P*.    (2.4.1)

From Equation (2.3.7), we have

    E^(N) = S^{-1} ( Q^N  0 ) S,    (2.4.2)
                   ( 0    0 )

where Q is as in Equation (2.2.4). Specifically, choose Q to be the Jordan canonical form, i.e., Q = diag(J_{n_i}(λ_i)), with

    |λ_1| ≥ |λ_2| ≥ ... ≥ |λ_k|,

where k is the number of Jordan blocks. Let γ = |λ_1|. By Corollary 2.2.10, we have γ < 1 for an aperiodic transition matrix P.
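Both convergence modes can be checked numerically. A minimal sketch in Python with NumPy (the second, aperiodic 2-state matrix is an illustrative assumption): for the periodic example above, the ordinary powers oscillate but their Cesaro averages converge to P*, while for an aperiodic chain each additional power reduces the error E^(N) by the factor γ just defined:

```python
import numpy as np

# Periodic case (the example above): P^N oscillates between I and P, but
# the Cesaro averages converge to the limiting matrix P*.
P = np.array([[0.0, 1.0],
              [1.0, 0.0]])
N = 1000
avg = sum(np.linalg.matrix_power(P, k) for k in range(N)) / N
assert np.allclose(avg, np.full((2, 2), 0.5))

# Aperiodic case (illustrative chain): ordinary powers converge, and the
# spectral-norm error shrinks by gamma, the modulus of the subdominant
# eigenvalue (here the eigenvalues of P are 1 and gamma).
P = np.array([[0.9, 0.1],
              [0.4, 0.6]])
Pstar = np.linalg.matrix_power(P, 500)          # P^N -> P* geometrically
gamma = sorted(abs(np.linalg.eigvals(P)))[0]

def err(N):
    # ||E^(N)||_2 = ||P^N - P*||_2 (spectral norm)
    return np.linalg.norm(np.linalg.matrix_power(P, N) - Pstar, 2)

ratio = err(21) / err(20)
assert abs(ratio - gamma) < 1e-6
```

The observed ratio of successive errors matches γ, which is the geometric rate established in this section.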
Lemma 2.4.1. [See e.g., Varga (1962), Equation (1.28)]. Suppose d_ij^(N) is the (i,j)-th entry of the matrix (J_n(λ))^N. Then

    d_ij^(N) = 0                       if i < j,
    d_ij^(N) = C(N, i−j) λ^{N−i+j}     if j ≤ i ≤ min(n, N+j),
    d_ij^(N) = 0                       if N+j < i ≤ n,

where C(N, k) = N! / (k! (N−k)!) denotes the binomial coefficient.

Theorem 2.4.2. Suppose σ(Q) < 1. Then lim_{N→∞} Q^N = 0.

Proof. Because

    Q^N = diag( (J_{n_i}(λ_i))^N )

and, by Lemma 2.4.1, we have

    lim_{N→∞} (J_{n_i}(λ_i))^N = 0

when |λ_i| < 1. •

Corollary 2.4.3. Suppose P is an aperiodic stochastic matrix. Then

    lim_{N→∞} E^(N) = 0.

Proof. By applying the theorem to Equation (2.4.2). •

The geometric rate of convergence is a direct consequence of Lemma 2.4.1. The idea is more precisely expressed using norms. Following Varga (1962), Section 1.3, we define the Euclidean norm of a real vector x as

    ||x||_2 = (x'x)^{1/2}.

Moreover, the spectral norm of a real matrix A is defined as

    ||A||_2 = sup_{x ≠ 0} ||Ax||_2 / ||x||_2.

Also, by Theorem 1.3 of Varga (1962), we have

    ||A||_2 = (σ(A'A))^{1/2}.

The following result, which is based on Lemma 3.1 of Varga (1962), is an immediate consequence of Lemma 2.4.1.

Theorem 2.4.4. Suppose P is an aperiodic stochastic matrix, and let γ = σ(Q). Then

    lim_{N→∞} ||E^(N+1)||_2 / ||E^(N)||_2 = γ.

This result implies that for large N, every power of P reduces the error by a factor γ, the modulus of the subdominant eigenvalue of P.

Now suppose P is irreducible. By Lemma 2.2.3, there is a unique row probability vector π such that πP = π. Then π is the stationary distribution of P, and πP* = π.

Corollary 2.4.5. Suppose P is an aperiodic, irreducible stochastic matrix, and let π^(0) be an arbitrary probability vector. Then

    π = lim_{N→∞} π^(0) P^N

is the stationary distribution of P.

Proof. Let π^(N) = π^(0) P^N. By Theorem 2.3.9, the limit exists when P is aperiodic, and is given by π = π^(0) P*. Since P* P = P*, we have πP = π, so that π is the unique eigenvector of P of Lemma 2.2.3, with eigenvalue 1.
Hence π is the stationary distribution of P. •

Lemma 2.4.6. Let e^(N) = π^(N) − π be the vector of errors on the N-th iterate π^(N). Then

    e^(N) = e^(0) E^(N),    (2.4.3)

where e^(0) = π^(0) − π is the initial error.

Proof. Since π = π^(0) P*, we have

    e^(N) = π^(N) − π = π^(0) (P^N − P*).

Now πP = π implies that π(P^N − P*) = 0, so we get

    e^(N) = π^(0) (P^N − P*) − π (P^N − P*) = (π^(0) − π)(P^N − P*) = e^(0) E^(N),

by Equation (2.4.1). •

To show that the rate of convergence of π^(N) is geometric, we use a measure ε^(N) of the error, such that ||e^(N)||_2 ≤ ε^(N). We choose

    ε^(N) = ||e^(0)||_2 · ||E^(N)||_2.    (2.4.4)

Corollary 2.4.7. The error ε^(N) converges to zero geometrically. More precisely,

    lim_{N→∞} ε^(N+1) / ε^(N) = γ,

where γ < 1 is the modulus of the subdominant eigenvalue of P.

Proof. By Theorem 2.4.4. •

The above results will be extended, in Chapter VI, to the case of a periodic Markov chain. Also, the rate of convergence for the deviation matrix A^# will be investigated in Chapter III, in the aperiodic case, when solving the evaluation equations of dynamic programming.

Chapter III
Discrete time Markov decision processes

In this chapter, we use the decomposition results of the previous chapters to provide new, simple and direct derivations of many important results of dynamic programming. Also, some new algorithms are described, to compute the gain, the bias and the higher order terms required in the policy improvement methods of dynamic programming. With a given stationary policy δ, a Markov decision process evolves according to a Markov chain with stochastic transition matrix P_δ. Hence the singular matrix A_δ = I − P_δ can be decomposed as in Equation (1.3.1).

In Section 3.1, the decomposition of A_δ is used to give a direct derivation of the Laurent expansion of the resolvent [see Veinott (1969) and (1974)], and of the discounted value function, when the interest rate is small. The boundedness of the powers of the transition matrix P_δ is not used in this derivation.
This property is crucial, however, for the probabilistic interpretation of the terms of the series expansion, especially the gain and the bias terms. Then, in Section 3.2, we formulate the problem of finding the terms of a truncated series as a finite system of linear equations. Using matrix decomposition again, we show that the packed vector (of the terms of the truncated series) can be solved using the Drazin inverse of the packed matrix of coefficients. Next, the computation of the terms of the series expansion is considered in Section 3.3. We show how the matrix decomposition algorithms of Chapter I can be used to compute the gain, the bias and the higher order terms.

In Section 3.4, we show that when the transition matrix is aperiodic, the gain and the bias can be computed iteratively using the power method, and that the convergence is geometric. We also show that the error on the approximate gain and bias is related to the residual error by a system of equations of the same type as the evaluation equations. Finally, the application of the Drazin inverse approach to policy improvement algorithms is explored in Section 3.5. We provide a new proof of the result of Veinott (1974) that at most n − m terms are needed to compare the value of two policies, with respect to the discount optimality criterion, where n is the number of states and m is the number of recurrent classes.

3.1. Markov reward processes

Consider a homogeneous Markov chain {X(t) : t = 0,1,2,...} with finite state space N = {1,...,n} and stochastic transition matrix P, where

    p_ij = Prob{X(t+1) = j | X(t) = i},  i, j = 1,...,n.

Suppose also that at the end of every period t, the system earns a reward r_i, where i = X(t). We say that r is the vector of immediate rewards. Such a Markov chain with rewards is called a Markov reward process, and is denoted by the triplet (N, r, P). Now suppose an arbitrary random variable Y is defined on the Markov chain {X(t)}.
We define the conditional expectation operator E_i as E_i(Y) = E{Y | X(0) = i}, for i = 1,...,n. Let ρ > 0 be the interest rate for one period. Then β = (1 + ρ)^{-1} is the one period discount factor, with 0 < β < 1. We are interested in the present value, at time t = 0, of the total reward earned over an infinite horizon. More precisely, we define the expected total discounted reward function v(ρ) such that

    v_i(ρ) = E_i( Σ_{k=0}^∞ β^{k+1} r_{X(k)} ),  i = 1,...,n.

The expectation can be expressed directly using matrix notation, and we get

    v(ρ) = β Σ_{k=0}^∞ β^k P^k r = β (I − βP)^{-1} r.    (3.1.1)

The infinite series in Equation (3.1.1) converges by Lemma 2.3.6, because

    lim_{k→∞} β^k P^k = 0,

with 0 < β < 1. We are interested in the case when β is in the neighbourhood of 1. Following Veinott (1969) and (1974), we replace the discount factor β by the interest rate ρ, so that

    v(ρ) = (1 + ρ)^{-1} { I − (1 + ρ)^{-1} P }^{-1} r
         = { (1 + ρ) I − P }^{-1} r
         = { ρI + A }^{-1} r,    (3.1.2)

where A = I − P. The matrix R_ρ = (ρI + A)^{-1} is called the resolvent of A. We now provide a direct derivation of the Laurent expansion for the resolvent [Theorem 2 of Veinott (1969) and Theorem 3 of Veinott (1974)] using algebraic properties of the matrix A and its Drazin inverse A^#.

Theorem 3.1.1. Let λ be the non-zero eigenvalue of A with smallest modulus. If 0 < ρ < |λ| then

    R_ρ = ρ^{-1} P* + Σ_{i=0}^∞ (−ρ)^i (A^#)^{i+1}.    (3.1.3)

Proof. By definition,

    R_ρ = (ρI + A)^{-1} = S^{-1} ( (ρI + B)^{-1}  0        ) S.
                                 ( 0              ρ^{-1} I )

The ρ^{-1} I block gives the first term, ρ^{-1} P*. Now consider the block (ρI + B)^{-1}:

    (ρI + B)^{-1} = [ B (ρB^{-1} + I) ]^{-1} = (I + ρB^{-1})^{-1} B^{-1}.

By hypothesis, σ(ρB^{-1}) = ρ/|λ| < 1, so that

    (I + ρB^{-1})^{-1} = Σ_{i=0}^∞ (−ρ)^i B^{-i},

by Lemma 2.3.6. But

    (A^#)^{i+1} = S^{-1} ( B^{-(i+1)}  0 ) S,
                         ( 0           0 )

so the remaining block contributes Σ_{i=0}^∞ (−ρ)^i (A^#)^{i+1}, which gives the desired result. •

Corollary 3.1.2. Let w_{−1}, w_0, w_1, ..., be vectors such that

    w_{−1} = P* r    (3.1.4)

and

    w_i = (−1)^i (A^#)^{i+1} r,  i = 0,1,2,...    (3.1.5)

Then

    v(ρ) = ρ^{-1} w_{−1} + w_0 + Σ_{i=1}^∞ ρ^i w_i.    (3.1.6)

Proof. By substituting (3.1.3) into Equation (3.1.2). •
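The Laurent expansion can be checked numerically on a small chain. A minimal sketch in Python with NumPy (the 3-state matrix and reward vector are illustrative assumptions), using the representation A^# = (A + P*)^{-1} − P* of Corollary 2.3.3:

```python
import numpy as np

# Illustrative 3-state aperiodic chain and reward vector (assumed data).
P = np.array([[0.5, 0.5, 0.0],
              [0.2, 0.3, 0.5],
              [0.0, 0.4, 0.6]])
r = np.array([1.0, 0.0, 2.0])
A = np.eye(3) - P

# P* as the ordinary limit of P^N (the chain is aperiodic), and the group
# inverse A# = (A + P*)^{-1} - P*, as in Corollary 2.3.3.
Pstar = np.linalg.matrix_power(P, 200)
Ad = np.linalg.inv(A + Pstar) - Pstar

g = Pstar @ r                      # w_{-1} = P* r
h = Ad @ r                         # w_0 = A# r
assert np.allclose(g + A @ h, r)   # these terms satisfy g + A h = r

rho = 0.01                         # interest rate, well below |lambda|
v_exact = np.linalg.solve(rho * np.eye(3) + A, r)

# Truncated Laurent series (3.1.3) applied to r:
# v = rho^{-1} P* r + sum_i (-rho)^i (A#)^{i+1} r.
v_series = g / rho
term = h
for i in range(30):
    v_series = v_series + (-rho) ** i * term
    term = Ad @ term

assert np.allclose(v_exact, v_series)
```

Thirty terms suffice here because the series converges like (ρ σ(A^#))^i, which is tiny for ρ = 0.01.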
This Laurent expansion of the discounted reward function v(ρ) is fundamental for the notion of discount optimality that will be discussed in Section 3.5. Observe that our derivation uses only the property that ind(A) = 1, that is, that the matrix A can be decomposed as in Equation (1.3.1). A special case of Theorem 3.1.1 is when the matrix A is nonsingular [i.e., ind(A) = 0]. This is the case, for example, if all the recurrent subchains of P are substochastic. Then Equation (3.1.3) becomes

    R_ρ = Σ_{i=0}^∞ (−ρ)^i (A^{-1})^{i+1}.

Theorem 3.1.1 is itself a special case of a result of Rothblum (1981) [his Theorem 3.1], in which a matrix A with ind(A) = k > 0 has terms in ρ^{-i}, for i = 0,...,k.

When P is a stochastic matrix, the terms w_{−1} and w_0 have a probabilistic interpretation of particular interest. Define

    g = w_{−1} = P* r    (3.1.7)

and

    h = w_0 = A^# r.    (3.1.8)

The term g is called the gain of the process and h is the bias.

Theorem 3.1.3. Suppose P is a stochastic matrix. Then

    g = lim_{k→∞} P^k r (C, 1)

and

    h = Σ_{k=0}^∞ (P^k r − g) (C, 1).

Moreover, if P is aperiodic then the result is valid using ordinary limits.

Proof. By Theorems 2.3.9 and 2.3.10. •

Note that in addition to the fact that ind(A) = 1, this result uses the fact that the powers of P are bounded matrices. Theorem 3.1.3 suggests the following interpretation for the gain and bias terms. The entry g_i is the expected reward earned by the system per time period, in the long run, given that the system starts in state i. The entry h_i, on the other hand, is the total difference between the expected earnings in one period and the long run average earning, given that the system starts in state i, summed over all periods. We conclude this section with a very simple derivation of an equation which is well known for computing the gain and the bias [see Howard (1960) and Blackwell (1962)].
This equation is usually derived using a partial series expansion, but here it is based on the generalized inverse representation for P*.

Proposition 3.1.4. Let g and h be defined by (3.1.7) and (3.1.8). Then g and h satisfy

    h = r − g + Ph.    (3.1.9)

Proof. We use the fact that P* = I − A A^#. By (3.1.7), we have

    g = (I − A A^#) r = r − A (A^# r) = r − (I − P) h,

where the last equality follows from (3.1.8). The result follows by rearranging terms. •

3.2. Singular systems of equations

Let A be an n x n matrix of index 1 [i.e., decomposition (1.3.1) is valid], and suppose b is a given column vector. Recall that the projection matrix W = I − A A^# is given by Equation (1.2.8). In the special case A = I − P with P a (stochastic) transition matrix, we have W = P*, the limiting matrix.

Lemma 3.2.1. The (singular) system of equations Ax = b has a solution iff Wb = 0. Further, if Wb = 0, then for arbitrary y,

    x = A^# b + W y    (3.2.1)

is a solution of Ax = b.

Proof. Let x̃ = Sx and b̃ = Sb, partitioned as

    x̃ = ( x̃_1 )  and  b̃ = ( b̃_1 ),
        ( x̃_2 )           ( b̃_2 )

so that Ax = b is equivalent to

    ( B  0 ) ( x̃_1 ) = ( b̃_1 ).
    ( 0  0 ) ( x̃_2 )   ( b̃_2 )

This system has a solution iff b̃_2 = 0, which is equivalent to Wb = 0, because

    Wb = S^{-1} ( 0  0 ) S b = S^{-1} ( 0   ).
                ( 0  I )              ( b̃_2 )

Now we assume that b̃_2 = 0 and we take x = A^# b + W y, for some arbitrary vector y. We have

    Ax = A A^# b + A W y = S^{-1} ( b̃_1 ) + 0 = b.  •
                                  ( 0   )

Theorem 3.2.2. Let {b_i, i = −1,0,1,...} be a sequence of n-vectors and W = I − A A^#. Then the system of equations

    A x_{−1} = b_{−1}    (3.2.2)

    x_{i−1} + A x_i = b_i,  i = 0,1,2,...    (3.2.3)

has a solution iff

    W b_{−1} = 0.    (3.2.4)

Furthermore, if (3.2.4) holds, the solution is unique and is given by

    x_i = A^# (b_i − x_{i−1}) + W b_{i+1},  i = −1,0,1,2,...    (3.2.5)

where x_{−2} = 0.

Proof. By Lemma 3.2.1, Equation (3.2.2) has a solution iff W b_{−1} = 0. Now assume (3.2.2) has a solution. We show by induction on i that under (3.2.4), a solution exists and (3.2.5) gives its unique value. The induction hypothesis is that x_{k−1} is unique and that (3.2.3) has a solution for i = k, for some k ≥ −1.
This is trivially true for $k = -1$ with $x_{-2} = 0$, because (3.2.2) has a solution, by hypothesis. Rewrite (3.2.3) with $i = k$ as $A x_k = b_k - x_{k-1}$. By Lemma 3.2.1, one solution is

$x_k = A^\#(b_k - x_{k-1}) + W y_k$   (3.2.6)

where $y_k$ is an arbitrary vector. Now apply Lemma 3.2.1 to Equation (3.2.3) with $i = k + 1$. It has a solution iff

$W(b_{k+1} - x_k) = 0.$

Substituting (3.2.6), we have

$W\big(b_{k+1} - A^\#(b_k - x_{k-1}) - W y_k\big) = 0.$

But $W A^\# = 0$ (see Corollary 2.3.2) and $W^2 = W$, so we must have

$W y_k = W b_{k+1},$

and hence $x_k$ is uniquely determined by (3.2.5). ∎

We now apply this to the Markov reward process $(N, r, P)$.

Corollary 3.2.3. Let $A = I - P$. Then the terms $g$, $h$ and $w_i$, $i = 1, 2, \ldots$, of the series (3.1.6) are the unique solution of the recurrence

$A g = 0$   (3.2.7)

$g + A h = r$   (3.2.8)

$w_{i-1} + A w_i = 0, \quad i = 1, 2, \ldots$   (3.2.9)

with $g = w_{-1}$ and $h = w_0$.

Proof. Apply Theorem 3.2.2 with $b_{-1} = 0$, $b_0 = r$ and $b_i = 0$, for $i = 1, 2, \ldots$ Here $W = P^*$, so we get $g = P^* r$, $h = A^\# r$ and, assuming $w_{i-1} = (-1)^{i-1}(A^\#)^i r$, Equation (3.2.6) gives $w_i = (-1)^i (A^\#)^{i+1} r$, as in Equation (3.1.5). ∎

This last result unifies some previous approaches to the problem. Howard (1960) derived (3.2.7) and (3.2.8) using z-transforms and made $h$ unique by setting one component to zero in each recurrent class. Blackwell (1962) derived both equations using the series (3.1.1) and some limiting argument as $\beta \nearrow 1$. He also used the extra conditions $P^* g = P^* r$ and $P^* h = 0$. We saw that these conditions are necessary and sufficient for (3.2.8) and (3.2.9) with $i = 1$ to have a solution, respectively. Finally, Veinott (1969) derived (3.2.7), (3.2.8) and (3.2.9) using the series expansion (3.1.6). As will be seen in the next section, truncated recursions as in (3.2.2) and (3.2.3), but with a finite number of terms, are also of interest. They form a finite system of linear equations. Let $A$ be an $n \times n$ complex matrix such that $\mathrm{ind}(A) \leq 1$ (i.e., $A$ is either non-singular or else it satisfies equation (1.3.1)).
For $k \geq 1$, we define the matrix $A(k)$ as the block Jordan matrix of order $k$ that has $A$ for its diagonal blocks. For example, $A(1) = A$ and for $k = 4$, we have

$A(4) = \begin{pmatrix} A & 0 & 0 & 0 \\ I & A & 0 & 0 \\ 0 & I & A & 0 \\ 0 & 0 & I & A \end{pmatrix}.$

We are interested in solving the system of equations

$A(k)\, x(k) = b(k),$   (3.2.10)

where $x(k)$ and $b(k)$ are partitioned so that they be conformable with $A$. For example, with $k = 4$ we have

$x(4) = \begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{pmatrix} \quad \text{and} \quad b(4) = \begin{pmatrix} b_1 \\ b_2 \\ b_3 \\ b_4 \end{pmatrix},$

where $x_1, \ldots, x_4$ and $b_1, \ldots, b_4$ are $n$-vectors. This system can also be written recursively as

$A x_1 = b_1$   (3.2.11)

and

$x_{i-1} + A x_i = b_i, \quad i = 2, \ldots, k.$   (3.2.12)

If the matrix $A$ is non-singular, then (3.2.11) would have the unique solution $x_1 = A^{-1} b_1$, and (3.2.12), which can be written as

$A x_i = b_i - x_{i-1},$   (3.2.13)

would have the unique solution $x_i = A^{-1}(b_i - x_{i-1})$, $i = 2, \ldots, k$. Expanding this iterative solution, we can easily verify that it can be expressed as $x(k) = A(k)^{-1} b(k)$, where $A(k)^{-1}$ is a block lower triangular matrix such that for $i \geq j$, block $(i, j)$ is given by

$(-1)^{i-j} (A^{-1})^{i-j+1}.$   (3.2.14)

For example, with $k = 4$ we have

$A(4)^{-1} = \begin{pmatrix} A^{-1} & 0 & 0 & 0 \\ -A^{-2} & A^{-1} & 0 & 0 \\ A^{-3} & -A^{-2} & A^{-1} & 0 \\ -A^{-4} & A^{-3} & -A^{-2} & A^{-1} \end{pmatrix}.$

We now extend these results to the case when $A$ is singular. We assume that $\mathrm{ind}(A) = 1$, so that $A^\#$ exists, and we define $W = I - AA^\#$ as before. We show below that an analogous representation for $A(k)^D$, with $A^\#$ replacing $A^{-1}$ in (3.2.14), holds, but that it does not give a solution to the system of equations (3.2.10).

Theorem 3.2.4. Suppose $\mathrm{ind}(A) = 1$. Then the system (3.2.10) has a solution iff $W b_1 = 0$. In this case, $x_1, \ldots, x_{k-1}$ are uniquely determined and are given by

$x_1 = A^\# b_1 + W b_2$   (3.2.15)

and

$x_i = A^\#(b_i - x_{i-1}) + W b_{i+1}, \quad i = 2, \ldots, k-1,$   (3.2.16)

while the last component is

$x_k = A^\#(b_k - x_{k-1}) + W y$   (3.2.17)

for an arbitrary vector $y$.

Proof. Apply Lemma 3.2.1 as in Theorem 3.2.2 to obtain existence and uniqueness up to $i = k - 1$. For $i = k$, there is no next equation to satisfy, so the vector $y$ is arbitrary. ∎
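The solution formulas (3.2.15)-(3.2.17) can be exercised directly. In the sketch below (Python with numpy), the two-state chain and the right-hand sides $b_1, b_2, b_3$ are illustrative choices, not data from the text; $b_1$ is taken in the range of $A$ so that $W b_1 = 0$, and $y = 0$ in the last component.

```python
import numpy as np

# Index-1 singular A = I - P for an illustrative two-state chain.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
A = np.eye(2) - P
pi = np.array([2/3, 1/3])
W = np.outer(np.ones(2), pi)           # here W = I - A A#  equals  P*
Ad = np.linalg.inv(A + W) - W          # Drazin inverse A#

# Right-hand sides for the k = 3 system (3.2.10); W b1 = 0 holds because
# b1 lies in the range of A.
b1 = A @ np.array([1.0, 2.0])
b2 = np.array([0.5, -1.0])
b3 = np.array([2.0, 3.0])

# Equations (3.2.15)-(3.2.17), taking y = 0 in the last component.
x1 = Ad @ b1 + W @ b2
x2 = Ad @ (b2 - x1) + W @ b3
x3 = Ad @ (b3 - x2)

# The block lower bidiagonal system of Theorem 3.2.4 is satisfied:
assert np.allclose(A @ x1, b1)
assert np.allclose(x1 + A @ x2, b2)
assert np.allclose(x2 + A @ x3, b3)
```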
This result is readily applied to the Laurent expansion for Markov reward processes. Let us take $A = I - P$, where $P$ is, as before, the transition matrix of a Markov chain with immediate reward vector $r$.

Corollary 3.2.5. The terms $w_j$, $j = -1, 0, \ldots, i-1$ of the series (3.1.6) are uniquely determined by a system of $i + 2$ equations.

Proof. To obtain the terms $w_{-1}, w_0, \ldots, w_i$, let us define

$x_{j+2} = w_j, \quad j = -1, 0, \ldots, i,$

and

$b_{j+2} = \begin{cases} 0 & \text{if } j = -1, \\ r & \text{if } j = 0, \\ 0 & \text{if } j = 1, 2, \ldots, i. \end{cases}$

Using equations (3.2.7), (3.2.8) and (3.2.9), we form the system (3.2.10) with $k = i + 2$. For example, for $i = 2$, we have $k = 4$ and

$\begin{pmatrix} A & 0 & 0 & 0 \\ I & A & 0 & 0 \\ 0 & I & A & 0 \\ 0 & 0 & I & A \end{pmatrix} \begin{pmatrix} w_{-1} \\ w_0 \\ w_1 \\ w_2 \end{pmatrix} = \begin{pmatrix} 0 \\ r \\ 0 \\ 0 \end{pmatrix}.$

Clearly, the first equation has a solution (because $b_1 = 0$) and so Theorem 3.2.4 applies. Hence $w_{-1}, \ldots, w_{i-1}$ are uniquely determined and are given by (3.2.15) and (3.2.16). The last component $w_i$ is not unique and is given by (3.2.17). ∎

We will now see how the solution of (3.2.10) can be expressed in terms of the Drazin inverse of the matrix $A(k)$, thus extending equation (3.2.14) to the singular case.

Lemma 3.2.6. If $A$ is singular but satisfies (1.3.1), then $A(k)$, $k \geq 1$, has index $k$ and its Drazin inverse $A(k)^D$ is a block lower triangular matrix such that its block $(i, j)$, $i \geq j$, is given by

$(-1)^{i-j} (A^\#)^{i-j+1}.$   (3.2.18)

Proof. We use the fact that $A$ can be decomposed as in (1.3.1), where $B$ is non-singular. Let $S(k)$ be the block diagonal matrix that has all diagonal blocks equal to the matrix $S$ of Equation (1.3.1). We define a new matrix $C(k)$ as

$C(k) = S(k)\, A(k)\, S(k)^{-1},$

so that $A(k) = S(k)^{-1} C(k) S(k)$. We argue that there is a permutation matrix $Q(k)$ such that

$D(k) = Q(k)\, C(k)\, Q(k)^{-1} = \begin{pmatrix} B(k) & 0 \\ 0 & J(k) \end{pmatrix},$   (3.2.19)

where $B(k)$ and $J(k)$ are both block Jordan matrices of order $k$, with their diagonal blocks respectively equal to $B$ and $0$. We show how to produce such a permutation in the simple case $k = 2$. It will be clear that the same is true for $k > 2$.
We have

$C(2) = \begin{pmatrix} B & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ I & 0 & B & 0 \\ 0 & I & 0 & 0 \end{pmatrix}.$

Let us now consider a directed graph that has one node associated with each block row of $C(2)$, and with an arc going from node $i$ to node $j$ whenever block $(i, j)$ is non-zero. The graph of $C(2)$ is shown in Figure 3.2.1. In general, the graph of $C(k)$ has the following structure: every node $i$ such that $3 \leq i \leq 2k$ has an arc going to node $i - 2$ with label $I$, and every odd-numbered node $i$ has an arc going to itself, with label $B$. Hence we see that the nodes of the graph can be grouped into two subsets: the odd numbered nodes and the even numbered nodes. The desired permutation is simply to reorder the nodes as $(1, 3, \ldots, 2k-1, 2, 4, \ldots, 2k)$, because the subgraph associated with the odd-numbered nodes is the graph of $B(k)$, and the subgraph associated with the even-numbered nodes is the graph of $J(k)$.

[Figure 3.2.1. Directed graph for $C(2)$.]

So we have, for $k = 2$:

$D(2) = \begin{pmatrix} B & 0 & 0 & 0 \\ I & B & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & I & 0 \end{pmatrix},$

which is indeed of the form (3.2.19). Now it is easy to verify that $J(k)^k = 0$ and that $J(k)$ is of index $k$. Then

$D(k)^D = \begin{pmatrix} B(k)^{-1} & 0 \\ 0 & 0 \end{pmatrix}$

by definition, as in Lemma 1.2.6. Now $B(k)^{-1}$ is given by Equation (3.2.14), and by a graph argument similar to the above we see that $A(k)^D$ has the required shape as in equation (3.2.18). ∎

It is straightforward to verify that $A(k) A(k)^D$ is a block diagonal matrix with all diagonal blocks equal to $AA^\#$. For $k = 4$, we have

$A(4) A(4)^D = \begin{pmatrix} AA^\# & 0 & 0 & 0 \\ 0 & AA^\# & 0 & 0 \\ 0 & 0 & AA^\# & 0 \\ 0 & 0 & 0 & AA^\# \end{pmatrix}.$

The consequence is that $y(k) = A(k)^D b(k)$ is not a solution of (3.2.10) in general, because $AA^\# b_i \neq b_i$ for $i \geq 2$. Nonetheless, we can use $A(k)^D$ to construct a solution. Define a new matrix $R(k)$ whose only non-zero blocks are on the superdiagonal and are all equal to $W = I - AA^\#$. For example, $R(1) = 0$ and with $k = 4$ we have

$R(4) = \begin{pmatrix} 0 & W & 0 & 0 \\ 0 & 0 & W & 0 \\ 0 & 0 & 0 & W \\ 0 & 0 & 0 & 0 \end{pmatrix}.$

We show the following result.

Theorem 3.2.7.
If the system $A x_1 = b_1$ has a solution, then the system $A(k)\, x(k) = b(k)$ has a solution given by

$x(k) = [A(k)^D + R(k)]\, b(k).$   (3.2.20)

Moreover, $x_1, x_2, \ldots, x_{k-1}$ are uniquely determined, while $x_k + Wy$ is also a solution, for any vector $y$.

Proof. Expanding the matrix product in (3.2.20) gives precisely the solution of Theorem 3.2.4. ∎

Corollary 3.2.8. The terms $w_{-1}, \ldots, w_i$ of the series (3.1.6) are given by $[A(k)^D + R(k)]\, b(k)$, where $k = i + 2$ and $b(k)$ is defined as in Corollary 3.2.5.

3.3. Computation of gain, bias and other terms

A consequence of Corollary 3.2.3 is that the terms $g$, $h$ and $w_i$, for $i = 1, 2, \ldots$, of the series (3.1.6) can be computed recursively, by solving successive systems of linear equations, all with the same singular matrix of coefficients. One such algorithm is described in Veinott (1969), Page 1651. It requires parsing the transition matrix $P$ into its recurrent subchains and transient states. Gaussian elimination is used to compute the solutions. Another approach is to use the matrix decomposition algorithms of Chapter I. Applying the fast decomposition algorithm of Figure 1.3.5 to the matrix $A = I - P$, both the gain $g$ and the bias $h$ can be computed directly using the fast solution algorithm of Figure 1.4.3 with the right hand side $b = r$, where $r$ is the vector of immediate rewards. The terms $w_i$, for $i = 1, 2, \ldots$, can be computed as follows. Observe that at Step 3.2 of the fast solution algorithm, the bias is given by

$h = PL \begin{pmatrix} z_1 \\ 0 \end{pmatrix},$

where $z_1$ was computed at Step 2.3. Let $z_1^{(0)} = z_1$.

Lemma 3.3.1. Let $z_1^{(i+1)}$ be given by

$z_1^{(i+1)} = -B_{11}^{-1} z_1^{(i)},$   (3.3.1)

where the product by $B_{11}^{-1}$ is computed as in Figure 1.4.2. Then

$w_i = PL \begin{pmatrix} z_1^{(i)} \\ 0 \end{pmatrix}, \quad i = 0, 1, 2, \ldots,$   (3.3.2)

where $P$ and $L$ were obtained with the fast decomposition algorithm.

Proof. Recall that the fast decomposition algorithm of Figure 1.3.5 factorized the matrix $A$ as $A = PLUQ$.
Then the matrix $B$, although never explicitly computed, was defined as $B = UQPL$, so that

$A = (PL)\, B\, (PL)^{-1}.$

Using Equation (1.3.8), we have

$A^\# = (PL) \begin{pmatrix} B_{11}^{-1} & B_{11}^{-2} B_{12} \\ 0 & 0 \end{pmatrix} (PL)^{-1}.$   (3.3.3)

We show by induction that Equation (3.3.2) holds. We know it holds for $i = 0$, the bias term. By Equation (3.1.5), we have

$w_{i+1} = (-A^\#)\big((-1)^i (A^\#)^{i+1} r\big) = -A^\# w_i, \quad i = 0, 1, 2, \ldots$

Hence

$w_{i+1} = -(PL) \begin{pmatrix} B_{11}^{-1} & B_{11}^{-2} B_{12} \\ 0 & 0 \end{pmatrix} (PL)^{-1} (PL) \begin{pmatrix} z_1^{(i)} \\ 0 \end{pmatrix} = (PL) \begin{pmatrix} -B_{11}^{-1} z_1^{(i)} \\ 0 \end{pmatrix},$

and the result follows from Equation (3.3.1). ∎

The amount of arithmetic required is about the same for both the method of Veinott (1969) and our fast solution algorithm (when the number of recurrent classes is small), provided that only a small number of terms are required (less than $n$). We have no computational experience from which to compare their relative performance, however.

3.4. Iterative computation of gain and bias

When only the gain and bias terms are required [e.g., when using the average reward criterion as in Howard (1960) and Blackwell (1962)], and when the transition matrix $P$ is large and sparse, it may be appropriate to use an iterative method to compute $g$ and $h$. We describe such an algorithm based on the powers of the transition matrix, and we show that its rate of convergence is geometric when $P$ is aperiodic, using the results of Chapter II. Also, a stopping criterion is proposed, based on a sensitivity analysis of the evaluation equations. The iterative algorithm is described in Figure 3.4.1. We call it the power method because, as we will see, it produces the same sequence of iterates for the gain as the standard power method for computing an eigenvector.

Step 0. Set $g^{(0)} = r$, $x^{(1)} = r$ and $N = 1$.
Step 1. Compute $x^{(N+1)} = r + P x^{(N)}$.
Step 2. Compute $g^{(N)} = x^{(N+1)} - x^{(N)}$.
Step 3. Compute $h^{(N)} = x^{(N)} - N g^{(N)}$.
Step 4. If $\|g^{(N)} - g^{(N-1)}\|_2 < \delta$ then stop, else add 1 to $N$ and go to Step 1.

Figure 3.4.1. Power method for gain and bias

Theorem 3.4.1. Suppose $P$ is aperiodic, and let the sequences $g^{(N)}$ and $h^{(N)}$ be defined as in Figure 3.4.1.
Then

$\lim_{N \to \infty} g^{(N)} = g \quad \text{and} \quad \lim_{N \to \infty} h^{(N)} = h,$

where $g = P^* r$ and $h = A^\# r$.

Proof. From Step 1, we have

$x^{(N+1)} = \sum_{i=0}^{N} P^i r,$

and hence

$g^{(N)} = P^N r,$   (3.4.1)

which converges when $P$ is aperiodic, by Theorem 2.3.9. Further, we have

$h^{(N)} = \sum_{i=0}^{N-1} P^i r - N P^N r = \sum_{i=0}^{N-1} (P^i - P^*) r - N (P^N - P^*) r.$   (3.4.2)

The first term converges to $h = A^\# r$, by Theorem 2.3.10, and the second term goes to zero. ∎

Lemma 3.4.2. Suppose $P$ is aperiodic, and let $E^{(N)} = P^N - P^*$, as in Equation (2.4.1). Then

$e_1^{(N)} = g^{(N)} - g = E^{(N)} r$   (3.4.3)

and

$e_2^{(N)} = h^{(N)} - h = -(A^\# + NI)\, E^{(N)} r.$   (3.4.4)

Proof. Equation (3.4.3) is immediate from (3.4.1). To show (3.4.4), recall that

$h = \sum_{i=0}^{\infty} (P^i - P^*) r$

by Theorem 2.3.10, when $P$ is aperiodic. Hence from (3.4.2), we have

$h^{(N)} - h = \sum_{i=0}^{N-1} (P^i - P^*) r - \sum_{i=0}^{\infty} (P^i - P^*) r - N E^{(N)} r = -\sum_{i=N}^{\infty} (P^i - P^*) r - N E^{(N)} r = -\Big( \sum_{i=0}^{\infty} (P^i - P^*) \Big)(P^N - P^*) r - N E^{(N)} r = -A^\# E^{(N)} r - N E^{(N)} r,$

and the result follows. ∎

To show that the convergence is geometric, let us measure the error with

$\epsilon_1^{(N)} = \|E^{(N)}\|_2 \cdot \|r\|_2$   (3.4.5)

and

$\epsilon_2^{(N)} = (\|A^\#\|_2 + N)\, \epsilon_1^{(N)}.$

Then $\|e_1^{(N)}\|_2 \leq \epsilon_1^{(N)}$ and $\|e_2^{(N)}\|_2 \leq \epsilon_2^{(N)}$.

Corollary 3.4.3. Suppose $P$ is aperiodic. Then

$\lim_{N \to \infty} \frac{\epsilon_1^{(N+1)}}{\epsilon_1^{(N)}} = \gamma \quad \text{and} \quad \lim_{N \to \infty} \frac{\epsilon_2^{(N+1)}}{\epsilon_2^{(N)}} = \gamma,$   (3.4.6)

where $\gamma < 1$ is the modulus of the subdominant eigenvalue of $P$.

Proof. By Theorem 2.4.4. ∎

Although this result indicates that the convergence is asymptotically geometric, it is to be observed that the factor $\|A^\#\|_2 + N$ in Equation (3.4.6) implies that the error on the bias is always greater than the error on the gain. Now that the rate of convergence of the algorithm has been determined, we still have to justify the stopping rule given at Step 4 of Figure 3.4.1, and to specify some guidelines for the choice of the parameter $\delta$. We proceed as follows, using residual errors. Let

$\delta_1^{(N)} = A g^{(N)}$   (3.4.7)

and

$\delta_2^{(N)} = g^{(N)} + A h^{(N)} - r.$   (3.4.8)

These can be computed easily from $g^{(N)}$ and $h^{(N)}$, although we will see that their computation is not necessary.
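The power method of Figure 3.4.1 is only a few lines of code. The sketch below (Python with numpy; the tolerance and the two-state test chain are illustrative choices, not data from the text) follows Steps 0-4 and compares the limits with the direct formulas $g = P^* r$ and $h = A^\# r$ of Theorem 3.4.1.

```python
import numpy as np

def power_gain_bias(P, r, delta=1e-10, max_iter=10_000):
    """Power method of Figure 3.4.1: iterate x <- r + P x, and read off
    g^(N) = x^(N+1) - x^(N) and h^(N) = x^(N) - N g^(N)."""
    x = r.copy()                        # Step 0: x^(1) = r
    g_old = None
    for N in range(1, max_iter):
        x_next = r + P @ x              # Step 1
        g = x_next - x                  # Step 2: g^(N) = P^N r
        h = x - N * g                   # Step 3
        if g_old is not None and np.linalg.norm(g - g_old) < delta:
            break                       # Step 4: residual small enough
        g_old, x = g, x_next
    return g, h

P = np.array([[0.9, 0.1],
              [0.2, 0.8]])              # aperiodic, illustrative
r = np.array([1.0, -2.0])
g, h = power_gain_bias(P, r)

# Direct formulas for comparison.
pi = np.array([2/3, 1/3])
Pstar = np.outer(np.ones(2), pi)
Ad = np.linalg.inv(np.eye(2) - P + Pstar) - Pstar
assert np.allclose(g, Pstar @ r, atol=1e-6)
assert np.allclose(h, Ad @ r, atol=1e-6)
```

On this chain the subdominant eigenvalue is $0.7$, so both iterates converge geometrically, the bias more slowly by the factor $\|A^\#\|_2 + N$ noted above.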
Now since $g$ and $h$ satisfy the evaluation equations, the (unknown) errors $e_1^{(N)}$ and $e_2^{(N)}$ must satisfy

$A e_1^{(N)} = \delta_1^{(N)}$   (3.4.9)

and

$e_1^{(N)} + A e_2^{(N)} = \delta_2^{(N)}.$   (3.4.10)

Theorem 3.4.4. The gain and bias errors are related to the residual errors by

$e_1^{(N)} = A^\# \delta_1^{(N)} \quad \text{and} \quad e_2^{(N)} = A^\# \delta_2^{(N)}.$

Proof. We have that $\delta_1^{(N)}$ is compatible with $A$ by construction, from Equation (3.4.7). Hence Theorem 3.2.4 gives

$e_1^{(N)} = A^\# \delta_1^{(N)} + P^* \delta_2^{(N)}$   (3.4.11)

and

$e_2^{(N)} = A^\# \delta_2^{(N)} + P^* y,$   (3.4.12)

where $y$ is arbitrary. Now

$P^* \delta_2^{(N)} = P^* g^{(N)} + P^* A h^{(N)} - P^* r = 0,$

because $g = P^* g^{(N)} = P^* r$ and $P^* A = 0$. Also, Equation (3.4.4) implies that

$P^* e_2^{(N)} = -P^* (A^\# + NI) E^{(N)} r = 0,$

because $P^* A^\# = 0$ and $P^* E^{(N)} = 0$. Hence $P^* y = 0$, and the result follows. ∎

The consequence of this result is that

$\|e_1^{(N)}\|_2 \leq \|A^\#\|_2 \cdot \|\delta_1^{(N)}\|_2,$   (3.4.13)

so that the error on the gain is bounded by a fixed factor $\|A^\#\|_2$ times the residual error. This factor indicates the sensitivity (or conditioning) of the Markov chain with transition matrix $P$ [see Meyer (1980)] to changes in the data. Now observe that, at Step 4 of Figure 3.4.1, we have

$g^{(N)} - g^{(N-1)} = (P - I) g^{(N-1)} = -A g^{(N-1)} = -\delta_1^{(N-1)}.$

Hence our stopping criterion simply ensures that the residual error is small enough. By choosing $\delta$ such that

$\|A^\#\|_2 \cdot \delta \leq \epsilon,$

say, we see from Equations (3.4.5) and (3.4.6) that both $\|e_1^{(N)}\|_2 \leq \epsilon$ and $\|e_2^{(N)}\|_2 \leq \epsilon$, assuming that the problem is reasonably well conditioned (i.e., that $\|A^\#\|_2$ is not too big).

3.5. Markov decision processes

Now consider a system in which an action must be taken before a transition occurs. Let $N = \{1, \ldots, n\}$ be the (finite) state space, and let $A_i$ be the finite set of actions that can be taken when the system is in state $i$, for $i = 1, \ldots, n$. The system evolves in time according to the process $\{X(t) : t = 0, 1, 2, \ldots\}$, where $X(t)$ is the state of the system at time $t$.
Let $\{a(t) : t = 0, 1, 2, \ldots\}$ be the sequence of selected actions, with $a(t) \in A_{X(t)}$. We assume that the system satisfies the Markov property and that the transition probabilities do not depend explicitly on time. Hence we define

$p_{ij}^k = \mathrm{Prob}\{X(t+1) = j \mid X(t) = i,\ a(t) = k\},$

for $k \in A_i$ and $i, j = 1, \ldots, n$. The Markov property here means that

$\mathrm{Prob}\{X(t+1) = j \mid X(0), a(0), \ldots, X(t), a(t)\} = \mathrm{Prob}\{X(t+1) = j \mid X(t), a(t)\}.$

Immediately before a transition occurs, the system earns an (expected) reward $r_i^k$, where $k = a(t)$ and $i = X(t)$. The action $a(t)$ is selected according to a function $\pi$, called a policy, such that

$a(t) = \pi(t, X(0), a(0), \ldots, X(t-1), a(t-1), X(t)), \quad t = 0, 1, 2, \ldots$

Now let us denote by $P$ the transition probability function, by $r$ the reward function and by $\Pi$ the set of all policies. Then the above model is called a Markov decision process and is denoted by the quadruple $(N, r, P, \Pi)$. For a given policy $\pi \in \Pi$ and interest rate $\rho > 0$, we define the expected total discounted reward function $v^\pi(\rho)$ such that

$v_i^\pi(\rho) = E_i^\pi \Big\{ \sum_{t=0}^{\infty} \beta^{t+1} r(t) \Big\}, \quad i = 1, \ldots, n,$

where $\beta = (1 + \rho)^{-1}$, $r(t)$ is the reward earned at time $t$, and $E^\pi$ is the conditional expectation operator under policy $\pi$. That is, for an arbitrary random variable $Y$ defined from $\{X(t)\}$ and $\{a(t)\}$, we have

$E_i^\pi(Y) = E^\pi\{Y \mid X(0) = i\}, \quad i = 1, \ldots, n,$

where the expectation is taken under policy $\pi$. The problem is to find a policy $\pi \in \Pi$ that maximizes $v_i^\pi(\rho)$ in some sense, for $i = 1, \ldots, n$. Now define a function $\delta$ to be a decision rule if $\delta_i \in A_i$, for $i = 1, \ldots, n$, and let $\Delta = A_1 \times A_2 \times \cdots \times A_n$ be the set of all decision rules. Then a policy $\pi$ is said to be a Markov policy if

$\pi(t, X(0), a(0), \ldots, X(t)) = \delta_i(t),$

where $i = X(t)$ and $\delta(t) \in \Delta$. Moreover, a Markov policy $\pi$ is said to be stationary if $\delta(0) = \delta(1) = \cdots$ For problems with an infinite time horizon, it has been shown that there is a stationary policy that is optimal [see e.g., Heyman and Sobel (1984), Theorems 4-3 and 4-4] for the criteria we will consider.
Hence there is no loss of generality in restricting the problem to stationary policies. For simplicity of notation, we will denote a stationary policy by its decision rule $\delta$, and the set of stationary policies by $\Delta$. For a fixed, stationary policy $\delta \in \Delta$, the system is a Markov chain with rewards and we obtain the Markov reward process $(N, r^\delta, P^\delta)$, as in Section 3.1, with transition matrix $P = P^\delta$ and reward vector $r = r^\delta$. For a given interest rate $\rho > 0$ and discount factor $\beta = (1 + \rho)^{-1}$, the expected total discounted reward $v^\delta(\rho)$ is given by

$v^\delta(\rho) = \sum_{t=0}^{\infty} \beta^{t+1} (P^\delta)^t r^\delta,$

as in Section 3.1. Following Miller and Veinott (1969), we say that a policy $\delta$ is $\rho$-optimal if

$v^\delta(\rho) \geq v^\gamma(\rho) \quad \forall \gamma \in \Delta.$   (3.5.1)

It was shown by Blackwell (1962) that there is a policy $\delta \in \Delta$ that is $\rho$-optimal for every small enough $\rho > 0$. Such a policy is said to be Blackwell optimal. Motivated by the series (3.1.6), Veinott (1969) and (1974) defined a policy $\delta$ to be $k$-discount optimal if

$\lim_{\rho \searrow 0} \rho^{-k} [v^\delta(\rho) - v^\gamma(\rho)] \geq 0 \quad \forall \gamma \in \Delta.$   (3.5.2)

(Veinott's definition also extends to the case of $\rho < 0$, but we will stick to the positive case, which Veinott (1969) calls $k^+$ discount optimality.) As special cases, a $(-1)$-discount optimal policy is also said to be gain optimal, while a $0$-discount optimal policy is also said to be bias optimal [Denardo (1970)] or nearly optimal [Blackwell (1962)]. The policy iteration method of Howard (1960) and Blackwell (1962) produces a policy $\delta$ that is gain optimal. This method has been extended by Veinott (1966) to find a bias optimal policy and by Miller and Veinott (1969) to find a Blackwell optimal policy. Veinott (1969) modified it to find a $k$-discount optimal policy, for any $k \geq -1$. More precisely, let $w_i^\delta$, $i = -1, 0, 1, \ldots$, be the terms of the Laurent series (3.1.6), for a policy $\delta \in \Delta$. Then (3.5.2) implies that $\delta$ is $k$-discount optimal if

$\sum_{i=-1}^{k} \rho^i w_i^\delta \geq \sum_{i=-1}^{k} \rho^i w_i^\gamma \quad \forall \gamma \in \Delta,$   (3.5.3)

for all small enough $\rho > 0$.
The extended algorithm evaluates $w_{-1}^\delta, w_0^\delta, \ldots, w_{k+1}^\delta$ to show that no other policy is lexicographically better, up to $w_k$. Now let $n$ be the number of states of the process. Miller and Veinott (1969) showed that a policy is Blackwell optimal if and only if it is $n$-discount optimal. This result was extended by Denardo (1971), who showed that an $(n-1)$-discount optimal policy is Blackwell optimal. Then Veinott (1974), using the fact that $\mathrm{rank}(A) = n - m$, where $A = I - P$ and $m$ is the number of recurrent classes of $P$, showed that any $(n-m)$-discount optimal policy is Blackwell optimal. We give a new proof of the latter result, using our matrix decomposition approach. As Miller and Veinott (1969), we need the following lemma from Gantmacher (1959), which we state without proof.

Lemma 3.5.1. Let $M$ be a $k \times k$ matrix and $L$ a linear subspace of $R^k$. If $M^i x \in L$ for $i = 0, 1, \ldots, k-1$, then $M^i x \in L$ for $i = k, k+1, \ldots$ (The idea is that for $i \geq k$, $M^i x$ is a linear combination of $x, Mx, \ldots, M^{k-1} x$.)

This implies the following.

Theorem 3.5.2. Suppose that some policy $\delta \in \Delta$ has $m$ recurrent classes (i.e., $1 \leq m \leq n$). Suppose also that some policy $\gamma \in \Delta$ is such that $w_i^\gamma = w_i^\delta$, for $i = -1, 0, \ldots, n-m$. Then $w_i^\gamma = w_i^\delta$ for $i > n-m$.

Proof. First, observe that in the special case $m = n$, the result is trivially true because $g^\gamma = g^\delta$, $A^\delta = I - P^\delta = 0$ and $(A^\delta)^\# = 0$, so that $h^\delta = w_0^\delta = w_1^\delta = \cdots = 0$. This implies that $h^\gamma = 0$, and by the identity (3.5.4) below, applied to $\gamma$,

$w_i^\gamma = (-1)^i \big((A^\gamma)^\#\big)^i h^\gamma = 0 = w_i^\delta, \quad i > 0.$

In the case $m < n$, both policies have the same gain and bias, i.e., $w_{-1}^\gamma = w_{-1}^\delta = g$ and $w_0^\gamma = w_0^\delta = h$. Equations (3.1.5) and (3.1.8) imply that

$w_i^\delta = (-1)^i \big((A^\delta)^\#\big)^i h, \quad i \geq 0.$   (3.5.4)

Decompose $A^\delta$ as in equation (1.3.1), and define

$\begin{pmatrix} y_{i,1} \\ y_{i,2} \end{pmatrix} = S w_i^\delta,$

where $y_{i,1}$ and $y_{i,2}$ have dimensions $(n-m)$ and $m$, respectively. Define also

$\begin{pmatrix} z_1 \\ z_2 \end{pmatrix} = S h.$

Since $P^{\delta *} h = 0$, we have $z_2 = 0$.
Multiply Equation (3.5.4) by $S$ to get

$y_{i,1} = (-1)^i (B^{-1})^i z_1 \quad \text{and} \quad y_{i,2} = 0, \quad i \geq 0.$   (3.5.5)

Now apply the same transformation $S$ to $A^\gamma$, giving

$S A^\gamma S^{-1} = \begin{pmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{pmatrix}.$   (3.5.6)

Equation (3.2.9), with $A = A^\gamma$, together with (3.5.6), gives

$\begin{pmatrix} x_{i,1} \\ x_{i,2} \end{pmatrix} + \begin{pmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{pmatrix} \begin{pmatrix} x_{i+1,1} \\ x_{i+1,2} \end{pmatrix} = 0,$   (3.5.7)

where $(x_{i,1};\ x_{i,2}) = S w_i^\gamma$. By hypothesis, $x_{i,1} = y_{i,1}$ and $x_{i,2} = 0$, for $i = 0, \ldots, n-m$. Using (3.5.5), we get

$(A_{11} B^{-1} - I)(B^{-1})^i z_1 = 0$   (3.5.8)

and

$A_{21} (B^{-1})^{i+1} z_1 = 0,$   (3.5.9)

for $i = 0, \ldots, n-m-1$. Now recall that $z_1$ is a vector of dimension $(n-m)$. Define a linear subspace $L$ of $R^{n-m}$ such that $x \in L$ iff $(A_{11} B^{-1} - I) x = 0$. Then Equation (3.5.8) implies that $(B^{-1})^i z_1 \in L$, for $i = 0, \ldots, n-m-1$. By Lemma 3.5.1, it is true also for $i \geq n-m$, and hence Equation (3.5.8) is also valid for $i \geq n-m$. The same argument applies to (3.5.9) as well. We still have to show that it implies that $x_{i,1} = y_{i,1}$ and $x_{i,2} = y_{i,2} = 0$, for $i > n-m$. By Corollary 3.2.3, the recurrence (3.5.7) has a unique solution. Equations (3.5.8) and (3.5.9) imply that $x_{i,1} = y_{i,1} = (-1)^i (B^{-1})^i z_1$ and $x_{i,2} = y_{i,2} = 0$ is that solution for every $i \geq 0$. ∎

The following example shows that $\delta$ and $\gamma$ in Theorem 3.5.2 need not have the same number of recurrent classes. Let policy $\delta$ have the transition matrix

$P^\delta = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 1 & 0 & 0 \end{pmatrix}$

and a reward vector $r^\delta$ with $r_1^\delta = (r_2^\delta + r_3^\delta + r_4^\delta)/3$. Note $n = 4$ and $m = 2$. With $A = I - P^\delta$ we have

$A^\# = \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 1/3 & 0 & -1/3 \\ 0 & -1/3 & 1/3 & 0 \\ 0 & 0 & -1/3 & 1/3 \end{pmatrix},$

so that $g = P^{\delta *} r^\delta$ and $w_0 = h = A^\# r^\delta$. Let $\gamma$ be a policy with a transition matrix given by

$P^\gamma = \begin{pmatrix} 1-3x & x & x & x \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 1 & 0 & 0 \end{pmatrix},$

with $0 < x < 1/3$, and $r^\gamma = r^\delta$. Then $g^\gamma = g^\delta$, $h^\gamma = h^\delta$, $w_1^\gamma = w_1^\delta$ and $w_2^\gamma = w_2^\delta$. Thus $\delta$ and $\gamma$ have the same series expansion, but $\gamma$ has only one recurrent class if $x > 0$. We now apply the above result to dynamic programming.

Theorem 3.5.3. A policy with $m$ recurrent classes is Blackwell optimal iff it is $(n-m)$-discount optimal.

Proof. By applying Theorem 3.5.2 to Equation (3.5.3). ∎
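The example above is easy to check numerically. In the sketch below (Python with numpy), the value $x = 0.2$ and the reward vector $r = (0, 2, -1, -1)$ are illustrative choices, not data from the text (the first component equals the cycle average); the limiting matrices are entered directly, and the Laurent terms are generated through $w_{i+1} = -A^\# w_i$ with $A^\# = (A + P^*)^{-1} - P^*$.

```python
import numpy as np

def laurent_terms(P, Pstar, r, num):
    """Terms g = w_{-1}, h = w_0, w_1, ..., w_num of the series (3.1.6)."""
    A = np.eye(len(r)) - P
    Ad = np.linalg.inv(A + Pstar) - Pstar   # Drazin inverse, ind(A) = 1
    terms = [Pstar @ r]                     # gain
    w = Ad @ r                              # bias
    for _ in range(num):
        terms.append(w)
        w = -Ad @ w                         # w_{i+1} = -A# w_i
    terms.append(w)
    return terms

# Policy delta: state 1 absorbing, states 2-4 a 3-cycle (n = 4, m = 2).
Pd = np.array([[1., 0, 0, 0],
               [0, 0, 1, 0],
               [0, 0, 0, 1],
               [0, 1, 0, 0]])
Wd = np.array([[1., 0, 0, 0],
               [0, 1/3, 1/3, 1/3],
               [0, 1/3, 1/3, 1/3],
               [0, 1/3, 1/3, 1/3]])         # Cesaro limit of Pd^k

# Policy gamma: state 1 leaks into the cycle, so m = 1.
x = 0.2
Pg = Pd.copy()
Pg[0] = [1 - 3 * x, x, x, x]
Wg = np.tile([0, 1/3, 1/3, 1/3], (4, 1))    # every state ends in the cycle

r = np.array([0.0, 2.0, -1.0, -1.0])        # r_1 equals the cycle average

wd = laurent_terms(Pd, Wd, r, 4)
wg = laurent_terms(Pg, Wg, r, 4)
for a, b in zip(wd, wg):
    assert np.allclose(a, b)                # identical expansions, yet m differs
```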
It implies, in general, that a policy is Blackwell optimal if and only if it is $(n-1)$-discount optimal (because $m \geq 1$, always). We will now use this result and the notation of the previous section to formulate a policy iteration algorithm in a unified way. We first recall that with a fixed interest rate $\rho$, the policy iteration method for finding a $\rho$-optimal policy can be formulated as in Figure 3.5.1, with $\beta = (1 + \rho)^{-1}$. Using the notation of Section 3.2, we can now formulate the policy iteration method for finding a $k$-discount optimal policy as follows. Let $K = k + 3$ (we need to evaluate $K$ terms $w_{-1}, w_0, \ldots, w_{k+1}$). The method is shown in Figure 3.5.2. The operator $A^\delta(K)^D + R^\delta(K)$ of Step 1, derived from Drazin inverse theory, provides an explicit solution to the policy evaluation problem. It is to be seen whether the latter algorithm could be formulated as a specialization of the method of Newton for solving systems of nonlinear equations, as the former was by Puterman and Brumelle (1979) using the ordinary inverse $(A^\delta)^{-1}$.

Step 0. Select an arbitrary policy $\delta \in \Delta$.
Step 1. Policy evaluation. Compute $v^\delta = (A^\delta)^{-1} r^\delta$, where $A^\delta = I - \beta P^\delta$.
Step 2. Policy improvement. Find $\gamma \in \Delta$ s.t. $r^\gamma + \beta P^\gamma v^\delta = \max_{\eta \in \Delta} \{ r^\eta + \beta P^\eta v^\delta \}$. (Choose $\gamma = \delta$ if possible.)
Step 3. If $\gamma = \delta$ then stop, else set $\delta = \gamma$ and go to Step 1.

Figure 3.5.1. Policy iteration method for $\rho$-optimal policy

Step 0. Select an arbitrary policy $\delta \in \Delta$.
Step 1. Policy evaluation. Compute $x^\delta(K) = [A^\delta(K)^D + R^\delta(K)]\, b^\delta(K)$, as in Corollary 3.2.5.
Step 2. Policy improvement. Find $\gamma \in \Delta$ s.t. $b^\gamma(K) + P^\gamma(K)\, x^\delta(K) = \max_{\eta \in \Delta} \{ b^\eta(K) + P^\eta(K)\, x^\delta(K) \}$, where $P^\gamma(K) = I(K) - A^\gamma(K)$ and the maximization is done lexicographically. (Choose $\gamma = \delta$ if possible.)
Step 3. If $\gamma = \delta$ then stop, else set $\delta = \gamma$ and go to Step 1.

Figure 3.5.2.
Policy iteration method for $k$-discount optimal policy

It is important to note, however, that the formulation of Figure 3.5.2 is not the most appropriate for computations. Indeed, we saw in Section 3.2 that the terms $w_i^\delta$, for $i = -1, \ldots, k$, can be evaluated successively. Hence it is more efficient to combine Steps 1 and 2 in such a way that only those terms which are really required be computed. There are two ways to do it. One is to successively evaluate the truncated vectors $x^\delta(i)$, $i = 1, \ldots, K$, and produce the sets $\Delta_i$ such that

$\Delta_i = \{ \gamma \in \Delta_{i-1} : b^\gamma(i) + P^\gamma(i)\, x^\delta(i) \geq b^\eta(i) + P^\eta(i)\, x^\delta(i),\ \forall \eta \in \Delta_{i-1} \},$

where $\Delta_0 = \Delta$, and stop at the smallest $i \leq K$ for which $\Delta_i$ has only one element. This element is the policy $\gamma$ of Step 2. (If $\Delta_K$ has more than one element, choose any $\gamma \in \Delta_K$.) Another way is to stop at the smallest $i \leq K$ for which $\Delta_i$ does not contain $\delta$, and choose $\gamma \in \Delta_i$ arbitrarily. Then $\gamma$ is a better policy than $\delta$, although it is not necessarily the best improvement. This latter approach might require fewer terms to be evaluated than the former.

Chapter IV. Continuous time Markov decision processes

This chapter is mainly a review of known results on continuous time Markov reward processes and decision processes. Its purpose is to show the equivalence between continuous time and discrete time Markov decision processes, and to introduce some of the notation that will be used in the next chapter. In Section 4.1, we define the continuous time Markov reward process from the continuous time Markov chain of Cinlar (1975). Then we show, in Section 4.2, that the resolvent has the same algebraic structure as in the discrete time case. Our proof is more detailed than that of Howard (1960) and uses the same approach as in Chapter II. This result is used, in Section 4.3, to obtain a Laurent expansion of the resolvent. The gain and bias terms are also interpreted as limits of the reward functions.
Finally, in Section 4.4 the equivalence between continuous time and discrete time processes is reviewed, using a data transformation considered by Howard (1960), Veinott (1969), Schweitzer (1971) and Serfozo (1979).

4.1. Continuous time Markov reward processes

Let $\{X(t) : t \geq 0\}$ be a homogeneous, continuous time Markov chain with state space $N = \{1, \ldots, n\}$. Let $S_k$ be the epoch of the $k$-th transition, with $S_0 = 0$. Then we have

$\mathrm{Prob}\{S_{k+1} - S_k > t \mid X(S_k) = i\} = e^{-\lambda_i t}, \quad t \geq 0,$   (4.1.1)

where $0 < \lambda_i < \infty$, for $i = 1, \ldots, n$. The transition rates are finite ($\lambda_i < \infty$) because the state space is finite [see Cinlar (1975), page 244]. We will also assume that they are positive ($\lambda_i > 0$). Then $\tau_i = 1/\lambda_i$ is the expected holding time in state $i$, for $i = 1, \ldots, n$, and the time matrix $T = \mathrm{diag}(\tau_i)$ is nonsingular. The transition probabilities are defined as

$p_{ij} = \mathrm{Prob}\{X(S_{k+1}) = j \mid X(S_k) = i\}, \quad i, j = 1, \ldots, n,$   (4.1.2)

with $p_{ii} = 0$ and $\sum_{j=1}^n p_{ij} = 1$, for $i = 1, \ldots, n$, so that the matrix $P = (p_{ij})$ is stochastic. For $k = 0, 1, 2, \ldots$, let $Y(k) = X(S_k)$. Then the process $\{Y(k) : k = 0, 1, 2, \ldots\}$ is a discrete time Markov chain with transition matrix $P$, embedded at the transition epochs. The continuous time Markov chain $\{X(t) : t \geq 0\}$ is completely determined by the matrix

$Q = T^{-1}(P - I),$   (4.1.3)

which is called its infinitesimal generator. Assume that $X(t)$ gives the state of a dynamic system. Further, define the earning rate vector $r = (r_i)$ as follows. Assume that when the system is in state $i$, it earns a reward at a rate of $u_{ii}$ dollars per unit of time. Further, just before making a transition to state $j$, a reward of $u_{ij}$ dollars is earned instantly, for $i, j = 1, \ldots, n$. From this, the expected earning rates are defined as

$r_i = u_{ii} + \lambda_i \sum_{j \neq i} p_{ij} u_{ij}$   (4.1.4)

[see Howard (1960), Equation (8.16)]. Such a model is a continuous time Markov reward process and is denoted by the triplet $(N, r, Q)$. Define $Z(t)$ as the total reward earned until time $t$.
With $E_i$ as the conditional expectation operator of Section 3.1, we define the vector $V(t)$ of expected undiscounted rewards by $V_i(t) = E_i\{Z(t)\}$, for $i = 1, \ldots, n$. The function $V_i(t)$ is known to be differentiable, and its derivative $u_i(t)$ has a limit, as $t \to \infty$, for $i = 1, \ldots, n$. Indeed, define the matrix $\Pi(t)$ such that

$\pi_{ij}(t) = \mathrm{Prob}\{X(t) = j \mid X(0) = i\}, \quad t \geq 0.$

Then $\pi_{ij}(t)$ has a limit as $t \to \infty$ [see Heyman and Sobel (1982), Proposition 8-8], and the expected earning rate at time $t$ is

$u_i(t) = \sum_{j=1}^n \pi_{ij}(t)\, r_j, \quad i = 1, \ldots, n.$

This implies that $V(t) = O(t)$, because

$\lim_{t \to \infty} \frac{1}{t} \int_0^t u_i(x)\, dx$

is bounded. Let $s > 0$ be the instantaneous interest rate, and let $v(s)$ be the vector of expected total discounted rewards. That is, $v_i(s)$ is the expected total discounted reward of the process over an infinite horizon, when the process starts in state $i$, for $i = 1, \ldots, n$. Using the conditional expectation operator $E_i$, we have

$v_i(s) = E_i \Big\{ \int_0^\infty e^{-st}\, dZ(t) \Big\} = \int_0^\infty e^{-st}\, dV_i(t), \quad i = 1, \ldots, n.$

The expectation and integration operators can be permuted since the result is finite, because $V(t) = O(t)$. Conditioning on the first transition, we get

$v_i(s) = \int_0^\infty \Big[ \int_0^t e^{-sx} u_{ii}\, dx + e^{-st} \sum_j p_{ij} \big(u_{ij} + v_j(s)\big) \Big] \lambda_i e^{-\lambda_i t}\, dt,$

which can be integrated directly, giving

$v_i(s) = \frac{u_{ii} + \lambda_i \sum_j p_{ij} \big(u_{ij} + v_j(s)\big)}{\lambda_i + s}.$   (4.1.5)

Multiplying both sides of Equation (4.1.5) by $(1 + s\tau_i)$, and using matrix notation, we get

$(I + sT)\, v(s) = Tr + P v(s).$

Collecting terms and premultiplying both sides by $T^{-1}$, we finally obtain the resolvent equation

$(sI + T^{-1} A)\, v(s) = r,$   (4.1.6)

where $A = I - P$. It is not obvious whether the matrix $(sI + T^{-1} A)$ is nonsingular for any small enough interest rate $s$. We will show, in the next section, that it is indeed the case, and that the resolvent $R_s = (sI + T^{-1} A)^{-1}$ has exactly the same properties as the resolvent $R_\rho$ of a discrete time process.
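The derivation of the resolvent equation (4.1.6) can be checked on a small instance. In the sketch below (Python with numpy), the rates, jump probabilities and earning rates are illustrative values, not data from the text; the computed $v(s)$ is verified against the fixed-point form $(I + sT)v = Tr + Pv$ obtained by conditioning on the first transition.

```python
import numpy as np

# Illustrative three-state continuous time chain.
lam = np.array([2.0, 1.0, 4.0])            # transition rates lambda_i > 0
T = np.diag(1.0 / lam)                     # expected holding times tau_i
P = np.array([[0.0, 0.5, 0.5],
              [1.0, 0.0, 0.0],
              [0.3, 0.7, 0.0]])            # embedded jump chain, p_ii = 0
r = np.array([1.0, 0.0, -2.0])             # earning rates, Eq. (4.1.4)
A = np.eye(3) - P

s = 0.1                                    # instantaneous interest rate
# Resolvent equation (4.1.6): (s I + T^{-1} A) v(s) = r, with T^{-1} = diag(lam).
v = np.linalg.solve(s * np.eye(3) + np.diag(lam) @ A, r)

# Fixed-point form (I + s T) v = T r + P v:
assert np.allclose((np.eye(3) + s * T) @ v, T @ r + P @ v)
```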
Hence all the results of discrete time dynamic programming that are based on the algebraic structure of $R_\rho$ are also valid for continuous time Markov decision processes.

4.2. Algebraic structure of the resolvent

All the results of Chapter III, for discrete time decision processes, are based on the fact that $\mathrm{ind}(A) = 1$, where $A = I - P$. We show that $\mathrm{ind}(T^{-1} A) = 1$, where $T$ is the diagonal matrix of expected transition times. Our argument is similar to that of Howard (1960), page 98, but we emphasize the algebraic structure. The result is not true for a general matrix $A$ of index 1: one can choose a $2 \times 2$ matrix $A$ with $\mathrm{ind}(A) = 1$ and a nonsingular diagonal matrix $T$ for which $T^{-1} A$ is nilpotent but nonzero, so that $T^{-1} A$ is similar to $J_2(0)$, which has index 2 by Lemma 1.2.1, and hence $\mathrm{ind}(T^{-1} A) = 2$.

We now proceed to show that the result is true when $A = I - P$, with $P$ a stochastic matrix.

Lemma 4.2.1. Suppose $P$ is an irreducible, stochastic matrix, and let $A = I - P$. Then $T^{-1} A$ has the eigenvalue 0 with multiplicity 1.

Proof. We proceed as in the proof of Theorem 2.2.4. We know that the column vector $e$ is a right hand eigenvector of $P$ with eigenvalue 1, and that there is a positive row vector $w$ that is the corresponding left hand eigenvector of $P$. This implies that $e$ is also a right hand eigenvector of $T^{-1} A$ with eigenvalue 0, and that $wT$ is the corresponding left hand eigenvector of $T^{-1} A$. We show that the corresponding Jordan block of $T^{-1} A$ has dimension 1, by proving that Equation (1.1.6) has no solution. That is, we have to show that $T^{-1} A y = e$ has no solution $y$. Suppose $y$ is a solution. Then

$(wT)(T^{-1} A)\, y = (wT)\, e > 0,$

because both $wT$ and $e$ are positive vectors. But $wT$ is a left hand eigenvector of $T^{-1} A$ with eigenvalue 0, so that $(wT)(T^{-1} A) = 0$, which is a contradiction. Hence the result follows. ∎

Theorem 4.2.2. Suppose $P$ is a stochastic matrix. Then $\mathrm{ind}(T^{-1} A) = 1$, where $A = I - P$.

Proof. Let $m$ be the number of recurrent classes of $P$.
Then by Theorem 2.1.3, there is a permutation matrix $S$ such that
$$A = S^{-1}\begin{pmatrix} A_{00} & A_{01} & \cdots & A_{0m} \\ 0 & A_{11} & & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & A_{mm} \end{pmatrix} S.$$
Here $A_{00} = I - P_{00}$ is the nonsingular submatrix corresponding to the transient states of $P$. Now, applying $S$ to the diagonal matrix $T^{-1}$, we have
$$T^{-1}A = S^{-1}\begin{pmatrix} T_{00}^{-1}A_{00} & T_{00}^{-1}A_{01} & \cdots & T_{00}^{-1}A_{0m} \\ 0 & T_{11}^{-1}A_{11} & & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & T_{mm}^{-1}A_{mm} \end{pmatrix} S,$$
where $T_{ii}$ is the diagonal submatrix of $T$ corresponding to the $i$-th subset, for $i = 0,\dots,m$. The matrix $T_{00}^{-1}A_{00}$ is nonsingular, hence all its eigenvalues are nonzero. Moreover, by Lemma 4.2.1, each $T_{ii}^{-1}A_{ii}$, for $i = 1,\dots,m$, has the eigenvalue 0 with multiplicity 1. This implies that the Jordan canonical form of $T^{-1}A$ has exactly $m$ Jordan submatrices with eigenvalue 0, each of dimension 1. Hence $\mathrm{ind}(T^{-1}A) = 1$. □

The consequence of Theorem 4.2.2 is that the resolvent $R_s$ of a continuous time Markov process exists and has a Laurent expansion analogous to that of the resolvent $R_\rho$ of a discrete time process.

4.3. Laurent expansion of the discounted rewards

Let $C = T^{-1}(I - P) = -Q$. Then the resolvent of the continuous time process is $R_s = (sI + C)^{-1}$, and the vector of total discounted rewards is given by $v(s) = R_s r$. Because $\mathrm{ind}(C) = 1$, we know that $C^\#$ exists. Moreover, let $W = I - CC^\#$. The following results follow immediately from the results of Chapter III.

Theorem 4.3.1. Let $\lambda$ be the nonzero eigenvalue of $C$ with smallest modulus. If $0 < s < |\lambda|$, then
$$R_s = s^{-1}W + \sum_{i=0}^\infty (-s)^i (C^\#)^{i+1}. \quad (4.3.1)$$
Proof. Same as Theorem 3.1.1. □

Defining $g = Wr$, $h = C^\# r$ and $w_i = (-1)^i (C^\#)^{i+1} r$, for $i = 0,1,\dots$, we have
$$v(s) = s^{-1}g + h + \sum_{i=1}^\infty s^i w_i. \quad (4.3.2)$$

Theorem 4.3.2. The terms $g$, $h$ and $w_i$, for $i = 1,2,\dots$, of the series (4.3.2) are the unique solution of the recurrence
$$Cg = 0, \quad (4.3.3)$$
$$g + Ch = r, \quad (4.3.4)$$
$$w_{i-1} + Cw_i = 0, \quad i = 1,2,\dots, \quad (4.3.5)$$
where $w_0 = h$.

Proof. Same as Corollary 3.2.3. □
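The recurrence (4.3.3)-(4.3.5) lends itself to a direct numerical check. The sketch below (numpy; all data assumed for illustration) computes $g$ and $h$ for a two-state process from the group inverse $C^\#$, verifies (4.3.3) and (4.3.4), and confirms the gain and bias interpretation by Euler integration of the forward equation $V'(t) = r + QV(t)$, a standard fact not derived in the text.

```python
import numpy as np

# two-state example (data assumed): embedded chain P, holding times tau, rates r
P = np.array([[0.0, 1.0], [1.0, 0.0]])
tau = np.array([1.0, 2.0])
r = np.array([3.0, 5.0])
C = np.diag(1.0 / tau) @ (np.eye(2) - P)     # C = T^{-1}(I - P) = -Q

# ind(C) = 1, so the group inverse is C# = (C + W)^{-1} - W, where
# W = I - C C# is the limiting projector; here W = e pi with pi the
# stationary distribution of the chain (pi_i proportional to w_i tau_i)
pi = np.array([1.0, 2.0]) / 3.0
W = np.outer(np.ones(2), pi)
C_sharp = np.linalg.inv(C + W) - W

g, h = W @ r, C_sharp @ r                    # gain and bias
assert np.allclose(C @ g, 0)                 # recurrence (4.3.3)
assert np.allclose(g + C @ h, r)             # recurrence (4.3.4)

# sanity check of the limit interpretation: integrate V'(t) = r + Q V(t)
# by Euler steps and compare V(t) - g t with h for large t
Qgen, V, dt = -C, np.zeros(2), 1e-3
for _ in range(30_000):                      # t up to 30
    V = V + dt * (r + Qgen @ V)
assert np.allclose(V - 30.0 * g, h, atol=0.02)
```

The closed form $C^\# = (C + W)^{-1} - W$ is valid whenever $\mathrm{ind}(C) = 1$, which Theorem 4.2.2 guarantees here.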
This implies that the policy iteration method of Chapter III can be applied directly to continuous time Markov decision processes, using the matrix $C$ instead of $A$. Further, the terms $g$ and $h$ of the series (4.3.2) can still be interpreted as gain and bias, respectively. We do this as follows.

Recall from Section 4.1 that the expected total discounted reward $v_i(s)$ is the Laplace transform of the expected earning rate function $u_i(t)$, that is,
$$v_i(s) = \int_0^\infty e^{-st}u_i(t)\,dt, \quad i = 1,\dots,n.$$
We need the following Tauberian theorem.

Theorem 4.3.3. [Heyman and Sobel (1982), Property A-8]. Suppose $\hat f(s)$ is the Laplace transform of $f(t)$. Then
$$\lim_{t\to\infty} f(t) = \lim_{s\downarrow 0} s\hat f(s),$$
provided both limits exist.

Corollary 4.3.4. The vector $g$ of Equation (4.3.2) is the limiting earning rate vector, that is,
$$\lim_{t\to\infty} u_i(t) = g_i, \quad i = 1,\dots,n.$$
Proof. We know from Section 4.1 that $u_i(t)$ has a limit as $t \to \infty$. By Equation (4.3.2), we also have that $\lim_{s\downarrow 0} s\,v_i(s) = g_i$. The result follows from the theorem. □

Corollary 4.3.5. The vector $h$ of Equation (4.3.2) is the bias vector, that is,
$$\lim_{t\to\infty}\big(V_i(t) - g_i t\big) = h_i, \quad i = 1,\dots,n.$$
Proof. Let $f_i(t) = V_i(t) - g_i t$. Then
$$\hat f_i(s) = \int_0^\infty e^{-st}V_i(t)\,dt - \int_0^\infty e^{-st}g_i t\,dt = \frac{v_i(s)}{s} - \frac{g_i}{s^2},$$
where the last equality follows from elementary properties of the Laplace transforms. Hence
$$\lim_{s\downarrow 0} s\hat f_i(s) = \lim_{s\downarrow 0}\Big(v_i(s) - \frac{g_i}{s}\Big) = h_i,$$
by Equation (4.3.2). Also, $f_i(t)$ has a limit as $t \to \infty$ because it is bounded and $u_i(t)$ converges. The result follows from the theorem. □

We note that it is sufficient to use ordinary limits here. In the discrete time case, it was necessary to use Cesàro convergence when the transition matrix was periodic. This is not necessary in the continuous time case, because the exponential distribution of the transition times automatically does the time averaging.

4.4. Equivalent Markov decision processes

In this section, the continuous time Markov decision process is defined.
By examining the reward processes associated with stationary policies, a whole family of discrete time Markov decision processes is shown to be equivalent to a given continuous time process.

A continuous time Markov decision process is a system that evolves according to a process $\{X(t) : t \ge 0\}$ in which an action is selected every time a transition occurs, and which earns a reward that depends both on the state of the system and the action selected. More precisely, suppose the $k$-th transition occurs at time $t = S_k$ and that the system moves to state $i = Y_k = X(S_k)$. Then an action $a = a_k \in A_i$ is selected and the next transition will occur according to the following probability function:
$$\mathrm{Prob}\{Y_{k+1} = j,\ S_{k+1} > t + x \mid Y_k = i,\ S_k = t,\ a_k = a\} = p_{ij}^a e^{-\lambda_i^a x}, \quad (4.4.1)$$
where $0 < \lambda_i^a < \infty$, $p_{ij}^a \ge 0$, $p_{ii}^a = 0$ and
$$\sum_{j=1}^n p_{ij}^a = 1, \quad\text{for } a \in A_i,\ i = 1,\dots,n.$$
During the interval $(S_k, S_{k+1})$, a reward is earned at a rate of $u_i^a$ dollars per unit of time, and an additional amount of $u_{ij}^a$ dollars is received at time $S_{k+1}$. For a policy $\pi$, define $Z^\pi(t)$ as the total reward earned until time $t$, and the vector $V^\pi(t)$ such that $V_i^\pi(t) = E_i\{Z^\pi(t)\}$, $i = 1,\dots,n$. Then with an interest rate $s > 0$, the expected total discounted reward vector $v^\pi(s)$ is defined as
$$v_i^\pi(s) = \int_0^\infty e^{-st}\,dV_i^\pi(t), \quad i = 1,\dots,n.$$
The objective is to find a policy $\pi$ to select the actions so that $v_i^\pi(s)$ is maximized for all $i = 1,\dots,n$. It has been shown that there is a stationary policy that is optimal [see Veinott (1969)]. Hence it is sufficient to consider only the stationary policies.

Suppose a fixed stationary policy $\delta \in \Delta$ is used, where $\Delta = A_1 \times \dots \times A_n$. Then the action $a_k = \delta_i$ is selected if the $k$-th transition is to state $i = X(S_k)$.
Hence the system is the continuous time Markov reward process $(N, r^\delta, Q^\delta)$, where
$$r_i^\delta = u_i^{\delta_i} + \lambda_i^{\delta_i}\sum_{j=1}^n p_{ij}^{\delta_i}u_{ij}^{\delta_i} \quad\text{and}\quad q_{ij}^\delta = \begin{cases} -\lambda_i^{\delta_i}, & \text{if } i = j, \\ \lambda_i^{\delta_i}p_{ij}^{\delta_i}, & \text{if } i \ne j. \end{cases}$$
As was pointed out in the previous section, the solution of a continuous time Markov reward process has the same structure as in the discrete time case, and can be computed using the same algorithms. Further, the discount optimality criteria can be defined in just the same manner, from the Laurent series, and a $k$-discount optimal policy can be found with the policy improvement methods. This similarity between continuous and discrete time processes was first pointed out by Howard (1960), for the expected total discounted reward criterion as well as the average reward criterion.

Howard (1960) also noted that a given continuous time Markov decision process is equivalent to a whole family of discrete time processes, in the sense that for any stationary policy $\delta \in \Delta$, all these processes have the same value. This can be seen as follows. Consider the continuous time Markov reward process $(N, r, Q)$, where $r = r^\delta$ and $Q = Q^\delta$, and let $\alpha$ be a positive real constant. Then the reward process $(N, \alpha r, \alpha Q)$ is equivalent to the original process, up to a scaling of the time unit. Now, by choosing $\alpha$ appropriately, we can define a stochastic matrix $P$ such that $\alpha Q = P - I$. But then the discrete time Markov reward process $(N, \alpha r, P)$ has the same value as the original continuous time process, because
$$A = I - P = -\alpha Q,$$
so that the terms of the series for the discrete time process satisfy the same equations as for the continuous time process (compare Corollary 3.2.3 and Theorem 4.3.2, with $C = -Q$). If the scalar $\alpha$ is chosen such that
$$0 < \alpha < 1/\lambda_i^a, \quad \forall a \in A_i,\ i = 1,\dots,n,$$
then such a stochastic matrix $P$ exists for all policies $\delta \in \Delta$. Hence such an $\alpha$ defines a discrete time Markov decision process which is equivalent to the original continuous time process. This equivalence was suggested by Howard (1960) for the expected total discounted reward criterion and the average reward criterion.
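The construction $\alpha Q = P - I$ (often called uniformization) can be checked numerically. The sketch below (numpy; data assumed) builds the equivalent discrete time chain for a two-state continuous time process and verifies that its gain per period equals $\alpha$ times the gain per unit time of the original process.

```python
import numpy as np

# continuous time process (data assumed): embedded P, holding times tau, rates r
P = np.array([[0.0, 1.0], [1.0, 0.0]])
tau = np.array([1.0, 2.0])
r = np.array([3.0, 5.0])
C = np.diag(1.0 / tau) @ (np.eye(2) - P)      # C = -Q

a = 0.5 * tau.min()                            # 0 < a < 1/lambda_i = tau_i
Pd = np.eye(2) - a * C                         # aQ = Pd - I, Pd stochastic
assert (Pd >= 0).all() and np.allclose(Pd.sum(axis=1), 1)

# gain of the equivalent discrete process (N, a r, Pd) = a * continuous gain
pid = np.linalg.matrix_power(Pd, 200)[0]       # aperiodic here, rows converge
g_discrete = pid @ (a * r)
g_continuous = (np.array([1.0, 2.0]) / 3.0) @ r   # pi r, pi = [1/3, 2/3]
assert np.isclose(g_discrete, a * g_continuous)
```

The factor $\alpha$ between the two gains reflects the rescaled time unit: one period of the discrete chain represents $\alpha$ time units of the continuous process.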
Veinott (1969) extended the result to the discount optimality criteria. The same device was suggested by Schweitzer (1971) as a data transformation to guarantee the convergence of the method of successive approximations for finding a gain optimal policy (average reward criterion). To accommodate models with $p_{ii}^a > 0$ (strictly speaking, those are semi-Markov decision processes), the scalar $\alpha$ must be selected such that
$$0 < (1 - p_{ii}^a)\alpha < 1/\lambda_i^a, \quad \forall a \in A_i,\ i = 1,\dots,n.$$
Finally, the equivalence was extended to models with a countably infinite state space by Serfozo (1979).

Chapter V
Semi-Markov decision processes

In this chapter, we apply the matrix decomposition approach to semi-Markov decision processes. We follow the development of Denardo (1971), in which Laplace-Stieltjes transforms are used to derive a Laurent expansion of the expected total discounted rewards. In Section 5.1, we define a semi-Markov reward process in a way that is compatible with the definition of a semi-Markov decision process in Heyman and Sobel (1984). The material of Section 5.2 is completely new. The matrix decomposition approach is applied to the moments of the transition probability function, and a condition of Denardo (1971) about the entries of the first moment is interpreted algebraically. A new algorithm is also introduced to solve some singular systems of equations involving the first moment. In Section 5.3, many results of Denardo (1971) are reviewed, and their proofs are reformulated in terms of matrix decomposition, using the results of Section 5.2. The Laurent expansion of the expected total discounted rewards is obtained, emphasizing the matrix formulation. In Section 5.4, the application of the above results to semi-Markov decision processes is reviewed, mainly following Denardo (1971). Finally, in Section 5.5, some special cases of semi-Markov decision processes are reviewed.
As pointed out in Denardo (1971) and Heyman and Sobel (1984), both discrete time and continuous time Markov decision processes can be seen as special cases. Their formulation as semi-Markov decision processes is a good illustration of the model. Also, the equivalence between the semi-Markov and continuous time Markov decision processes is reviewed, under the average reward criterion. The equivalence was used in Ross (1970), but our approach stresses the fact that the processes are not equivalent with respect to other optimality criteria (e.g., bias optimality).

5.1. Semi-Markov reward processes

Let $\{Y(k), S(k) : k = 0,1,2,\dots\}$ be a two-dimensional process such that $Y(k) \in N = \{1,\dots,n\}$, $S(0) = 0$ and, for all $x \ge 0$ and $k = 0,1,2,\dots$,
$$\begin{aligned}
&\mathrm{Prob}\{Y(k+1) = j,\ S(k+1) \le x + t \mid Y(0), S(0), \dots, Y(k) = i,\ S(k) = x\} \\
&\quad = \mathrm{Prob}\{Y(k+1) = j,\ S(k+1) \le x + t \mid Y(k) = i,\ S(k) = x\} = Q_{ij}(t),
\end{aligned} \quad t \ge 0,\ i,j = 1,\dots,n. \quad (5.1.1)$$
Then $\{Y(k) : k = 0,1,2,\dots\}$ is a discrete time Markov chain with transition matrix $P = Q(\infty)$, and $S(k)$ is the time of the $k$-th transition.

Let $X(k)$ be the reward earned during the $k$-th stage of the process. The probability distribution of $X(k)$ is assumed to depend on $Y(k)$, $Y(k+1)$ and $\tau(k) = S(k+1) - S(k)$. We define the conditional reward function $u_{ij}(x, y)$ as the expected value of the cumulative reward during $[S(k), S(k)+y)$ if $Y(k) = i$, $Y(k+1) = j$ and $\tau(k) = x$, with $0 \le y < x$, and
$$u_{ij}(x, x) = E\{X(k) \mid Y(k) = i,\ Y(k+1) = j,\ \tau(k) = x\}.$$
Unconditioning, we get the reward function $R_i(t)$ such that
$$R_i(t) = \sum_{j=1}^n\left[\int_0^t u_{ij}(x, x)\,dQ_{ij}(x) + \int_t^\infty u_{ij}(x, t)\,dQ_{ij}(x)\right], \quad (5.1.2)$$
which is the expected reward earned during the interval $[S(k), \min\{S(k)+t, S(k+1)\}]$, before the $(k+1)$-th transition. A sufficient condition for the integrals in (5.1.2) to exist is that $u_{ij}(x,x)$ and $u_{ij}(x,t)$ have bounded variation in $x$ over $[0,\infty]$ and $[t,\infty]$, respectively.
Such a model is called a semi-Markov reward process and is denoted by the triplet $(N, R, Q)$, where $R(t)$ is the vector of expected reward functions of Equation (5.1.2), and $Q(t)$ is the matrix of transition probability functions of Equation (5.1.1). The discrete time and continuous time Markov reward processes of Sections 3.1 and 4.1 are special cases of the semi-Markov reward process. In the discrete time case, we have directly
$$Q_{ij}(t) = \begin{cases} 0, & 0 \le t < 1, \\ p_{ij}, & t \ge 1, \end{cases} \quad\text{and}\quad R_i(t) = \begin{cases} 0, & 0 \le t < 1, \\ r_i, & t \ge 1. \end{cases}$$
In the continuous time case, we have
$$Q_{ij}(t) = p_{ij}\big(1 - e^{-\lambda_i t}\big), \quad\text{for } i \ne j,$$
so that, by Equation (5.1.2),
$$R_i(t) = \sum_{j=1}^n\left[\int_0^t (u_i x + u_{ij})p_{ij}\lambda_i e^{-\lambda_i x}\,dx + \int_t^\infty u_i t\,p_{ij}\lambda_i e^{-\lambda_i x}\,dx\right] = \Big(u_i + \lambda_i\sum_{j=1}^n p_{ij}u_{ij}\Big)\tau_i\big(1 - e^{-\lambda_i t}\big) = \tau_i r_i\big(1 - e^{-\lambda_i t}\big),$$
where $r_i$ was defined in Equation (4.1.4). Hence $Q_{ij}(\infty) = p_{ij}$ and $R_i(\infty) = \tau_i r_i = r_i/\lambda_i$ is the expected one-stage reward.

Now for $t \ge 0$, let $Z(t)$ be the total reward earned by the system until time $t$. We are interested in the vector $V(t)$ of expected total undiscounted rewards until time $t$, i.e.,
$$V_i(t) = E_i\{Z(t)\}, \quad i = 1,\dots,n.$$
For $t \ge 0$, the expectation is finite if the expected number of transitions in $[0, t]$ is finite. A sufficient condition for this, in terms of $Q(t)$, is described in Section 5.2. Then with interest rate $s > 0$, we have the total discounted reward $v_i(s)$ defined as
$$v_i(s) = \int_{0^-}^\infty e^{-st}\,dV_i(t), \quad i = 1,\dots,n. \quad (5.1.3)$$
Hence $v(s)$ is precisely the Laplace-Stieltjes transform of $V(t)$. Similarly, consider the transforms $q(s)$ and $r(s)$, respectively of $Q(t)$ and $R(t)$. We have
$$q_{ij}(s) = \int_{0^-}^\infty e^{-st}\,dQ_{ij}(t) \quad (5.1.4)$$
and
$$r_i(s) = \int_{0^-}^\infty e^{-st}\,dR_i(t), \quad (5.1.5)$$
for $i,j = 1,\dots,n$. We define the $k$-th normalized moments $Q_k$ and $R_k$ as
$$Q_k = \int_{0^-}^\infty t^k\,dQ(t)/k! \quad (5.1.6)$$
and
$$R_k = \int_{0^-}^\infty t^k\,dR(t)/k!, \quad (5.1.7)$$
for $k = 0,1,2,\dots$, whenever they exist. Note that $Q_k$ is a matrix and $R_k$ is a vector. These moments are useful because of the following Abelian theorem.

Theorem 5.1.1. [See Heyman and Sobel (1982), Property A-9]. Suppose $f(s)$ is the Laplace-Stieltjes transform of $F(t)$. Then
$$\int_{0^-}^\infty e^{-st}t^k\,dF(t) = (-1)^k\frac{d^k}{ds^k}f(s). \quad (5.1.8)$$
Corollary 5.1.2. Suppose $Q_0, Q_1, \dots, Q_k$ exist and are finite, for some $k \ge 0$. Then
$$q(s) = Q_0 - sQ_1 + \dots + (-s)^k Q_k + o(s^k). \quad (5.1.9)$$
Proof. Evaluate Equation (5.1.8) at $s = 0$, with $f(s) = q_{ij}(s)$. By (5.1.6), we have
$$\frac{d^\ell}{ds^\ell}q_{ij}(s)\Big|_{s=0} = (-1)^\ell\,\ell!\,(Q_\ell)_{ij}, \quad \ell = 0,\dots,k.$$
The result follows by taking a Taylor expansion of $q_{ij}(s)$ in a positive neighbourhood of $s = 0$. □

Corollary 5.1.3. Suppose $R_0, R_1, \dots, R_\ell$ exist and are finite, for some $\ell \ge 0$. Then
$$r(s) = R_0 - sR_1 + \dots + (-s)^\ell R_\ell + o(s^\ell). \quad (5.1.10)$$
Proof. Same as the previous corollary, but with $f(s) = r_i(s)$. □

A similar series expansion will be derived in Section 5.3 for the expected total discounted reward function $v(s)$. But first, some preliminary results on the algebraic structure of the first moment $Q_1$ of the transition probability function are derived.

5.2. Decomposition of the first moment

Let us explore the algebraic properties of the moments $Q_0$ and $Q_1$ of the transition probability function $Q(t)$. These properties will be useful for expressing the solution of the singular equations which will be encountered when deriving the Laurent expansion of the discounted reward function $v(s)$. The properties of the zeroth moment $Q_0$ were explored in Chapters II and III, since $Q_0 = Q(\infty) = P$, where $P = (p_{ij})$ is the transition matrix of the Markov chain embedded at the transition epochs. For the rest of this section, we will denote $Q_0$ by $P$. Also, we assume that $P$ is a stochastic matrix. Let also $A = I - P$. Then, as in Chapter III, $A$ satisfies Equation (1.3.1), and hence $A^\#$ exists and is given by (1.3.2). Again, we will need the limiting matrix $P^* = I - AA^\#$. Recall that
$$P^* = S^{-1}\begin{pmatrix} 0 & 0 \\ 0 & I \end{pmatrix} S, \quad (5.2.1)$$
where the identity matrix is $m \times m$, with $m$ the number of recurrent classes of $P$. For the rest of this section, we denote the first moment $Q_1$ of $Q(t)$ by $H$, i.e., $Q_1 = H = (h_{ij})$. By Equation (5.1.6), if $p_{ij} > 0$ then $h_{ij}/p_{ij}$ is the expected holding time in state $i$, conditioned on the next state being $j$, for $i,j = 1,\dots,n$.
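For the continuous time special case, the normalized moments (5.1.6) can be evaluated numerically. The sketch below (numpy; example data assumed) checks, by midpoint-rule quadrature, that $Q_0 = P$ and that the first moment is $H = TP$, consistent with $h_{ij}/p_{ij} = \tau_i$ for exponential holding times.

```python
import numpy as np

# continuous-time special case: Q_ij(t) = p_ij (1 - exp(-lambda_i t))
P = np.array([[0.0, 1.0], [1.0, 0.0]])
lam = np.array([1.0, 0.5])                 # rates; tau_i = 1 / lambda_i

dt = 2e-4
t = (np.arange(400_000) + 0.5) * dt        # midpoint grid on [0, 80]
# density of Q(t): dQ_ij/dt = p_ij lambda_i exp(-lambda_i t)
dens = (lam[:, None, None] * np.exp(-lam[:, None, None] * t)) * P[:, :, None]

Q0 = dens.sum(axis=2) * dt                 # zeroth moment: Q(inf) = P
Q1 = (dens * t).sum(axis=2) * dt           # first moment: H = T P

assert np.allclose(Q0, P, atol=1e-4)
assert np.allclose(Q1, np.diag(1.0 / lam) @ P, atol=1e-4)
```

The quadrature horizon and step are chosen so that the truncated exponential tail is negligible; this is a numerical illustration only, not part of the algorithms of the text.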
The following results show that if the matrix $H$ has at least one nonzero entry in each of the recurrent classes of the embedded Markov chain $P$, then the expected number of transitions in a finite time interval is finite.

Lemma 5.2.1. Suppose $P$ is irreducible, and suppose that $H \ne 0$. Then
$$\mathrm{rank}(P^*HP^*) = \mathrm{rank}(P^*) = 1.$$
Proof. Let $\pi > 0$ be the row vector of the stationary probabilities of $P$. Then $P^* = e\pi$ and $\mathrm{rank}(P^*) = 1$. Now
$$HP^* = He\pi = a\pi,$$
where $a_i = \sum_{j=1}^n h_{ij}$, for $i = 1,\dots,n$. There is at least one $i$ for which $a_i > 0$, because $H \ne 0$. Then
$$P^*HP^* = e\pi a\pi = \beta\,e\pi,$$
where $\beta = \sum_{j=1}^n \pi_j a_j > 0$, and the result follows. □

Lemma 5.2.2. Suppose $P$ has $m$ recurrent classes, and suppose $H$ has at least one nonzero entry in each recurrent class of $P$. Then
$$\mathrm{rank}(P^*HP^*) = \mathrm{rank}(P^*) = m.$$
Proof. That $\mathrm{rank}(P^*) = m$ is clear from Equation (5.2.1). We prove the other equality in the case $m = 2$. There is no added difficulty with $m > 2$, except for heavier notation. As in Theorem 2.1.3, let $\Gamma$ be a permutation matrix such that
$$P = \Gamma^{-1}\begin{pmatrix} P_{00} & P_{01} & P_{02} \\ 0 & P_{11} & 0 \\ 0 & 0 & P_{22} \end{pmatrix}\Gamma.$$
Then
$$P^* = \Gamma^{-1}\begin{pmatrix} 0 & P_{01}^* & P_{02}^* \\ 0 & P_{11}^* & 0 \\ 0 & 0 & P_{22}^* \end{pmatrix}\Gamma,$$
and it is easy to verify that
$$P^*HP^* = \Gamma^{-1}\begin{pmatrix} 0 & * & * \\ 0 & P_{11}^*H_{11}P_{11}^* & 0 \\ 0 & 0 & P_{22}^*H_{22}P_{22}^* \end{pmatrix}\Gamma.$$
This implies that $\mathrm{rank}(P^*HP^*) \ge m$, because $\mathrm{rank}(P_{ii}^*H_{ii}P_{ii}^*) = 1$, for $i = 1,\dots,m$, by Lemma 5.2.1. The reverse inequality will be proved in the corollary. □

Now let us apply the similarity transformation of Equation (5.2.1) to the matrix $H$, and write
$$SHS^{-1} = \begin{pmatrix} H_{11} & H_{12} \\ H_{21} & H_{22} \end{pmatrix}, \quad (5.2.2)$$
where $H_{22}$ is an $m \times m$ matrix (note that $H_{11}$ and $H_{22}$ are not the same here as in the proof of Lemma 5.2.2).

Corollary 5.2.3. $H_{22}$ is nonsingular.
Proof. Using Equations (5.2.1) and (5.2.2), we have that
$$SP^*HP^*S^{-1} = \begin{pmatrix} 0 & 0 \\ 0 & H_{22} \end{pmatrix}.$$
Hence $\mathrm{rank}(H_{22}) = \mathrm{rank}(P^*HP^*) = m$, by Lemma 5.2.2. The result follows because $H_{22}$ is an $m \times m$ matrix, thus of full rank. □

This corollary has a consequence that will be important for deriving (and evaluating) the Laurent expansion of $v(s)$. Recall from Chapter I that a singular system of equations is said to be compatible if it has a solution.
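Lemma 5.2.1 is easy to illustrate numerically. The sketch below (numpy; example data assumed) verifies that $P^*HP^* = \beta P^*$ with $\beta = \pi He > 0$ for an irreducible two-state chain, so that the rank condition holds.

```python
import numpy as np

# irreducible two-state chain (example data assumed)
P = np.array([[0.0, 1.0], [1.0, 0.0]])
pi = np.array([0.5, 0.5])                   # stationary distribution of P
Pstar = np.outer(np.ones(2), pi)            # limiting matrix P* = e pi
H = np.array([[0.0, 1.0], [2.0, 0.0]])      # a nonzero first moment

# P* H P* = beta P* with beta = pi H e > 0, so rank(P* H P*) = rank(P*) = 1
beta = pi @ H @ np.ones(2)
assert beta > 0
assert np.allclose(Pstar @ H @ Pstar, beta * Pstar)
assert np.linalg.matrix_rank(Pstar @ H @ Pstar) == 1
```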
The following theorem shows that we can select the solution $x$ of a compatible system
$$Ax = b \quad (5.2.3)$$
so that the system
$$Ay = c - Hx \quad (5.2.4)$$
is compatible as well.

Theorem 5.2.4. Suppose $A$ is decomposed as in Equation (1.3.1), with $B = I - P_{11}$, and $H$ as in (5.2.2), and suppose
$$Sb = \begin{pmatrix} b_1 \\ 0 \end{pmatrix}, \quad Sc = \begin{pmatrix} c_1 \\ c_2 \end{pmatrix}.$$
Then the system (5.2.3) is compatible and has a unique solution $x$ such that (5.2.4) is compatible, given by
$$x = S^{-1}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = S^{-1}\begin{pmatrix} B^{-1}b_1 \\ H_{22}^{-1}(c_2 - H_{21}x_1) \end{pmatrix}. \quad (5.2.5)$$
Moreover, there are infinitely many solutions $y$ to Equation (5.2.4), given by
$$y = S^{-1}\begin{pmatrix} y_1 \\ y_2 \end{pmatrix} = S^{-1}\begin{pmatrix} B^{-1}(c_1 - H_{11}x_1 - H_{12}x_2) \\ y_2 \end{pmatrix}, \quad (5.2.6)$$
where $y_2$ is arbitrary.

Proof. Recall from Section 1.4 that the system (5.2.3) is compatible iff $b_2 = 0$. Writing (5.2.3) and (5.2.4) in decomposed form, we have
$$Bx_1 = b_1, \quad (5.2.7)$$
$$By_1 = c_1 - H_{11}x_1 - H_{12}x_2 \quad (5.2.8)$$
and
$$0 = c_2 - H_{21}x_1 - H_{22}x_2. \quad (5.2.9)$$
Since $B$ is nonsingular, $x_1$ is uniquely determined by (5.2.7), and (5.2.4) is compatible iff $x_2$ satisfies (5.2.9). But since $H_{22}$ is nonsingular, by Corollary 5.2.3, we have that $x_2$ is uniquely determined. Finally, $y_1$ is uniquely determined by (5.2.8), and $y_2$ is arbitrary since it appears in none of the equations. □

The decomposition approach is not only a convenient way to express the solution of the above equations; it also provides an efficient algorithm for its computation. We saw, in Chapter I, how the matrix $A$ can be efficiently decomposed in a way that is equivalent to Equation (1.3.1). Moreover, we also saw how to use the decomposition to compute the component $x_1$ of the solution, satisfying (5.2.7). We now show how to compute $x_2$ efficiently, to satisfy (5.2.9). Let $d_2 = c_2 - H_{21}x_1$. Then $x_2$ is the solution of the nonsingular system $H_{22}x_2 = d_2$. The following lemmas show how to use the decomposition (1.3.1) to compute $z_2 = H_{21}x_1$ and $H_{22}$ efficiently. First, let
$$z = \begin{pmatrix} z_1 \\ z_2 \end{pmatrix} = \begin{pmatrix} H_{11}x_1 \\ H_{21}x_1 \end{pmatrix}. \quad (5.2.10)$$
Clearly, if we can compute $z$ efficiently, then we have $d_2 = c_2 - z_2$ immediately.

Lemma 5.2.5.
$$z = SHS^{-1}\begin{pmatrix} x_1 \\ 0 \end{pmatrix}.$$
Proof. We have
$$SHS^{-1}\begin{pmatrix} x_1 \\ 0 \end{pmatrix} = \begin{pmatrix} H_{11} & H_{12} \\ H_{21} & H_{22} \end{pmatrix}\begin{pmatrix} x_1 \\ 0 \end{pmatrix} = \begin{pmatrix} H_{11}x_1 \\ H_{21}x_1 \end{pmatrix} = z. \;\square$$
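Theorem 5.2.4 can be exercised numerically. The sketch below (numpy; all data assumed) treats the unichain case $m = 1$, where compatibility of a singular system with matrix $A = I - P$ reduces to orthogonality against the stationary vector $\pi$. For brevity it replaces the $S$-decomposition by a least-squares solve and shifts $x$ along the null space of $A$; the resulting $x$ is the same one selected by (5.2.5).

```python
import numpy as np

P = np.array([[0.2, 0.8], [0.6, 0.4]])       # irreducible, m = 1
A = np.eye(2) - P
pi = np.array([0.6, 0.8]) / 1.4              # stationary: pi A = 0
H = np.array([[0.4, 1.6], [1.2, 0.8]])       # first moment (here H = 2P)
b = np.array([1.0, -0.75])                   # chosen so that pi b = 0
c = np.array([2.0, 1.0])

assert np.isclose(pi @ b, 0)                 # Ax = b is compatible
x_p = np.linalg.lstsq(A, b, rcond=None)[0]   # one particular solution
e = np.ones(2)
# shift x along null(A) = span(e) so that Ay = c - Hx is compatible,
# i.e., pi (c - Hx) = 0; pi H e > 0 by Lemma 5.2.1
xi = pi @ (c - H @ x_p) / (pi @ H @ e)
x = x_p + xi * e
assert np.allclose(A @ x, b)
assert np.isclose(pi @ (c - H @ x), 0)
y = np.linalg.lstsq(A, c - H @ x, rcond=None)[0]
assert np.allclose(A @ y, c - H @ x)
```

Any further shift of $y$ along the null space of $A$ gives the remaining solutions of (5.2.4), matching the arbitrary component $y_2$ in (5.2.6).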
Hence $z$ is obtained by premultiplying $\binom{x_1}{0}$ by $S^{-1}$, then by the matrix $H$, and finally by $S$. The system $H_{22}x_2 = d_2$ can be solved efficiently using Gaussian elimination, provided that the matrix $H_{22}$ can be computed efficiently. That this is the case can be seen as follows.

Lemma 5.2.6.
$$H_{22} = (0\ \ I)\,SHS^{-1}\begin{pmatrix} 0 \\ I \end{pmatrix},$$
where $I$ is the $m \times m$ identity matrix.
Proof. Because
$$SHS^{-1} = \begin{pmatrix} H_{11} & H_{12} \\ H_{21} & H_{22} \end{pmatrix}$$
by Equation (5.2.2), we get
$$(0\ \ I)\,SHS^{-1}\begin{pmatrix} 0 \\ I \end{pmatrix} = (0\ \ I)\begin{pmatrix} H_{12} \\ H_{22} \end{pmatrix} = H_{22}. \;\square$$

This implies that, in order to compute $H_{22}$, we simply apply the transformation $S^{-1}$ to $m$ vectors, then we apply $H$, then $S$, and finally $m$ row vectors. The total number of flops is proportional to $mn^2$. In fact, this product and the PLU decomposition of $H_{22}$ need to be done once only, because the solution for several right hand vectors $b$ and $c$ can be computed from the same decomposition.

The above method does not require parsing the Markov chain into its recurrent subsets. The chain structure was used only to derive the properties of the decomposition of the matrix $H$. A different approach was used in Denardo (1971), where the solution vector $x$ is expressed in terms of $P^*$ and $A^\#$ and, within each subchain, in terms of the coefficient $\beta$ of the proof of Lemma 5.2.1. Our method is more economical because it does not require $P^*$ and $A^\#$ to be explicitly computed.

Another algorithm that does not require the parsing into subchains is the method of Federgruen and Spreen (1980). It is based on a modified Gaussian elimination procedure similar to the PLUQ decomposition of Chapter I, which we used in the fast decomposition algorithm. Their formulation of the procedure does not include row interchanges, however, and fails to produce the desired elimination for some transition matrices. An example is
$$P = \begin{pmatrix} 1 & 0 \\ \tfrac12 & \tfrac12 \end{pmatrix}, \quad\text{for which}\quad A = I - P = \begin{pmatrix} 0 & 0 \\ -\tfrac12 & \tfrac12 \end{pmatrix}.$$
This matrix $A$ cannot be made upper triangular by Gaussian elimination without row interchanges.
Fortunately, their results are still valid, because it suffices to relabel the states in order to produce a transition matrix $P$ for which the elimination is valid. In the above example, relabeling the two states gives
$$A = \begin{pmatrix} \tfrac12 & -\tfrac12 \\ 0 & 0 \end{pmatrix},$$
which is already upper triangular.

5.3. Series expansion of the discounted rewards

In this section, we return to the original notation $Q_0, Q_1, \dots$ for the moments of the transition probability function. At the root of the Laurent expansion of $v(s)$ of Denardo (1971) is the following renewal equation for $V(t)$, which is obtained by conditioning on the time $x$ of the first transition. Recalling the definitions of $Q(t)$ and $R(t)$, we have, in matrix form,
$$V(t) = R(t) + \int_{0^-}^t dQ(x)\,V(t - x), \quad (5.3.1)$$
where the integral includes the possible instantaneous rewards at times 0 and $t$. By the convolution theorem [see Heyman and Sobel (1982), Property A-12a], the Laplace-Stieltjes transforms satisfy $v(s) = r(s) + q(s)v(s)$, which can be written as
$$[I - q(s)]v(s) = r(s). \quad (5.3.2)$$
This determines the function $v(s)$ uniquely, for small $s$, because of the following lemma.

Lemma 5.3.1. Suppose $Q_1$ has at least one nonzero entry per recurrent class of $Q_0$. Then the matrix $I - q(s)$ is nonsingular for all sufficiently small $s > 0$.
Proof. By Equation (5.1.9), we have
$$I - q(s) = I - Q_0 + sQ_1 + o(s).$$
Decompose $A = I - Q_0$ as in Equation (1.3.1) and write $Q_1$ as in Equation (5.2.2). We have
$$S[I - q(s)]S^{-1} = \begin{pmatrix} B + sH_{11} + o(s) & sH_{12} + o(s) \\ sH_{21} + o(s) & sH_{22} + o(s) \end{pmatrix}, \quad (5.3.3)$$
where $B$ is nonsingular, and recall that $H_{22}$ is also nonsingular, by Corollary 5.2.3. This implies that for $s > 0$ small enough, both $B + sH_{11} + o(s)$ and $sH_{22} + o(s)$ are nonsingular, and the result follows. □

We now obtain the Laurent expansion of the discounted rewards $v(s)$ in the special case when all moments exist. This is a special case of Theorem 1 of Denardo (1971).

Theorem 5.3.2. Suppose all moments of $q(s)$ and $r(s)$ exist and are finite. Then
$$v(s) = s^{-1}V_{-1} + V_0 + sV_1 + s^2V_2 + \dots, \quad (5.3.4)$$
where the $V_i$, for $i = -1, 0, 1, 2, \dots$, are uniquely determined by
$$AV_{-1} = 0 \quad (5.3.5)$$
and
$$AV_i = (-1)^i R_i + \sum_{j=-1}^{i-1}(-1)^{i-j}Q_{i-j}V_j, \quad i = 0,1,2,\dots \quad (5.3.6)$$
Proof.
Rewrite Equation (5.3.2) using the infinite expansions corresponding to (5.1.9), (5.1.10) and (5.3.4). We have
$$[I - Q_0 + sQ_1 - s^2Q_2 + \dots]\,[s^{-1}V_{-1} + V_0 + sV_1 + \dots] = [R_0 - sR_1 + s^2R_2 - \dots].$$
Collecting terms, we get an infinite sequence of linear systems of equations, which can be written as follows, in block matrix form:
$$\begin{pmatrix} A & 0 & 0 & \cdots \\ Q_1 & A & 0 & \cdots \\ -Q_2 & Q_1 & A & \cdots \\ Q_3 & -Q_2 & Q_1 & \ddots \\ \vdots & & & \ddots \end{pmatrix}\begin{pmatrix} V_{-1} \\ V_0 \\ V_1 \\ V_2 \\ \vdots \end{pmatrix} = \begin{pmatrix} 0 \\ R_0 \\ -R_1 \\ R_2 \\ \vdots \end{pmatrix}, \quad (5.3.7)$$
where $A = I - Q_0$. The notation of (5.3.7) is equivalent to the sequence of (5.3.5) and (5.3.6).

We still have to show that the $V_i$ terms are uniquely determined. We do this by induction on $i$. The induction hypothesis is that $V_{-1}, \dots, V_{i-1}$ are uniquely determined by rows $j = -1, \dots, i$, for some $i \ge -1$. We have to show that there is a unique solution $V_i$ for which row $i+1$ is a compatible system. The hypothesis is trivially true for $i = -1$, because the system (5.3.5) is homogeneous. Now with $V_{-1}, \dots, V_{i-1}$ fixed, we can write Equation (5.3.6) as
$$AV_i = b, \quad (5.3.8)$$
where $b = (-1)^i R_i + \sum_{j=-1}^{i-1}(-1)^{i-j}Q_{i-j}V_j$. The next equation, with $V_{i+1}$, can be written as
$$AV_{i+1} = c - Q_1V_i, \quad (5.3.9)$$
where $c = (-1)^{i+1}R_{i+1} + \sum_{j=-1}^{i-1}(-1)^{i+1-j}Q_{i+1-j}V_j$. By Theorem 5.2.4, there is a unique $V_i$ satisfying (5.3.8) for which (5.3.9) is compatible. This completes the proof. □

The result of Theorem 5.3.2 can also be extended to the case when only a finite number of moments are known. Then the terms of the series for $v(s)$ are the solution of a truncated system of equations. The following result is Lemma 2 of Denardo (1971). We give a new proof based on matrix decomposition.

Lemma 5.3.3. Suppose $b(s) = o(s)$ and let $x(s)$ be the solution of
$$[I - q(s)]x(s) = b(s). \quad (5.3.10)$$
Then $x(s) = o(1)$, i.e., $\lim_{s\to 0^+}x(s) = 0$.

Proof.
Write $I - q(s)$ as in Equation (5.3.3) and let
$$S[I - q(s)]S^{-1} = \begin{pmatrix} A_{11}(s) & A_{12}(s) \\ A_{21}(s) & A_{22}(s) \end{pmatrix}, \quad Sx(s) = \begin{pmatrix} x_1(s) \\ x_2(s) \end{pmatrix}, \quad Sb(s) = \begin{pmatrix} b_1(s) \\ b_2(s) \end{pmatrix}.$$
Then Equation (5.3.10) becomes
$$\begin{pmatrix} A_{11}(s) & A_{12}(s) \\ A_{21}(s) & A_{22}(s) \end{pmatrix}\begin{pmatrix} x_1(s) \\ x_2(s) \end{pmatrix} = \begin{pmatrix} b_1(s) \\ b_2(s) \end{pmatrix}, \quad (5.3.11)$$
where
$$A_{11}(s) = B + sH_{11} + o(s), \quad A_{12}(s) = sH_{12} + o(s),$$
$$A_{21}(s) = sH_{21} + o(s), \quad A_{22}(s) = sH_{22} + o(s).$$
Hence $A_{22}^{-1}(s) = \frac{1}{s}H_{22}^{-1} + o(s^{-1})$. Now let $C(s) = A_{11}(s) - A_{12}(s)A_{22}^{-1}(s)A_{21}(s)$. Then $C(s) = B + o(1)$, so that $C^{-1}(s) = B^{-1} + o(1)$. We can write the solution of (5.3.11) directly as
$$x_1(s) = C^{-1}(s)\big[b_1(s) - A_{12}(s)A_{22}^{-1}(s)b_2(s)\big] = o(s)$$
and
$$x_2(s) = A_{22}^{-1}(s)\big[b_2(s) - A_{21}(s)x_1(s)\big] = o(1).$$
This completes the proof. □

The following result is Theorem 1 of Denardo (1971), giving the truncated series of $v(s)$. Our proof is similar to that of Denardo (1971), but we emphasize the formulation with a block lower triangular matrix.

Theorem 5.3.4. Suppose $Q_{k+2}$ and $R_{k+1}$ are finite, for some $k \ge 0$. Then
$$v(s) = s^{-1}V_{-1} + V_0 + sV_1 + \dots + s^kV_k + o(s^k), \quad (5.3.12)$$
where $V_{-1}, \dots, V_k$ are uniquely determined by the finite system of equations
$$\begin{pmatrix} A & 0 & \cdots & & 0 \\ Q_1 & A & 0 & \cdots & 0 \\ -Q_2 & Q_1 & A & \cdots & 0 \\ \vdots & & \ddots & \ddots & \vdots \\ (-1)^{k+1}Q_{k+2} & \cdots & -Q_2 & Q_1 & A \end{pmatrix}\begin{pmatrix} V_{-1} \\ V_0 \\ V_1 \\ \vdots \\ V_{k+1} \end{pmatrix} = \begin{pmatrix} 0 \\ R_0 \\ -R_1 \\ \vdots \\ (-1)^{k+1}R_{k+1} \end{pmatrix}, \quad (5.3.13)$$
and for which $V_{k+1}$ is determined up to a term $P^*y$, where $y$ is arbitrary.

Proof. Write $v(s)$ as
$$v(s) = s^{-1}V_{-1} + V_0 + sV_1 + \dots + s^kV_k + e_k(s).$$
Then, using (5.1.9) with $k+2$ and (5.1.10) with $\ell = k+1$, Equation (5.3.2) becomes
$$[A + sQ_1 - \dots - (-s)^{k+2}Q_{k+2} + o(s^{k+2})]\,[s^{-1}V_{-1} + V_0 + \dots + s^kV_k + e_k(s)] = [R_0 - sR_1 + \dots + (-s)^{k+1}R_{k+1} + o(s^{k+1})]. \quad (5.3.14)$$
Collecting terms, we get the system
$$\begin{pmatrix} A & 0 & \cdots & 0 \\ Q_1 & A & & 0 \\ -Q_2 & Q_1 & \ddots & \vdots \\ \vdots & \ddots & \ddots & \\ (-1)^kQ_{k+1} & \cdots & Q_1 & A \end{pmatrix}\begin{pmatrix} V_{-1} \\ V_0 \\ \vdots \\ V_k \end{pmatrix} = \begin{pmatrix} 0 \\ R_0 \\ \vdots \\ (-1)^kR_k \end{pmatrix} \quad (5.3.15)$$
together with
$$[I - q(s)]e_k(s) = s^{k+1}\Big[(-1)^{k+1}R_{k+1} + \sum_{j=-1}^k(-1)^{k+1-j}Q_{k+1-j}V_j\Big] + o(s^{k+1}). \quad (5.3.16)$$
The bracket in (5.3.16) equals $AV_{k+1}$, by the last row of (5.3.13), and $[I - q(s)](sV_{k+1}) = sAV_{k+1} + O(s^2)$. Hence, dividing (5.3.16) by $s^k$, we get
$$[I - q(s)]\big[e_k(s)/s^k - sV_{k+1}\big] = o(s),$$
so that $e_k(s)/s^k - sV_{k+1} = o(1)$ by Lemma 5.3.3, and therefore $e_k(s) = o(s^k)$. The extra equation in (5.3.13) ensures that $V_k$ is uniquely determined and finite.
The result on $V_{k+1}$ follows because $AP^* = 0$. □

5.4. Semi-Markov decision processes

A semi-Markov decision process is a model in which an action has to be selected each time the system makes a transition, and for which the transition probability function $Q_{ij}^a(t)$ and the one-period expected reward function $R_i^a(t)$ depend on the selected action $a \in A_i$, for $i,j = 1,\dots,n$. Without loss of optimality, and as in Denardo (1971), we consider only the policies $\delta$ which are stationary. Again, we denote by $\Delta = A_1 \times \dots \times A_n$ the set of stationary policies. Given a fixed policy $\delta \in \Delta$, the model reduces to the semi-Markov reward process $(N, R^\delta, Q^\delta)$, where $N = \{1,\dots,n\}$,
$$R^\delta(t) = \big(R_i^{\delta_i}(t)\big) \quad\text{and}\quad Q^\delta(t) = \big(Q_{ij}^{\delta_i}(t)\big), \quad i,j = 1,\dots,n.$$
Correspondingly, the moments $Q_k^\delta$ and $R_k^\delta$, for $k = 0,1,\dots$, are defined by Equations (5.1.6) and (5.1.7), and the Laurent expansion of the expected total discounted rewards $v^\delta(s)$ is given by Theorem 5.3.4, with terms $V_{-1}^\delta, \dots, V_k^\delta$, provided $Q_{k+2}^\delta$ and $R_{k+1}^\delta$ are finite.

Similarly to the discrete time case, a stationary policy $\delta \in \Delta$ is said to be $s$-optimal if
$$v^\delta(s) \ge v^\gamma(s), \quad \forall\gamma \in \Delta.$$
Further, if $\delta$ is $s$-optimal for all sufficiently small $s > 0$, then $\delta$ is said to be optimal. Theorem 3 of Denardo (1971) gives a sufficient condition for an optimal policy to exist. We do not repeat it here. Moreover, a policy $\delta$ is said to be $k$-discount optimal if
$$\liminf_{s\to 0^+} s^{-k}\big[v^\delta(s) - v^\gamma(s)\big] \ge 0, \quad \forall\gamma \in \Delta.$$
As noted in Denardo (1971), with $\Delta_{-2} = \Delta$ and
$$\Delta_k = \{\delta \in \Delta_{k-1} : V_k^\delta \ge V_k^\gamma,\ \forall\gamma \in \Delta_{k-1}\}, \quad k = -1, 0, \dots, \quad (5.4.1)$$
every policy in $\Delta_k$ is $k$-discount optimal (provided that the relevant moments exist). Further, if there is an optimal policy $\delta$, then $\delta \in \Delta_\infty$. Unlike in the discrete time case, there may be no finite $k$ such that a $k$-discount optimal policy is optimal. Of course, if there is only one $k$-discount optimal policy (i.e., if $|\Delta_k| = 1$), then it is optimal. Policies in $\Delta_{-1}$ and $\Delta_0$ are called gain optimal and bias optimal, respectively.
This is because $V_{-1}$ and $V_0$ can be interpreted as the gain and bias terms, since from Denardo (1971), page 483, we have
$$\lim_{t\to\infty}\frac{V(t)}{t} = V_{-1} \quad (5.4.2)$$
and
$$\lim_{t\to\infty}\big(V(t) - V_{-1}t\big) = V_0 \quad (C,1), \quad (5.4.3)$$
where the Cesàro limit is defined as
$$\lim_{t\to\infty} f(t) = g \quad (C,1) \quad\text{if}\quad \lim_{t\to\infty}\frac{1}{t}\int_0^t f(x)\,dx = g.$$
Hence $V_{-1}$ is the average reward per unit of time, in the long run, and $V_0$ is the total difference between the actual expected reward $V(t)$ and the long run average $V_{-1}t$.

Provided the required moments exist for all policies, the policy iteration method of Figure 3.5.2 can be extended for finding a $k$-discount optimal policy of a semi-Markov decision process. But now, in the policy evaluation step (i.e., Step 1), we have to solve the system (5.3.13), as in Theorem 5.3.4. The terms $V_{-1}, \dots, V_k$ can be evaluated as in Theorem 5.2.4 and Lemmas 5.2.5 and 5.2.6. Moreover, in the policy improvement step (i.e., Step 2), we take the matrix of moments $P^\gamma(k)$ as
$$P^\gamma(k) = \begin{pmatrix} Q_0 & 0 & \cdots & & 0 \\ -Q_1 & Q_0 & 0 & \cdots & 0 \\ Q_2 & -Q_1 & Q_0 & & \vdots \\ \vdots & & \ddots & \ddots & \\ (-1)^{k+2}Q_{k+2} & \cdots & Q_2 & -Q_1 & Q_0 \end{pmatrix}. \quad (5.4.4)$$
Hence the policy iteration method for discrete time Markov decision processes is a special case of the policy iteration method for semi-Markov decision processes, in which the matrix of moments is simpler.

5.5. Three special cases

In this section, we consider the discrete time and continuous time Markov decision processes as special cases of the semi-Markov decision process. This was already done in Denardo (1971), but we emphasize the similarity of the reward processes, using the matrix equations. We will see that the series expansion (5.3.4) is identical to (3.1.6) in the discrete time case, and to (4.3.1) in the continuous time case. Moreover, we will also consider the equivalence between an arbitrary semi-Markov decision process and a corresponding continuous time Markov decision process, under the average reward criterion (i.e., gain optimality). This equivalence was also considered in Ross (1970), page 161.
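The gain equivalence between a semi-Markov process and its associated continuous time process (formalized below as Theorem 5.5.3) rests on the fact that the gain depends on the holding-time distributions only through their means. A small numerical sketch (numpy; data assumed):

```python
import numpy as np

# semi-Markov data (assumed): embedded chain P, mean holding times tau,
# reward rates; R0 = T r is the zeroth moment of R(t)
P = np.array([[0.0, 1.0], [1.0, 0.0]])
tau = np.array([1.0, 2.0])
rate = np.array([3.0, 5.0])
R0 = tau * rate                        # expected reward per stage

w = np.array([0.5, 0.5])               # stationary distribution of embedded P
g_semi = (w @ R0) / (w @ tau)          # gain of the semi-Markov process

# gain of the continuous time process with rates `rate`, mean times tau
pi = w * tau / (w @ tau)               # time-stationary distribution
g_cont = pi @ rate

assert np.isclose(g_semi, g_cont)      # the two gains coincide
```

The bias terms, in contrast, involve the off-diagonal blocks of the first moment and generally differ between the two processes, as Corollary 5.5.4 below records.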
Again, we show the similarity of the reward processes using the matrix equations.

In a discrete time Markov reward process, the transition function $Q_{ij}(t)$ is zero for $0 \le t < 1$, and jumps to $p_{ij}$ for $t \ge 1$, for $i,j = 1,\dots,n$. It is convenient to assume that a reward is earned just before a transition occurs, so that $R_i(t)$ is zero for $0 \le t < 1$ and jumps to $r_i$ for $t \ge 1$. The Laplace-Stieltjes transforms are obtained directly as
$$q(s) = e^{-s}P \quad\text{and}\quad r(s) = e^{-s}r. \quad (5.5.1)$$
The discount factor $\beta$ over one time period is precisely $\beta = e^{-s}$. As in Chapter III, let $\beta = (1+\rho)^{-1}$, where $\rho$ is the one period interest rate, giving
$$q(s) = (1+\rho)^{-1}P \quad\text{and}\quad r(s) = (1+\rho)^{-1}r. \quad (5.5.2)$$
It is natural, in the discrete time case, to use $z$-transforms instead of Laplace-Stieltjes transforms, and (5.5.2) gives precisely the $z$-transforms of $Q(t)$ and $R(t)$, with parameter $\rho$. Let us denote the transforms by $q(\rho)$ and $r(\rho)$. With this notation, and using the fact that
$$(1+\rho)^{-1} = 1 - \rho + \rho^2 - \dots,$$
Equation (5.3.2) becomes
$$[I - q(\rho)]v(\rho) = r(\rho), \quad (5.5.3)$$
where
$$q(\rho) = P - \rho P + \rho^2P - \dots \quad\text{and}\quad r(\rho) = r - \rho r + \rho^2 r - \dots$$
Collecting terms, we get
$$v(\rho) = \rho^{-1}V_{-1} + V_0 + \rho V_1 + \dots, \quad (5.5.4)$$
where $V_{-1}, V_0, V_1, \dots$ are a solution of
$$\begin{pmatrix} A & 0 & 0 & \cdots \\ P & A & 0 & \cdots \\ -P & P & A & \cdots \\ P & -P & P & \ddots \\ \vdots & & & \ddots \end{pmatrix}\begin{pmatrix} V_{-1} \\ V_0 \\ V_1 \\ V_2 \\ \vdots \end{pmatrix} = \begin{pmatrix} 0 \\ r \\ -r \\ r \\ \vdots \end{pmatrix}, \quad (5.5.5)$$
with $A = I - P$, as usual. Now, left multiplying by the matrix
$$\begin{pmatrix} I & 0 & 0 & \cdots \\ I & I & 0 & \cdots \\ 0 & I & I & \cdots \\ \vdots & & \ddots & \ddots \end{pmatrix},$$
we get
$$\begin{pmatrix} A & 0 & 0 & \cdots \\ I & A & 0 & \cdots \\ 0 & I & A & \cdots \\ \vdots & & \ddots & \ddots \end{pmatrix}\begin{pmatrix} V_{-1} \\ V_0 \\ V_1 \\ \vdots \end{pmatrix} = \begin{pmatrix} 0 \\ r \\ 0 \\ \vdots \end{pmatrix}. \quad (5.5.6)$$
This is identical to the recurrence (3.2.7), (3.2.8) and (3.2.9), with $w_i = V_i$, for $i \ge -1$. Hence $v(\rho)$ has the same expansion as in Equation (3.1.6).

In the continuous time case, we have, as in Section 5.1,
$$Q_{ij}(t) = p_{ij}\big(1 - e^{-\lambda_i t}\big) \quad\text{and}\quad R_i(t) = \tau_i r_i\big(1 - e^{-\lambda_i t}\big).$$
We obtain the Laplace transforms directly as
$$q_{ij}(s) = (1 + s\tau_i)^{-1}p_{ij} = (1 - s\tau_i + s^2\tau_i^2 - \dots)p_{ij}$$
and
$$r_i(s) = (1 + s\tau_i)^{-1}\tau_i r_i = (1 - s\tau_i + s^2\tau_i^2 - \dots)\tau_i r_i,$$
where $\tau_i = 1/\lambda_i$. Using matrix notation, with $T = \mathrm{diag}(\tau_i)$, we have
(5.5.7) r(a) = Tr - sT r + s T r (5.5.8) 2 2 and 2 2 3 from which Equation (5.3.2) leads to ( A TP -T P TP 2 3 0 A TP -T^P ... 0 A TP \ ... 0 A ... 0 fV-i \ Vo Vi -. v 2 / 0 \ Tr -T r V / (5.5.9) 2 ! J Left multiplying by the following matrix (I T 0 0 I : we get (A T 0 0 0 A T 0 0 A T 0 ... 7. 0 I T 0 T •. ... 0 I •. •• • 0 ••• A 0 V'- 0 • ) (V-i \ / 0 \ Tr Vo Vi = 0 0 v2 •J By multiplying every row of the system (5.5.10) by T 1 (5.5.10) J I :J , we obtain precisely the recur- rence (4.3.3), (4.3.4) and (4.3.5), with to,- = V,-, for i > - 1 . Consider now an arbitrary semi-Markov reward process. Let P = Q and H = Qi 0 be the zeroth and first moments of Q(t), respectively, and let T be a diagonal matrix such that He = Te, where e is the column vector with all entries equal to L Then the diagonal entries of T are the expected transition times, that is, T = diag(r,). L e m m a 5.5.1. Suppose P is an irreducible, stochastic matrix. Then P*HP* — P*TP*. Proof. Because P* = eir, where rr is the row vector of stationary probabilities of P . Then HP* = Heir = Ten = TP*, 121 • and the result follows. Lemma 5.5.2. Suppose P is a stochastic matrix. Then P*HP* = P*TP*. Proof. Proceed as in the proof of Lemma 5.2.2, noting that P*HP* and P*TP* do not depend on the transient parts of H and T. • Suppose T,- > 0, for t = 1,..., n. Then T is nonsingular. Consider a continuous time reward process with expected transition times r,- and reward rates r,-, for i = 1,..., n., where RQ = TV is the zeroth moment of R[t). Theorem 5.5.3. Let g be the gain of the semi-Markov process and let g be the gain t e of the continuous time process. Then g = g . 9 e Proof. Let h and h be the respective bias vectors. Then Theorem 5.3.4 implies that a e a a (£)-u) and <«•«> (* y (£)-(£)• ™ Now decompose A as in Equation (1.3.1), and write H = S- ( " X H \.£/21 M s andl^S- (I" £ U 2> 1 "22/ \J21 i22/ Lemma 5.5.2 implies that /f 2 = ? 
2 - Both systems (5.5.11) and (5.5.12) have the same 2 2 form as (5.2.3) and (5.2.4), with 6 = 0. This implies that X\ = 0, by Equation (5.2.5), and that g, does not depend on H2\ (and g does not depend on T^i)- The result e follows because H22 = T 2- • 2 Corollary 5.5.4. The higher order terms (e.g., bias h and h , etc.) are not equal in t e general. Proof. Because they depend on H\\, H12 and #21 > which are not necessarily equal to Tu, T\2 and T21. Also, the higher moments of both Q(t) and R(t) may be different, as well. • 122 The significance of Theorem 5.5.3 is that when only the average reward per unit of time (i.e., the gain) of a semi-Markov reward process is required (like for finding a gain-optimal policy in a semi-Markov decision process), then it is not necessary to compute the decomposition of H as in Lemma 5.2.6. Rather, we can simply solve the continuous time problem, which can be done using the same algorithm as for a discrete time process. 123 Chapter VI Stationary characteristics of tandem queues Consider a queueing system with two service stations in series (i.e., in tandem). New jobs entering the system arrive at station 1. After a job has received service at station 1, it moves on to station 2. Jobs leave the system after completing service at station 2. The waiting rooms are finite. There can be at most Ni jobs at station 1 and 7V at 2 station 2 (including the jobs that are being served), with iY\ > 1 and AT > 1. The 2 blocking protocol is as follows. Let n i be the number of jobs at station 1, and n at 2 station 2. A new job arriving at station 1 when n = Ni leaves the system immediately, x without requiring any service. A job completing service at station 1 when n = JV 2 2 returns to station 1 instantly, where it is treated as a new arrival. All jobs are identical and enter the system according to a Poisson process with parameter A. A job requests a random amount of service with mean T\ at station 1 and r at station 2. 
These service requirements are mutually independent. They are also independent of those of other jobs and of the arrival process. Each station has a total service capacity (or service rate) of one unit of service per unit of time.

The service discipline at station j, for j = 1, 2, is determined by the functions γ_j and δ_j as follows [see Disney and König (1985), Section III.2]. Suppose there are n_j jobs at station j, with 1 ≤ n_j ≤ N_j. Then γ_j(n_j, ℓ) is the speed of service given to the job in position ℓ at station j, with 1 ≤ ℓ ≤ n_j. When a job leaves station j from position ℓ, the jobs in positions ℓ+1, ..., n_j move to positions ℓ, ..., n_j − 1. When a job enters station j with n_j jobs already present, it moves into position ℓ with probability δ_j(n_j + 1, ℓ), with 1 ≤ ℓ ≤ n_j + 1, and the jobs in positions ℓ, ..., n_j move to ℓ+1, ..., n_j + 1. The functions satisfy

    Σ_{ℓ=1}^{n_j} γ_j(n_j, ℓ) = Σ_{ℓ=1}^{n_j} δ_j(n_j, ℓ) = 1.

Three service disciplines of particular interest are First-In-First-Out (FIFO), Last-In-First-Out (LIFO) preemptive and processor sharing. Their functions can be expressed as follows.

    FIFO:              γ_j(n_j, 1) = 1,    δ_j(n_j, n_j) = 1;
    LIFO preemptive:   γ_j(n_j, 1) = 1,    δ_j(n_j, 1) = 1;
    Processor sharing: γ_j(n_j, ℓ) = 1/n_j, δ_j(n_j, ℓ) = 1/n_j,  ℓ = 1, ..., n_j.

When the service requirements are exponentially distributed, all three disciplines produce the same queue length process.

In the rest of this chapter, we will be interested in obtaining some stationary characteristics of tandem systems. First, in Section 6.1 we review the case in which the service requirements have an exponential distribution. In this case, we write the global balance equations using matrix notation. Next, we use the results of Chapters I and II to analyze two numerical procedures for computing the stationary probabilities of the queue length process.
In Sections 6.2 and 6.3, we consider the method of successive approximations, or power method [see e.g., Stewart (1978)], and the matrix decomposition method of Wong et al. (1977). Our original contribution is a proof of convergence of the former, taking into account the periodicity of the transition matrix, and a justification of the latter from the point of view of numerical stability. In Section 6.4, we approach the solution of the stationary probabilities from the point of view of job-local-balance (JLB). The notion of local balance was studied by several authors, including Whittle (1968) and (1969), Kingman (1969), Baskett et al. (1975), Barbour (1976), Chandy et al. (1977), Schassberger (1978) and Hordijk and Van Dijk (1981), (1982), (1983a) and (1983b).

With the blocking protocol mentioned above, job-local-balance does not hold for our tandem system. Some modifications based on results in Hordijk and Van Dijk (1981) give two other queueing systems which satisfy JLB. The stationary probabilities of the modified systems are then expressed by a simple, explicit formula which is easy to compute. It is the object of Section 6.5 to provide some new numerical evidence that the call congestions (i.e., the probabilities that a new job is rejected) of the modified systems provide a lower and an upper bound on the call congestion of the original system. While a formal proof of the validity of the bounds, under certain conditions, can be found in van Dijk and Lamond (1985), it was decided not to include it here, because the technique of proof bears little relationship with the methods discussed in this thesis.

6.1. Global balance equations

Let N_j(t) be the number of jobs at station j at time t, with j = 1, 2 and t ≥ 0. Then {N₁(t), N₂(t) : t ≥ 0} is the queue length process of the system. Suppose that the service requirements have an exponential distribution, with parameters μ₁ = (τ₁)⁻¹ and μ₂ = (τ₂)⁻¹.
We assume that the queue length process is stationary, that is, its distribution does not depend on t. Let s = (n₁, n₂). If N₁(t) = n₁ and N₂(t) = n₂, for some t ≥ 0, then we say that the system is in state s at time t. The process {N₁(t), N₂(t) : t ≥ 0} is a continuous time Markov chain [see Chapter IV] with state space S = {s = (n₁, n₂) : 0 ≤ n_i ≤ N_i, i = 1, 2}. Our model is the same as in Wong et al. (1977), with N₁ and N₂ + 1 seats.

The transition rate from a state r ∈ S to another state s ∈ S, denoted by R(r, s), is given in Table 6.1.1. Let us also define B(s) as the total rate out of state s. Then for s = (n₁, n₂) we have

    B(s) = λ a₀(s) + μ₁ a₁(s) + μ₂ a₂(s),

where a_k(s) = 0, k = 0, 1, 2, except for

    a₀(n₁, n₂) = 1  if n₁ < N₁,
    a₁(n₁, n₂) = 1  if n₁ > 0 and n₂ < N₂,
    a₂(n₁, n₂) = 1  if n₂ > 0.

    to state          condition                 rate
    (n₁+1, n₂)        n₁ < N₁                   λ
    (n₁−1, n₂+1)      n₁ > 0 and n₂ < N₂        μ₁
    (n₁, n₂−1)        n₂ > 0                    μ₂

    Table 6.1.1. Transition rates out of state (n₁, n₂)

We are interested in the stationary probabilities p(s), s ∈ S, such that

    p(s) = lim_{t→∞} Prob{system is in state s at time t}.

Those probabilities are the unique solution of the global balance equations

    p(s) B(s) = Σ_{r∈S} p(r) R(r, s),    s ∈ S,    (6.1.1)

together with the normalizing condition

    Σ_{s∈S} p(s) = 1.    (6.1.2)

Now let N = (N₁+1) × (N₂+1). Then (6.1.1) and (6.1.2) give a system of N + 1 linear equations in N unknowns. We will see, below, that one of the balance equations (6.1.1) is redundant, and that p(s) is uniquely determined by the remaining N linear equations.

But let us first look at the problem using matrix notation. Let us assume that the states s ∈ S are ordered in some way (e.g., lexicographically). Then p can be seen as a row vector, B as a diagonal matrix with diagonal entries B(s), and R as an N × N matrix with entries R(r, s).
The global balance equations (6.1.1) can then be written as

    pB = pR    (6.1.3)

and the normalizing condition (6.1.2) says that p is a probability vector. We can rewrite (6.1.3) as

    p(B − R) = 0    (6.1.4)

which can be transformed into

    π(I − P) = 0    (6.1.5)

where

    π = pB/(pBe)    (6.1.6)

and P = B⁻¹R. This transformation was used in the proof of Lemma 4.2.1, where we used the time matrix T instead of the rate matrix B. Here e is just a column vector with all entries equal to 1, and the denominator of (6.1.6) ensures that π is also a probability vector. Note that B is a diagonal matrix, so that its inverse B⁻¹ is easily computed. We will take advantage of this fact in formulating the power method, in the next section. Observe also that Q = B − R is the infinitesimal generator of the queue length process. Moreover, P is the transition matrix of the embedded Markov chain [Ross (1970), Chapter 5], for transitions out of a state, scaled to unit transition times.

6.2. Computation of the stationary probabilities

We review one method for solving Equation (6.1.5), the power method. In Section 6.3, another method, for solving (6.1.4), the method of Wong et al. (1977), will be reviewed as well. Several other numerical methods for computing the stationary probabilities of Markov chains are described in Stewart (1978), Kaufman (1983) and Harrod and Plemmons (1984).

The principal advantage of the power method, for solving the tandem queue problem, is its simplicity. It is easier to implement than other, faster converging, iterative methods. The method of Wong et al. (1977), on the other hand, takes direct advantage of the special structure of the infinitesimal generator Q = B − R of Equation (6.1.4). It effectively reduces the original N × N problem [with N = (N₁+1)(N₂+1)] to a (2N₁+2) × (2N₁+2) problem. Its implementation, however, is significantly more difficult than the power method, because it requires the computation of a Jordan canonical form.
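The construction above is easy to check on a small instance. The Python sketch below (an illustration, not part of the thesis's own Pascal code) builds R and B from Table 6.1.1, forms the embedded chain P = B⁻¹R, solves π(I − P) = 0, and recovers p by inverting (6.1.6); the rates and buffer sizes are made-up test values.

```python
import numpy as np
from itertools import product

# Made-up parameters: arrival rate, service rates, buffer sizes.
lam, mu1, mu2 = 1.0, 0.8, 1.2
N1, N2 = 3, 2
states = list(product(range(N1 + 1), range(N2 + 1)))  # lexicographic order
idx = {s: k for k, s in enumerate(states)}
N = len(states)

# Transition rates of Table 6.1.1.
R = np.zeros((N, N))
for (n1, n2) in states:
    k = idx[(n1, n2)]
    if n1 < N1: R[k, idx[(n1 + 1, n2)]] = lam               # arrival
    if n1 > 0 and n2 < N2: R[k, idx[(n1 - 1, n2 + 1)]] = mu1  # move 1 -> 2
    if n2 > 0: R[k, idx[(n1, n2 - 1)]] = mu2                # departure
B = np.diag(R.sum(axis=1))  # total rate out of each state

# Embedded chain P = B^{-1} R and its stationary row vector pi (6.1.5).
P = np.linalg.solve(B, R)
A = np.vstack([(np.eye(N) - P).T, np.ones(N)])
pi = np.linalg.lstsq(A, np.r_[np.zeros(N), 1.0], rcond=None)[0]

# Recover p by inverting (6.1.6): p is proportional to pi B^{-1}.
p = pi @ np.linalg.inv(B)
p /= p.sum()
print(np.allclose(p @ (B - R), 0))  # global balance (6.1.4) holds
```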
All numerical results in this chapter were obtained with the power method.

Let us analyze the solution of Equation (6.1.5) by the power method. We will proceed to compute π, from which we obtain p simply as

    p = πB⁻¹/(πB⁻¹e).

The advantage of solving (6.1.5) instead of (6.1.4) is that π can be seen as the left hand eigenvector of P associated with the eigenvalue 1. As we will see, this eigenvector can be computed iteratively using the power method [see Wilkinson (1965)].

Lemma 6.2.1. The embedded Markov chain (i.e., with transition matrix P) is ergodic (i.e., P is irreducible).

Proof. We have to show that when the system is in any state r ∈ S, it can reach any other state s ∈ S with positive probability, in a finite number of steps. This is the case when λ, μ₁ and μ₂ are positive (as we assume), because it is possible, with an appropriate order of arrivals and completions, to reach any state s = (n₁, n₂) ∈ S from state (0, 0) with n₁ + n₂ arrivals and n₂ service completions at station 1. Similarly, the system can return to state (0, 0) with n₁ service completions at station 1 and n₁ + n₂ service completions at station 2. •

Theorem 6.2.2. There is a unique probability vector that is a solution of Equation (6.1.5).

Proof. Since the embedded chain is ergodic, this follows from Theorem 5.1.2 of Kemeny and Snell (1960). [Also from our Lemmas 2.2.1 and 2.2.5.] •

We know from Corollary 2.4.5 that the power method converges to the stationary probability vector when P is aperiodic. We proceed to show that for the tandem queue problem, the transition matrix P is periodic, with period 3. Let us classify the states into the following three sets:

    S_m = {s = (n₁, n₂) ∈ S : (n₁ + 2n₂) mod 3 = m},    m = 0, 1, 2.    (6.2.1)

Lemma 6.2.3. The embedded Markov chain is periodic with period 3.

Proof. We show that all the possible transitions out of a state r ∈ S_k are into states s ∈ S_m, where m = (k + 1) mod 3.
Let r = (n₁, n₂); then k = (n₁ + 2n₂) mod 3, according to (6.2.1). There are at most three possible transitions, as in Table 6.1.1:

    if n₁ < N₁                then m = [(n₁ + 1) + 2n₂] mod 3;
    if n₁ > 0 and n₂ < N₂     then m = [(n₁ − 1) + 2(n₂ + 1)] mod 3;
    if n₂ > 0                 then m = [n₁ + 2(n₂ − 1)] mod 3.

It is easy to verify that m = (k + 1) mod 3 in all three cases. •

Lemma 6.2.4. Σ_{s∈S_k} π(s) = 1/3, k = 0, 1, 2.

Proof. Because the embedded Markov chain spends exactly one third of the time in any class S_k. •

Following Varga (1962), let us assume that the states are ordered such that S₀ comes first, then S₁ and then S₂. We have

    P = [ 0     P₀₁   0   ]        π = (π₀  π₁  π₂).
        [ 0     0     P₁₂ ]  and
        [ P₂₀   0     0   ]

Then

    P³ = [ T₀   0    0  ]
         [ 0    T₁   0  ]
         [ 0    0    T₂ ]

where T₀ = P₀₁P₁₂P₂₀, etc.

Lemma 6.2.5. Each T_k, k = 0, 1, 2, is an irreducible, aperiodic transition matrix.

Proof. See e.g., Varga (1962), page 44, exercise 4. [See also the proof of our Theorem 2.2.9.] •

Lemma 6.2.6. For k = 0, 1, 2, let q_k be the stationary probability vector of T_k. Then π_k = (1/3) q_k.

Proof. From Equation (6.1.5), we have π = πP, and hence π = πP³. But then π_k = π_k T_k, and π_k has weight 1/3, by Lemma 6.2.4. •

Theorem 6.2.7. For k = 0, 1, 2, let q_k^(0) be a probability vector, and q_k^(n) = q_k^(0) T_kⁿ. Then

    q_k = lim_{n→∞} q_k^(0) T_kⁿ

is the stationary distribution of T_k. [See Kemeny and Snell (1960), Theorem 4.1.6 for the proof, and also our Corollary 2.4.5.]

Corollary 6.2.8. Let π^(n) = π^(0) Pⁿ. Then

    lim_{n→∞} π^(n) = π

if and only if

    Σ_{s∈S_k} π^(0)(s) = 1/3,    k = 0, 1, 2.    (6.2.2)

Proof. Suppose that Σ_{s∈S_k} π^(0)(s) = w_k, for some arbitrary numbers w_k, k = 0, 1, 2. Then

    lim_{n→∞} π^(3n)   = (w₀q₀, w₁q₁, w₂q₂),
    lim_{n→∞} π^(3n+1) = (w₂q₀, w₀q₁, w₁q₂),
    lim_{n→∞} π^(3n+2) = (w₁q₀, w₂q₁, w₀q₂),

and the result follows from Lemma 6.2.6. •

The above results indicate that the stationary probabilities can be computed iteratively, using the power method [Wilkinson (1965), page 570], which we can formulate as in Figure 6.2.1.
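Lemma 6.2.3 and the block diagonal form of P³ can be verified directly on a small instance; the sketch below (an illustration with made-up rates and buffer sizes) classifies the states by (6.2.1) and checks that every positive entry of P moves mass from S_k into S_{(k+1) mod 3}.

```python
import numpy as np
from itertools import product

# Made-up test parameters.
lam, mu1, mu2 = 1.0, 0.8, 1.2
N1, N2 = 3, 2
states = list(product(range(N1 + 1), range(N2 + 1)))
idx = {s: k for k, s in enumerate(states)}
N = len(states)
R = np.zeros((N, N))
for (n1, n2) in states:
    k = idx[(n1, n2)]
    if n1 < N1: R[k, idx[(n1 + 1, n2)]] = lam
    if n1 > 0 and n2 < N2: R[k, idx[(n1 - 1, n2 + 1)]] = mu1
    if n2 > 0: R[k, idx[(n1, n2 - 1)]] = mu2
P = R / R.sum(axis=1, keepdims=True)  # embedded chain
cls = np.array([(n1 + 2 * n2) % 3 for (n1, n2) in states])  # classes (6.2.1)

# Every transition goes from S_k to S_{(k+1) mod 3} (Lemma 6.2.3) ...
ok = all(cls[j] == (cls[i] + 1) % 3
         for i in range(N) for j in range(N) if P[i, j] > 0)
# ... so P^3 keeps each class invariant (block diagonal structure).
P3 = np.linalg.matrix_power(P, 3)
diag_ok = all(cls[i] == cls[j]
              for i in range(N) for j in range(N) if P3[i, j] > 1e-15)
print(ok, diag_ok)  # -> True True
```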
Step 0: Take an arbitrary probability vector π^(0) satisfying (6.2.2), set k = 0 and specify ε > 0.
Step 1: Compute p^(k) = π^(k) B⁻¹.
Step 2: Compute π^(k+1) = p^(k) R.
Step 3: If ‖π^(k+1) − π^(k)‖ < ε for some appropriate norm (e.g., 1, 2, ∞) then stop, else increment k and go to Step 1.

Figure 6.2.1. The power method for stationary probabilities

The sequence {π^(k), k = 0, 1, 2, ...} converges to the stationary probability vector π, by Corollary 6.2.8, and the algorithm is extremely easy to implement, using procedures like those shown in Figure 6.2.2. (Note that there are at most three multiplications for every state, in one iteration.)

The performance of the power method, then, is determined by the speed at which the sequence π^(n) approaches its limit π. Let γ_k be the modulus of the subdominant eigenvalue of T_k, for k = 0, 1, 2. By Theorem 6.2.7 and Corollary 2.4.7, we see that each of the subsequences π^(3n), π^(3n+1) and π^(3n+2) approaches π geometrically, with parameter γ₀, γ₁ and γ₂, respectively. Hence at each step of the power method, the error is reduced by a factor γ = (max{γ₀, γ₁, γ₂})^(1/3) on average, when the number of iterations n is large.

Another important consideration, when using the power method, is how to select the parameter ε for the stopping criterion, at Step 3 of Figure 6.2.1. We show, as in Section 3.4, that if δ is the accuracy required on π, then we should select ε ≤ δ/‖A*‖, where A* is the Drazin inverse of A = I − P.
CONST DIM1=100; DIM2=100; (* storage dimensions *)
TYPE STATE1=0..DIM1; STATE2=0..DIM2;
     DISTRIBUTION=ARRAY[STATE1,STATE2] OF REAL;
VAR N1:STATE1; N2:STATE2;            (* server capacities *)
    RATE0,RATE1,RATE2:REAL;          (* arrival, service rates *)
    P:DISTRIBUTION;                  (* p vector *)

FUNCTION OUTRATE(I:STATE1;J:STATE2):REAL;
(* total rate of transition out of a state *)
VAR RATE:REAL;
BEGIN (* OUTRATE *)
  RATE:=0;
  IF (I<N1) THEN RATE:=RATE+RATE0;
  IF ((I>0) AND (J<N2)) THEN RATE:=RATE+RATE1;
  IF (J>0) THEN RATE:=RATE+RATE2;
  OUTRATE:=RATE
END (* OUTRATE *);

FUNCTION INFLOW(I:STATE1;J:STATE2; VAR P:DISTRIBUTION):REAL;
(* total flow of transitions into a state *)
VAR FLOW:REAL;
BEGIN (* INFLOW *)
  FLOW:=0;
  IF (I>0) THEN FLOW:=FLOW+RATE0*P[I-1,J];
  IF ((I<N1) AND (J>0)) THEN FLOW:=FLOW+RATE1*P[I+1,J-1];
  IF (J<N2) THEN FLOW:=FLOW+RATE2*P[I,J+1];
  INFLOW:=FLOW
END (* INFLOW *);

Figure 6.2.2. Pascal procedures for transition rates

We see this as follows. Let π̂ be a computed solution, with error e = π̂ − π. Define the residual error r = π̂(I − P).

Theorem 6.2.9. e = rA*.

Proof. Let u be a row vector such that u_i = 1/N. Then π is the unique solution of the evaluation equations of dynamic programming:

    πA = 0
    π + hA = u,

where h stands for the "bias" vector. The result follows immediately from Theorem 3.4.4. •

Now by taking norms, we have

    ‖e‖ ≤ ‖r‖ · ‖A*‖.

Hence at Step 3, when we test for ‖r‖ < ε, we have

    ‖e‖ ≤ ‖A*‖ · ε ≤ ‖A*‖ · δ/‖A*‖ = δ.

It is our observation that ‖A*‖₂ < 1000 for a wide range of the parameters λ, μ₁, μ₂, N₁ and N₂. Interestingly, the rate of convergence varies quite strongly. With an accuracy ε = 10⁻⁹, the number of iterations varied from 50 to 3000, and it is not obvious how the parameters affect the rate of convergence.
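For readers without a Pascal compiler at hand, the complete method of Figure 6.2.1 can be sketched in Python as below (an illustration; parameter values are made up). The starting vector places mass 1/3 on each periodicity class S₀, S₁, S₂, as required by (6.2.2), and Step 1 exploits the fact that B is diagonal.

```python
import numpy as np
from itertools import product

# Made-up test parameters.
lam, mu1, mu2 = 1.0, 0.8, 1.2
N1, N2 = 3, 2
states = list(product(range(N1 + 1), range(N2 + 1)))
idx = {s: k for k, s in enumerate(states)}
N = len(states)
R = np.zeros((N, N))
for (n1, n2) in states:
    k = idx[(n1, n2)]
    if n1 < N1: R[k, idx[(n1 + 1, n2)]] = lam
    if n1 > 0 and n2 < N2: R[k, idx[(n1 - 1, n2 + 1)]] = mu1
    if n2 > 0: R[k, idx[(n1, n2 - 1)]] = mu2
Binv = 1.0 / R.sum(axis=1)  # B is diagonal, so B^{-1} is cheap

# Step 0: mass 1/3 on each class S_0, S_1, S_2, spread uniformly (6.2.2).
cls = np.array([(n1 + 2 * n2) % 3 for (n1, n2) in states])
pi = np.zeros(N)
for m in range(3):
    pi[cls == m] = (1.0 / 3.0) / np.sum(cls == m)

eps = 1e-12
for _ in range(100000):
    p = pi * Binv                         # Step 1: p^(k) = pi^(k) B^{-1}
    pi_new = p @ R                        # Step 2: pi^(k+1) = p^(k) R
    if np.abs(pi_new - pi).sum() < eps:   # Step 3: stop test (1-norm)
        break
    pi = pi_new

p = pi * Binv
p /= p.sum()  # stationary probabilities of the queue length process
```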
For most of the curves we produced, in which every point was evaluated using the latest π vector available, the average number of iterations was between 20 and 50. We did not observe that the buffer sizes N₁ and N₂ were a determinant factor of the speed of convergence (in terms of iterations required). This does not support an observation of Stewart (1978) that large waiting rooms imply slow convergence.

6.3. The matrix decomposition method

In this section, we describe a method due to Wong et al. (1977), which exploits the special structure of the infinitesimal generator. The method is related to the matrix decomposition approach of the previous chapters and uses the Jordan canonical form of a certain matrix. We briefly review the method and formulate it in algorithmic form. Certain aspects of the method are clarified and we give a justification of the use of the Jordan canonical form, from the point of view of numerical stability. We have no computational experience to report about its performance.

The method of Wong et al. (1977) produces a stationary probability vector by direct solution of a singular system of N₁ + 1 equations. Let us use the two parameters ρ = λ/μ₂ and α = μ₁/μ₂. Also, assume that the states are sorted in reverse lexicographic order [i.e., p(0,0), p(1,0), ..., p(N₁,0), p(0,1), ..., p(N₁,N₂)]. Note: the order shown at the top of page 5 of Wong et al. (1977) is incorrect. (The error has no effect on their development, however.) Then the matrix Q = B − R of Equation (6.1.4) has a block-tridiagonal structure. After scaling by 1/μ₂ and changing signs (which does not affect the solution set of (6.1.4)), we have

    Q = [ −A₁   αL                 ]
        [  I    −A₂   αL           ]
        [        I    −A₂    ⋱     ]    (6.3.1)
        [              ⋱     ⋱   αL ]
        [                    I   −A₃ ]

where

    A₁ = [ ρ    −ρ                 ]
         [      ρ+α   −ρ           ]
         [             ⋱     ⋱     ]
         [                  ρ+α  −ρ ]
         [                        α ]

A₂ = A₁ + I and A₃ = A₂ − αH, with I the identity matrix, H a diagonal matrix with all ones on the main diagonal except h₁₁ = 0, and L a matrix with all ones on the subdiagonal and all other entries zero. A₁, A₂, A₃, I, H and L are all (N₁+1) × (N₁+1) matrices and Q is an N × N matrix, where N = (N₁+1)(N₂+1).
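The block-tridiagonal claim of (6.3.1) is easy to confirm numerically: with the states in reverse lexicographic order, every block of Q = B − R outside the three central block diagonals vanishes, and the rows of the generator sum to zero. The sketch below uses made-up parameter values.

```python
import numpy as np

# Made-up test parameters.
lam, mu1, mu2 = 1.0, 0.7, 1.4
N1, N2 = 4, 3
# Reverse lexicographic order: p(0,0), p(1,0), ..., p(N1,0), p(0,1), ...
states = [(n1, n2) for n2 in range(N2 + 1) for n1 in range(N1 + 1)]
idx = {s: k for k, s in enumerate(states)}
N = len(states)
R = np.zeros((N, N))
for (n1, n2) in states:
    k = idx[(n1, n2)]
    if n1 < N1: R[k, idx[(n1 + 1, n2)]] = lam
    if n1 > 0 and n2 < N2: R[k, idx[(n1 - 1, n2 + 1)]] = mu1
    if n2 > 0: R[k, idx[(n1, n2 - 1)]] = mu2
Q = np.diag(R.sum(axis=1)) - R  # infinitesimal generator B - R

# Blocks of size N1+1; anything beyond the tridiagonal band must vanish.
b = N1 + 1
far = [np.abs(Q[i * b:(i + 1) * b, j * b:(j + 1) * b]).max()
       for i in range(N2 + 1) for j in range(N2 + 1) if abs(i - j) > 1]
print(max(far), np.abs(Q.sum(axis=1)).max())  # -> 0.0 0.0
```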
[Hence Q has 2 (N<2 + 1) x (N + 1) blocks]. 2 Partition the stationary probability vector p as p = (po»Pi,.• • >Ptf )i where p 3 (p(0,n),... ,p(Ni,n)), = n for n = 0,...,N . Let also x = ( p , p i ) , for n = 0 , . . . , N 2 n n n + 2 1. Then all the stationary probabilities can be expressed in terms of p , using the 0 (2JVx + 2) x ( 2 ^ + 2) matrix B defined as *=(0/72£)Indeed, Equation (6.1.4) with Q as in (6.3.1) implies that Pi = Po-^i. 135 (6.3.3) and that x i = x B, for n = 0 , . . . , N - 1, (6.3.4) = X Q B " , for n = 0 , . . . , N . (6.3.5) N n+ 2 hence 2 Now consider the (Ni + 1) x (Ni + 1) matrix (6.3.6) The following result is shown in Wong et al. (1977). Theorem 6.3.1. The equation poC = 0 uniquely determines all the entries of p in 0 terms of p(0,0). The significance of Theorem 6.3.1 is that it reduces the problem of solving the (Ni + 1)(N + 1) equations of (6.1.4) to the much smaller problem of solving only 2 Ni + 1 equations. This suggests an efficient algorithm to compute p, as shown in Figure 6.3.1. Step 1. Let K = (I A ). 0 x Step 2. Compute Y +i = Y B, for n = 0 , . . . , N — 1. N n 2 Step 3. Compute C = r V ( ^ ) . _ H 2 Step 4. Let p(0,0) = 1 and solve poC = 0. Step 5. Compute pi = po-*4 . x Step 6. Compute p +2 = n Pn+iA 2 — ap L, for n = 0 , . . . , N — 2. n 2 Step 7. Scale p so that (6.1.2) is satisfied and stop. Figure 6.3.1. Unstable algorithm for stationary probabilities The matrix C of Equation (6.3.6) is produced at Steps 1, 2 and 3. Then Step 4 uses the result of Theorem 6.3.1. The vector po can be computed by either Gaussian 136 elimination or some other technique for singular equations (see Chapter I). Then the other components of the vector p are computed. Step 5 applies Equation (6.3.3) and Step 6 applies Equation (6.3.4). Finally, Step 7 obtains a probability vector. 
The algorithm of Figure 6.3.1 is efficient because it can be implemented with only one array of dimension (N₂+1) × 2(N₁+1), and it takes 3(N₁+1)²(N₂+1) flops to compute the matrix C. Then Step 4 takes (1/3)(N₁+1)³ flops if Gaussian elimination is used. The amount of arithmetic for Steps 5, 6 and 7 is negligible. By comparison, each iteration of the power method takes 4(N₁+1)(N₂+1) flops. In the case N₁ = N₂, the algorithm is then equivalent to about N₁ + 1 iterations of the power method. For N₁ and N₂ not too large (say up to 50), our experience with the power method indicates that the algorithm of Figure 6.3.1 would usually be much faster.

This algorithm is not recommended, however, because it is numerically unstable, as preliminary tests have shown. It appears that the iteration with the matrix B (both at Step 2 and at Step 6) is inaccurate. The method suggested by Wong et al. (1977) [note: the numerical instability problem was not mentioned by these authors] works directly with the powers of B, in Equations (6.3.5) and (6.3.6). Those are obtained from the Jordan canonical form of B. The following result was obtained in Wong et al. (1977) and Disney and Chandramohan (1980). Note that their proof is similar to that of a related well known result concerning the eigenvalues of a symmetric tridiagonal matrix, based on the Sturm sequence property [see Wilkinson (1965)].

Theorem 6.3.2. There is a nonsingular matrix S such that

    B = S [ J_M(0)   0 ] S⁻¹,    (6.3.7)
          [   0      D ]

where J_M(0) is the Jordan submatrix of order M with eigenvalue 0, with M = ⌊(N₁+2)/2⌋, and D is a (2N₁+2−M) × (2N₁+2−M) diagonal matrix with positive diagonal entries that are distinct from each other.

Assuming that the canonical form (6.3.7) is available, the powers of B can also be computed efficiently, so that Equations (6.3.5) and (6.3.6) become

    x_n = x₀ S [ J_M(0)ⁿ   0  ] S⁻¹    (6.3.8)
               [    0      Dⁿ ]

and

    C = (I  A₁) S [ J_M(0)^{N₂−1}      0      ] S⁻¹ [  αL ]    (6.3.9)
                  [       0        D^{N₂−1}  ]      [ −A₃ ]

This leads to the algorithm of Figure 6.3.2.
Step 1. Compute S, D and S⁻¹ as in Equation (6.3.7).
Step 2. Compute C as in Equation (6.3.9).
Step 3. Let p(0,0) = 1 and solve p₀C = 0.
Step 4. Compute p₁ = p₀A₁ and let x₀ = (p₀, p₁).
Step 5. Compute x_n as in Equation (6.3.8), for n = 1, ..., N₂ − 1.
Step 6. Scale p so that Equation (6.1.2) is satisfied.

Figure 6.3.2. Modified algorithm for stationary probabilities

The difficult part of this algorithm is at Step 1. Although the theorem guarantees that the eigenvalues are distinct, it does not indicate how well separated they are. Very close eigenvalues are known to cause numerical difficulties in computing the eigenvectors [see Golub and Wilkinson (1976)]. We have no computational experience to report, at the moment, concerning this problem.

6.4. Job local balance and bounds

In this section, we review briefly the job-local-balance property (JLB) and show that it is not satisfied by the tandem queueing system under consideration. Certain modifications, based on results in Hordijk and Van Dijk (1981), are proposed, and the modified models are shown to satisfy JLB. It is argued intuitively that these models provide a lower and an upper bound on the call congestion of the original model.

If N₁ = N₂ = ∞ then it is well known [Jackson (1957), (1963), Kingman (1969), Kelly (1979)] that for the exponential case and for any service discipline, the stationary probabilities are given by the product form (with c a normalizing constant):

    p(n₁, n₂) = c (λ/μ₁)^{n₁} (λ/μ₂)^{n₂},    n₁ ≥ 0, n₂ ≥ 0.

In addition, as shown by Barbour (1976), Schassberger (1978), Kelly (1979) and Hordijk and Van Dijk (1982), for a number of service disciplines such as processor sharing or a LIFO-preemptive discipline (but not a FIFO discipline), this product form solution remains valid for the non-exponential case, that is, with non-exponential service requirements. This phenomenon is known in the literature as "insensitivity".
In different frameworks and terminologies, Whittle (1968) and (1969), Kingman (1969), Baskett et al. (1975), Barbour (1976), Chandy et al. (1977), Schassberger (1978), and Hordijk and Van Dijk (1982), (1983a) and (1983b) have demonstrated that product form solutions and insensitivity properties are related to a notion of partial balance. Roughly speaking, partial balance says that the global balance equations, which are necessary and sufficient for the stationarity of a probability distribution, are satisfied with a particular decomposition into local equations. More precisely, as analyzed in Hordijk and Van Dijk (1983a), all these terminologies have the interpretation that for each job present in the system and for any system configuration, that job itself satisfies a notion of stationarity or, more precisely, job-local-balance (JLB).

It is easily seen that with finite buffer sizes, JLB cannot be satisfied (for instance, if the first node is saturated whereas the second node is not, then at the first node jobs can enter at a positive rate but only leave at a zero rate, so that these rates cannot balance). This can also be seen by looking at the transition diagram in Figure 6.4.1.

Figure 6.4.1. Failure of job local balance

The dashed arrows indicate the transitions which are not possible at the boundaries, thus destroying the job local balance. (An arrow is labelled i if it applies to a job at node i, for i = 1, 2, and 0 for a job outside the system.) The JLB notion, however, would guarantee the simple form of the stationary probabilities. Therefore, the purpose of this section is to find modifications of the finite tandem system which do satisfy JLB and which may provide approximations or bounds for stationary characteristics of the original model. More precisely, we concentrate on the stationary probability that an arriving job is blocked from entering the system.
In communication engineering this is historically known as the "call congestion". The modifications given are based on results in Hordijk and Van Dijk (1981) and the intuition that they will provide a lower and an upper bound for the call congestion of the original tandem queue. We define two modifications of the original model, from which we intuitively expect to obtain lower and upper bounds on the call congestion. These modifications, suggestively called the lower and upper bound models, are characterized as follows.

Lower bound model. A job is blocked from entering the system (rejected) only if the total number of jobs present in the system is equal to N₁ + N₂. A job which completes its service at node 1 is always allowed to enter node 2 instantly.

Upper bound model. In addition to the blocking protocols of the original model, a job is also blocked from entering the system (rejected) if n₂ = N₂, and a job which completes its service at node 2 is prohibited from leaving the system if n₁ = N₁, in which case it must return to the queue for a new service at node 2.

It is intuitively clear that the lower bound model should have a lower blocking probability than the original model, since it makes a more efficient use of the N₁ + N₂ available places, by sharing them among the two stations. Similarly, the upper bound model would be expected to have more congestion, since it has more stringent conditions for acceptance of a new job and it sometimes gives extra service to a completed job.

As shown in the above reference, job-local-balance is closely related to the algorithmic computation of the stationary probabilities. In the special case of a standard Jackson type network, this algorithmic computation simply leads to the product form probabilities.
By using a notion equivalent to JLB, it has been shown in Sections 7 and 8 of Hordijk and Van Dijk (1981) that the stationary probabilities of the lower and upper bound models, in the exponential case, are given by p_L and p_U as follows, with c₁ and c₂ normalizing constants:

    p_L(n₁, n₂) = c₁ (λ/μ₁)^{n₁} (λ/μ₂)^{n₂}    if 0 ≤ n₁, 0 ≤ n₂ and n₁ + n₂ ≤ N₁ + N₂,    (6.4.1)

and p_L(n₁, n₂) = 0 otherwise;

    p_U(n₁, n₂) = c₂ (λ/μ₁)^{n₁} (λ/μ₂)^{n₂}    if 0 ≤ n₁ ≤ N₁, 0 ≤ n₂ ≤ N₂ and (n₁, n₂) ≠ (N₁, N₂),    (6.4.2)

and p_U(n₁, n₂) = 0 otherwise. (The corner state (N₁, N₂) is unreachable in the upper bound model, since arrivals are blocked when n₂ = N₂ and service completions at node 1 then return the job to node 1.)

Although these probabilities exhibit a product form, it must be realized that the use of the expression "product form" is not fully justified, because of the constraints of the admissible regions. Clearly, afterwards, the stationarity can just as well be verified by checking the global balance equations. The JLB analysis has been instrumental, however, in the derivation of the above models as potential candidates. The lower bound model was derived by modifying the original model so that the dotted arrows of Figure 6.4.1 are included. On the other hand, the upper bound model was derived so that the transitions which are not locally balanced are excluded.

Furthermore, in addition to an algorithmic computation of the stationary probabilities, job-local-balance is associated with the insensitivity property [cf. Barbour (1976), Schassberger (1978) and Hordijk and Van Dijk (1982) and (1983b)]. Under this property, the stationary probabilities depend on the service distributions only through their means. Hence for service disciplines satisfying a job-local-balance condition [see the K-criterion in Hordijk and Van Dijk (1983a)], the probabilities given by (6.4.1) and (6.4.2) are valid as stationary probabilities regardless of the form of the service distributions. A processor sharing or LIFO-preemptive discipline satisfies such a condition. Hence, for such disciplines the bounding method may be expected to be insensitive as well, that is, to provide bounds that hold for all service distributions.
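The explicit formulas (6.4.1) and (6.4.2) are cheap to evaluate. The sketch below computes p_L, p_U and the corresponding call congestions for made-up parameter values; as an assumption consistent with the closed forms quoted in the next section, the admissible region of (6.4.2) is taken to be the full grid minus the unreachable corner state (N₁, N₂).

```python
# Made-up test parameters.
lam, mu1, mu2 = 1.0, 0.8, 1.2
N1, N2 = 5, 5
r1, r2 = lam / mu1, lam / mu2

# Lower bound model (6.4.1): product form over n1 + n2 <= N1 + N2.
pL = {(n1, n2): r1**n1 * r2**n2
      for n1 in range(N1 + N2 + 1) for n2 in range(N1 + N2 + 1)
      if n1 + n2 <= N1 + N2}
cL = 1.0 / sum(pL.values())
pL = {s: cL * v for s, v in pL.items()}

# Upper bound model (6.4.2): full grid minus the corner (N1, N2).
pU = {(n1, n2): r1**n1 * r2**n2
      for n1 in range(N1 + 1) for n2 in range(N2 + 1)
      if (n1, n2) != (N1, N2)}
cU = 1.0 / sum(pU.values())
pU = {s: cU * v for s, v in pU.items()}

# Call congestions: a lower bound arrival is rejected only when the shared
# room is full; an upper bound arrival is rejected when either buffer is full.
BL = sum(v for (n1, n2), v in pL.items() if n1 + n2 == N1 + N2)
BU = sum(v for (n1, n2), v in pU.items() if n1 == N1 or n2 == N2)
print(BL, BU)
```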
From now on we use the subscript L (U) to indicate a quantity for the lower (upper) bound model, and no subscript for the original model. Now, the stationary probabilities represent the fraction of time spent by the system in each state. Since Poisson arrivals see time averages [see Brumelle (1971) and Wolff (1982)], the call congestions are given by

B = P(n₁ = N₁),   (6.4.3)
B_L = P_L(n₁ + n₂ = N₁ + N₂),   (6.4.4)
B_U = P_U(n₁ = N₁ or n₂ = N₂),   (6.4.5)

where P(·) represents the stationary probability of an event. Based on the intuitive arguments at the beginning of this section, we may expect that

B_L ≤ B   (6.4.6)

and

B ≤ B_U   (6.4.7)

for the exponential case. Moreover, we would expect the same results in the nonexponential case as well, under service disciplines satisfying JLB. Section 6 of van Dijk and Lamond (1985) contains a formal proof of (6.4.6) for the exponential case under the condition that λ > μ₂. It also briefly indicates a similar proof for (6.4.7) with μ₁ < μ₂ and suggests a proof for the general case.

In addition to the validity of the bounds, a bounding method must also provide an indication of their accuracy. In Section 6.5, the values of B, B_L and B_U, as given by (6.4.3), (6.4.4) and (6.4.5), will therefore be compared for a wide range of the system parameters. The numerical results also support the validity of (6.4.6) and (6.4.7) in general. As a first illustration, let us briefly consider the case λ = μ₁ = μ₂. Then one easily verifies that

B_L = 2/(N₁ + N₂ + 2),
B_U = (N₁ + N₂)/(N₁N₂ + N₁ + N₂),

so that for N₁ = N₂ = N with N large we observe that B_U ≈ 2B_L. Hence, very accurate bounds in percentage cannot be expected. On the other hand, for N₁ large it holds that B_U ≈ 0, and the numerical results of Section 6.5 will show that even on a percentage basis B_L is then very accurate. A converse statement holds for the upper bound with small values of N₁. A more detailed investigation follows.

6.5.
Performance results

6.5.1. Exponential case

In the case of exponential service requirements, the blocking probabilities were computed as described in Section 6.2 and compared with the lower and upper bounds, for a wide range of parameter values. Since only the proportions of λ, μ₁ and μ₂ are relevant, we chose (without loss of generality) λ = 1. Below we discuss our major observations as illustrated by Tables 6.5.1 and 6.5.2 and Figures 6.5.1 and 6.5.2.

Table 6.5.1. This tabulates the blocking probabilities for varying service speeds 0.1, 0.5 and 2.0, as well as buffer sizes (5,5), (5,10) and (10,5), in order to find out the accuracy of the bounds for large blocking probabilities, and simultaneously to observe the dominance of a bottleneck service speed and buffer size. As for the bounds, we found:

• For B > 0.9, the bounds are very accurate, within 4% of the correct value in the worst case.
• For B from 0.50 to 0.55, the worst case is accurate within 23% and the lower bound is always within 7% accuracy.
• For B < 0.02, the upper bound is less than 0.035.

Further, the following performance results appeared.

• The smallest service rate is strongly dominant; the size of the second rate is hardly of influence (e.g., cf. μ₁ = 0.1, μ₂ = 0.1 and μ₁ = 0.1, μ₂ = 0.5).
• The smallest buffer size is strongly dominant; the size of the other buffer is hardly of influence (e.g., cf. N₁ = 5, N₂ = 5 and N₁ = 5, N₂ = 10).

Moreover, the different blocking levels were characterized by (recall that λ = 1):

• B > 0.9 when the smallest rate is 0.1;
• B ≈ 0.5 to 0.55 when the smallest rate is 0.5;
• B drops to less than 0.02 when both rates are 2.0.
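The exponential-case blocking probabilities in Tables 6.5.1 and 6.5.2 can also be reproduced by solving the global balance equations of the original model directly. The following sketch is hypothetical code, not from the thesis; it assumes the blocking protocol of Section 6.2, in which arrivals are rejected when n₁ = N₁ and the node-1 server is halted while n₂ = N₂, and solves πQ = 0 with a normalization constraint by plain Gaussian elimination.

```python
def blocking_probability(N1, N2, lam, mu1, mu2):
    """Call congestion B = P(n1 = N1) of the original tandem model."""
    states = [(i, j) for i in range(N1 + 1) for j in range(N2 + 1)]
    idx = {s: k for k, s in enumerate(states)}
    n = len(states)
    Q = [[0.0] * n for _ in range(n)]           # generator, row k = state k
    for (n1, n2) in states:
        k = idx[(n1, n2)]
        moves = []
        if n1 < N1:
            moves.append(((n1 + 1, n2), lam))       # arrival admitted
        if n1 > 0 and n2 < N2:
            moves.append(((n1 - 1, n2 + 1), mu1))   # node-1 completion (halted if node 2 full)
        if n2 > 0:
            moves.append(((n1, n2 - 1), mu2))       # node-2 departure
        for t, r in moves:
            Q[k][idx[t]] += r
            Q[k][k] -= r
    # Solve pi Q = 0 with sum(pi) = 1: transpose, replace one equation by the constraint.
    A = [[Q[j][i] for j in range(n)] for i in range(n)]
    A[n - 1] = [1.0] * n
    b = [0.0] * (n - 1) + [1.0]
    for col in range(n):                  # Gaussian elimination with partial pivoting
        p = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[p] = A[p], A[col]
        b[col], b[p] = b[p], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    pi = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = sum(A[r][c] * pi[c] for c in range(r + 1, n))
        pi[r] = (b[r] - s) / A[r][r]
    return sum(pi[idx[(N1, j)]] for j in range(N2 + 1))
```

With λ = μ₁ = μ₂ = 1 this gives B = 0.6 for N₁ = N₂ = 1 and B = 0.42222 for N₁ = N₂ = 2, matching the first rows of Table 6.5.2.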
N₁  N₂   μ₁   μ₂      B_L        B        B_U
 5   5   0.1  0.1   0.909184  0.916667  0.947369
 5   5   0.1  0.5   0.900000  0.900026  0.909357
 5   5   0.5  0.1   0.900000  0.900026  0.909357
 5   5   0.5  0.5   0.549973  0.584101  0.673684
 5   5   0.5  2.0   0.500366  0.508094  0.511811
 5   5   2.0  0.5   0.500366  0.500853  0.511811
 5   5   2.0  2.0   0.002694  0.017682  0.031250
 5  10   0.1  0.1   0.906294  0.909091  0.947369
 5  10   0.1  0.5   0.900000  0.900001  0.909100
 5  10   0.5  0.1   0.900000  0.900000  0.909357
 5  10   0.5  0.5   0.533333  0.547311  0.670319
 5  10   0.5  2.0   0.500011  0.507937  0.508055
 5  10   2.0  0.5   0.500011  0.500018  0.504240
 5  10   2.0  2.0   0.000122  0.015905  0.016346
10   5   0.1  0.1   0.906294  0.916667  0.947369
10   5   0.1  0.5   0.900000  0.900026  0.909357
10   5   0.5  0.1   0.900000  0.900026  0.909100
10   5   0.5  0.5   0.533333  0.583336  0.670319
10   5   0.5  2.0   0.500011  0.500591  0.504240
10   5   2.0  0.5   0.500011  0.500381  0.508055
10   5   2.0  2.0   0.000122  0.000744  0.016346

Table 6.5.1. Numerical values with λ = 1, varying μ₁ and μ₂.

Table 6.5.2. Throughout our computations it appeared that, as a function of the service speeds, the worst results occurred with equal parameters λ = μ₁ = μ₂. This makes sense, since if one of them is significantly smaller, then that one becomes the dominant bottleneck value. In Table 6.5.2, we can check the accuracy in this worst case for blocking probabilities varying from 0.6 to almost 0. We varied the buffer sizes from (1,1) up to (40,20). As for the bounds we now found:

• With N₁ = N₂ and B > 0.10, the accuracy in both directions is within 30%.
• For N₁ ≠ N₂ and B > 0.10, the accuracy is maintained for the upper bound.
N₁  N₂    B_L      B(N₁,N₂)  B(N₂,N₁)   B_U
 1   1   0.500000  0.600000  0.600000  0.666667
 2   1   0.400000  0.533332  0.533334  0.600000
 2   2   0.333333  0.422222  0.422222  0.500000
 3   1   0.333333  0.512193  0.512196  0.571429
 3   2   0.285714  0.379571  0.379579  0.454545
 3   3   0.250000  0.325026  0.325026  0.400000
 5   1   0.250000  0.501736  0.501743  0.545455
 5   2   0.222222  0.347796  0.347809  0.411765
 5   3   0.200000  0.280451  0.280480  0.347826
 5   4   0.181818  0.244335  0.244373  0.310345
 5   5   0.166667  0.222356  0.222356  0.285714
10  10   0.090909  0.124192  0.124192  0.166667
20  10   0.062500  0.084538  0.104327  0.130435
20  20   0.047619  0.064373  0.064373  0.090909
40  10   0.038462  0.046216  0.097846  0.111111
40  20   0.032258  0.034501  0.056293  0.069767

Table 6.5.2. Numerical values with λ = μ₁ = μ₂ = 1.

• For N₁ ≠ N₂ and B < 0.10, the accuracy is maintained in the bottleneck direction (that is, for the lower bound if N₁ > N₂; for the upper bound if N₁ < N₂).

As in Table 6.5.1, again the dominance of the smallest buffer can be observed [for instance, compare (2,1), (3,1) and (5,1)]. Further, as a particular result of the equality of λ, μ₁ and μ₂, the calculations could be restricted by using a symmetry property. This will be discussed below (see Lemma 6.5.1). The blocking probabilities could be classified by:

• B < 0.54 if N₁N₂ ≥ 2;
• B < 0.13 if N₁, N₂ ≥ 10.

[Figure 6.5.1 (four panels: N₁=2, N₂=5; N₁=5, N₂=5; N₁=2, N₂=2; N₁=5, N₂=2) plots the blocking probability against μ₁ for μ₂ = 0.2, 0.5, 1.0, 2.0 and 5.0. Caption: Blocking probability vs service rates.]

Figure 6.5.1. Here the blocking probabilities are plotted against μ₁, the service rate at station 1, for several values of the rate at station 2. The solid line represents the blocking probability, the short-dashed line is the lower bound and the dashed line is the upper bound. These curves show the distance between the bounds and the true value.
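With λ = μ₁ = μ₂, the product forms (6.4.1) and (6.4.2) are uniform over their admissible regions, so the B_L and B_U columns of Table 6.5.2 reduce to counting states. A minimal sketch (illustrative code, not from the thesis; it assumes the admissible regions stated in Section 6.4):

```python
from fractions import Fraction

def bounds_equal_rates(N1, N2):
    """B_L and B_U for lambda = mu1 = mu2, by counting states of the
    uniform stationary distributions of the two bound models."""
    # Lower model: uniform on {n1 + n2 <= N1 + N2}; blocked iff n1 + n2 = N1 + N2.
    M = N1 + N2
    lower_states = (M + 1) * (M + 2) // 2
    B_L = Fraction(M + 1, lower_states)
    # Upper model: uniform on the rectangle minus the unreachable corner (N1, N2);
    # blocked iff n1 = N1 or n2 = N2.
    upper_states = (N1 + 1) * (N2 + 1) - 1
    blocked = (N1 + 1) + (N2 + 1) - 1 - 1   # both edges, minus double count, minus corner
    B_U = Fraction(blocked, upper_states)
    return B_L, B_U

# These reproduce the closed forms 2/(N1+N2+2) and (N1+N2)/(N1*N2+N1+N2)
# given at the end of Section 6.4.
```

For (N₁, N₂) = (1, 1) this yields 1/2 and 2/3, and for (20, 10) it yields 0.062500 and 0.130435, in agreement with the B_L and B_U columns of the table.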
In accord with intuition, the lower bound is more accurate when the bottleneck is at station 2 (i.e., when N₂ or μ₂ is small), and the upper bound is more accurate when the bottleneck is at station 1 (i.e., when N₁ or μ₁ is small). This can be argued as follows: in the lower bound model, blocking occurs when both stations are full, which is the case most of the time in the original model when the bottleneck is at station 2. When the bottleneck is at station 1, however, most blocking occurs when only the first station is full, because the second station is very seldom saturated, in both the original and upper bound models. Both bounds are most inaccurate when both rates are close to 1 (i.e., when μ₁ ≈ μ₂ ≈ λ), where they are both in error by up to approximately 30%.

Figure 6.5.2. In this figure we give contour curves for fixed blocking probabilities as a function of μ₁ and μ₂. For each value of the blocking probability, the upper curve represents the upper bound, the middle curve is the true value and the lower curve is for the lower bound. Each curve partitions the space of the service rates into two regions: points above the curve have a lower blocking probability, and points below the curve have a higher blocking probability. Hence they indicate how to select μ₁ and μ₂ when designing a system with constraints on the call congestion. Moreover, the bottleneck effect is shown more dramatically here than in Figure 6.5.1.

Symmetry property. In Table 6.5.2, the bounds are symmetric in N₁ and N₂ because μ₁ = μ₂. Moreover, the blocking probabilities for the original model were obtained by evaluating the stationary probabilities only once, because of the following result. Let p(n₁, n₂; N₁, N₂, λ, μ₁, μ₂) be the stationary probability that the original model is in state (n₁, n₂). We have the following symmetry.
[Figure 6.5.2 (four panels: N₁=2, N₂=5; N₁=5, N₂=5; N₁=2, N₂=2; N₁=5, N₂=2) shows contour curves in the (μ₁, μ₂) plane for B = 0.1, 0.2, 0.5 and 0.8. Caption: Blocking probability contour curves.]

Lemma 6.5.1.

p(n₁, n₂; N₁, N₂, λ, μ₁, μ₂) = p(N₂ − n₂, N₁ − n₁; N₂, N₁, μ₂, μ₁, λ).   (6.5.1)

Proof. Consider the system in which the empty seats (i.e., the gaps) denote the states, instead of the numbers of jobs. Then whenever a job moves to the right, a gap moves to the left. Hence both systems have the same stationary probabilities. □

Table 6.5.1 shows some values with various service rates μ₁ and μ₂, with blocking probabilities ranging from near 0 to near 1. While it seems natural to fix λ and vary μ₁ and μ₂, Lemma 6.5.1 suggests that it would be more economical to fix μ₁ = 1 and vary λ and μ₂. We did not, however, attempt to produce other tables using this symmetry. The idea of considering the gaps moving in the opposite direction appears to be related to the reversibility concept of Yamazaki et al. (1985), although our result is different.

6.5.2. Hyperexponential case

To support the conjectured insensitivity of the bounds, we also computed the blocking probabilities in the case of hyperexponential service distributions; that is, for service requirements of two successive exponential phases, both with parameter γ₁ at node 1 and similarly with parameter γ₂ at node 2. Hence, the mean service requirements are τᵢ = 2γᵢ⁻¹, i = 1, 2. Since the insensitivity of the product forms is only guaranteed for service disciplines satisfying a notion of JLB, we chose as a natural candidate a computer sharing discipline with unit rate. Hence, if only one job is present at a node, it is served at rate 1, whereas if 2 jobs are present they are both served simultaneously, each at rate 0.5. Further, since the state description has to be extended, such as by listing the number of residual phases for each job, we restricted ourselves to the buffer sizes N₁ = N₂ = 2.
(The number of states in this case is 36.) Similarly to the exponential case, as described in Section 6.2, the blocking probabilities were calculated using the power method. Essentially, only the transition matrix of the embedded Markov chain became more complex, particularly because of the required blocking protocol. Although the theory of JLB suggested insensitivity results only for a protocol which prescribes the repetition of a complete service at node 1 when blocking at node 2 occurs [see Hordijk and Van Dijk (1983a)], we also investigated the protocol where only the last exponential phase of the service needs to be repeated after a blocking. Note that this latter protocol can also be interpreted as interruption of only the last phase. Hence, with a fixed mean service time τ₁ at node 1 and a service distribution of n successive phases, the expected time of the last phase is τ₁/n, which tends to zero as n gets larger. Thus, this latter protocol approximately performs as a "waiting protocol" in which a job only has to wait at the preceding node exactly until the next node gets a vacancy.

τ₁   τ₂     B_L      B_W      B_R      B_E      B_U
 1    1    0.33333  0.38000  0.39857  0.42222  0.50000
 2    2    0.62016  0.64220  0.65539  0.67580  0.72727
 3    3    0.74040  0.75419  0.76477  0.77986  0.81818
 4    4    0.80352  0.81376  0.82228  0.83404  0.86487
 5    5    0.84209  0.85032  0.85737  0.86697  0.89286
 2    5    0.80484  0.80957  0.81503  0.82102  0.84615
 4    5    0.82684  0.83540  0.84322  0.85283  0.88048
 2   10    0.90022  0.90111  0.90223  0.90325  0.91247
 2  0.5    0.52381  0.57244  0.57345  0.57759  0.60000

Table 6.5.3. Comparison with hyperexponential service times: numerical values with λ = 1 and N₁ = N₂ = 2.
We will therefore refer to our original and this new blocking protocol as "returning" and "waiting", respectively, and use the notation:

B_E: the blocking probability in the exponential case (as by interruption);
B_R: the blocking probability under the returning blocking protocol;
B_W: the blocking probability under the waiting blocking protocol.

Even when the bounds are very close (within 1%) we observed (see Table 6.5.3) that

B_L < B_W < B_R < B_E < B_U.   (6.5.2)

This observation suggests the following conclusions (all with respect to computer sharing).

1. The bounds hold for phase type distributions under the returning protocol; that is, they are insensitive bounds, as was expected based upon JLB.
2. The bounds are also insensitive under the waiting protocol; this could have been expected for the upper bound from (6.5.3) below. For the lower bound, we could give an intuitive argument similar to the exponential case.
3. The blocking protocols are related by

B_W < B_R < B_E.   (6.5.3)

This can be argued as follows. B_R < B_E: in the exponential case the returning protocol can be seen as an interruption of the total exponential service. Hence, a blocked job always still has to complete a full exponential service with mean τ₁. For phase type distributions it may occur that during the first phase of a service the second node becomes saturated and then free again, without interrupting the service at all. The expected service time still needed by a job at node 1 after a saturation at node 2 will therefore be less than τ₁. Clearly, smaller sojourn times at node 1 imply less frequent congestion, and therefore less blocking, at node 1. B_W < B_R: a repetition of only the last phase requires less time than the repetition of all phases. Hence, as above, we conclude less blocking.

Conclusion

This research was intended to produce the following contributions to the fields of queueing and dynamic programming.
First, the matrix decomposition approach suggests a new way to look at the evaluation of the stationary characteristics of the models. It is hoped that this approach, which is mathematically simple and direct, will help make the models easier to use and understand, and stimulate their development and application. Second, the formulation of probabilistic models in the language of matrix algebra highlights certain computational aspects of the solution. It is hoped that this work has pointed out some similarities between the computational problems of applied probability and the standard solution techniques of numerical algebra. Finally, each one of the individual results is, of course, a contribution in its own right.

Bibliography

[1] Aho, A.V., Hopcroft, J.E. and Ullman, J.D. (1974). The Design and Analysis of Computer Algorithms. Addison-Wesley, Reading, Mass.
[2] Anstreicher, K.M. and Rothblum, U.G. (1985). Computing the index and Drazin inverse using the shuffle algorithm. Working Paper AR5384, School of Organization and Management, Yale University, New Haven, CT 06520.
[3] Barbour, A.D. (1976). Networks of queues and the method of stages. Adv. Appl. Prob. 8, pp. 584-591.
[4] Baskett, F., Chandy, M., Muntz, R. and Palacios, J. (1975). Open, closed and mixed networks of queues with different classes of customers. J.A.C.M. 22, pp. 248-260.
[5] Berman, A. and Plemmons, R.J. (1979). Nonnegative Matrices in the Mathematical Sciences. Academic Press, New York.
[6] Blackwell, D. (1962). Discrete dynamic programming. Ann. Math. Statist. 33, pp. 719-726.
[7] Brumelle, S.L. (1971). On the relation between customer and time averages in queues. J. Appl. Prob. 8, pp. 508-520.
[8] Campbell, S.L. and Meyer, C.D. Jr. (1979). Generalized Inverses of Linear Transformations. Pitman, London.
[9] Chandy, K.M., Howard, J.H. and Towsley, D.F. (1977). Product form and local balance in queueing networks. J.A.C.M. 24, pp. 250-263.
[10] Çinlar, E. (1975). Introduction to Stochastic Processes. Prentice-Hall, Englewood Cliffs, N.J.
[11] Denardo, E.V. (1970). Computing a bias-optimal policy in a discrete-time Markov decision problem. Operations Research 18, pp. 279-289.
[12] Denardo, E.V. (1971). Markov renewal programs with small interest rates. Ann. Math. Statist. 42, pp. 477-496.
[13] van Dijk, N. and Lamond, B.F. (1985). Bounds for the call congestion in tandem queues. Working Paper #1078, The University of British Columbia, Faculty of Commerce and Business Administration, 2053 Main Mall, Vancouver, B.C., Canada V6T 1Y8.
[14] Disney, R.L. and Chandramohan, J. (1980). A correction note on "Two finite M/M/1 queues in tandem: a matrix solution for the steady-state". Opsearch 17, pp. 181-183.
[15] Disney, R.L. and König, D. (1985). Queueing networks: a survey of their random processes. SIAM Review 27, pp. 335-403.
[16] Drazin, M.P. (1958). Pseudo-inverses in associative rings and semigroups. Amer. Math. Monthly 65, pp. 506-514.
[17] Faddeeva, V.N. (1959). Computational Methods of Linear Algebra. Dover, New York.
[18] Federgruen, A. and Spreen, D. (1980). A new specification of the multichain policy iteration algorithm in undiscounted Markov renewal programs. Management Science 26, pp. 1211-1217.
[19] Funderlic, R.E. and Plemmons, R.J. (1981). LU decomposition of M-matrices by elimination without pivoting. Lin. Alg. Appl. 41, pp. 99-110.
[20] Gantmacher, F.R. (1959). The Theory of Matrices, Vol. 1 (English translation). Chelsea, New York.
[21] Golub, G.H. and Van Loan, C.F. (1983). Matrix Computations. The Johns Hopkins University Press, Baltimore.
[22] Golub, G.H. and Wilkinson, J.H. (1976). Ill-conditioned eigensystems and the computation of the Jordan canonical form. SIAM Review 18, pp. 578-619.
[23] Greville, T.N.E. (1973). The Souriau-Frame algorithm and the Drazin pseudoinverse. Lin. Alg. Appl. 6, pp. 205-208.
[24] Harrod, W.J. and Plemmons, R.J. (1984).
Comparison of some direct methods for computing stationary distributions of Markov chains. SIAM J. Sci. Stat. Comput. 5, pp. 453-469.
[25] Hartwig, R.E. (1976). More on the Souriau-Frame algorithm and the Drazin inverse. SIAM J. Appl. Math. 31, pp. 42-46.
[26] Heyman, D.P. and Sobel, M.J. (1982). Stochastic Models in Operations Research, Volume I. McGraw-Hill, New York.
[27] Heyman, D.P. and Sobel, M.J. (1984). Stochastic Models in Operations Research, Volume II. McGraw-Hill, New York.
[28] Hordijk, A. and Van Dijk, N. (1981). Networks of queues with blocking. Performance '81 (ed. K.J. Kylstra). North-Holland, pp. 51-65.
[29] Hordijk, A. and Van Dijk, N. (1982). Stationary probabilities for networks of queues. In Applied Probability-Computer Science: The Interface (eds. L. Disney and T.J. Ott), Birkhäuser, Vol. II, pp. 423-451.
[30] Hordijk, A. and Van Dijk, N. (1983a). Networks of queues, part I: Job-local-balance and the adjoint process; part II: General routing and service characteristics. Proceedings International Seminar on Modelling and Performance Evaluation Methodology, INRIA, Vol. I, part I, pp. 79-104, part II, pp. 105-135. To appear in Lecture Notes in Control and Information Sciences, Springer-Verlag, Berlin.
[31] Hordijk, A. and Van Dijk, N. (1983b). Adjoint processes, job-local-balance and insensitivity for stochastic networks. To appear in Proceedings 44th Session ISI, Madrid, Spain.
[32] Howard, R.A. (1960). Dynamic Programming and Markov Processes. Wiley, New York.
[33] Jackson, J.R. (1957). Networks of waiting lines. Operations Research 5, pp. 518-521.
[34] Jackson, J.R. (1963). Jobshop-like queueing systems. Management Science 10, pp. 131-142.
[35] Johnston, R.L. (1982). Numerical Methods: a Software Approach. Wiley, New York.
[36] Karlin, S. (1969). A First Course in Stochastic Processes. Academic Press, New York.
[37] Kaufman, L. (1983). Matrix methods for queueing problems. SIAM J. Sci. Stat. Comput. 4, pp. 525-552.
[38] Kelly, F.P. (1979). Reversibility and Stochastic Networks. Wiley.
[39] Kemeny, J.G. and Snell, J.L. (1960). Finite Markov Chains. Van Nostrand, New York.
[40] Kingman, J.F.C. (1969). Markov population processes. J. Appl. Prob. 6, pp. 1-18.
[41] Lamond, B.F. and Puterman, M.L. (1985). Generalized inverses in discrete time Markov decision processes. Working Paper #1069, The University of British Columbia, Faculty of Commerce and Business Administration, 2053 Main Mall, Vancouver, B.C., Canada V6T 1Y8. Submitted, Math. O.R.
[42] Meyer, C.D. Jr. (1980). The condition number of a finite Markov chain and perturbation bounds for the limiting probabilities. SIAM J. Alg. Disc. Meth. 1, pp. 273-283.
[43] Miller, B.L. and Veinott, A.F. Jr. (1969). Discrete dynamic programming with a small interest rate. Ann. Math. Statist. 40, pp. 366-370.
[44] Puterman, M.L. and Brumelle, S.L. (1979). On the convergence of policy iteration in stationary dynamic programming. Math. O.R. 4, pp. 60-69.
[45] Ross, S.M. (1970). Applied Probability Models with Optimization Applications. Holden-Day, San Francisco.
[46] Rothblum, U.G. (1981). Resolvent expansions of matrices and applications. Lin. Alg. Appl. 38, pp. 33-49.
[47] Schassberger, R. (1978). The insensitivity of stationary probabilities in networks of queues. Adv. Appl. Prob. 10, pp. 906-912.
[48] Schweitzer, P.J. (1971). Iterative solution of the functional equations of undiscounted Markov renewal programming. J. Math. Anal. Appl. 34, pp. 495-501.
[49] Serfozo, R.F. (1979). An equivalence between continuous and discrete time Markov decision processes. Operations Research 27, pp. 616-620.
[50] Stewart, G.W. (1984). Rank degeneracy. SIAM J. Sci. Stat. Comput. 5, pp. 403-413.
[51] Stewart, W.J. (1978). A comparison of numerical techniques in Markov modelling. Communications of the A.C.M. 21, pp. 144-152.
[52] Varga, R.S. (1962). Matrix Iterative Analysis. Prentice-Hall, Englewood Cliffs, N.J.
[53] Veinott, A.F. Jr. (1966).
On finding optimal policies in discrete dynamic programming with no discounting. Ann. Math. Statist. 37, pp. 1284-1294.
[54] Veinott, A.F. Jr. (1969). Discrete dynamic programming with sensitive discount optimality criteria. Ann. Math. Statist. 40, pp. 1635-1660.
[55] Veinott, A.F. Jr. (1974). Markov decision chains. In Studies in Optimization 10, MAA Studies in Mathematics, edited by G.B. Dantzig and B.C. Eaves.
[56] Whittle, P. (1968). Equilibrium distributions for an open migration model. J. Appl. Prob. 5, pp. 567-571.
[57] Whittle, P. (1969). Nonlinear migration processes. Internat. Statist. Inst. Bull. 42, Bk. 1, pp. 642-647.
[58] Wilkinson, J.H. (1965). The Algebraic Eigenvalue Problem. Clarendon Press, Oxford.
[59] Wilkinson, J.H. (1982). Note on the practical significance of the Drazin inverse. In Campbell, S.L. (ed.), Recent Applications of Generalized Inverses. Pitman, Boston, pp. 82-99.
[60] Wolff, R.W. (1982). Poisson arrivals see time averages. Operations Research 30, pp. 223-231.
[61] Wong, B., Giffin, W. and Disney, R.L. (1977). Two finite M/M/1 queues in tandem: a matrix solution for the steady-state. Opsearch 14, pp. 1-8.
[62] Yamazaki, G., Kawashima, T. and Sakasegawa, H. (1985). Reversibility of tandem blocking queueing systems. Management Science 31, pp. 78-83.

BERNARD F. LAMOND

Publications:

B.F. Lamond (1985). Matrix Methods in Queueing and Dynamic Programming. Ph.D. dissertation, The University of British Columbia.

N. van Dijk and B.F. Lamond (1985). Bounds for the call congestion in tandem queues. Submitted, Operations Research.

B.F. Lamond and M.L. Puterman (1985). Generalized inverses in discrete time Markov decision processes. Submitted, Mathematics of O.R.

B. Lamond and N.F. Stewart (1981). Bregman's balancing method. Transportation Research, Vol. 15B, pp. 239-248.

B. Lamond (1980).
Balancement de matrices et optimisation d'entropie. Master's thesis, Université de Montréal.