UBC Theses and Dissertations
The exact tail asymptotics behaviour of the joint stationary distributions of the generalized join the… Zafari, Zafar 2012

Full Text

THE EXACT TAIL ASYMPTOTICS BEHAVIOUR OF THE JOINT STATIONARY DISTRIBUTIONS OF THE GENERALIZED JOIN THE SHORTEST QUEUEING MODEL

by ZAFAR ZAFARI

B.Sc., Amirkabir University of Technology (Tehran Polytechnic), 2010

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF SCIENCE in THE COLLEGE OF GRADUATE STUDIES (Mathematics)

THE UNIVERSITY OF BRITISH COLUMBIA (Okanagan)

April 2012

© Zafar Zafari, 2012

Abstract

Parallel queueing networks have an advantage over single-server queueing networks: when several servers serve the customers in line simultaneously, efficiency increases. Parallel queueing systems such as computer networks and multiple parallel processors have therefore become common in the real world, and many scientists have studied the analysis of parallel queueing networks in order to give exact, practical models for real-world queueing problems. One topic in parallel queueing networks is the two-dimensional random walk, which has recently been studied by many scientists. The formulation of a random walk model in the first quadrant has already been studied by Fayolle, Malyshev and Iasnogorodski [19]. In this thesis I extend the formulation of a general random walk model to the half plane, consisting of the first and fourth quadrants, and, using the kernel method and a Tauberian-like theorem, I investigate the exact tail asymptotic behaviour of the joint stationary distribution through its generating functions. In addition, I apply the results of the formulation of a general random walk model in the half plane to the Generalized-JSQ model, a queueing system with two parallel servers and three streams of arrivals, two of which are dedicated to each server while the third joins the shorter queue. We suppose that arrivals are independent Poisson processes and that service times have identical exponential distributions.
Although this queueing model has already been studied by Zhao and Grassmann [75] and by M. Miyazawa [56], in this thesis I use a different method, the kernel method, to investigate the exact tail asymptotic behaviour of the generating functions. The kernel method is simpler and faster than other methods, since it does not require the explicit expressions for the generating functions; we only discuss the dominant singularity and its location.

Table of Contents

Abstract
Table of Contents
Acknowledgements
Dedication

1 Introduction
1.1 Motivation and Contribution
1.2 Literature Review
1.3 Organization

2 Background
2.1 Queueing Theory
2.1.1 Counting Process
2.1.2 Poisson Process
2.1.3 Markov Chain
2.1.4 Chapman-Kolmogorov Equations
2.1.5 Queueing Models
2.2 Transition Probabilities
2.3 Random Walk in the Quarter Plane
2.4 Generating Functions
2.4.1 Queueing Examples
2.5 Stability Condition
2.6 Riemann Surfaces
2.6.1 Genus 1 or Genus 0
2.7 Analysis of the Kernel
2.7.1 Queueing Examples

3 Exact Tail Asymptotic Analysis of a General Random Walk Model Using the Kernel Method
3.1 Kernel Method and Generating Functions
3.2 Key Kernel
3.3 Asymptotic Analysis of P0(x), Q0(y) and Q0^(−)(y)
3.4 Tauberian-Like Theorem
3.5 Exact Tail Asymptotic for P0(x)
3.6 Exact Tail Asymptotic for Q0(y)

4 Exact Tail Asymptotic Behaviour of the Joint Stationary Distributions of Generalized-JSQ with the Kernel Method
4.1 Generalized-JSQ Model and Kernel Method
4.2 Branch Points and Branches
4.3 Asymptotic Analysis of P0(x) of the Generalized-JSQ
4.4 Asymptotic Analysis of Q0(y) of the Generalized-JSQ
4.5 Asymptotic Analysis of Q0^(−)(y) of the Generalized-JSQ
4.6 Exact Tail Asymptotic for P0(x) of the Generalized-JSQ
4.7 Exact Tail Asymptotic for Q0(y) of the Generalized-JSQ

5 Conclusion

Bibliography

Acknowledgements

I offer my enduring gratitude to my supervisor, Dr. Javad Tavakoli, and my co-supervisor, Dr. Winfried K. Grassmann, who have inspired me to continue my work in this field. I owe particular thanks to Dr. Yiqiang Q. Zhao, with whom I spent a great time doing research in summer 2011 at Carleton University. This dissertation would not have been possible without his guidance and support. I would like to thank Dr. Edward Butz for his support during the past two years. I offer my best and special thanks to my parents, the best friends of my life, who have supported me morally and financially throughout my years of education. Lastly, I thank my wonderful wife, who is the most valuable asset of my life.
Dedication

I lovingly dedicate this thesis to my father, who was the first teacher of my life; to my mother, whose love and kindness never end; and to my wonderful wife, whose love and encouragement are always with me.

Chapter 1 Introduction

1.1 Motivation and Contribution

Previously, the formulation of a random walk model in the first quadrant, (x ≥ 0, y ≥ 0), was studied by Fayolle, Malyshev and Iasnogorodski [19]. In chapter 3 of this thesis I extend the formulation of a general random walk model to the half plane, consisting of the first quadrant (x ≥ 0, y ≥ 0) and the fourth quadrant (x ≥ 0, y ≤ 0), and, using the kernel method and a Tauberian-like theorem, I investigate the exact tail asymptotic behaviour of the joint stationary distribution through its generating functions. This can serve as a reference for anyone interested in analysing a random walk model in the half plane. In addition, in chapter 4, I apply the results of this formulation to a real queueing model, the Generalized-JSQ, a queueing system with two parallel servers and three streams of arrivals, two of which are dedicated to each server while the third joins the shorter queue upon arrival. We assume that arrivals are independent Poisson processes and that service times have identical exponential distributions. Although this queueing model has already been studied by Zhao and Grassmann [75] and by M. Miyazawa [56], in this thesis I use a different method, the kernel method, which is simpler and faster for investigating the exact tail asymptotic behaviour: it avoids the explicit expressions for the generating functions, and only discusses the dominant singularity and its location relative to the other singularities, which determines how far we may expect to extend the radius of convergence of the generating functions.
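This role of the dominant singularity can be illustrated with a one-dimensional toy case that is not the thesis's half-plane model (the rate ρ below is ours, chosen for illustration): a geometric stationary distribution, such as the M/M/1 queue length, has a generating function with a single simple pole, and the tail decay rate of the probabilities is the reciprocal of the pole's location.

```python
# Toy illustration (not the thesis's half-plane model): for the geometric
# stationary distribution pi_n = (1 - rho) * rho**n (e.g. the M/M/1 queue
# length), the generating function f(x) = (1 - rho) / (1 - rho * x) has one
# dominant singularity, a simple pole at x_dom = 1 / rho.  The coefficients
# decay geometrically with rate 1 / x_dom, so locating the dominant
# singularity is enough to read off the exact tail asymptotics.

rho = 0.6                      # illustrative traffic intensity, rho < 1
pi = [(1 - rho) * rho**n for n in range(50)]

x_dom = 1 / rho                # location of the dominant singularity of f
for n in (10, 20, 30):
    predicted = (1 - rho) * x_dom ** (-n)   # tail read off from the pole
    assert abs(pi[n] - predicted) < 1e-12

print(f"dominant singularity at x = {x_dom:.4f}; geometric tail rate {rho}")
```

Chapter 3 carries out the two-dimensional analogue: there the dominant singularity of P0(x) or Q0(y) may be a pole or a branch point, and the Tauberian-like theorem converts its type and location into the exact tail asymptotics.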
1.2 Literature Review

Parallel queueing networks have advantages over single-server queueing networks, because when several servers serve the customers in line simultaneously, efficiency increases. Using parallel queueing servers such as computer networks and multiple parallel processors has therefore become common. As a consequence, many scientists and engineers have studied parallel queueing networks in order to give exact, practical models for these queueing problems. Some of the work done in this area is surveyed in the following paragraphs.

In the early 1960s and 1970s, the diffusion approximation method for queueing systems was discussed by Cox and Miller [13], and later by Gaver [30]. In 1990-1991 Adan et al. [3] introduced the compensation method, which is useful for some specific two-dimensional queueing models. In 1987 and 1990 Blanc [7], and in 1988 Hooghiemstra, Keane, and Van de Ree [35], proposed a new technique based on the power-series expansion of the state probabilities. In 1995 B. Blaszczyszyn, A. Frey, and V. Schmidt [5] applied the factorial moment expansion to obtain approximate formulas for stationary characteristics of the multi-server queue with a Markov-modulated arrival process and the FIFO (first in, first out) discipline. In 2005 the large deviation method, a good tool for analysing multi-server queueing models, was applied to a modified Jackson network by Foley and David R. McDonald [15]. Later, in 2008, the large deviation method was used on infinite-dimensional stochastic dynamical systems by Amarjit Budhiraja, Paul Dupuis and Vasileios Maroulas [6].

In the early 1970s V. A. Malyshev [60] [61] [62], and also in 1977 L. Flatto and H. P. McKean, proposed the generating function method for obtaining the asymptotic behaviour of the stationary probabilities of a random walk in the quarter plane of points with non-negative integer coordinates, Z^2_+ = {(i, j) : i, j = 0, 1, 2, ...}.
In 1994 J. W. Cohen [11] used the generating function method for the two-dimensional random walk. In 1999 G. Fayolle, R. Iasnogorodski, and V. Malyshev [19] published a book called "Random Walks in the Quarter-Plane", in which they applied many different mathematical tools to obtain explicit expressions for the generating functions of the two-dimensional random walk in the quarter plane. Halfin [34], Mitzenmacher [63], and Winston [73] have obtained valuable results for parallel queues. Although over the past years many researchers have published papers in the area of parallel queues, the JSQ model still has many problems that need to be worked on. Some work has been done on the stability problems of the JSQ model, including Foley and McDonald [16], Foss and Chernova [27] [28], Kurkova [44], Sharifnia [66], Suhov and Vvedenskaya [67], Tandra, Hemachandra and Manjunath [69], Vvedenskaya and Suhov [71], and Vvedenskaya, Dobrushin and Karpelevich [72].

1.3 Organization

In chapter 2 of this thesis, I give some basic definitions and concepts relating to queueing systems; in addition, I go through two-dimensional random walks, generating functions, stability conditions for a random walk model, and the analysis of the kernel in its fundamental form. Moreover, some real-world queueing examples are given for each area.

In chapter 3, I derive the formulas for the analysis of the exact tail asymptotic behaviour of the stationary probabilities in a general two-dimensional random walk model in the first and fourth quadrants, using the kernel method.
In chapter 4, I use the results of chapter 3 to apply the kernel method to the Generalized-JSQ (Join the Shortest Queue) model in order to obtain the exact tail asymptotic behaviour of the Generalized-JSQ's joint stationary probability distributions. Finally, in the last chapter, I summarize my main findings and achievements and give some suggestions in the area of random walks for motivated readers who wish to pursue further studies in this area.

Chapter 2 Background

2.1 Queueing Theory

In this section, I review some definitions and basic concepts from queueing theory. A queueing system consists of customers who arrive at a system at random times and depart after receiving service. Queueing systems are classified according to: (1) the input process, the probability distribution of customer arrivals in time; (2) the service time distribution, the probability distribution of the time to serve a customer; (3) the queueing discipline, the order in which customers are served; and (4) the number of servers.

A very basic queueing formula is Little's law, L = λW, where L is the average number of customers in the system, λ is the arrival rate of customers to the system, and W is the average waiting time of each customer in the system.

2.1.1 Counting Process

A stochastic process {N(t), t ≥ 0} is called a counting process when N(t) represents the total number of events that have occurred by time t. N(t) must satisfy:

(i) N(t) ≥ 0.

(ii) N(t) is integer valued.

(iii) If s < t, then N(s) ≤ N(t) (monotonicity).

(iv) For s < t, N(t) − N(s) represents the number of events that have occurred in the interval (s, t).

2.1.2 Poisson Process

One of the simplest counting processes is the Poisson process.

Definition 2.1. The counting process {N(t), t ≥ 0} is called a Poisson process with rate λ, λ > 0, if

(i) N(0) = 0.

(ii) The process has independent increments.

(iii) The number of events in any interval of length t is Poisson distributed with mean λt, for all s, t ≥ 0.
P{N(t + s) − N(s) = n} = e^{−λt} (λt)^n / n!,  n = 0, 1, 2, ...

Definition 2.2. A counting process is called a process with stationary increments if the distribution of the number of events occurring in any time interval depends only on the length of the interval. That is, the process has stationary increments if the number of events in the time interval (t1 + a, t2 + a) has the same distribution as the number of events in the time interval (t1, t2).

2.1.3 Markov Chain

In this section we consider a discrete stochastic process {Xn, n = 0, 1, 2, ...} that takes on a countable number of possible values in the state space A. If Xn = i, then the process is in state i at time n. The notation pi,j denotes the probability of moving from state i to state j. If we have the condition

P{Xn+1 = j | Xn = i, Xn−1 = in−1, ..., X1 = i1, X0 = i0} = P{Xn+1 = j | Xn = i} = pi,j,  (2.1)

for all states i0, i1, ..., in−1, i, j and all n ≥ 0, then the stochastic process is called a discrete-time Markov chain. Equation (2.1) indicates that the next state of a Markov chain depends only on its current state, and not on any previous states.

A discrete-time homogeneous Markov chain is characterized by the stochastic matrix P = ||pi,j||, i, j ∈ A, such that

pi,j ≥ 0,  Σ_j pi,j = 1, for all i ∈ A.

The matrix elements of P^n are denoted by p(n)i,j.

Definition 2.3. A Markov chain is called irreducible if, for every i, j, there exists m, depending on (i, j), such that

p(m)i,j > 0.

A Markov chain is called aperiodic if, for some i ∈ A, the set {n : p(n)i,i > 0} has greatest common divisor equal to 1.

Definition 2.4. An irreducible aperiodic Markov chain is called ergodic if, and only if, the equation πP = π, where π is the vector π = {πα, α ∈ A}, has a unique l1-solution up to a multiplicative factor, which can be chosen so that πα > 0 and Σ_α πα = 1. The πα's are called stationary probabilities.

A continuous-time Markov process is the continuous-time version of a Markov chain.
Hence, it is a stochastic process {X(t) : t ≥ 0} which satisfies the Markovian (memoryless) property and can only take on values from a state space set A. For tm > tn > 0, the Markov property says that the conditional probability of an event at time tm, given the probabilities of that event for all times up to and including time tn, depends only on time tn. The continuous-time Markov chain has many applications in queueing systems.

2.1.4 Chapman-Kolmogorov Equations

We have already defined the one-step transition probabilities pi,j; now we define the n-step transition probabilities, denoted p(n)i,j, which give the probability that a process in state i will be in state j after n transitions. That is,

p(n)i,j = P{Xn+m = j | Xm = i},  n ≥ 0, i, j ≥ 0.  (2.2)

Obviously p(1)i,j = pi,j. The Chapman-Kolmogorov equations provide a method to compute the n-step transition probabilities. These equations are

p(n+m)i,j = Σ_{k=0}^{∞} p(n)i,k p(m)k,j  (2.3)

for all n, m ≥ 0 and all i, j. More information about the Chapman-Kolmogorov equations can be found in [65].

2.1.5 Queueing Models

In this section we provide basic information on some well-known queueing models.

1. M/M/s model: The arrivals form a Poisson process with rate λ, and the service times are exponentially distributed with rate µ. In this model there are s servers. Writing ρ = λ/(sµ) and letting

P0 = [ Σ_{j=0}^{s−1} (λ/µ)^j / j! + (λ/µ)^s / (s!(1 − ρ)) ]^{−1}

denote the probability that the system is empty, the queueing parameters W, the mean waiting time, and L, the mean number of customers in the system, are

W = 1/µ + (λ/µ)^s P0 / (s! sµ (1 − ρ)²),  (2.4)

L = λ/µ + (λ/µ)^s ρ P0 / (s!(1 − ρ)²).  (2.5)

2. M/G/1: As in the previous model, arrivals follow a Poisson process with rate λ, while the service time has an arbitrary distribution G(y) = Pr{Yk ≤ y} with finite mean service time ν = E[Yk]; the service rate is µ = 1/ν. In this model there is one server.

3.
M/G/∞: It is like the M/G/1 model except that in this model there is an infinite number of servers instead of one.

2.2 Transition Probabilities

In this section we introduce the transition probabilities for the first and fourth quadrants of a two-dimensional random walk. A two-dimensional transition probability is the probability of moving from a state (m, n) to another state (m + i, n + j), and it is independent of m and n. The maximum step in any dimension is ±1, and transition probabilities depend only on the step sizes, except at the boundaries. Therefore, we define

P̄(m,n),(m+i,n+j) =
  pi,j       if m ≥ 1 and n ≥ 1, −1 ≤ i, j ≤ 1,
  p(0)i,j    if (m, n) = (0, 0), −1 ≤ i, j ≤ 1,
  p(1)i,j    if m ≥ 1 and n = 0, −1 ≤ i, j ≤ 1,
  p(2)i,j    if m = 0 and n ≥ 1, −1 ≤ i, j ≤ 1,
  p(−)i,j    if m ≥ 1 and n ≤ −1, −1 ≤ i, j ≤ 1,
  p(−2)i,j   if m = 0 and n ≤ −1, −1 ≤ i, j ≤ 1,

where pi,j, p(0)i,j, p(1)i,j, p(2)i,j, p(−)i,j, and p(−2)i,j are non-negative real numbers in [0, 1]. Since in this thesis we are dealing with discrete-time Markov chains which are homogeneous random walks, each family of transition probabilities sums to one:

Σ_{i,j} pi,j = 1,  Σ_{i,j} p(0)i,j = 1,  Σ_{i,j} p(1)i,j = 1,  Σ_{i,j} p(2)i,j = 1,  Σ_{i,j} p(−)i,j = 1,  Σ_{i,j} p(−2)i,j = 1,

where each sum runs over the steps −1 ≤ i, j ≤ 1 that do not leave the state space.

2.3 Random Walk in the Quarter Plane

Discrete-time Markov chains which are homogeneous two-dimensional random walks have three main properties:

1. The state space is two-dimensional and consists of the points with non-negative integer coordinates, A = Z^2_+ = {(i, j) : i, j ≥ 0 are integers}.

2. Because of the boundaries, we partition Z^2_+ as Z^2_+ = S ∪ S(1) ∪ S(2) ∪ {(0, 0)}, where S = {(i, j) : i, j > 0}, S(1) = {(i, 0) : i > 0}, and S(2) = {(0, j) : j > 0}.

3. We assume that jumps are bounded.
Therefore,

p(i,j),(i+α,j+β) = 0 unless −1 ≤ α, β ≤ 1.

Next we introduce an important theorem which gives the stability conditions for a random walk model in the first quadrant (x ≥ 0, y ≥ 0).

Theorem 2.1. Let M, M(1) and M(2) be the mean drift vectors

M = (Mx, My) = (Σ_{i,j} i pi,j, Σ_{i,j} j pi,j),
M(1) = (Mx(1), My(1)) = (Σ_{i,j} i p(1)i,j, Σ_{i,j} j p(1)i,j),
M(2) = (Mx(2), My(2)) = (Σ_{i,j} i p(2)i,j, Σ_{i,j} j p(2)i,j).

Then, when M ≠ 0, the random walk is ergodic if, and only if, one of the following conditions holds:

(i) Mx < 0, My < 0, Mx My(1) − My Mx(1) < 0, and My Mx(2) − Mx My(2) < 0;

(ii) Mx < 0, My ≥ 0, and My Mx(2) − Mx My(2) < 0;

(iii) Mx ≥ 0, My < 0, and Mx My(1) − My Mx(1) < 0.

Proof. The probabilistic proof of this theorem is given in [19].

2.4 Generating Functions

In this section we introduce the generating functions and the fundamental form for a random walk in the quarter plane, which play an important role in the kernel analysis. As stated in section 2.3, because of the boundaries, the state space of the two-dimensional random walk can be written as the union of disjoint classes,

Z^2_+ = S ∪ S(1) ∪ S(2) ∪ {(0, 0)}.

As discussed earlier, two states of the same class have the same transition probabilities. Let Xn denote the state of the random walk at time n. Now we define two complex variables, u1 and u2, one for each direction. The variable u is the vector of complex variables in C², u = (u1, u2), |ui| = 1, ui ∈ C, i = 1, 2, and the jump generating functions are

P1(u) = E[u^{X_{n+1}−X_n} | X_n = z ∈ S],
P2(u) = E[u^{X_{n+1}−X_n} | X_n = z ∈ S(1)],
P3(u) = E[u^{X_{n+1}−X_n} | X_n = z ∈ S(2)],
P4(u) = E[u^{X_{n+1}−X_n} | X_n = z ∈ {(0, 0)}].  (2.6)

Here

u^z = Π_{n=1}^{2} u_n^{z_n},

where u_n and z_n, n = 1, 2, are the coordinates of the vectors u and z respectively. Therefore we have

E[u^{X_{n+1}}] = E[u^{X_n} u^{X_{n+1}−X_n}]
= E[u^{X_n} 1{X_n ∈ S}]P1(u) + E[u^{X_n} 1{X_n ∈ S(1)}]P2(u) + E[u^{X_n} 1{X_n ∈ S(2)}]P3(u) + E[u^{X_n} 1{X_n ∈ {(0, 0)}}]P4(u).
(2.7)

Now we introduce the generating functions

π1(u) = Σ_{z∈S} πz u^z,  π2(u) = Σ_{z∈S(1)} πz u^z,  π3(u) = Σ_{z∈S(2)} πz u^z,  π4(u) = π0,0,  (2.8)

where πz denotes the stationary probability of being in state z. Using (2.8) and taking the limit as n → ∞ in (2.7), we get

Σ_{r=1}^{4} [1 − Pr(u)] πr(u) = 0.  (2.9)

We can rewrite (2.9) as

−h(x, y)π(x, y) = h1(x, y)π(x) + h2(x, y)π̃(y) + π0,0 h0(x, y),  (2.10)

which is called the fundamental form, where

π(x, y) = Σ_{i,j=1}^{∞} πi,j x^{i−1} y^{j−1},
π(x) = Σ_{i≥1} πi,0 x^{i−1},
π̃(y) = Σ_{j≥1} π0,j y^{j−1},  (2.11)

and

h(x, y) = xy(1 − Σ_{i=−1}^{1} Σ_{j=−1}^{1} pi,j x^i y^j) = a(x)y² + b(x)y + c(x) = ã(y)x² + b̃(y)x + c̃(y),  (2.12)

h1(x, y) = x(Σ_{i=−1}^{1} Σ_{j=0}^{1} p(1)i,j x^i y^j − 1) = (p(1)−1,0 + p(1)0,0 x + p(1)1,0 x²) + (p(1)−1,1 + p(1)0,1 x + p(1)1,1 x²)y − x,  (2.13)

h2(x, y) = y(Σ_{i=0}^{1} Σ_{j=−1}^{1} p(2)i,j x^i y^j − 1) = (p(2)0,−1 + p(2)0,0 y + p(2)0,1 y²) + (p(2)1,−1 + p(2)1,0 y + p(2)1,1 y²)x − y,  (2.14)

h0(x, y) = Σ_{i=0}^{1} Σ_{j=0}^{1} p(0)i,j x^i y^j − 1 = (p(0)0,0 + p(0)1,0 x + p(0)0,1 y + p(0)1,1 xy) − 1,  (2.15)

with

a(x) = −(p−1,1 + p0,1 x + p1,1 x²),  (2.16)
b(x) = −(p−1,0 + (p0,0 − 1)x + p1,0 x²),  (2.17)
c(x) = −(p−1,−1 + p0,−1 x + p1,−1 x²),  (2.18)
ã(y) = −(p1,−1 + p1,0 y + p1,1 y²),
b̃(y) = −(p0,−1 + (p0,0 − 1)y + p0,1 y²),  (2.19)
c̃(y) = −(p−1,−1 + p−1,0 y + p−1,1 y²).  (2.20)

2.4.1 Queueing Examples

Here we give some examples of random walks in queueing systems and their corresponding generating functions.

1. The Symmetric Join the Shortest Queue Model

In this queueing model, customers arrive at the system at rate λ. Upon arrival, a customer determines which server has the shorter queue and joins it. The service rates of the two servers have the same value µ. If the lengths of the two servers' queues are equal, an arriving customer joins either queue with probability 1/2.
In this queueing model, the random walk probabilities are as follows:

p−1,1 = µ, p0,−1 = µ, p1,−1 = λ,
p(1)−1,1 = 2µ, p(1)0,1 = λ,
p(2)0,−1 = µ, p(2)1,−1 = λ, p(2)0,0 = µ,
p(0)0,1 = λ, p(0)0,0 = 2µ.

For this model the generating functions are

h(x, y) = xy − µy² − µx − λx²,
h1(x, y) = 2µy + λxy − x,
h2(x, y) = µy + µ + λx − y,
h0(x, y) = 2µ + λy − 1.

2. Symmetric Join the Shorter Queue with Coupled Processor

This model is similar to the symmetric JSQ, with the difference that whenever the length of Qi is zero, the service rate of the other queue is changed from µj to µj + µ*i. In this queueing model, when the x-axis represents min{Q1, Q2} and the y-axis represents |Q2 − Q1|, the transition probabilities of the random walk are as follows:

p1,−1 = λ, p0,−1 = µ, p−1,1 = µ,
p(1)0,1 = λ, p(1)−1,1 = 2µ,
p(2)1,−1 = λ, p(2)0,−1 = µ + µ*, p(2)0,0 = 1 − (λ + µ + µ*),
p(0)0,1 = λ, p(0)0,0 = 2µ.

With respect to these transition probabilities, the generating functions are as follows:

h(x, y) = xy − µy² − µx − λx²,
h1(x, y) = 2µy + λxy − x,
h2(x, y) = (µ + µ*) + (1 − (λ + µ + µ*))y + λx − y,
h0(x, y) = 2µ + λy − 1.

3. The Pre-Emptive Priority Queueing System

In this example we consider the M/M/1 pre-emptive priority queueing model. In this queue we have two classes of customers with different priorities of service. Higher- and lower-priority customer arrivals occur according to Poisson processes with rates λ1 and λ2 respectively. The service discipline is FIFO (first in, first out), and it is pre-emptive.
There is one server with exponential service time in this model, which may have different service rates, µ1 , and µ2 when serving respectively the higher priority customer or lower one. Without loss of generality we assume that λ1 +λ2 +µ1 +µ2 = 1. The transition probabilities of the random walk of the quarter plane in which the x-axis corresponds to the length of the queue of higher priority customers, and y-axis corresponds to the queue length of lower priority customers are given by, p1,0 = λ1 , p0,1 = λ2 , p−1,0 = µ1 , p0,0 = µ2 (1)  (1)  (1)  (2)  (2)  (2)  (0)  (0)  (1)  p1,0 = λ1 , p0,1 = λ2 , p−1,0 = µ1 , p0,0 = µ2 , (2)  p1,0 = λ1 , p0,1 = λ2 , p0,0 = µ1 , p0,−1 = µ2 , (0)  p1,0 = λ1 , p0,0 = µ1 + µ2 , p0,1 = λ2 . According to the transition probabilities above, we have the following generating functions for the pre-emptive priority queueing system, h(x, y) = xy − [λ1x2 + (µ2 + λ2 y)x + µ1 ]y, h1 (x, y) = [λ1 x2 + (µ2 + λ2 y)x + µ1 ] − x, h2 (x, y) = [λ1 xy + λ2 y 2 + µ1 y + λ2 ] − y, h0 (x, y) = [λ1 x + λ2 y + µ1 + µ2 ] − 1. 4. The Restricted Jackson Network  In this queueing model, we have two queues, qi , i = 1, 2. Each of them has an external arrival stream which is a Poisson process with rate of λi . The service times in both queues are exponential, and departures from the servers while busy occur at rate of µi , i = 1, 2. Each customer who completes his service in the qi has two options, either he gets out of the system with probability of ri,0, or 16  he joins the other queue with probability of rij . Also without loss of generality we assume that λ1 + λ2 + µ1 + µ2 = 1. 
For this queueing model, the transition probabilities are given by

p1,0 = λ1, p0,1 = λ2, p−1,1 = r12µ1, p−1,0 = r10µ1, p0,−1 = r20µ2, p1,−1 = r21µ2,
p(1)1,0 = λ1, p(1)0,1 = λ2, p(1)−1,1 = r12µ1, p(1)−1,0 = r10µ1, p(1)0,0 = µ2,
p(2)1,0 = λ1, p(2)0,1 = λ2, p(2)0,0 = µ1, p(2)0,−1 = r20µ2, p(2)1,−1 = r21µ2,
p(0)1,0 = λ1, p(0)0,1 = λ2, p(0)0,0 = µ1 + µ2.

According to the transition probabilities, the generating functions are

h(x, y) = xy − (λ1x²y + r21µ2x² + λ2xy² + r20µ2x + r12µ1y² + r10µ1y),
h1(x, y) = (r12µ1 + λ2x)y + (r10µ1 + µ2x + λ1x²) − x,
h2(x, y) = (r21µ2 + λ1y)x + (r20µ2 + µ1y + λ2y²) − y,
h0(x, y) = λ1x + λ2y + µ1 + µ2 − 1.

5. The Classical Tandem Queueing Model

In this model, there are two servers in tandem. Customers arrive at the first server at rate λ and are served at rate µ1. After completing their service at the first server, they go to the second server to receive service at rate µ2, and after completing this second service they leave the system. Without loss of generality we assume that λ + µ1 + µ2 = 1. The transition probabilities for the classical tandem queue are given by

p1,0 = λ, p−1,1 = µ1, p0,−1 = µ2,
p(1)1,0 = λ, p(1)−1,1 = µ1, p(1)0,0 = µ2,
p(2)1,0 = λ, p(2)0,0 = µ1, p(2)0,−1 = µ2,
p(0)1,0 = λ, p(0)0,0 = µ1 + µ2.

Using the above transition probabilities, one can find the generating functions as follows:

h(x, y) = xy − (µ1y² + λx²y + µ2x),
h1(x, y) = (λx² + µ2x + µ1y) − x,
h2(x, y) = (λxy + µ2 + µ1y) − y,
h0(x, y) = λx + µ1 + µ2 − 1.

6. Two Demands Queueing System

In this queueing model, there is one stream of customers with arrival rate λ. The server of qi, i = 1, 2, has service rate µi, with exponential service times independent of the arrival process. The stability conditions in this model are λ < µ1 and λ < µ2. Without loss of generality we assume that λ + µ1 + µ2 = 1.
The transition probabilities for this queue are

p1,1 = λ, p−1,0 = µ1, p0,−1 = µ2,
p(1)1,1 = λ, p(1)−1,0 = µ1, p(1)0,0 = µ2,
p(2)1,1 = λ, p(2)0,0 = µ1, p(2)0,−1 = µ2,
p(0)1,1 = λ, p(0)0,0 = µ1 + µ2.

Knowing the transition probabilities for this queue, one can get the following generating functions:

h(x, y) = xy − (µ1y + λx²y² + µ2x),
h1(x, y) = λx²y + µ2x + µ1 − x,
h2(x, y) = λy²x + µ2 + µ1y − y,
h0(x, y) = λxy + µ1 + µ2 − 1.

2.5 Stability Condition

In this section we derive the stability conditions of some well-known queueing examples using theorem (2.1) of Fayolle, Iasnogorodski, and Malyshev.

1. The Symmetric Join the Shortest Queue Model

Considering the transition probabilities for the symmetric JSQ model, we have

Mx = λ − µ, My = −µ,
Mx(1) = −2µ, My(1) = λ,
Mx(2) = λ, My(2) = −λ.

So by using theorem (2.1), we have the following situations:

(i) λ < µ and −µ < 0; λ(λ − µ) − 2µ² < 0 ⇒ (λ + µ)(λ − 2µ) < 0 ⇒ λ < 2µ; and (λ − µ)µ − λµ < 0 ⇒ −µ² < 0, which always holds;

(ii) λ < µ, −µ ≥ 0: not possible;

(iii) µ ≤ λ, −µ < 0: (µ + λ)(λ − 2µ) < 0 ⇒ λ < 2µ.

Therefore, this queue is ergodic and stable when λ < µ.

2. Symmetric Join the Shorter Queue with Coupled Processor

From theorem (2.1) we have

Mx = λ − µ, My = −µ,
Mx(1) = −2µ, My(1) = λ,
Mx(2) = λ, My(2) = −µ − µ*.

Hence, the stability conditions are as follows:

(i) λ < µ and −µ < 0; λ(λ − µ) − 2µ² < 0 ⇒ (λ + µ)(λ − 2µ) < 0 ⇒ λ < 2µ; and (λ − µ)(µ + µ*) − λµ < 0 ⇒ (λ − µ)µ* − µ² < 0;

(ii) λ < µ, −µ ≥ 0: not possible;

(iii) µ ≤ λ, −µ < 0: (µ + λ)(λ − 2µ) < 0 ⇒ λ < 2µ.

Therefore, considering the conditions above, this queue is ergodic if λ < 2µ.

3. The Pre-Emptive Priority Queueing System

For this queueing model we have

Mx = λ1 − µ1, My = λ2,
Mx(1) = λ1 − µ1, My(1) = λ2,
Mx(2) = λ1, My(2) = λ2 − µ2.
Hence, the stability conditions are as follows:

(i) λ1 < µ1, λ2 < 0: not possible;

(ii) λ1 < µ1, λ2 ≥ 0, and λ2λ1 − (λ1 − µ1)(λ2 − µ2) < 0 ⇒ λ1µ2 + λ2µ1 − µ1µ2 < 0 ⇒ λ1/µ1 + λ2/µ2 < 1;

(iii) λ1 ≥ µ1, λ2 < 0: not possible.

Therefore, this queue is ergodic if, and only if, ρ = ρ1 + ρ2 < 1, where ρ1 = λ1/µ1 and ρ2 = λ2/µ2.

4. The Restricted Jackson Network

In this queue the matrix M is

Mx = λ1 + r21µ2 − r12µ1 − r10µ1,
My = λ2 + r12µ1 − r21µ2 − r20µ2,
Mx(1) = λ1 − (r10µ1 + r12µ1), My(1) = λ2 + r12µ1,
Mx(2) = λ1 + r21µ2, My(2) = λ2 − r21µ2.

With respect to the matrix M above, and considering theorem (2.1), the stability condition for the restricted Jackson network is

ρ1 < 1 and ρ2 < 1.

5. The Classical Tandem Queueing Model

For the tandem queue the matrix M is

Mx = λ − µ1, My = µ1 − µ2,
Mx(1) = λ − µ1, My(1) = µ1,
Mx(2) = λ, My(2) = −µ2.

Therefore, by applying theorem (2.1), this queue is ergodic if, and only if, λ < µ1 and λ < µ2.

6. Two Demands Queueing System

With respect to the transition probabilities for the Two Demands queueing system we have

Mx = λ − µ1, My = λ − µ2,
Mx(1) = λ − µ1, My(1) = λ,
Mx(2) = λ, My(2) = λ − µ2.
Furthermore, a rational function of \( x \) and \( y \) is of the form
\[
R(x, y) = \frac{b_0(x)y^m + b_1(x)y^{m-1} + \dots + b_m(x)}{c_0(x)y^k + c_1(x)y^{k-1} + \dots + c_k(x)}, \quad (2.22)
\]
where the \( b_i \), \( i = 0, 1, \dots, m \), and the \( c_i \), \( i = 0, 1, \dots, k \), are polynomials in \( x \) with complex coefficients, and the denominator is not identically zero. The region on which an algebraic function is defined and single-valued is a Riemann surface. The simplest algebraic functions are of the form
\[
a_0(x)y + a_1(x) = 0, \quad (2.23)
\]
where \( a_0(x) \) and \( a_1(x) \) are polynomials in \( x \) with complex coefficients. Hence \( y = \dfrac{-a_1(x)}{a_0(x)} \) is a rational function of \( x \).

In this thesis we deal with algebraic functions of the form
\[
h(x, y) = a_0(x)y^2 + a_1(x)y + a_2(x), \quad (2.24)
\]
where the \( a_i(x) \), \( i = 0, 1, 2 \), are polynomials in \( x \) and \( a_0(x) \neq 0 \). In this case we change the variable by
\[
\zeta = 2a_0(x)y + a_1(x) \quad (2.25)
\]
to get
\[
\zeta^2 - p(x) = 0, \quad (2.26)
\]
where \( p(x) = a_1^2(x) - 4a_0(x)a_2(x) \).

A two-dimensional manifold \( M \) is a topological space every point of which has a neighbourhood homeomorphic to an open disk in the complex plane \( \mathbb{C} \). A pair \( \{U, \varphi\} \) formed by a neighbourhood \( U \subseteq M \) and its associated homeomorphism \( \varphi \) is called a chart. The mapping \( \varphi : U \to \mathbb{C} \) defines a system of local coordinates in \( U \). A collection of charts \( \{(U_i, \varphi_i), i \in I\} \), where, for some index set \( I \), \( \{U_i, i \in I\} \) is an open covering of \( M \), is called an atlas \( A \). A connected two-dimensional manifold \( M \) is a Riemann surface \( S \) if there exists an atlas \( A_S \) with the following property: for any pair \( \{U, \varphi\} \), \( \{V, \psi\} \) of charts in \( A_S \) such that \( U \cap V \neq \emptyset \), the mapping \( \varphi \circ \psi^{-1} \) is holomorphic in \( \psi(U \cap V) \subset \mathbb{C} \).

The classical notion of holomorphic functions can be generalized to the case of Riemann surfaces. Let \( S \) be a Riemann surface, \( A_S \) its atlas, and \( Y \subset S \) an open connected subset of \( S \). A function \( f : Y \to \mathbb{C} \) is holomorphic in \( Y \) if, for any chart \( \{U, \varphi\} \) in \( A_S \), the mapping \( f \circ \varphi^{-1} : \varphi(U) \to \mathbb{C} \) is holomorphic in the usual sense in the open set \( \varphi(U) \subset \mathbb{C} \).
Now let \( S \) and \( T \) be two Riemann surfaces. A mapping \( f : S \to T \) is holomorphic if, for any pair of charts \( \{U, \varphi\} \) and \( \{V, \psi\} \) belonging to \( A_S \) and \( A_T \) respectively, with \( f(U) \subset V \), the mapping \( \psi \circ f \circ \varphi^{-1} \) is holomorphic in \( \varphi(U) \subset \mathbb{C} \). The uniqueness theorem remains valid: if \( f_1 \) and \( f_2 \) are two holomorphic functions from \( S \) to \( T \) which are equal on an infinite compact subset of \( S \), then they are equal everywhere.

2.6.1 Genus 1 or Genus 0

The Riemann surfaces generated by the fundamental form and the generating functions fall into two cases: genus 1 or genus 0. In fact, the Riemann surface has genus 0 if, and only if, the discriminant
\[
D(x) = b^2(x) - 4a(x)c(x) = d_4x^4 + d_3x^3 + d_2x^2 + d_1x + d_0
\]
of the generating function equation \( Q(x, y) = a(x)y^2 + b(x)y + c(x) = 0 \) has multiple zeros. That is, whenever the number of distinct zeros of the discriminant is 1 or 2, the Riemann surface is of genus 0, which gives a sphere; when the number of distinct zeros is 3 or 4, the Riemann surface is of genus 1, which gives a torus. For further study one can refer to [19]. The following lemma gives an explicit classification of genus 1 and genus 0.

Lemma 2.1. (Fayolle, Iasnogorodski, Malyshev) For all non-singular random walks, the Riemann surface has genus 0 if, and only if, one of the following holds:
\[
M_x = M_y = 0,
\]
\[
p_{1,0} = p_{1,1} = p_{0,1} = 0,
\]
\[
p_{1,0} = p_{1,-1} = p_{0,-1} = 0,
\]
\[
p_{-1,0} = p_{-1,-1} = p_{0,-1} = 0,
\]
\[
p_{0,1} = p_{-1,0} = p_{-1,1} = 0.
\]

2.7 Analysis of the Kernel

The equation \( h(x, y) = 0 \) is called the kernel equation. It is a quadratic equation in \( x \) and \( y \). If we solve the kernel equation for \( y \), we find two values of \( y \) for each value of \( x \). The solutions of this equation give a Riemann surface. Whenever the discriminant \( D(x) \) (or \( \tilde{D}(y) \)) has three or four distinct zeros, the Riemann surface is of genus 1, which yields a torus. However, if \( D(x) \) (or \( \tilde{D}(y) \)) has one or two distinct roots, the Riemann surface is of genus 0, which yields a sphere.
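The genus test just described is easy to carry out numerically: count the distinct zeros of the quartic discriminant. The sketch below uses two illustrative coefficient vectors, not the discriminant of any particular queue:

```python
import numpy as np

def genus_of_discriminant(coeffs, tol=1e-4):
    """coeffs: coefficients of D(x) in descending powers.
    1 or 2 distinct zeros -> genus 0 (sphere);
    3 or 4 distinct zeros -> genus 1 (torus)."""
    distinct = []
    for z in np.roots(coeffs):
        # merge numerically coincident roots (double roots split by ~1e-8)
        if all(abs(z - w) > tol for w in distinct):
            distinct.append(z)
    return 1 if len(distinct) >= 3 else 0

# D(x) = x^4 - 1: four simple zeros, so genus 1 (torus)
g_torus = genus_of_discriminant([1, 0, 0, 0, -1])
# D(x) = (x^2 - 1)^2 = x^4 - 2x^2 + 1: two double zeros, so genus 0 (sphere)
g_sphere = genus_of_discriminant([1, 0, -2, 0, 1])
```

The tolerance is needed because a double root of a polynomial is typically resolved by a numerical root finder as a tight cluster of nearby roots rather than an exact repeated value.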
We denote the roots of \( D(x) \) by \( x_1, x_2, x_3 \), and \( x_4 \). There is an important lemma by Fayolle, Iasnogorodski, and Malyshev [19] which describes the relationships among the \( x_i \), \( i = 1, 2, 3, 4 \), under different conditions. We state the lemma here.

Lemma 2.2. (Fayolle, Iasnogorodski, and Malyshev) For a non-singular random walk with \( M_y \neq 0 \), \( y(x) \) has two real branch points \( x_1 \) and \( x_2 \) inside the unit circle, and two real branch points \( x_3 \) and \( x_4 \) outside the unit circle. The following classifications hold for the pair \( (x_3, x_4) \):
1. If \( p_{1,0} > 2\sqrt{p_{1,1}p_{1,-1}} \), then \( 1 < x_3 < x_4 < \infty \);
2. If \( p_{1,0} = 2\sqrt{p_{1,1}p_{1,-1}} \), then \( 1 < x_3 < x_4 = \infty \);
3. If \( p_{1,0} < 2\sqrt{p_{1,1}p_{1,-1}} \), then \( 1 < x_3 \le -x_4 < \infty \).
Also, for the pair \( (x_1, x_2) \) we have:
4. If \( p_{-1,0} > 2\sqrt{p_{-1,1}p_{-1,-1}} \), then \( 0 < x_1 < x_2 < 1 \);
5. If \( p_{-1,0} = 2\sqrt{p_{-1,1}p_{-1,-1}} \), then \( x_1 = 0 \) and \( 0 < x_2 < 1 \);
6. If \( p_{-1,0} < 2\sqrt{p_{-1,1}p_{-1,-1}} \), then \( 0 < -x_1 \le x_2 < 1 \).
The proof of this lemma can be found in [19].

Lemma 2.3. (Fayolle, Iasnogorodski, and Malyshev) For non-singular random walks with \( M_y = 0 \), one of the branch points of \( y(x) \) is 1. Furthermore, if \( M_x < 0 \), two branch points have modulus greater than 1 and the remaining ones have modulus less than 1; if \( M_x > 0 \), two branch points have modulus less than 1 and the other ones have modulus greater than 1.

2.7.1 Queueing Examples

In this section we provide some queueing examples to see how the lemmas above apply to particular queueing models. Since in all of the queueing examples we study \( M_y \neq 0 \), we have either
\[
0 \le x_1 < x_2 < 1 < x_3 < x_4 < \infty \quad \text{or} \quad 0 \le x_1 < x_2 < 1 < x_3 < x_4 = \infty.
\]

1. The Symmetric JSQ Model and the JSQ Model with Coupled Processors

Referring to the transition diagrams for the symmetric JSQ model and the JSQ model with coupled processors, we have
\[
p_{1,0} = p_{1,1} = p_{-1,0} = p_{-1,-1} = 0, \qquad p_{1,-1} = \lambda, \qquad p_{-1,1} = \mu.
\]
Therefore, according to Lemmas 2.2 and 2.3, we obtain
\[
0 = x_1 < x_2 < 1 < x_3 < x_4 = \infty.
\]

2.
The Pre-Emptive Priority Queueing System

The pre-emptive priority queueing model is singular, and so its corresponding Riemann surface is of genus 0. Since we are dealing with non-singular random walks, we do not discuss it here.

3. The Restricted Jackson Network

From the transition diagram for this queueing model we have
\[
p_{1,0} = \lambda_1, \quad p_{1,-1} = r_{21}\mu_2, \quad p_{-1,0} = r_{10}\mu_1, \quad p_{-1,1} = r_{12}\mu_1, \quad p_{1,1} = p_{-1,-1} = 0.
\]
Therefore, according to Lemmas 2.2 and 2.3, we have
\[
0 < x_1 < x_2 < 1 < x_3 < x_4.
\]

4. The Classical Tandem Queueing Model

According to the transition probabilities for the tandem queue, we obtain
\[
p_{1,1} = p_{1,-1} = p_{-1,0} = p_{-1,-1} = 0, \qquad p_{1,0} = \lambda, \qquad p_{-1,1} = \mu_1.
\]
Hence, by Lemmas 2.2 and 2.3, we have
\[
0 = x_1 < x_2 < 1 < x_3 < x_4.
\]

5. The Queueing System with Two Demands

From the transition probabilities for the two-demands queueing model, we obtain
\[
p_{1,0} = p_{1,-1} = p_{-1,1} = p_{-1,-1} = 0, \qquad p_{1,1} = \lambda.
\]
Therefore, by Lemmas 2.2 and 2.3, we have
\[
0 < x_1 < x_2 < 1 < x_3 < x_4 = \infty.
\]

Chapter 3

Exact Tail Asymptotic Analysis of a General Random Walk Model Using the Kernel Method

3.1 Kernel Method and Generating Functions

The kernel method was first introduced by Knuth [38] and later developed by Banderier et al. [4]. Suppose we have a fundamental form
\[
A(x, y)F(x, y) = B(x, y)G(x) + C(x, y),
\]
where \( F(x, y) \) and \( G(x) \) are unknown, and \( A(x, y) \), \( B(x, y) \), and \( C(x, y) \) are known functions of two complex variables. The main idea of the kernel method is to work on the zero set of the kernel, \( A(x, y) = 0 \). Suppose the pair \( (x, Y_0(x)) \) satisfies \( A(x, Y_0(x)) = 0 \); then, obviously,
\[
G(x) = \frac{-C(x, Y_0(x))}{B(x, Y_0(x))}.
\]
In the kernel method, locating the poles of the unknown functions, together with determining the dominant singularities of the generating functions, is sufficient for the exact tail asymptotic analysis.
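The kernel-method step above can be sketched numerically. The kernel \( A \) and the functions \( B \) and \( C \) below are toy choices, not taken from any queueing model in the thesis; the point is only that once \( A(x, Y_0(x)) = 0 \), the fundamental form forces \( G(x) = -C(x, Y_0(x))/B(x, Y_0(x)) \):

```python
import cmath

# Toy kernel A(x, y) = y^2 - (x + 2) y + x (illustrative only)
def A(x, y): return y * y - (x + 2) * y + x
def B(x, y): return 1 + x * y   # hypothetical known coefficient function
def C(x, y): return x + y       # hypothetical known coefficient function

def Y0(x):
    """Smaller-modulus root of A(x, y) = 0, solved as a quadratic in y."""
    d = cmath.sqrt((x + 2) ** 2 - 4 * x)
    r1, r2 = ((x + 2) + d) / 2, ((x + 2) - d) / 2
    return r1 if abs(r1) <= abs(r2) else r2

x = 0.4
y0 = Y0(x)
G = -C(x, y0) / B(x, y0)
# On the kernel curve the left-hand side vanishes, so B*G + C must too:
check = B(x, y0) * G + C(x, y0)
```

Substituting the pair \( (x, Y_0(x)) \) kills the left-hand side of the fundamental form, which is exactly what isolates the single unknown \( G(x) \).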
In this section we provide the generating functions for a general random walk model in the first and fourth quadrants of the two-dimensional random walk. The generating functions are as follows:

(1) generating functions for the first quadrant:
\[
P_n(x) = \sum_{m \ge 1} \pi_{m,n} x^m, \qquad
Q_0(y) = \sum_{n \ge 1} \pi_{0,n} y^n, \qquad
Q_m(y) = \sum_{n \ge 1} \pi_{m,n} y^n,
\]
\[
P(x, y) = \sum_{m \ge 1,\, n \ge 1} \pi_{m,n} x^m y^n; \quad (3.1)
\]

(2) generating functions for the fourth quadrant:
\[
P^{(-)}_n(x) = \sum_{m \ge 1} \pi_{m,n} x^m, \qquad
Q^{(-)}_0(y) = \sum_{n \le -1} \pi_{0,n} y^{-n}, \qquad
Q^{(-)}_m(y) = \sum_{n \le -1} \pi_{m,n} y^{-n},
\]
\[
P^{(-)}(x, y) = \sum_{m \ge 1,\, n \le -1} \pi_{m,n} x^m y^{-n}; \quad (3.2)
\]

(3) generating function for the boundary on the x-axis:
\[
P_0(x) = \sum_{m \ge 1} \pi_{m,0} x^m. \quad (3.3)
\]

Using the generating functions defined above, we can write the balance equations for the first and fourth quadrants of the plane in order to find the fundamental forms of a random walk model. A balance equation expresses that, in stationarity, the probability flow out of each state equals the probability flow into that state. We categorize the balance equations as follows:

(1) for the boundary on the x-axis:

if \( m \ge 2 \) and \( n = 0 \),
\[
\pi_{m,0} = \sum_{i=-1}^{1} \pi_{m+i,-1}\, p^{(-)}_{-i,1}
+ \sum_{i=-1}^{1} \pi_{m+i,1}\, p_{-i,-1}
+ \sum_{i=-1,1} \pi_{m+i,0}\, p^{(1)}_{-i,0}
+ \pi_{m,0}\, p^{(1)}_{0,0}; \quad (3.4)
\]

if \( m = 1 \) and \( n = 0 \),
\[
\pi_{1,0} = \sum_{i=1}^{2} \pi_{i,-1}\, p^{(-)}_{-i+1,1}
+ \sum_{i=1}^{2} \pi_{i,1}\, p_{-i+1,-1}
+ \pi_{0,-1}\, p^{(2-)}_{1,1} + \pi_{0,1}\, p^{(2)}_{1,-1}
+ \pi_{0,0}\, p^{(0)}_{1,0} + \pi_{2,0}\, p^{(1)}_{-1,0}; \quad (3.5)
\]

if \( m = 0 \) and \( n = 0 \),
\[
\pi_{0,0} = \pi_{0,-1}\, p^{(2-)}_{0,1} + \pi_{1,-1}\, p^{(-)}_{-1,1}
+ \pi_{1,0}\, p^{(1)}_{-1,0} + \pi_{1,1}\, p_{-1,-1}
+ \pi_{0,1}\, p^{(2)}_{0,-1} + \pi_{0,0}\, p^{(0)}_{0,0}; \quad (3.6)
\]

(2) for the boundary on the y-axis:

if \( m = 0 \) and \( n \ge 2 \),
\[
\pi_{0,n} = \sum_{i=-1}^{1} \pi_{0,n+i}\, p^{(2)}_{0,-i}
+ \sum_{i=-1}^{1} \pi_{1,n+i}\, p_{-1,-i}; \quad (3.7)
\]

if \( m = 0 \) and \( n = 1 \),
\[
\pi_{0,1} = \sum_{i=1}^{2} \pi_{1,i}\, p_{-1,-i+1}
+ \sum_{i=1}^{2} \pi_{0,i}\, p^{(2)}_{0,-i+1}
+ \pi_{0,0}\, p^{(0)}_{0,1} + \pi_{1,0}\, p^{(1)}_{-1,1}; \quad (3.8)
\]

if \( m = 0 \) and \( n = -1 \),
\[
\pi_{0,-1} = \sum_{i=-2}^{-1} \pi_{1,i}\, p^{(-)}_{-1,-i-1}
+ \sum_{i=-2}^{-1} \pi_{0,i}\, p^{(2-)}_{0,-i-1}
+ \pi_{0,0}\, p^{(0)}_{0,-1} + \pi_{1,0}\, p^{(1)}_{-1,-1}; \quad (3.9)
\]

if \( m = 0 \) and \( n \le -2 \),
\[
\pi_{0,n} = \sum_{i=-1}^{1} \pi_{0,n+i}\, p^{(2-)}_{0,-i}
+ \sum_{i=-1}^{1} \pi_{1,n+i}\, p^{(-)}_{-1,-i}; \quad (3.10)
\]

(3) for the interior of the first
quadrant: if m ≥ 2 and n = 1, 1  πm,1 = i=−1  (1) πm+i,0 p−i,1  1  + i=−1  πm+i,2 p−i,−1 +  i=−1,1  πm+i,0 p−i,0+  πm,1 p0,0,  (3.11)  if m ≥ 2 and n ≥ 2, 1  πm,n = i=−1  1  πm+i,n−1 p−i,1 +  i=−1  πm+i,n+1 p−i,−1 +  i=−1,1  πm,n p0,0,  πm+i,n p−i,0+ (3.12)  if m = 1 and n ≥ 2, 1  (2) π0,n+i p1,−i  π1,n =  1  +  i=−1  i=−1  π2,n+i p−1,−i +  π1,n+i p0,−i, i=−1,1  (3.13) if m = 1 and n = 1, 2  2  π1,1 = i=1 (0) π0,0 p1,1 +  (1) πi,0 p−i+1,1  2 (2) π0,i p1,−i+1  +  +  i=1  i=1  π2,i p−1,−i+1+  π1,2 p0,−1,  (3.14)  (4) for the interior of the fourth quadrant: if m ≥ 2 and n = −1, 1  πm,−1 = i=−1  1  (1) πm+i,0 p−i,−1  + πm,−1 p0,0,  (−)  + i=−1  πm+i,−2 p−i,1 +  (−)  i=−1,1  πm+i,−1 p−i,0 (3.15) 31  if m ≥ 2 and n ≤ −2, 1  1  πm,n = i=−1  (−) πm+i,n−1 p−i,1  (−)  + i=−1  πm+i,n+1 p−i,−1 +  (−)  i=−1,1  πm+i,n p−i,0  + πm,n p0,0,  (3.16)  if m = 1 and n ≤ −2, 1  1 (2−) π0,n+i p1,−i  π1,n =  (−)  +  i=−1  i=−1  π2,n+i p−1,−i +  (−)  π1,n+i p0,−i, i=−1,1  (3.17) if m = 1 and n = −1, 2  π1,−1 = i=1  (1) πi,0 p−i+1,−1  (0) π0,0 p1,−1  +  +  −1  (2−) π0,i p1,−i−1  i=−2 (−) π1,−2 p0,1 +  +  −1 i=−2  (−) π1,−1 p0,0 .  (−)  π2,i p−1,−i−1+ (3.18)  Now according to the above balance equations one can get the following fundamental forms. For the first quadrant, fundamental form is P (x, y)h(x, y) = P0 (x)h1(x, y) + Q0(y)h2(x, y) + π0,0 h0 (x, y)+ P−1 (x)A(x) + π0,−1B(x), (3.19)  32  where 1  h(x, y) = 1 −  1  pi,j xiy j , i=−1 j=−1  1  1 (1)  pi,j xiy j − 1,  h1 (x, y) = i=−1 j=0 1  1 (2)  h2 (x, y) = i=0 j=−1 1  pi,j xiy j − 1,  1 (0)  h0 (x, y) = i=0 j=0  pi,j xiy j − 1,  1 (−)  pi,1 xi,  A(x) = B(x) =  i=−1 (−) p0,1  (2−)  + p1,1 x.  
(3.20)

For the fourth quadrant, the fundamental form is
\[
P^{(-)}(x, y)h^{(-)}(x, y) = P_0(x)h^{(-)}_1(x, y) + Q^{(-)}_0(y)h^{(-)}_2(x, y)
+ \pi_{0,0}h^{(-)}_0(x, y) + P_{-1}(x)A^{(-)}(x) + \pi_{0,-1}B^{(-)}(x), \quad (3.21)
\]
where
\[
h^{(-)}(x, y) = 1 - \sum_{i=-1}^{1}\sum_{j=-1}^{1} p^{(-)}_{i,j} x^i y^{-j},
\]
\[
h^{(-)}_1(x, y) = \sum_{i=-1}^{1} p^{(1)}_{i,-1} x^i y,
\]
\[
h^{(-)}_2(x, y) = \sum_{i=0}^{1}\sum_{j=-1}^{1} p^{(2-)}_{i,j} x^i y^{-j} - 1,
\]
\[
h^{(-)}_0(x, y) = \sum_{i=0}^{1} p^{(0)}_{i,-1} x^i y,
\]
\[
A^{(-)}(x) = -A(x) = -\sum_{i=-1}^{1} p^{(-)}_{i,1} x^i,
\]
\[
B^{(-)}(x) = -B(x) = -\big(p^{(-)}_{0,1} + p^{(2-)}_{1,1} x\big). \quad (3.22)
\]

Lemma 3.1. For \( |x| \le 1 \) and \( |y| \le 1 \), the generating functions satisfy
\[
P(x, y)h(x, y) + P^{(-)}(x, y)h^{(-)}(x, y)
= P_0(x)H_1(x, y) + Q_0(y)h_2(x, y) + Q^{(-)}_0(y)h^{(-)}_2(x, y) + \pi_{0,0}H_0(x, y),
\]
where
\[
H_1(x, y) = h_1(x, y) + h^{(-)}_1(x, y), \qquad
H_0(x, y) = h_0(x, y) + h^{(-)}_0(x, y).
\]
Proof. This follows by adding equations (3.19) and (3.21); the boundary terms \( P_{-1}(x)A(x) + \pi_{0,-1}B(x) \) and \( P_{-1}(x)A^{(-)}(x) + \pi_{0,-1}B^{(-)}(x) \) cancel, since \( A^{(-)}(x) = -A(x) \) and \( B^{(-)}(x) = -B(x) \).

3.2 Key Kernel

For the first quadrant of the random-walk plane, the kernel equation \( h(x, y) = 0 \) defines an algebraic curve, which turns out to be a Riemann surface. The zeros of the kernel equation are called branches of the fundamental form. In this section we analyse the kernel equation. For the first quadrant, the kernel equation is a quadratic in \( y \):
\[
h(x, y) = a(x)y^2 + b(x)y + c(x) = 0, \quad (3.23)
\]
where
\[
a(x) = -p_{1,1}x^2 - p_{0,1}x - p_{-1,1},
\]
\[
b(x) = -p_{1,0}x^2 + x - p_{-1,0},
\]
\[
c(x) = -p_{1,-1}x^2 - p_{0,-1}x - p_{-1,-1}. \quad (3.24)
\]
Therefore, the branches of the fundamental form are
\[
Y_\pm(x) = \frac{-b(x) \pm \sqrt{D(x)}}{2a(x)}, \quad (3.25)
\]
where
\[
D(x) = b^2(x) - 4a(x)c(x). \quad (3.26)
\]
Among the zeros \( Y_\pm(x) \), the one with smaller modulus is denoted by \( Y_0(x) \) and the one with greater modulus by \( Y_1(x) \). Therefore,
\[
Y_0(x) =
\begin{cases}
Y_-(x) & \text{if } |Y_-(x)| \le |Y_+(x)|, \\
Y_+(x) & \text{if } |Y_+(x)| \le |Y_-(x)|.
\end{cases} \quad (3.27)
\]
The zeros of the equation \( D(x) = b^2(x) - 4a(x)c(x) = 0 \) are called branch points. In our case there are at most four branch points, denoted \( x_1, x_2, x_3 \), and \( x_4 \).
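To make the branch points concrete, one can compute them for the classical tandem queue of Section 2.7.1 (the rates below are illustrative, normalized so that \( \lambda + \mu_1 + \mu_2 = 1 \)). From (3.24) the tandem kernel has \( a(x) = -\mu_1 \), \( b(x) = x - \lambda x^2 \), and \( c(x) = -\mu_2 x \), so \( D(x) = (x - \lambda x^2)^2 - 4\mu_1\mu_2 x \), and the ordering \( 0 = x_1 < x_2 < 1 < x_3 < x_4 \) predicted by Lemma 2.2 can be checked numerically:

```python
import numpy as np

lam, mu1, mu2 = 0.2, 0.5, 0.3  # illustrative rates with lam + mu1 + mu2 = 1

# D(x) = (x - lam*x^2)^2 - 4*mu1*mu2*x
#      = lam^2 x^4 - 2*lam x^3 + x^2 - 4*mu1*mu2 x
coeffs = [lam ** 2, -2 * lam, 1.0, -4 * mu1 * mu2, 0.0]
branch_points = np.sort(np.roots(coeffs).real)  # all four roots are real here
x1, x2, x3, x4 = branch_points
```

Here \( x_1 = 0 \) because \( p_{-1,0} = 2\sqrt{p_{-1,1}p_{-1,-1}} = 0 \) (case 5 of Lemma 2.2), while \( p_{1,0} = \lambda > 0 = 2\sqrt{p_{1,1}p_{1,-1}} \) puts \( (x_3, x_4) \) in case 1, so \( x_4 \) is finite.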
As mentioned earlier, when the number of branch points are one or two, the corresponding Riemann surface of kernel equation is sphere, and when the number of branch points are three ot four, the corresponding Riemann surface will be torus. We also can express h(x, y) in quadratic form of x as follows, h(x, y) = a ˜(y)x2 + ˜b(y)x + c˜(y) = 0,  (3.28)  35  where a˜(y) = −p1,1y 2 − p1,0y − p1,−1, ˜b(y) = −p0,1y 2 + y − p0,−1,  c˜(y) = −p−1,1y 2 − p−1,0y − p−1,−1.  (3.29)  Hence, the solutions of kernel equation are X± (y) =  −˜b(y) ±  ˜ D(y)  2˜a(y)  ,  (3.30)  where ˜ D(y) = ˜b(y)  2  − 4˜a(y)˜ c(y).  (3.31)  There are at most four branch points for D(y) denoted by y1 , y2, y3 , and y4. Similar to (3.27), Among the roots X± (y), the one with smaller modulus is denoted by X0 (y) and the one with greater modulus is denoted by X1(y). Therefore, X0(y) =  X−(y) if |X− (y)| ≤ |X+ (y)|, X+(y) if |X+ (y)| ≤ |X− (y)|.  (3.32)  If we write the kernel equation for the fourth quadrant, we get h(−) (x, y) = a(−) (x)y 2 + b(−) (x)y + c(−) (x) = 0,  (3.33)  where (−)  (−)  (−)  a(−) (x) = −p1,1 x2 − p0,1 x − p−1,1, (−)  (−)  b(−) (x) = −p1,0 x2 + x − p−1,0, (−)  (−)  (−)  c(−) (x) = −p1,−1x2 − p0,−1x − p−1,−1.  (3.34)  The solutions for kernel equation are now, (−)  Y± (x) =  −b(−) (x) ± D(−) (x) , 2a(−) (x)  (3.35) 36  where D(−) (x) = (b(−) )2(x) − 4a(−) (x)c(−)(x).  (3.36)  We can also write the kernel equation of the fourth quadrant as follows, h(x, y) = a˜(−) (y)x2 + ˜b(−) (y)x + c˜(−) (y) = 0,  (3.37)  where (−)  (−)  (−)  a ˜(−) (y) = −p1,1 y 2 − p1,0 y − p1,−1, ˜b(−) (y) = −p(−) y 2 + y − p(−) , c˜(−) (y) =  0,1 (−) −p−1,1y 2  0,−1  (−)  (−)  − p−1,0y − p−1,−1.  Hence, the solutions of kernel equation are (−) X± (y)  =  −˜b(−) (y) ±  ˜ (−) (y) D  2˜a(−) (y)  ,  (3.38)  where ˜ (−) (y) = (˜b(−))2(y) − 4˜a(−) (y)˜ D c(−) (y). In this thesis, we define [x3, x4] = [−∞, x4] ∪ [x3, ∞] when x4 < −1, similarly we define [y3, y4] = [−∞, y4] ∪ [y3, ∞] when y4 < −1. 
Since all the generating functions are defined in the complex plane, to ensure the continuity or to avoid the transition from one branch to another, we consider the following cut planes ˜ x = Cx − [x3, x4], C ˜˜ = C − [x , x ] ∪ [x , x ], C x x 1 2 3 4 ˜ y = Cy − [y3, y4], C ˜˜ = C − [y , y ] ∪ [y , y ]. C y y 1 2 3 4  (3.39)  Lemma 3.2. The function Y0(x) and Y1 (x) are meromorphic in cut plane C˜x , Moreover, 37  (i) Y0(x) has one zero and no poles. Hence, Y0(x) is analytic in C˜x . (ii) Y1(x) has two poles and no zeros. (iii) |Y0 (x)| < |Y1 (x)| on the whole cut plane C˜˜x , and |Y0(x)| = |Y1 (x)| only on cuts. (iv) If |x| = 1, then |Y0 (x)| ≤ 1. Moreover, Y0 (1) = 1.  Proof. Since we are in the cut plane C˜˜x , which excludes the case of b(x) = 0, we have  2c(x)   , if −b(x) > 0,  −b(x) + D1 (x) Y0(x) = 2c(x)   , if −b(x) < 0,  −b(x) − D1 (x) Y0 (x)Y1(x) =  c(x) , a(x)  hence, it is clear that (i) Y0 (x) has one zero and no poles, and (ii) Y1 (x) has two poles and no zeros. (iii) is true according to the definition of Y0(x) and Y1 (x). (iv) It is proved in lemma 2.3.4. of Fayolle, Iasnogorodski, and Malyshev [19].  3.3  (−)  Asymptotic Analysis of P0(x), Q0(y) and Q0 (y)  In this section we find the singularities of the generating functions, (−) P0 (x), Q0(y) and Q0 (y) in order to find the asymptotic behaviour of them. Let f (x) = −Q0(Y0(x))h2(x, Y0(x)), (−)  (−)  (−)  (−)  f (−) (x) = −Q0 (Y0 (x))h2 (x, Y0 (x)), g0 (x) = −π0,0 Q0(x), (−)  g1 (x) = h1 (x, Y0(x)) + h1 (x, Y (−) (x)).  (3.40)  38  From lemma 3.1. and equations (3.40) we have, P0 (x) =  f (x) + f (−)(x) + g0 (x) , g1 (x)  (3.41)  Let x∗ be the zero of g1 (x). Hence, (−)  g1 (x∗) = h1 (x∗, Y0(x∗)) + h1 (x∗, Y (−) (x∗)) = 0. Now for analysing the singularity for Q0(y) we need the following notations. Let l1 (y) = − P0 (X0(y))h1(X0 (y), y), l2 (y) = − π0,0 h0 (X0(y), y) − P−1(X0 (y)) A(X0(y))− π0,−1 B(X0(y)), g2 (y) =h2 (X0(y), y). 
(3.42) Hence, from equations (3.19) and (3.42) we have, Q0(y) =  l1 (y) + l2 (y) g2 (y)  (3.43)  Now let y ∗ be the zero of g2 (y). Hence, g2 (y ∗ ) = h2 X0 (y ∗), y ∗  = 0.  (−)  Also for asymptotic analysing of the Q0 notations. Let (−)  (−)  (−)  (−)  (−)  we need the following  (−)  l1 (y) = − P0 (X0 (y))h1 (X0 (y), y), (−)  (−)  (−)  l2 (y) = − π0,0 h0 (X0 (y), y) − P−1 (X0 (y)) A(−) (X0 (y))− π0,−1 B (−) (X0(y)),  (−)  (−)  (−)  g2 (y) =h2 (X0 (y), y).  (3.44)  Therefore, from equations (3.21) and (3.44) we have, (−)  Q0(y) =  (−)  l1 (y) + l2 (y) (−)  g2 (y)  (3.45) 39  (−)  ∗ In this equation let y(−) be the zero of g2 (y). Therefore, (−)  (−)  g2 (y) = h2  (−)  ∗ ∗ X0 (y(−) ), y(−)  = 0.  The following theorem is proved by H. Li and Y. Zhao in [50]. Theorem 3.1. Let z be a pole of P0 (x) with the smallest modulus in the disk |x| < x3 . Then one of the following three cases must hold: 1. z is a zero of h1 (x, Y0(x)); 2. y ∗ = Y0 (z) is a zero of h2 (X0(y), y) and |y ∗ | > 1; or 3. z ∗ = X0(y ∗ ) is a zero of h1 (x, Y0(x)) and |z ∗ | > 1. (−)  Using theorem 3.1, it is easily shown that x∗, x˜1 , x˜1(−) , x3, and x3 are singularities of P0 (x), where (−)  ∗ ). x˜1 = X1(y ∗ ), and x˜1(−) = X1 (y(−) The singularity with smallest modulus greater than 1 will be called the dominant singularity for P0 (y), and consequently the radius of convergence of P0 (y) can be extended to this point by analytic continuation [19]. Similarly, according to (3.42) and (3.43), y ∗ , y˜1 , y3 are singularities ∗ of Q0(y), where y˜1 = Y1(x∗), and according to (3.44) and (3.45), y(−) , (−)  y˜1 (−) , and y3  3.4  (−)  (−)  are singularities of Q0 (x), where y˜1 (−) = Y1 (x∗).  Tauberian-Like Theorem  In this section we recall the Tauberian-like theorem which is an important tool for our calculation of asymptotic analysis of generating (−) functions, P0 (x), Q0 (y) and Q0 (y). Theorem 3.2. 
Assume that \( C(z) \) is an analytic function defined on
\[
\triangle(\phi, \epsilon) = \big\{ z : |z| \le 1 + \epsilon,\ |\arg(z - a)| \ge \phi \big\},
\qquad \epsilon > 0, \quad 0 < \phi < \frac{\pi}{2},
\]
except at \( z = a \). Suppose \( C(z) = \sum_{n \ge 0} c_n z^n \), \( a \) is the dominant singularity of \( C(z) \) on the circle of convergence, and \( \lim_{z \to a} (1 - \frac{z}{a})^h C(z) = b \). Then
\[
c_n \sim \frac{b\, n^{\mathrm{Re}(h)-1}}{\Gamma(\mathrm{Re}(h))\, \mathrm{Re}(a)^n}
\quad \text{as } n \to \infty. \quad (3.46)
\]

3.5 Exact Tail Asymptotics for \( P_0(x) \)

In this section we find the exact tail asymptotic behaviour of the marginal stationary distribution along the x-axis by applying the Tauberian-like theorem to the generating function \( P_0(x) \). For this purpose we need the following expansions. Since \( Y_0(x) \) and \( Y^{(-)}_0(x) \) can be written as
\[
Y_0(x) = a(x) + b(x)\sqrt{1 - \frac{x}{x_3}}, \qquad
Y^{(-)}_0(x) = a^{(-)}(x) + b^{(-)}(x)\sqrt{1 - \frac{x}{x^{(-)}_3}}, \quad (3.47)
\]
(here \( a(x), b(x), a^{(-)}(x), b^{(-)}(x) \) denote functions analytic at the branch points, not the kernel coefficients of (3.24)), we can write
\[
g_1(x) = a_1(x) + b_1(x)\sqrt{1 - \frac{x}{x_3}} + b_2(x)\sqrt{1 - \frac{x}{x^{(-)}_3}},
\]
\[
Y_0(x_3) - Y_0(x) = \Big(1 - \frac{x}{x_3}\Big)a^{*}(x) - b(x)\sqrt{1 - \frac{x}{x_3}},
\]
\[
Y^{(-)}_0(x^{(-)}_3) - Y^{(-)}_0(x) = \Big(1 - \frac{x}{x^{(-)}_3}\Big)a^{(-)*}(x) - b^{(-)}(x)\sqrt{1 - \frac{x}{x^{(-)}_3}},
\]
\[
g_1(x) - g_1(x_3) = \Big(1 - \frac{x}{x_3}\Big)a^{*}_1(x) + b_1(x)\sqrt{1 - \frac{x}{x_3}},
\]
\[
g_1(x) - g_1(x^{(-)}_3) = \Big(1 - \frac{x}{x^{(-)}_3}\Big)a^{(-)*}_1(x) + b^{(-)}_1(x)\sqrt{1 - \frac{x}{x^{(-)}_3}}. \quad (3.48)
\]

To simplify our later calculations, let
\[
\alpha(x) = \sqrt{1 - \frac{x}{x_3}}\; a^{*}(x) - b(x), \qquad
\alpha^{(-)}(x) = \sqrt{1 - \frac{x}{x^{(-)}_3}}\; a^{(-)*}(x) - b^{(-)}(x),
\]
\[
\beta(x) = \sqrt{1 - \frac{x}{x_3}}\; a^{*}_1(x) + b_1(x), \qquad
\beta^{(-)}(x) = \sqrt{1 - \frac{x}{x^{(-)}_3}}\; a^{(-)*}_1(x) + b^{(-)}_1(x). \quad (3.49)
\]
Hence, according to (3.49), we can rewrite (3.48) as
\[
Y_0(x_3) - Y_0(x) = \sqrt{1 - \frac{x}{x_3}}\; \alpha(x), \qquad
Y^{(-)}_0(x^{(-)}_3) - Y^{(-)}_0(x) = \sqrt{1 - \frac{x}{x^{(-)}_3}}\; \alpha^{(-)}(x),
\]
\[
g_1(x) - g_1(x_3) = \sqrt{1 - \frac{x}{x_3}}\; \beta(x), \qquad
g_1(x) - g_1(x^{(-)}_3) = \sqrt{1 - \frac{x}{x^{(-)}_3}}\; \beta^{(-)}(x). \quad (3.50)
\]

According to (3.40), (3.41), and (3.50), we divide the asymptotic behaviour of \( \pi_{m,0} \) into five cases: (1) exact geometric decay; (2) exact geometric decay with factor \( m^{-1/2} \); (3) exact geometric decay with factor \( m^{1/2} \); (4) exact geometric decay with factor \( m \); and (5) exact geometric decay with factor \( m^{-3/2} \). Next, we give the details of the five cases.

Case 1.
If xdom = x∗, or xdom = x˜1, or xdom = x˜1 (−) , or xdom = x˜1 = (−) x˜1(−) , or xdom = x˜1 = x3 , or xdom = x˜1 (−) = x3, or xdom = x∗ = (−) x˜1 = x3, or xdom = x∗ = x˜1(−) = x3 , or xdom = x˜1 = x˜1(−) = x3, (−) (−) or xdom = x˜1 = x˜1 (−) = x3 , or xdom = x∗ = x˜1 = x3 = x3 , or (−) xdom = x∗ = x˜1 (−) = x3 = x3 , or xdom = x∗ = x˜1 = x˜1(−) = x3 = 42  (−)  x3  holds, then πm,0 ∼  L (xdom)m  ,  (3.51)  where L is a constant, and xdom is dominant singularity of P0 (x). We will prove it in the following conditions. Condition 1.1. If xdom = x∗ , then f (x) + f (−)(x) + g0 (x) P0 (x) = , (x − xdom ) g1∗ (x) f (x∗) + f (−) (x∗) + g0 (x∗) ⇒ lim (1 − )P0(x) = = L1 , x→xdom xdom x∗ g1∗ (x∗) x  Therefore, applying theorem 3.2 we get πm,0 ∼  L1 . (x∗)m  Condition 1.2. If xdom = x˜1 , then Y0 (x) ) y∗ + f (−) (x) + g0 (x) f (x) Y0 (x) (1 − ∗ ) y P0 (x) = , g1 (x) (1 −  ⇒ lim (1 − x→xdom  x xdom  )P0 (x) =  Y0(˜ x1 ) ) y∗ = L2 , ′ x˜1 Y0 (˜ x1) g1 (˜ x1 )  y ∗ f (˜ x1)(1 −  Therefore, applying theorem 3.2 we get πm,0 ∼  L2 . (˜ x1 )m  Condition 1.3. If xdom = x˜1 (−) , then 43  (−)  f (x) + f (−) (x) P0 (x) =  ⇒ lim (1 − x→xdom  Y (x) ) (1 − 0 ∗ y(−) (−)  Y (x) (1 − 0 ∗ ) y(−)  + g0 (x)  g1 (x)  x xdom  ∗ y(−)  )P0 (x) =  f  (−)  (x˜1  (−)  ,  Y0 (x˜1(−) ) )(1 − ) y∗  (−)  x˜1  (−)  dY0 (x) |x=x˜1 (−) g1(x˜1(−) ) dx  = L3 , Therefore, applying theorem 3.2 we get πm,0 ∼  L3 (x˜1(−) )m  .  Condition 1.4. If xdom = x˜1 = x˜1(−) , then (−)  P0 (x) =  Y0 (x) Y0 (x) (1 − ) ) (1 − ∗ y ∗ y (−) f (x) + f (−)(x) + g0(x) (−) Y0 (x) Y (x) (1 − ) (1 − 0 ∗ ) y∗ y(−)  ⇒ lim (1 − x→xdom  g1 (x) x xdom  ,  )P0 (x) =  ∗ (−) y(−) y∗ Y0(x) Y0 (x) (−) f (x)(1 − )+ f (x)(1 − ) ∗ (−) dY0(x) y∗ y dY0 (x) (−) x x dx dx g1 (x)  x=˜ x1  = L4 , 44  Therefore, applying theorem 3.2 we get πm,0 ∼  L4 . (˜ x1 )m (−)  Condition 1.5. 
If xdom = x˜1 = x3 , then Y0 (x) ) y∗ f (x) + f (−) (x) + g0 (x) Y0 (x) (1 − ∗ ) y P0 (x) = , g1 (x) (1 −  ⇒ lim (1 −  x  )P0 (x) =  y ∗ f (x)(1 −  Y0 (x) ) y∗  dY0(x) x g1 (x) dx Therefore, applying theorem 3.2 we get x→xdom  xdom  πm,0 ∼  = L5 , x=˜ x1  L5 . (˜ x1 )m  Condition 1.6. If xdom = x˜1 (−) = x3 , then (−)  f (x) + f (−) (x)  Y (x) (1 − 0 ∗ ) y(−) (1 −  P0 (x) =  (−) Y0 (x) ) ∗ y(−)  + g0 (x) ,  g1 (x) (−)  ⇒ lim (1 − x→xdom  x xdom  ∗ y(−)  )P0 (x) =  f  (−)  Y (x) (x)(1 − 0 ∗ ) y(−) (−)  dY0 (x) g1 (x) x dx  x=x˜1 (−)  = L6 , 45  Therefore, applying theorem 3.2 we get πm,0 ∼  L6 . (x3)m  Condition 1.7. If xdom = x∗ = x˜1 = x3, Y0 ) f (x) y∗ + f (−) (x) + g0 (x) x α(x) 1− xdom , x β(x) 1− xdom  y ∗ (1 − P0 (x) =  ⇒ lim (1 − x→xdom  x xdom  )P0 (x) =  Y0(x) ) y∗ α(x) β(x)  y ∗ f (x)(1 −  = L7 , x=x∗  Therefore, applying theorem 3.2 we get πm,0 ∼  L7 . (x∗)m (−)  Condition 1.8. If xdom = x∗ = x˜1(−) = x3 , (−)  ∗ y(−) (1  Y (x) (−) − 0 ∗ ) f (x) y(−)  1−  P0 (x) =  x xdom  + f (x) + g0 (x)  α(−) (x)  1−  x xdom  , β (−) (x) (−)  ⇒ lim (1 − x→xdom  x xdom  ∗ y(−)  )P0(x) =  Y (x) f (x)(1 − 0 ∗ ) y(−) α(−) (x) β (−) (x)  = L8 , x=x∗  46  Therefore, applying theorem 3.2 we get L8 . (x∗)m  πm,0 ∼  Condition 1.9. If xdom = x˜1 = x˜1(−) = x3, then (−)  Y (x) 1− 0 ∗ y(−)  f (−) (x)  1−  P0 (x) =  +  (−) Y0 (x) ∗ y(−)  y ∗ f (x) 1 − 1−  Y0 (x) y∗  x xdom  + g0(x)  α(x) ,  g1 (x) (−)  ⇒ lim (1 − x→xdom  ∗  y f  x xdom  )P0(x) =  (−)  Y (x) (x) 1 − 0 ∗ y(−)  = L9 ,  (−)  dY0 (x) x g1 (x) dx  x=˜ x1  Therefore, by theorem 3.2 we get πm,0 ∼  L9 . (x3)m (−)  Condition 1.10. If xdom = x˜1 = x˜1(−) = x3 , then  P0 (x) =  Y0(x) f (x) 1 − y∗ Y0(x) 1− y∗  ⇒ lim (1 − x→xdom  (−)  ∗ y(−)  +  f  (−)  Y (x) (x) 1 − 0 ∗ y(−)  1−  x xdom  + g0 (x)  α(−) (x) ,  g1 (x)  x xdom  )P0 (x) =  y ∗ f (x) 1 −  Y0(x) y∗  dY0(x) x g1(x) dx  = L10, x=x˜1  47  Therefore, by theorem 3.2 we get πm,0 ∼  L10 . (x˜1)m (−)  Condition 1.11. 
If xdom = x∗ = x˜1 = x3 = x3 , then y ∗ f (x) 1−  P0 (x) =  1− x xdom  Y0(x) y∗ α(x) 1−  ⇒ lim (1 − x→xdom  + f (−) (x) + g0 (x) ,  x  β(x)  xdom  Y0(x) y∗ α(x) β(x)  y ∗ f (x)  x xdom  )P0 (x) =  1−  = L11, x=x∗  Therefore, by theorem 3.2 we get πm,0 ∼  L11 . (x∗)m (−)  Condition 1.12. If xdom = x∗ = x˜1(−) = x3 = x3 , then (−)  ∗ y(−)  f  (−)  (x)  1−  P0 (x) =  Y (x) 1− 0 ∗ y(−) x  xdom  + f (x) + g0 (x)  α(−) (x) 1−  ,  x xdom  β(x) (−)  ⇒ lim (1 − x→xdom  x xdom  ∗ y(−)  )P0 (x) =  f  (−)  Y (x) (x) 1 − 0 ∗ y(−)  α(−) (x) β(x)  x=x∗  = L12, 48  Therefore, by theorem 3.2 we get L12 . (x∗)m  πm,0 ∼  (−)  Condition 1.13. If xdom = x∗ = x˜1 = x˜1(−) = x3 = x3 , then  P0 (x) =  (−)  Y0 (x) y ∗ f (x) 1 − ∗ y x α(x) 1− xdom  ∗ y(−)  +  Y (x) (x) 1 − 0 ∗ y(−)  1− 1−  +  f  (−)  x xdom  x xdom  α(−) (x)  β(x)  g0 (x) , x 1− β(x) xdom ⇒ lim (1 − x→xdom  Y0 (x) y ∗ f (x) 1 − y∗ α(x) β(x)  x xdom  )P0 (x) = (−)  ∗ y(−)  f  (−)  +  Y (x) (x) 1 − 0 ∗ y(−)  α(−) (x) β(x)  = L13, x=x∗  Therefore, by theorem 3.2 we get πm,0 ∼  L13 . (x∗)m (−)  Case 2. If xdom = x∗ = x3, or xdom = x∗ = x3 , or xdom = x˜1 = x3, (−) (−) or xdom = x˜1(−) = x3 , or xdom = x∗ = x3 = x3 , or xdom = x˜1 = (−) (−) x3 = x3 , or xdom = x˜1(−) = x3 = x3 holds, then πm,0  u m−1/2 √ ∼ . π(xdom )m  (3.52) 49  where u is a constant, and xdom is dominant singularity for P0 (x). The proofs are given in the following conditions. Condition 2.1. If xdom = x∗ = x3, then P0 (x) =  ⇒ lim  x→xdom  1−  x xdom  f (x) + f (−)(x) + g0 (x) , x 1− β(x) xdom  f (x) + f (−) (x) + g0(x) P0 (x) = β(x)  = u1 , x=x∗  Therefore, by theorem 3.2 we get πm,0  u1 m−1/2 ∼√ . π(x∗)m (−)  Condition 2.2. If xdom = x∗ = x3 , then f (x) + f (−)(x) + g0 (x) P0 (x) = , x (−) 1− β (x) xdom ⇒ lim  x→xdom  1−  x xdom  f (x) + f (−) (x) + g0(x) P0 (x) = β (−) (x)  = u2 , x=x∗  Therefore, by theorem 3.2 we get u2 m−1/2 πm,0 ∼ √ . π(x∗)m Condition 2.3. 
If xdom = x˜1 = x3, then y ∗ f (x) 1 − P0 (x) =  1−  x xdom  Y0 (x) y∗  + f (−) (x) + g0 (x)  α(x) g1 (x)  , 50  ⇒ lim  x→xdom  1−  x xdom  P0 (x) =  Y0(x) y∗ α(x) g1 (x)  y ∗ f (x) 1 −  = u3 , x=x˜1  Therefore, by theorem 3.2 we get πm,0  u3 m−1/2 ∼√ . π(x3)m (−)  Condition 2.4. If xdom = x˜1 (−) = x3 , then (−)  ∗ y(−)  f  (−)  Y (x) (x) 1 − 0 ∗ y(−)  1−  P0 (x) =  x xdom  + f (x) + g0 (x)  α(−) (x) ,  g1 (x) (−)  ⇒ lim  x→xdom  1−  x xdom  ∗ y(−)  P0 (x) =  f  (−)  Y (x) (x) 1 − 0 ∗ y(−)  α(−) (x) g1 (x)  x=x˜1 (−)  = u4 , Therefore, by theorem 3.2 we get πm,0  u4 m−1/2 ∼√ . (−) m π(x3 ) (−)  Condition 2.5. If xdom = x∗ = x3 = x3 , then P0 (x) =  ⇒ lim  x→xdom  1−  x xdom  f (x) + f (−)(x) + g0 (x) , x 1− β(x) xdom  f (x) + f (−) (x) + g0(x) P0 (x) = β(x)  = u5 , x=x∗  51  Therefore, by theorem 3.2 we get πm,0  u5 m−1/2 ∼√ . π(x∗)m (−)  Condition 2.6. If xdom = x˜1 = x3 = x3 , then y ∗ f (x) 1 − 1−  P0 (x) =  ⇒ lim  x→xdom  x xdom  Y0 (x) y∗  + f (−) (x) + g0 (x)  α(x) ,  g1 (x)  1−  x xdom  P0 (x) =  Y0(x) y∗ α(x) g1 (x)  y ∗ f (x) 1 −  = u6 , x=x˜1  Therefore, by theorem 3.2 we get πm,0  u6 m−1/2 ∼√ . π(x3)m (−)  Condition 2.7. If xdom = x˜1 (−) = x3 = x3 , then (−)  ∗ y(−)  f  (−)  Y (x) (x) 1 − 0 ∗ y(−)  1−  P0 (x) =  x xdom  + f (x) + g0 (x)  α(−) (x) ,  g1 (x) (−)  ⇒ lim  x→xdom  1−  x xdom  ∗ y(−)  P0 (x) =  f  (−)  Y (x) (x) 1 − 0 ∗ y(−)  α(−) (x) g1 (x)  x=x˜1 (−)  = u7 , 52  Therefore, by theorem 3.2 we get πm,0  u7 m−1/2 ∼√ . π(x3)m (−)  Case 3. If xdom = x∗ = x˜1 = x3 , or xdom = x∗ = x˜1(−) = x3, or (−) (−) xdom = x∗ = x˜1 = x˜1(−) = x3, or xdom = x∗ = x˜1 = x˜1 = x3 holds, then πm,0  n m1/2 . ∼ √ π m (xdom ) 2  (3.53)  where n is a constant, and xdom is dominant singularity for P0 (x). The proofs are given in the following conditions. 
(−)  Condition 3.1 If we have xdom = x∗ = x˜1 = x3 , then Y0 (x) y∗ Y0 (x) 1− ∗ y  f (x) 1 − P0 (x) =  1−  lim  x→xdom  1−  x xdom  3/2  P0 (x) =  + f (−) (x) + g0 (x) x  xdom  , β (−) (x)  y ∗ f (x) 1 −  Y0 (x) y∗  dY0(x) (−) x β (x) dx  = n1 , x=x∗  Therefore, using theorem 3.2 we get πm,0  n1 m1/2 . ∼ √ π ∗ m (x ) 2  Condition 3.2. If xdom = x∗ = x˜1(−) = x3, then we have 53  (−)  f  (−)  Y (x) (x) 1 − 0 ∗ y(−)  + f (x) + g0(x)  (−)  Y (x) 1− 0 ∗ y(−)  P0 (x) =  1−  ,  x xdom  β(x) (−)  lim  x→xdom  1−  x  ∗ y(−)  3/2  P0 (x) =  xdom  f  (−)  Y (x) (x) 1 − 0 ∗ y(−) (−)  dY0 (x) x β(x) dx  x=x∗  = n2 , Therefore, using theorem 3.2 we get πm,0  n2 m1/2 ∼ √ . π ∗ m (x ) 2  Condition 3.3. If we have xdom = x∗ = x˜1 = x˜1(−) = x3, then (−)  f  (−)  Y (x) (x) 1 − 0 ∗ y(−)  +  (−)  Y (x) 1− 0 ∗ y(−)  P0 (x) =  y ∗ f (x) 1 − 1−  1−  x xdom  Y0 (x) y∗  + g0(x)  α(x) ,  x xdom  β(x) (−)  lim  x→xdom  1−  x xdom  ∗ y(−)  3/2  P0 (x) =  f  (−)  Y (x) (x) 1 − 0 ∗ y(−) (−)  dY0 (x) β(x) x dx  x=x∗  = n3 , 54  Therefore, using theorem 3.2 we get πm,0  n3 m1/2 ∼ √ . π ∗ m (x ) 2 (−)  Condition 3.4. If xdom = x∗ = x˜1 = x˜1  P0 (x) =  Y0(x) f (x) 1 − y∗ Y0(x) 1− y∗  (−)  ∗ y(−)  x→xdom  1−  x xdom  f  (−)  +  Y (x) (x) 1 − 0 ∗ y(−)  1− 1−  lim  (−)  = x3 , then we have  3/2  P0 (x) =  x xdom  x xdom  + g0 (x)  α(−) (x) ,  β (−) (x)  y ∗ f (x) 1 −  Y0 (x) y∗  dY0(x) (−) x β (x) dx  = n4 , x=x∗  Therefore, using theorem 3.2 we get πm,0  n4 m1/2 . ∼ √ π ∗ m (x ) 2  Case 4. If xdom = x∗ = x˜1, or xdom = x∗ = x˜1(−) , or xdom = x∗ = x˜1 = x˜1(−) holds, then πm,0 ∼  rm , (xdom)m  (3.54)  where r is a constant, and xdom is dominant singularity of P0 (x). We will give the proofs in the following cases.  55  Condition 4.1. 
If we have xdom = x∗ = x˜1, then Y0 (x) y∗ Y0 (x) 1− ∗ y  f (x) 1 − P0 (x) =  + f (−) (x) + g0 (x) , g1∗ (x)  x − xdom  lim  x→xdom  1−  y ∗ f (x) 1 −  2  x  P0 (x) =  xdom  x2  Y0 (x) y∗  = r1 ,  dY0(x) ∗ g1 (x) dx  x=x∗  Therefore, using theorem 3.2 we get r1 m πm,0 ∼ ∗ m . (x ) Condition 4.2. If we have xdom = x∗ = x˜1(−) , then (−)  f  (−)  Y (x) (x) 1 − 0 ∗ y(−)  + f (x) + g0(x)  (−)  Y (x) 1− 0 ∗ y(−)  P0 (x) =  , g1∗ (x)  x − xdom  (−)  lim  x→xdom  1−  x xdom  ∗ y(−)  2  f  (−)  P0 (x) =  Y (x) (x) 1 − 0 ∗ y(−) (−)  x2  dY0 (x) ∗ g1 (x) dx  x=x∗  = r2 , Therefore, using theorem 3.2 we get r2 m πm,0 ∼ ∗ m . (x ) 56  Condition 4.3. If xdom = x∗ = x˜1 = x˜1(−) , then we have (−)  Y0 (x) f (x) 1 − y∗ Y0 (x) 1− ∗ y  f  (−)  +  Y (x) (x) 1 − 0 ∗ y(−) 1−  P0 (x) =  x − xdom lim  x→xdom  Y0(x) y ∗ f (x) 1 − y∗ dY0(x) ∗ g1 (x) x2 dx  1−  (−) Y0 (x) ∗ y(−)  + g0(x) ,  g1∗ (x)  2  x  P0 (x) =  xdom  (−)  ∗ y(−)  +  f  (−)  Y (x) (x) 1 − 0 ∗ y(−) (−)  dY0 (x) ∗ x2 g1 (x) dx Therefore, using theorem 3.2 we get r3 m πm,0 ∼ ∗ m . (x ) (−)  = r3 , x=x∗  (−)  Case 5. If xdom = x3, or xdom = x3 , or xdom = x3 = x3 then πm,0  s m−3/2 ∼√ , π(xdom )m  holds, (3.55)  where s is a constant, and xdom is the dominant singularity of P0 (x). The proofs are given in the following conditions, but first we need to define the following functions in order to simplify the later calculations. 
Let
$$L(x) = -\left[h_2(x, Y_0(x))\,\frac{dQ_0(Y_0(x))}{dx} + Q_0(Y_0(x))\,\frac{dh_2(x,y)}{dy} + \pi_{0,0}\,\frac{dh_0(x,y)}{dy}\right]_{y=Y_0(x)},$$
$$L^{(-)}(x) = -\left[h_2^{(-)}(x, Y_0^{(-)}(x))\,\frac{dQ_0^{(-)}(Y_0^{(-)}(x))}{dx} + Q_0^{(-)}(Y_0^{(-)}(x))\,\frac{dh_2^{(-)}(x,y)}{dy} + \pi_{0,0}\,\frac{dh_0^{(-)}(x,y)}{dy}\right]_{y=Y_0^{(-)}(x)},$$
$$N(x) = -\left[Q_0(Y_0(x))\,\frac{dh_2(x,y)}{dx} + \pi_{0,0}\,\frac{dh_0(x,y)}{dx}\right]_{y=Y_0(x)} - \left[Q_0^{(-)}(Y_0^{(-)}(x))\,\frac{dh_2^{(-)}(x,y)}{dx} + \pi_{0,0}\,\frac{dh_0^{(-)}(x,y)}{dx}\right]_{y=Y_0^{(-)}(x)},$$
$$O(x) = \left[\frac{dh_1(x,y)}{dy}\right]_{y=Y_0(x)}, \qquad O^{(-)}(x) = \left[\frac{dh_1^{(-)}(x,y)}{dy}\right]_{y=Y_0^{(-)}(x)},$$
$$Z(x) = \left[\frac{dh_1(x,y)}{dx}\right]_{y=Y_0(x)} + \left[\frac{dh_1^{(-)}(x,y)}{dx}\right]_{y=Y_0^{(-)}(x)},$$
$$M(x) = L(x)\,g_1(x) - O(x)\left[f(x) + f^{(-)}(x) + g_0(x)\right],$$
$$M^{(-)}(x) = L^{(-)}(x)\,g_1(x) - O^{(-)}(x)\left[f(x) + f^{(-)}(x) + g_0(x)\right],$$
$$U(x) = N(x)\,g_1(x) - Z(x)\left[f(x) + f^{(-)}(x) + g_0(x)\right],$$
$$K(x) = \frac{da(x)}{dx} + \frac{db(x)}{dx}\sqrt{1-\frac{x}{x_3}}, \qquad K^{(-)}(x) = \frac{da^{(-)}(x)}{dx} + \frac{db^{(-)}(x)}{dx}\sqrt{1-\frac{x}{x_3^{(-)}}}. \tag{3.56}$$

Condition 5.1. If we have xdom = x3, we write
$$\frac{dP_0(x)}{dx} = \frac{\left(K(x) - \frac{b(x)}{2x_3\sqrt{1-x/x_3}}\right)M(x) + \left(K^{(-)}(x) - \frac{b^{(-)}(x)}{2x_3^{(-)}\sqrt{1-x/x_3^{(-)}}}\right)M^{(-)}(x) + U(x)}{g_1(x)^2}, \tag{3.57}$$
$$\lim_{x\to x_{dom}} \sqrt{1-\frac{x}{x_{dom}}}\;\frac{dP_0(x)}{dx} = \left.\frac{-b(x)\,M(x)}{2x_3\,g_1(x)^2}\right|_{x=x_3} = s_1.$$
Therefore, using Theorem 3.2 we get
$$\pi_{m,0} \sim \frac{s_1\,m^{-3/2}}{\sqrt{\pi}\,(x_3)^m}.$$

Condition 5.2. If xdom = x3(−), then according to (3.57) we have
$$\lim_{x\to x_{dom}} \sqrt{1-\frac{x}{x_{dom}}}\;\frac{dP_0(x)}{dx} = \left.\frac{-b^{(-)}(x)\,M^{(-)}(x)}{2x_3^{(-)}\,g_1(x)^2}\right|_{x=x_3^{(-)}} = s_2.$$
Therefore, using Theorem 3.2 we get
$$\pi_{m,0} \sim \frac{s_2\,m^{-3/2}}{\sqrt{\pi}\,(x_3^{(-)})^m}.$$

Condition 5.3. If we have xdom = x3 = x3(−), then according to (3.57) we have
$$\lim_{x\to x_{dom}} \sqrt{1-\frac{x}{x_{dom}}}\;\frac{dP_0(x)}{dx} = \left.\frac{-b(x)\,M(x) - b^{(-)}(x)\,M^{(-)}(x)}{2x_3\,g_1(x)^2}\right|_{x=x_3} = s_3.$$
Therefore, using Theorem 3.2 we get
$$\pi_{m,0} \sim \frac{s_3\,m^{-3/2}}{\sqrt{\pi}\,(x_3)^m}.$$

3.6 Exact Tail Asymptotic for Q0(y)

In this section we find the exact tail asymptotic behaviour of the marginal stationary distribution along the y-axis by applying Theorem 3.2 to the generating function Q0(y). From (3.42) and (3.43) we have
$$Q_0(y) = \frac{l_1(y) + l_2(y)}{g_2(y)}.$$
Since X0(y) can be written as
$$X_0(y) = c(y) + d(y)\sqrt{1-\frac{y}{y_3}}, \tag{3.58}$$
we can write the following equations:
$$g_2(y) = c_1(y) + d_1(y)\sqrt{1-\frac{y}{y_3}},$$
$$X_0(y_3) - X_0(y) = \left(1-\frac{y}{y_3}\right)c^*(y) - d(y)\sqrt{1-\frac{y}{y_3}},$$
$$g_2(y_3) - g_2(y) = c_1^*(y)\left(1-\frac{y}{y_3}\right) - d_1(y)\sqrt{1-\frac{y}{y_3}}. \tag{3.59}$$

Hence, according to (3.42), (3.43), and (3.59), we characterize the exact tail asymptotic behaviour of π0,n in four cases: (1) exact geometric decay, (2) exact geometric decay with factor n^{-1/2}, (3) exact geometric decay with factor n^{-3/2}, and (4) exact geometric decay with factor n. In the following we show the details of these cases.

Case 1. If |ydom| = y∗ < min{ỹ1, y3}, or |ydom| = y∗ = ỹ1 = y3, or |ydom| = ỹ1 < min{y∗, y3} holds, then
$$\pi_{0,n} \sim \frac{c}{(y_{dom})^n}, \tag{3.60}$$
where c is a constant and ydom is the dominant singularity of Q0(y). We give the proof in the following conditions.

Condition 1.1. If |ydom| = y∗ < min{ỹ1, y3}, then we have
$$Q_0(y) = \frac{l_1(y) + l_2(y)}{(y - y_{dom})\,g_2^*(y)}
\;\Rightarrow\;
\lim_{y\to y_{dom}} \left(1-\frac{y}{y_{dom}}\right)Q_0(y) = \left.\frac{l_1(y) + l_2(y)}{g_2^*(y)}\right|_{y=y^*} = c_1.$$
Therefore, using Theorem 3.2 we get
$$\pi_{0,n} \sim \frac{c_1}{(y^*)^n}.$$

Condition 1.2. If |ydom| = y∗ = ỹ1 = y3, then, using (3.59),
$$1-\frac{X_0(y)}{x^*} = \frac{\sqrt{1-\frac{y}{y_{dom}}}\left(\sqrt{1-\frac{y}{y_{dom}}}\,c^*(y) - d(y)\right)}{x^*},$$
so that
$$Q_0(y) = \frac{l_1(y)\left(1-\frac{X_0(y)}{x^*}\right) + l_2(y)}{\sqrt{1-\frac{y}{y_{dom}}}\left(\sqrt{1-\frac{y}{y_{dom}}}\,c_1^*(y) - d_1(y)\right)}
\;\Rightarrow\;
\lim_{y\to y_{dom}} \left(1-\frac{y}{y_{dom}}\right)Q_0(y) = \left.\frac{x^*\,l_1(y)\left(1-\frac{X_0(y)}{x^*}\right)}{\left(\sqrt{1-\frac{y}{y_{dom}}}\,c^*(y) - d(y)\right)\left(\sqrt{1-\frac{y}{y_{dom}}}\,c_1^*(y) - d_1(y)\right)}\right|_{y=y^*} = c_2.$$
Therefore, using Theorem 3.2 we get
$$\pi_{0,n} \sim \frac{c_2}{(y^*)^n}.$$

Condition 1.3. If |ydom| = ỹ1 < min{y∗, y3}, then
$$Q_0(y) = \frac{l_1(y)\left(1-\frac{X_0(y)}{x^*}\right) + l_2(y)}{\left(1-\frac{X_0(y)}{x^*}\right)g_2(y)}
\;\Rightarrow\;
\lim_{y\to y_{dom}} \left(1-\frac{y}{y_{dom}}\right)Q_0(y) = \left.\frac{x^*\,l_1(y)\left(1-\frac{X_0(y)}{x^*}\right)}{y\,\frac{dX_0(y)}{dy}\,g_2(y)}\right|_{y=\tilde{y}_1} = c_3.$$
Therefore, using Theorem 3.2 we get
$$\pi_{0,n} \sim \frac{c_3}{(\tilde{y}_1)^n}.$$

Case 2.
If the conditions |ydom| = y∗ = y3 < ỹ1, or |ydom| = ỹ1 = y3 < y∗ hold, then
$$\pi_{0,n} \sim \frac{t\,n^{-1/2}}{\sqrt{\pi}\,(y_{dom})^n}, \tag{3.61}$$
where t is a constant and ydom is the dominant singularity of Q0(y). The proofs are given in the following conditions.

Condition 2.1. If |ydom| = y∗ = y3 < ỹ1, then
$$Q_0(y) = \frac{l_1(y) + l_2(y)}{\sqrt{1-\frac{y}{y_{dom}}}\left(\sqrt{1-\frac{y}{y_{dom}}}\,c_1^*(y) - d_1(y)\right)}
\;\Rightarrow\;
\lim_{y\to y_{dom}} \sqrt{1-\frac{y}{y_{dom}}}\;Q_0(y) = \left.\frac{l_1(y) + l_2(y)}{\sqrt{1-\frac{y}{y_{dom}}}\,c_1^*(y) - d_1(y)}\right|_{y=y^*} = t_1.$$
Therefore, using Theorem 3.2 we get
$$\pi_{0,n} \sim \frac{t_1\,n^{-1/2}}{\sqrt{\pi}\,(y^*)^n}.$$

Condition 2.2. If |ydom| = ỹ1 = y3 < y∗, then
$$Q_0(y) = \frac{l_1(y)\left(1-\frac{X_0(y)}{x^*}\right) + l_2(y)}{\frac{\sqrt{1-\frac{y}{y_{dom}}}\left(\sqrt{1-\frac{y}{y_{dom}}}\,c^*(y) - d(y)\right)}{x^*}\;g_2(y)}
\;\Rightarrow\;
\lim_{y\to y_{dom}} \sqrt{1-\frac{y}{y_{dom}}}\;Q_0(y) = \left.\frac{x^*\,l_1(y)\left(1-\frac{X_0(y)}{x^*}\right)}{\left(\sqrt{1-\frac{y}{y_{dom}}}\,c^*(y) - d(y)\right)g_2(y)}\right|_{y=\tilde{y}_1} = t_2.$$
Therefore, using Theorem 3.2 we get
$$\pi_{0,n} \sim \frac{t_2\,n^{-1/2}}{\sqrt{\pi}\,(y_3)^n}.$$

Case 3. If the condition |ydom| = y3 < min{ỹ1, y∗} holds, then
$$\pi_{0,n} \sim \frac{u\,n^{-3/2}}{\sqrt{\pi}\,(y_{dom})^n}, \tag{3.62}$$
where u is a constant and ydom is the dominant singularity of Q0(y). The proof is given in the following condition, but first we need the following notation in order to simplify the calculations.

In (3.19) let C(x) = P_{-1}(x)A(x) + π_{0,-1}B(x). Therefore, according to (3.42) we have
$$Q_0(y) = \frac{l_1(y) - \pi_{0,0}\,h_0(X_0(y), y) - C(X_0(y))}{g_2(y)}. \tag{3.63}$$
Also let us define
$$D(y) = -h_1(X_0(y), y)\,\frac{dP_0(X_0(y))}{dx} - \frac{dC(X_0(y))}{dx} - \left[P_0(X_0(y))\,\frac{dh_1(x,y)}{dx} + \pi_{0,0}\,\frac{dh_0(x,y)}{dx}\right]_{x=X_0(y)},$$
$$E(y) = -\left[P_0(X_0(y))\,\frac{dh_1(x,y)}{dy} + \pi_{0,0}\,\frac{dh_0(x,y)}{dy}\right]_{x=X_0(y)},$$
$$I(y) = \left[\frac{dh_2(x,y)}{dx}\right]_{x=X_0(y)}, \qquad J(y) = \left[\frac{dh_2(x,y)}{dy}\right]_{x=X_0(y)},$$
$$V(y) = D(y)\,g_2(y) - I(y)\left[l_1(y) + l_2(y)\right],$$
$$W(y) = E(y)\,g_2(y) - J(y)\left[l_1(y) + l_2(y)\right],$$
$$R(y) = \frac{dc(y)}{dy} + \frac{dd(y)}{dy}\sqrt{1-\frac{y}{y_3}}. \tag{3.64}$$

Condition 3.1. If |ydom| = y3 < min{ỹ1, y∗}, then
$$\frac{dQ_0(y)}{dy} = \frac{\left(R(y) - \frac{d(y)}{2y_3\sqrt{1-y/y_3}}\right)V(y) + W(y)}{g_2(y)^2}
\;\Rightarrow\;
\lim_{y\to y_{dom}} \sqrt{1-\frac{y}{y_{dom}}}\;\frac{dQ_0(y)}{dy} = \left.\frac{-d(y)\,V(y)}{2y_3\,g_2(y)^2}\right|_{y=y_3} = u_1.$$
Therefore, using Theorem 3.2 we get
$$\pi_{0,n} \sim \frac{u_1\,n^{-3/2}}{\sqrt{\pi}\,(y_3)^n}.$$

Case 4.
If the condition |ydom| = y∗ = ỹ1 < y3 holds, then
$$\pi_{0,n} \sim \frac{v\,n}{(y_{dom})^n}, \tag{3.65}$$
where v is a constant and ydom is the dominant singularity of Q0(y). The proof is given below.

Condition 4.1. If |ydom| = y∗ = ỹ1 < y3, then
$$Q_0(y) = \frac{l_1(y)\left(1-\frac{X_0(y)}{x^*}\right) + l_2(y)}{\left(1-\frac{X_0(y)}{x^*}\right)(y - y_{dom})\,g_2^*(y)}
\;\Rightarrow\;
\lim_{y\to y_{dom}} \left(1-\frac{y}{y_{dom}}\right)^2 Q_0(y) = \left.\frac{x^*\,l_1(y)\left(1-\frac{X_0(y)}{x^*}\right)}{y^2\,\frac{dX_0(y)}{dy}\,g_2^*(y)}\right|_{y=y^*} = v_1.$$
Therefore, using Theorem 3.2 we get
$$\pi_{0,n} \sim \frac{v_1\,n}{(y^*)^n}.$$

Because of the symmetry, the exact tail asymptotic behaviour of the marginal stationary distribution along the negative y-axis is similar to that of π0,n. Therefore, we do not go through the details of the analysis of Q0(−)(y).

Chapter 4
Exact Tail Asymptotic Behaviour of the Joint Stationary Distributions of Generalized-JSQ with Kernel Method

4.1 Generalized-JSQ Model and Kernel Method

In this section we describe the Generalized-JSQ model with two servers. We model this queue as a two-dimensional random walk in the first and fourth quadrants of the random walk plane, and for each quadrant we write a specific fundamental form. In the Generalized-JSQ model there are three arrival streams. The first stream is a Poisson process with rate λ1, which is dedicated to server 1; that is, this stream only goes to server 1. The second stream is a Poisson process with rate λ2, which is dedicated to server 2. The third stream is a Poisson process with rate λ, called the smart stream; it determines the shorter queue and joins that queue. Service times are exponentially distributed, with rate µ1 at server 1 and rate µ2 at server 2. Since the G-JSQ is a two-dimensional random walk model, we assume the random walk is without jumps; that is, the maximum step in any direction is 1.
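Before writing the transition probabilities down, the dynamics just described can be sanity-checked with a small continuous-time simulation sketch. The numerical rates below are illustrative assumptions chosen to satisfy the stability conditions µi > λi + λ (they are not values from the text), and breaking ties for the smart stream with a fair coin mirrors the λ/2 boundary terms used later.

```python
import random

# Illustrative sketch of the Generalized-JSQ dynamics (assumed rates,
# not values from the text); stability requires mu_i > lambda_i + lam.
lam1, lam2, lam = 0.3, 0.3, 0.2   # dedicated streams and the smart stream
mu1, mu2 = 1.0, 1.0               # exponential service rates

def step(q1, q2, rng):
    """One jump of the embedded chain of the CTMC; returns the next (Q1, Q2)."""
    rates = {
        "a1": lam1,                    # dedicated Poisson arrival to server 1
        "a2": lam2,                    # dedicated Poisson arrival to server 2
        "smart": lam,                  # smart stream joins the shorter queue
        "d1": mu1 if q1 > 0 else 0.0,  # departure from server 1
        "d2": mu2 if q2 > 0 else 0.0,  # departure from server 2
    }
    u, acc = rng.random() * sum(rates.values()), 0.0
    for ev, r in rates.items():
        acc += r
        if u < acc:
            break
    if ev == "a1":
        q1 += 1
    elif ev == "a2":
        q2 += 1
    elif ev == "smart":                # ties broken 50/50, matching the lam/2 terms
        if q1 < q2:
            q1 += 1
        elif q2 < q1:
            q2 += 1
        else:
            q1, q2 = (q1 + 1, q2) if rng.random() < 0.5 else (q1, q2 + 1)
    elif ev == "d1":
        q1 -= 1
    else:
        q2 -= 1
    return q1, q2

rng = random.Random(1)
q1 = q2 = 0
for _ in range(100_000):
    q1, q2 = step(q1, q2, rng)
# Random-walk coordinates used below: a = min(Q1, Q2) >= 0, b = Q1 - Q2.
a, b = min(q1, q2), q1 - q2
```

A long run of `step` keeps both queues non-negative, so the pair (min(Q1, Q2), Q1 − Q2) wanders over exactly the half-plane states (a, b) with a ≥ 0 that the transition probabilities below are written for.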
Therefore, the transition probabilities for the Generalized-JSQ model are as follows:
$$\bar{P}_{(a,b),(a+i,b+j)} = \begin{cases}
p_{i,j} & \text{if } a \ge 1 \text{ and } b \ge 1,\\
p_{i,j}^{(0)} & \text{if } (a,b) = (0,0),\\
p_{i,j}^{(1)} & \text{if } a \ge 1 \text{ and } b = 0,\\
p_{i,j}^{(2)} & \text{if } a = 0 \text{ and } b \ge 1,\\
p_{i,j}^{(2-)} & \text{if } a = 0 \text{ and } b \le -1,\\
p_{i,j}^{(-)} & \text{if } a \ge 1 \text{ and } b \le -1,
\end{cases} \qquad -1 \le i, j \le 1,$$
where $p_{i,j}$, $p_{i,j}^{(0)}$, $p_{i,j}^{(1)}$, $p_{i,j}^{(2)}$, $p_{i,j}^{(2-)}$, and $p_{i,j}^{(-)}$ are non-negative real numbers, and each family sums to 1 over its admissible steps.

By modelling the Generalized-JSQ in the first and fourth quadrants of the random walk plane, where the x-axis corresponds to min{Q1, Q2} and the y-axis corresponds to Q1 − Q2, we have the following probabilities:
$$p_{1,-1} = \lambda_1 + \lambda,\quad p_{0,1} = \lambda_2,\quad p_{-1,1} = \mu_1,\quad p_{0,-1} = \mu_2,$$
$$p_{0,1}^{(1)} = \lambda_2 + \lambda/2,\quad p_{-1,1}^{(1)} = \mu_1,\quad p_{0,-1}^{(1)} = \lambda_1 + \lambda/2,\quad p_{-1,-1}^{(1)} = \mu_2,$$
$$p_{0,1}^{(2)} = \lambda_2,\quad p_{1,-1}^{(2)} = \lambda_1 + \lambda,\quad p_{0,-1}^{(2)} = \mu_2,\quad p_{0,0}^{(2)} = \mu_1,$$
$$p_{0,1}^{(0)} = \lambda_2 + \lambda/2,\quad p_{0,-1}^{(0)} = \lambda_1 + \lambda/2,\quad p_{0,0}^{(0)} = \mu_1 + \mu_2,$$
$$p_{1,-1}^{(-)} = \lambda_2 + \lambda,\quad p_{0,1}^{(-)} = \lambda_1,\quad p_{-1,1}^{(-)} = \mu_2,\quad p_{0,-1}^{(-)} = \mu_1,$$
$$p_{1,-1}^{(2-)} = \lambda_2 + \lambda,\quad p_{0,1}^{(2-)} = \lambda_1,\quad p_{0,-1}^{(2-)} = \mu_1,\quad p_{0,0}^{(2-)} = \mu_2. \tag{4.1}$$

From the transition probabilities one can write the balance equations as follows:
(1) For the interior of the first quadrant: if n = 1, then
$$\pi_{m,1} = \mu_2\,\pi_{m,2} + (\lambda_1+\lambda)\,\pi_{m-1,2} + (\lambda_2+\lambda/2)\,\pi_{m,0} + \mu_1\,\pi_{m+1,0};$$
if n ≥ 2, then
$$\pi_{m,n} = (\lambda_1+\lambda)\,\pi_{m-1,n+1} + \mu_2\,\pi_{m,n+1} + \mu_1\,\pi_{m+1,n-1} + \lambda_2\,\pi_{m,n-1}.$$
(2) For the boundary on the x-axis: if n = 0, then
$$\pi_{m,0} = (\lambda_1+\lambda)\,\pi_{m-1,1} + \mu_2\,\pi_{m,1} + \mu_1\,\pi_{m,-1} + (\lambda_2+\lambda)\,\pi_{m-1,-1}.$$
(3) For the interior of the fourth quadrant: if n = −1, then
$$\pi_{m,-1} = (\lambda_1+\lambda/2)\,\pi_{m,0} + (\lambda_2+\lambda)\,\pi_{m-1,-2} + \mu_2\,\pi_{m+1,0} + \mu_1\,\pi_{m,-2};$$
if n ≤ −2, then
$$\pi_{m,n} = \lambda_1\,\pi_{m,n+1} + (\lambda_2+\lambda)\,\pi_{m-1,n-1} + \mu_2\,\pi_{m+1,n+1} + \mu_1\,\pi_{m,n-1}.$$
(4.2)

Hence, by using the generating functions (3.1), (3.2), (3.3) and the balance equations (4.2), we can derive two fundamental forms, one for the first quadrant and one for the fourth quadrant, as follows:
(1) for the first quadrant,
$$P(x,y)h(x,y) = P_0(x)h_1(x,y) + Q_0(y)h_2(x,y) + \pi_{0,0}h_0(x,y) + \pi_{0,-1}B(x) + P_{-1}(x)A(x), \tag{4.3}$$
(2) and for the fourth quadrant,
$$P^{(-)}(x,y)h^{(-)}(x,y) = P_0(x)h_1^{(-)}(x,y) + Q_0^{(-)}(y)h_2^{(-)}(x,y) + \pi_{0,0}h_0^{(-)}(x,y) - \pi_{0,-1}B(x) - P_{-1}(x)A(x),$$
where
$$h(x,y) = 1 - \mu_2 y^{-1} - (\lambda_1+\lambda)xy^{-1} - \mu_1 x^{-1}y - \lambda_2 y,$$
$$h^{(-)}(x,y) = 1 - \mu_1 y^{-1} - (\lambda_2+\lambda)xy^{-1} - \mu_2 x^{-1}y - \lambda_1 y,$$
$$h_1(x,y) = \mu_1 x^{-1}y + (\lambda_2+\lambda/2)y - 1,$$
$$h_1^{(-)}(x,y) = \mu_2 x^{-1}y + (\lambda_1+\lambda/2)y,$$
$$h_2(x,y) = (\lambda_1+\lambda)xy^{-1} + \lambda_2 y + \mu_2 y^{-1} + \mu_1 - 1,$$
$$h_2^{(-)}(x,y) = (\lambda_2+\lambda)xy^{-1} + \lambda_1 y + \mu_1 y^{-1} + \mu_2 - 1,$$
$$h_0(x,y) = (\lambda_2+\lambda/2)y + \mu_1 + \mu_2 - 1,$$
$$h_0^{(-)}(x,y) = (\lambda_1+\lambda/2)y,$$
$$A(x) = (\lambda_2+\lambda)x + \mu_1, \qquad B(x) = (\lambda_2+\lambda)x + \mu_1. \tag{4.4}$$

Therefore, according to Lemma 3.1, the fundamental form for both the first and fourth quadrants is
$$P(x,y)h(x,y) + P^{(-)}(x,y)h^{(-)}(x,y) = P_0(x)H_1(x,y) + Q_0(y)h_2(x,y) + Q_0^{(-)}(y)h_2^{(-)}(x,y) + \pi_{0,0}H_0(x,y), \tag{4.5}$$
where
$$H_1(x,y) = h_1(x,y) + h_1^{(-)}(x,y), \qquad H_0(x,y) = h_0(x,y) + h_0^{(-)}(x,y). \tag{4.6}$$

4.2 Branch Points and Branches

In this section we introduce the kernel equation for the Generalized-JSQ model and provide details on the properties of the branches and branch points of the kernel equation. The polynomial h(x,y) is called the kernel. According to (3.23), for the Generalized-JSQ model the kernel equation is
$$h(x,y) = a(x)y^2 + b(x)y + c(x) = 0,$$
where
$$a(x) = -\lambda_2 x - \mu_1, \qquad b(x) = x, \qquad c(x) = -(\lambda_1+\lambda)x^2 - \mu_2 x. \tag{4.7}$$
From (3.25), the branches of the kernel equation h(x,y) = 0 are
$$Y_\pm(x) = \frac{-b(x) \pm \sqrt{D_1(x)}}{2a(x)},$$
where D1(x) = b(x)² − 4a(x)c(x).
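For concreteness, the branch functions Y±(x) defined through (4.7) can be evaluated numerically. The rate values below are illustrative assumptions satisfying the stability conditions µi > λi + λ (not values from the text); the code uses only the quadratic structure of the kernel.

```python
import cmath

# Assumed illustrative rates (not from the text); mu_i > lambda_i + lam holds.
lam1, lam2, lam = 0.2, 0.2, 0.2
mu1, mu2 = 1.0, 1.0

def kernel_coeffs(x):
    """Coefficients a(x), b(x), c(x) of h(x, y) = a(x) y^2 + b(x) y + c(x), as in (4.7)."""
    a = -lam2 * x - mu1
    b = x
    c = -(lam1 + lam) * x**2 - mu2 * x
    return a, b, c

def branches(x):
    """The two branches Y_-(x), Y_+(x) of the kernel equation h(x, y) = 0."""
    a, b, c = kernel_coeffs(x)
    D1 = b * b - 4 * a * c            # discriminant D_1(x)
    sq = cmath.sqrt(D1)               # complex sqrt: the branches may be complex
    return (-b - sq) / (2 * a), (-b + sq) / (2 * a)

# Sanity check: both branches are roots of the kernel polynomial in y.
x0 = 1.3
a, b, c = kernel_coeffs(x0)
y_minus, y_plus = branches(x0)
for y in (y_minus, y_plus):
    assert abs(a * y * y + b * y + c) < 1e-9
```

By Vieta's formulas the branches also satisfy Y−(x) + Y+(x) = −b(x)/a(x) and Y−(x)·Y+(x) = c(x)/a(x), which gives a convenient independent check on any implementation.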
Furthermore, as stated in (3.28), the kernel equation for the first quadrant can be written as
$$h(x,y) = \tilde{a}(y)x^2 + \tilde{b}(y)x + \tilde{c}(y) = 0,$$
where
$$\tilde{a}(y) = -\lambda_1 - \lambda, \qquad \tilde{b}(y) = -\lambda_2 y^2 + y - \mu_2, \qquad \tilde{c}(y) = -\mu_1 y^2. \tag{4.8}$$
From (3.30), the solutions for x are
$$X_\pm(y) = \frac{-\tilde{b}(y) \pm \sqrt{D_2(y)}}{2\tilde{a}(y)},$$
where D2(y) = b̃(y)² − 4ã(y)c̃(y). Similarly, as stated in (3.33), the kernel equation of the fourth quadrant is
$$h^{(-)}(x,y) = a^{(-)}(x)y^2 + b^{(-)}(x)y + c^{(-)}(x) = 0,$$
where
$$a^{(-)}(x) = -\lambda_1 x - \mu_2, \qquad b^{(-)}(x) = x, \qquad c^{(-)}(x) = -(\lambda_2+\lambda)x^2 - \mu_1 x, \tag{4.9}$$
and from (3.37), the kernel equation for the fourth quadrant can be written as
$$h^{(-)}(x,y) = \tilde{a}^{(-)}(y)x^2 + \tilde{b}^{(-)}(y)x + \tilde{c}^{(-)}(y) = 0,$$
where
$$\tilde{a}^{(-)}(y) = -\lambda_2 - \lambda, \qquad \tilde{b}^{(-)}(y) = -\lambda_1 y^2 + y - \mu_1, \qquad \tilde{c}^{(-)}(y) = -\mu_2 y^2. \tag{4.10}$$

4.3 Asymptotic Analysis of P0(x) of the Generalized-JSQ

In this section we express P0(x) of the G-JSQ in terms of the other generating functions appearing in the fundamental forms. To simplify the expression and reduce the number of unknown generating functions, we make the kernel polynomial zero by substituting the branches of the kernel polynomial of the Generalized-JSQ into the fundamental form for the first and fourth quadrants, and consequently obtain a simpler expression for P0(x). Therefore, recalling (4.6), we have
$$P_0(x)\left[h_1(x, Y_0(x)) + h_1^{(-)}(x, Y_0^{(-)}(x))\right] + Q_0(Y_0(x))\,h_2(x, Y_0(x)) + Q_0^{(-)}(Y_0^{(-)}(x))\,h_2^{(-)}(x, Y_0^{(-)}(x)) + \pi_{0,0}\left[h_0(x, Y_0(x)) + h_0^{(-)}(x, Y_0^{(-)}(x))\right] = 0, \tag{4.11}$$
where h1(x, Y0(x)), h2(x, Y0(x)), h1(−)(x, Y0(−)(x)), h2(−)(x, Y0(−)(x)), h0(x, Y0(x)), and h0(−)(x, Y0(−)(x)) can be found from (4.5). In order to analyse the asymptotic behaviour of P0(x), we need information about the dominant singularity of P0(x). The following lemma gives the solutions of the equation h1(x, Y0(x)) + h1(−)(x, Y0(−)(x)) = 0, which is one of the singularities of P0(x).

Lemma 4.1.
Suppose the following two inequalities hold:
$$(1)\quad \mu_1 - \mu_2 - 2\mu_1\mu_2 + 2\lambda_2\mu_1^2 + 2\lambda_2\mu_2^2 + 2\mu_1\mu_2^2 + 4\mu_1^2\mu_2 - 3\mu_1^2 + 2\mu_1^3 + \mu_2^2 + 4\lambda_2\mu_1\mu_2 < 0,$$
$$(2)\quad \mu_1 - \mu_2 - 2\mu_1\mu_2 + 2\lambda_2\mu_1^2 + 2\lambda_2\mu_2^2 + 2\lambda\mu_1^2 + 2\lambda\mu_2^2 + 2\mu_1\mu_2^2 + 4\mu_1^2\mu_2 - 3\mu_1^2 + 2\mu_1^3 + \mu_2^2 + 4\lambda_2\mu_1\mu_2 + 4\lambda\mu_1\mu_2 > 0.$$
Then
$$x^* = \left(\frac{1}{\rho}\right)^2 = \left(\frac{\mu_1+\mu_2}{\lambda_1+\lambda_2+\lambda}\right)^2$$
is the solution of h1(x, Y0(x)) + h1(−)(x, Y0(−)(x)) = 0.

Proof. If the first inequality holds, we have Y0(ρ⁻²) = ρ⁻¹. Also, from the second inequality we have Y0(−)(ρ⁻²) = ρ⁻¹. Therefore
$$h_1(\rho^{-2}, Y_0(\rho^{-2})) + h_1^{(-)}(\rho^{-2}, Y_0^{(-)}(\rho^{-2})) = 0.$$
This completes the proof.

From now on let x∗ be the solution of h1(x, Y0(x)) + h1(−)(x, Y0(−)(x)) = 0 with the smallest modulus greater than 1. Hence h1(x∗, Y0(x∗)) + h1(−)(x∗, Y0(−)(x∗)) = 0, with |x∗| > 1.

4.4 Asymptotic Analysis of Q0(y) of the Generalized-JSQ

Recall (4.3), which gives the fundamental form for the first quadrant. Replacing x by X0(y) in (4.3) we get
$$Q_0(y) = \frac{-P_0(X_0(y))\,h_1(X_0(y), y) - \pi_{0,0}\,h_0(X_0(y), y) - \left(P_{-1}(X_0(y)) + \pi_{0,-1}\right)\left((\lambda_2+\lambda)X_0(y) + \mu_1\right)}{h_2(X_0(y), y)}. \tag{4.12}$$
Now, to find the solution of h2(X0(y), y) = 0 with the smallest modulus greater than 1, we define the function
$$f(y) = \tilde{a}(y)\,h_2(X_0(y), y)\,h_2(X_1(y), y).$$
Hence we have the following lemma.

Lemma 4.2. y = µ2/λ2 is the number with the smallest modulus greater than 1 which makes f(y) = 0.

Proof. The equation f(y) = ã(y)h2(X0(y), y)h2(X1(y), y) = 0 implies
$$-(\lambda_1+\lambda)\,\mu_1\,y\,(y-1)\,(\lambda_2 y - \mu_2) = 0.$$
Therefore y = µ2/λ2 is the zero of f(y) with the smallest modulus greater than 1.

Lemma 4.3. y = µ2/λ2 = y∗ is the zero of h2(X0(y), y).

Proof. From (4.11) one can get
$$D_2\!\left(\frac{\mu_2}{\lambda_2}\right) = \frac{\mu_2^2\,(2\mu_1+\mu_2+\lambda_2-1)^2}{\lambda_2^2}.$$
Since we assumed the stability conditions for the system (µ1 > λ1 + λ), which imply the inequality 2µ1 + µ2 + λ2 > 1, from (4.11) one can get
$$X_+\!\left(\frac{\mu_2}{\lambda_2}\right) = \frac{\mu_2}{\lambda_2}, \qquad X_-\!\left(\frac{\mu_2}{\lambda_2}\right) = \frac{\mu_2\,\mu_1}{\lambda_2\,(\lambda_1+\lambda)}.$$
With the above stability condition it is straightforward to see that X0(µ2/λ2) = µ2/λ2. Therefore
$$h_2\!\left(X_0\!\left(\frac{\mu_2}{\lambda_2}\right), \frac{\mu_2}{\lambda_2}\right) = (\lambda_1+\lambda)\frac{\mu_2}{\lambda_2} + \lambda_2\left(\frac{\mu_2}{\lambda_2}\right)^2 + \mu_2 + (\mu_1-1)\frac{\mu_2}{\lambda_2} = 0.$$

From now on we denote the solution of h2(X0(y), y) = 0 with the smallest modulus greater than 1 by y∗.

4.5 Asymptotic Analysis of Q0(−)(y) of the Generalized-JSQ

In this section, in order to analyse the asymptotic behaviour of Q0(−)(y) of the Generalized-JSQ, we replace x by X0(−)(y) in (4.4) to get
$$Q_0^{(-)}(y) = \frac{-P_0(X_0^{(-)}(y))\,h_1^{(-)}(X_0^{(-)}(y), y) - \pi_{0,0}\,h_0^{(-)}(X_0^{(-)}(y), y) - \left(P_{-1}(X_0^{(-)}(y)) + \pi_{0,-1}\right)\left((\lambda_2+\lambda)X_0^{(-)}(y) + \mu_1\right)}{h_2^{(-)}(X_0^{(-)}(y), y)}.$$

Lemma 4.4. Let g(y) = ã(−)(y)h2(−)(X0(−)(y), y)h2(−)(X1(−)(y), y). Then y = µ1/λ1 is a solution of g(y) = 0 with the smallest modulus greater than 1.

Proof. If g(y) = 0, then
$$-(\lambda_2+\lambda)\,\mu_2\,y\,(y-1)\,(\lambda_1 y - \mu_1) = 0.$$
Therefore y = µ1/λ1 is a solution of g(y) = 0 with the smallest modulus greater than 1.

Lemma 4.5. y = µ1/λ1 is a solution of h2(−)(X0(−)(y), y) = 0.

Proof. From (4.15) we can get
$$D_2^{(-)}\!\left(\frac{\mu_1}{\lambda_1}\right) = \frac{\mu_1^2\,(\mu_2-\lambda_2-\lambda)^2}{\lambda_1^2}.$$
By using (4.15) and assuming the stability condition µ2 > λ2 + λ, we can get
$$X_+^{(-)}\!\left(\frac{\mu_1}{\lambda_1}\right) = \frac{\mu_1}{\lambda_1}, \qquad X_-^{(-)}\!\left(\frac{\mu_1}{\lambda_1}\right) = \frac{\mu_1\,\mu_2}{\lambda_1\,(\lambda_2+\lambda)}.$$
Therefore
$$h_2^{(-)}\!\left(X_0^{(-)}\!\left(\frac{\mu_1}{\lambda_1}\right), \frac{\mu_1}{\lambda_1}\right) = (\lambda_2+\lambda)\frac{\mu_1}{\lambda_1} + \lambda_1\left(\frac{\mu_1}{\lambda_1}\right)^2 + \mu_1 + (\mu_2-1)\frac{\mu_1}{\lambda_1} = 0.$$

From now on we denote the solution of h2(−)(X0(−)(y), y) = 0 with the smallest modulus greater than 1 by y∗(−).

4.6 Exact Tail Asymptotic for P0(x) of the Generalized-JSQ

In this section, according to the results of Section 3.5 of this thesis, we find the exact tail asymptotic behaviour of πm,0 of the Generalized-JSQ by applying Theorem 3.2 to P0(x). If the inequalities of Lemma 4.1
and the two stability conditions µi > λi + λ, i = 1, 2, hold, then the results of Section 3.5 indicate that the dominant singularity of P0(x) can be either (1/ρ)² = ((µ1+µ2)/(λ1+λ2+λ))², or X1(1/ρ2) = X1(µ2/λ2), or X1(−)(1/ρ1) = X1(−)(µ1/λ1), or x3, the third branch point of h(x,y), or x3(−), the third branch point of h(−)(x,y). Therefore, we can characterize the exact tail asymptotic behaviour of P0(x) of the Generalized-JSQ in five cases: (1) exact geometric decay, (2) exact geometric decay with factor m⁻¹ᐟ², (3) exact geometric decay with factor m¹ᐟ², (4) exact geometric decay with factor m, and (5) exact geometric decay with factor m⁻³ᐟ². Next, we show the details of the five cases.

Case 1. If xdom = (1/ρ)², or xdom = X1(1/ρ2), or xdom = X1(−)(1/ρ1), or xdom = X1(1/ρ2) = X1(−)(1/ρ1), or xdom = X1(1/ρ2) = x3(−), or xdom = X1(−)(1/ρ1) = x3, or xdom = (1/ρ)² = X1(1/ρ2) = x3(−), or xdom = (1/ρ)² = X1(−)(1/ρ1) = x3, or xdom = X1(1/ρ2) = X1(−)(1/ρ1) = x3, or xdom = X1(1/ρ2) = X1(−)(1/ρ1) = x3(−), or xdom = (1/ρ)² = X1(1/ρ2) = x3 = x3(−), or xdom = (1/ρ)² = X1(−)(1/ρ1) = x3 = x3(−), or xdom = (1/ρ)² = X1(1/ρ2) = X1(−)(1/ρ1) = x3 = x3(−) holds, then
$$\pi_{m,0} \sim \frac{L}{(x_{dom})^m},$$
where $L = \lim_{x\to x_{dom}}\left(1-\frac{x}{x_{dom}}\right)P_0(x)$ is a constant and xdom is the dominant singularity of P0(x), which can be obtained from Case 1 of Section 3.5.

Case 2. If xdom = (1/ρ)² = x3, or xdom = (1/ρ)² = x3(−), or xdom = X1(1/ρ2) = x3, or xdom = X1(−)(1/ρ1) = x3(−), or xdom = (1/ρ)² = x3 = x3(−), or xdom = X1(1/ρ2) = x3 = x3(−), or xdom = X1(−)(1/ρ1) = x3 = x3(−) holds, then
$$\pi_{m,0} \sim \frac{u\,m^{-1/2}}{\sqrt{\pi}\,(x_{dom})^m},$$
where $u = \lim_{x\to x_{dom}}\sqrt{1-\frac{x}{x_{dom}}}\;P_0(x)$ is a constant and xdom is the dominant singularity of P0(x), which can be obtained from Case 2 of Section 3.5.

Case 3.
If xdom = (1/ρ)² = X1(1/ρ2) = x3, or xdom = (1/ρ)² = X1(−)(1/ρ1) = x3(−), or xdom = (1/ρ)² = X1(1/ρ2) = X1(−)(1/ρ1) = x3, or xdom = (1/ρ)² = X1(1/ρ2) = X1(−)(1/ρ1) = x3(−) holds, then
$$\pi_{m,0} \sim \frac{2\,n\,m^{1/2}}{\sqrt{\pi}\,(x_{dom})^m},$$
where $n = \lim_{x\to x_{dom}}\left(1-\frac{x}{x_{dom}}\right)^{3/2} P_0(x)$ is a constant and xdom is the dominant singularity of P0(x), which can be obtained from Case 3 of Section 3.5.

Case 4. If xdom = (1/ρ)² = X1(1/ρ2), or xdom = (1/ρ)² = X1(−)(1/ρ1), or xdom = (1/ρ)² = X1(1/ρ2) = X1(−)(1/ρ1) holds, then
$$\pi_{m,0} \sim \frac{r\,m}{(x_{dom})^m},$$
where $r = \lim_{x\to x_{dom}}\left(1-\frac{x}{x_{dom}}\right)^2 P_0(x)$ is a constant and xdom is the dominant singularity of P0(x), which can be obtained from Case 4 of Section 3.5.

Case 5. If xdom = x3, or xdom = x3(−), or xdom = x3 = x3(−) holds, then
$$\pi_{m,0} \sim \frac{s\,m^{-3/2}}{\sqrt{\pi}\,(x_{dom})^m},$$
where $s = \lim_{x\to x_{dom}}\sqrt{1-\frac{x}{x_{dom}}}\;\frac{dP_0(x)}{dx}$ is a constant and xdom is the dominant singularity of P0(x), which can be obtained from Case 5 of Section 3.5.

4.7 Exact Tail Asymptotic for Q0(y) of the Generalized-JSQ

In this section we determine the exact tail asymptotic behaviour of π0,n for the Generalized-JSQ by applying Theorem 3.2 to the generating function Q0(y). If the inequalities of Lemma 4.1 and the two stability conditions µi > λi + λ, i = 1, 2, hold, then by the results of Section 3.6 the dominant singularity of Q0(y) is either 1/ρ2 = µ2/λ2, or Y1(1/ρ²) = Y1(((µ1+µ2)/(λ1+λ2+λ))²), or y3, the third branch point of h(x,y). Hence, according to Section 3.6, we can categorize the exact tail asymptotic behaviour of Q0(y) of the Generalized-JSQ in four cases: (1) exact geometric decay, (2) exact geometric decay with factor n⁻¹ᐟ², (3) exact geometric decay with factor n⁻³ᐟ², and (4) exact geometric decay with factor n. In the following we show the details of these cases.

Case 1.
If the conditions |ydom| = 1/ρ2 < min{Y1(1/ρ²), y3}, or |ydom| = 1/ρ2 = Y1(1/ρ²) = y3, or |ydom| = Y1(1/ρ²) < min{1/ρ2, y3} hold, then
$$\pi_{0,n} \sim \frac{c}{(y_{dom})^n},$$
where $c = \lim_{y\to y_{dom}}\left(1-\frac{y}{y_{dom}}\right)Q_0(y)$ is a constant and ydom is the dominant singularity of Q0(y), which can be obtained from Case 1 of Section 3.6.

Case 2. If the conditions |ydom| = 1/ρ2 = y3 < Y1(1/ρ²), or |ydom| = Y1(1/ρ²) = y3 < 1/ρ2 hold, then
$$\pi_{0,n} \sim \frac{t\,n^{-1/2}}{\sqrt{\pi}\,(y_{dom})^n},$$
where $t = \lim_{y\to y_{dom}}\sqrt{1-\frac{y}{y_{dom}}}\;Q_0(y)$ is a constant and ydom is the dominant singularity of Q0(y), which can be obtained from Case 2 of Section 3.6.

Case 3. If the condition |ydom| = y3 < min{Y1(1/ρ²), 1/ρ2} holds, then
$$\pi_{0,n} \sim \frac{u\,n^{-3/2}}{\sqrt{\pi}\,(y_{dom})^n},$$
where $u = \lim_{y\to y_{dom}}\sqrt{1-\frac{y}{y_{dom}}}\;\frac{dQ_0(y)}{dy}$ is a constant and ydom is the dominant singularity of Q0(y), which can be obtained from Case 3 of Section 3.6.

Case 4. If the condition |ydom| = 1/ρ2 = Y1(1/ρ²) < y3 holds, then
$$\pi_{0,n} \sim \frac{v\,n}{(y_{dom})^n},$$
where $v = \lim_{y\to y_{dom}}\left(1-\frac{y}{y_{dom}}\right)^2 Q_0(y)$ is a constant and ydom is the dominant singularity of Q0(y), which can be obtained from Case 4 of Section 3.6.

Because of the symmetry, the exact tail asymptotic behaviour of π0,−n for n = 1, 2, 3, ... is similar to that of π0,n, so we do not go through the details of the analysis of Q0(−)(y).

Chapter 5
Conclusion

In this thesis, I extended the idea of a random walk model in the first quadrant by Fayolle, Malyshev, and Iasnogorodski [19] to obtain the general formulation of the generating functions and the fundamental form for a general random walk model in the first and fourth quadrants of the plane. In addition, I extended the results of the kernel method in the random walk plane by Li and Zhao [50] to investigate the exact tail asymptotic behaviour of a general random walk model in the first and fourth quadrants of the plane.
Moreover, I found the exact tail asymptotic behaviour of the Generalized-JSQ, a two-dimensional queueing model in the first and fourth quadrants of the plane, by using the kernel method. This model was already studied by Li, Miyazawa, and Zhao [45] and by Zhao and Grassmann [75], but the kernel method has the advantage of being simpler and faster than other methods. Although many studies have been done in the area of random walks, this field still has many open problems. Here I give some suggestions for future investigations in this area:

(1) Studying the exact tail asymptotic behaviour of the generating functions of a three-dimensional random walk model using the kernel method, which has wide applications in the manufacturing, communication, and healthcare industries. As an example, one can work on a parallel queueing problem with three servers.

(2) Studying the exact tail asymptotic behaviour of two-dimensional queueing problems modelled in the first and fourth quadrants of the random walk plane with jump steps bounded by two instead of jump steps bounded by one.

Bibliography

[1] Abate, J. and Whitt, W. (1997) Asymptotics for M/G/1 low-priority waiting-time tail probabilities, Queueing Systems, 25, 173-233.
[2] Adan, I., Foley, R.D. and McDonald, D.R. (2009) Exact asymptotics for the stationary distribution of a Markov chain: a production model, Queueing Systems, 62, 311-344.
[3] Adan, I.J.B.F., Wessels, J. and Zijm, W.H.M. (1990) Analysis of the symmetric shortest queue problem, Stochastic Models, 6, 691-713.
[4] Banderier, C., Bousquet-Mélou, M., Denise, A., Flajolet, P., Gardy, D. and Gouyou-Beauchamps, D. (2002) Generating functions for generating trees (Formal Power Series and Algebraic Combinatorics, Barcelona, 1999), Discrete Math., 246(1-3), 29-55.
[5] Blaszczyszyn, B., Frey, A. and Schmidt, V. (1995) Light traffic approximation for Markov-modulated multiserver queues, Stochastic Models, 11, 423-445.
[6] Budhiraja, A., Dupuis, P. and Maroulas, V. (2008) Large deviations for infinite dimensional stochastic dynamical systems, Annals of Probability, 36(4), 1390-1420.
[7] Blanc, J.P.C. (1987) A note on waiting times in systems with queues in parallel, Journal of Applied Probability, 24, 540-546.
[8] Bender, E. (1974) Asymptotic methods in enumeration, SIAM Review, 16, 485-513.
[9] Borovkov, A.A. and Mogul'skii, A.A. (2001) Large deviations for Markov chains in the positive quadrant, Russian Math. Surveys, 56, 803-916.
[10] Bousquet-Mélou, M. (2005) Walks in the quarter plane: Kreweras' algebraic model, Annals of Applied Probability, 15, 1451-1491.
[11] Cohen, J.W. (1994) Analysis of a two-dimensional algebraic nearest-neighbour random walk (queue with paired services), Technical Report BS-R9437, CWI, Amsterdam.
[12] Cohen, J.W. and Boxma, O.J. (1983) Boundary Value Problems in Queueing System Analysis, North-Holland, Amsterdam.
[13] Cox, D.R. and Miller, H.D. (1965) The Theory of Stochastic Processes, John Wiley and Sons, New York, Chapters 2 and 5.
[14] Dai, J. and Miyazawa, M. (2010) Reflecting Brownian motion in two dimensions: Exact asymptotics for the stationary distribution, submitted.
[15] Foley, R.D. and McDonald, D.R. (2005) Large deviations of a modified Jackson network: Stability and rough asymptotics, Annals of Applied Probability, 15(1B), 519-541.
[16] Foley, R.D. and McDonald, D.R. (2001) Join-the-shortest-queue: Stability and exact asymptotics, Ann. Appl. Probab., 11, 569-607.
[17] Foley, R.D. and McDonald, D.R. (2001) Join the shortest queue: stability and exact asymptotics, Annals of Applied Probability, 11, 569-607.
[18] Foley, R.D. and McDonald, D.R. (2005) Bridges and networks: exact asymptotics, Annals of Applied Probability, 15, 542-586.
[19] Fayolle, G., Iasnogorodski, R. and Malyshev, V. (1999) Random Walks in the Quarter Plane, Springer, Berlin.
[20] Fayolle, G.
and Iasnogorodski, R. (1979) Two coupled processors: the reduction to a Riemann-Hilbert problem, Z. Wahrscheinlichkeitsth., 47, 325-351.
[21] Fayolle, G., King, P.J.B. and Mitrani, I. (1982) The solution of certain two-dimensional Markov models, Adv. Appl. Prob., 14, 295-308.
[22] Flajolet, P. and Odlyzko, A. (1990) Singularity analysis of generating functions, SIAM J. Disc. Math., 3, 216-240.
[23] Flajolet, P. and Sedgewick, R. (2009) Analytic Combinatorics, Cambridge University Press.
[24] Flatto, L. and McKean, H.P. (1977) Two queues in parallel, Comm. Pure Appl. Math., 30, 255-263.
[25] Flatto, L. and Hahn, S. (1984) Two parallel queues created by arrivals with two demands I, SIAM J. Appl. Math., 44, 1041-1053.
[26] Flatto, L. (1985) Two parallel queues created by arrivals with two demands II, SIAM J. Appl. Math., 45, 861-878.
[27] Foss, S. and Chernova, N. (1991) On the ergodicity of multichannel not fully accessible communication systems, Problems of Information Transmission, 27, 94-99.
[28] Foss, S. and Chernova, N. (1998) On the stability of a partially accessible queue with state-dependent routing, Queueing Systems, 29, 55-73.
[29] Guillemin, F. and van Leeuwaarden, J. (2009) Rare event asymptotics for a random walk in the quarter plane, submitted.
[30] Gaver, D.P. (1968) Diffusion approximation and models for certain congestion problems, Journal of Applied Probability, 5, 607-623.
[31] Haque, L. (2003) Tail Behaviour for Stationary Distributions for Two-Dimensional Stochastic Models, Ph.D. Thesis, Carleton University, Ottawa, ON, Canada.
[32] Haque, L., Liu, L. and Zhao, Y.Q. (2005) Sufficient conditions for a geometric tail in a QBD process with countably many levels and phases, Stochastic Models, 21(1), 77-99.
[33] He, Q., Li, H. and Zhao, Y.Q. (2009) Light-tailed behaviour in QBD process with countably many phases, Stochastic Models, 25, 50-75.
[34] Halfin, S. (1985) The shortest queue problem, J. Appl. Probab., 22, 865-878.
[35] G.
Hooghiemstra, M. Keane and S. Van de Ree (1988) Power series for stationary distributions of coupled processor models, SIAM J. Appl. Math., 48, 1159-1166.
[36] Khanchi, A. (2008) State of a network when one node overloads, Ph.D. Thesis, University of Ottawa.
[37] Khanchi, A. (2009) Asymptotic hitting distribution for a reflected random walk in the positive quadrant, submitted.
[38] Knuth, D.E. (1997) The Art of Computer Programming, Vol. 1: Fundamental Algorithms, third edition, Addison-Wesley.
[39] Kobayashi, M. and Miyazawa, M. (2010) Tail asymptotics of the stationary distribution of a two-dimensional reflecting random walk with unbounded upward jumps, submitted.
[40] Kobayashi, M. and Miyazawa, M. (2011) Tail asymptotics of the stationary distribution of a two-dimensional reflecting random walk with unbounded upward jumps, submitted.
[41] Kobayashi, M., Miyazawa, M. and Zhao, Y.Q. (2010) Tail asymptotics of the occupation measure for a Markov additive process with an M/G/1-type background process, accepted by Stochastic Models.
[42] Kroese, D.P., Scheinhardt, W.R.W. and Taylor, P.G. (2004) Spectral properties of the tandem Jackson network, seen as a quasi-birth-and-death process, Annals of Applied Probability, 14(4), 2057-2089.
[43] Kurkova, I.A. and Suhov, Y.M. (2003) Malyshev's theory and JS-queues: Asymptotics of stationary probabilities, The Annals of Applied Probability, 13, 1313-1354.
[44] Kurkova, I.A. (2001) A load-balanced network with two servers, Queueing Systems, 37, 379-389.
[45] Li, L., Miyazawa, M. and Zhao, Y. (2007) Geometric decay in a QBD process with countable background states with applications to a join-the-shortest-queue model, Stochastic Models, 23, 413-438.
[46] Li, H. and Zhao, Y.Q. (2005) A retrial queue with a constant retrial rate, server breakdowns and impatient customers, Stochastic Models, 21, 531-550.
[47] Li, H. and Zhao, Y.Q.
(2009) Exact tail asymptotics in a priority queue: characterizations of the preemptive model, Queueing Systems, 63, 355-381.
[48] Li, H. and Zhao, Y.Q. (2011) Exact tail asymptotics in a priority queue: characterizations of the non-preemptive model, Queueing Systems, accepted.
[49] Li, H. and Zhao, Y.Q. (2011) Tail asymptotics for a generalized two-demand queueing model: a kernel method, Queueing Systems, accepted.
[50] Li, H. and Zhao, Y.Q. A kernel method for exact tail asymptotics: random walks in the quarter plane, submitted.
[51] Lieshout, P. and Mandjes, M. (2008) Asymptotic analysis of Lévy-driven tandem queues, Queueing Systems, 60, 203-226.
[52] Liu, L., Miyazawa, M. and Zhao, Y.Q. (2008) Geometric decay in level-expanding QBD models, Annals of Operations Research, 160, 83-98.
[53] Mishna, M. (2009) Classifying lattice walks restricted to the quarter plane, Journal of Combinatorial Theory, Series A, 116, 460-477.
[54] Miyazawa, M. (2004) The Markov renewal approach to M/G/1 type queues with countably many background states, Queueing Systems, 46, 177-196.
[55] Miyazawa, M. (2007) Doubly QBD process and a solution to the tail decay rate problem, in Proceedings of the Second Asia-Pacific Symposium on Queueing Theory and Network Applications, Kobe, Japan.
[56] Miyazawa, M. (2009) Two sided DQBD process and solutions to the tail decay rate problem and their applications to the generalized join shortest queue, in Advances in Queueing Theory and Network Applications, edited by W. Yue, Y. Takahashi and H. Takagi, 3-33, Springer, New York.
[57] Miyazawa, M. and Zhao, Y.Q. (2004) The stationary tail asymptotics in the GI/G/1 type queue with countably many background states, Adv. in Appl. Probab., 36(4), 1231-1251.
[58] Morrison, J.A. (2007) Processor sharing for two queues with vastly different rates, Queueing Systems, 57, 19-28.
[59] Motyer, Allan J. and Taylor, Peter G.
(2006) Decay rates for quasi-birth-and-death processes with countably many phases and tridiagonal block generators, Advances in Applied Probability, 38, 522-544.
[60] Malyshev, V.A. (1970) Random Walks: The Wiener-Hopf Equations in the Quadrant of the Plane. Galois Automorphisms, Moscow Univ. Press (in Russian).
[61] Malyshev, V.A. (1972) Analytic method in the theory of two-dimensional random walks, Sib. Math. J., 13, 1314-1327.
[62] Malyshev, V.A. (1973) Asymptotic behaviour of stationary probabilities for two-dimensional positive random walks, Sib. Math. J., 14, 156-169.
[63] Mitzenmacher, M.D. (1996) The Power of Two Choices in Randomized Load Balancing, Ph.D. Thesis, University of California at Berkeley.
[64] Raschel, K. (2010) Green functions and Martin compactification for killed random walks related to SU(3), Elect. Comm. in Probab., 15, 176-190.
[65] Ross, S.M. (2003) Introduction to Probability Models, eighth edition.
[66] Sharifnia, A. (1997) Instability of the join-the-shortest-queue and FCFS policies in queueing systems and their stabilization, Operations Research, 45, 309-314.
[67] Suhov, Yu.M. and Vvedenskaya, N.D. (2002) Fast Jackson networks with dynamic routing, Probl. Inform. Transmission, 38, 136-159.
[68] Takahashi, Y., Fujimoto, K. and Makimoto, N. (2001) Geometric decay of the steady-state probabilities in a quasi-birth-and-death process with a countable number of phases, Stochastic Models, 17(1), 1-24.
[69] Tandra, R., Hemachandra, N. and Manjunath, D. (2004) Join minimum cost queue for multiclass customers: Stability and performance bounds, Probability in the Engineering and Informational Sciences, 18, 445-472.
[70] Tang, J. and Zhao, Y.Q. (2008) Stationary tail asymptotics of a tandem queue with feedback, Annals of Operations Research, 160, 173-189.
[71] Vvedenskaya, N.D. and Suhov, Yu.M. (2004) Functional equations in asymptotic problems of queueing theory, Journal of Mathematical Sciences (N.Y.), 120, 1255-1276.
[72] Vvedenskaya, N.D., Dobrushin, R.L. and Karpelevich, F.I. (1996) A queueing system with selection of the shortest of two queues: an asymptotic approach, Probl. Inform. Transmission, 32, 15-27.
[73] Winston, W. (1977) Optimality of the shortest line discipline, J. Appl. Probab., 14, 181-189.
[74] Wright, P. (1992) Two parallel processors with coupled inputs, Adv. Appl. Prob., 24, 986-1007.
[75] Zhao, Y. and Grassmann, W.K. (1995) Queueing analysis of a jockeying model, Operations Research, 43, 520-529.
