@prefix vivo: . @prefix edm: . @prefix ns0: . @prefix dcterms: . @prefix skos: . vivo:departmentOrSchool "Arts and Sciences, Irving K. Barber School of (Okanagan)"@en, "Computer Science, Mathematics, Physics and Statistics, Department of (Okanagan)"@en ; edm:dataProvider "DSpace"@en ; ns0:degreeCampus "UBCO"@en ; dcterms:creator "Bhattacharja, Bonane"@en ; dcterms:issued "2011-12-19T18:32:27Z"@en, "2011"@en ; vivo:relatedDegree "Master of Science - MSc"@en ; ns0:degreeGrantor "University of British Columbia"@en ; dcterms:description "In this research, an efficient numerical method is developed to determine the steady-state waiting time distribution of a GI/G/1 queue by solving the discrete-time version of Lindley's equation, when the queue is bounded on a finite interval. Then, by using Little's Formula, we calculate the stationary distribution for the total number of customers in the queue. The derivations are based on the Wiener-Hopf factorization of random walks. The method is carried out using a successive approximation method, by improving the weighted average. Finally, to prove the effectiveness of our method, we apply the algorithm to Uniform, Geometric, and Gamma distributions to find an approximation. An analytical interpretation is also presented to find the waiting time distribution for the Geom/Geom/1 queue, which is not based on a finite interval, as an example of the GI/G/1 queue. Moreover, compared to other related methods, our method has proven to be numerically stable, simple, and robust."@en ; edm:aggregatedCHO "https://circle.library.ubc.ca/rest/handle/2429/39784?expand=metadata"@en ; skos:note """A BETTER NUMERICAL APPROACH FOR FINDING THE STEADY-STATE WAITING TIME AND THE AVERAGE QUEUE LENGTH OF A SYSTEM FOR THE ARITHMETIC GI/G/1 QUEUE

by BONANE BHATTACHARJA
B.Sc., Mathematics, Khulna University, Bangladesh, 2005
M.Sc., Applied Mathematics, Khulna University, Bangladesh, 2006

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF SCIENCE in THE COLLEGE OF GRADUATE STUDIES (Mathematics)

THE UNIVERSITY OF BRITISH COLUMBIA (Okanagan)
November, 2011
© Bonane Bhattacharja, 2011

Abstract

In this research, an efficient numerical method is developed to determine the steady-state waiting time distribution of a GI/G/1 queue by solving the discrete-time version of Lindley's equation, when the queue is bounded on a finite interval. Then, by using Little's Formula, we calculate the stationary distribution for the total number of customers in the queue. The derivations are based on the Wiener-Hopf factorization of random walks. The method is carried out using a successive approximation method, by improving the weighted average. Finally, to prove the effectiveness of our method, we apply the algorithm to Uniform, Geometric, and Gamma distributions to find an approximation. An analytical interpretation is also presented to find the waiting time distribution for the Geom/Geom/1 queue, which is not based on a finite interval, as an example of the GI/G/1 queue. Moreover, compared to other related methods, our method has proven to be numerically stable, simple, and robust.

Table of Contents

Abstract
Table of Contents
List of Tables
List of Figures
List of Symbols
Acknowledgements
Dedication
Chapter 1  Introduction
    1.1  Mathematical Model
    1.2  Objective of the Thesis
    1.3  Outline of the Thesis
Chapter 2  Preliminaries
Chapter 3  Review of the Literature
Chapter 4  Methodology
    4.1  Overview
    4.2  Wiener-Hopf Factorization Method
    4.3  Our Explanation for using the Wiener-Hopf Factorization Method in $1 - U(z)$
    4.4  Wiener-Hopf Factorization by Successive Substitution
Chapter 5  Analytical Comparison of GI/G/1 Queue with Geom/Geom/1 Queue
    5.1  Overview
    5.2  The Waiting Time Distribution for the Geom/Geom/1 Queue
Chapter 6  Numerical Experiment
    6.1  Numerical Results and Examples
    6.2  Charts for the Queue Lengths
Chapter 7  Conclusion and Recommendations for Future Work
    7.1  Conclusions
    7.2  Future Works
Bibliography
Appendix

List of Tables

Table 4.1  Results by equating the coefficients of $z^j$ and $z^{-j}$ of Eq. (4.25)
Table 6.1  Results of using Uniform Distributions by using our Algorithm and by Algorithm 3
Table 6.2  Results for the Average Queue Length in queue using our method
Table 6.3  Results for Uniform Distributions with Large Bounds by using our Algorithm and by Algorithm 3
Table 6.4  Results for the Average Queue Length in queue using our method
Table 6.5  Results of using Truncated Geometric Distributions by using our Algorithm and by Algorithm 3
Table 6.6  Results for the Average Queue Length in queue using our method
Table 6.7  Results of using Gamma distribution for the interarrival time and truncated Geometric distribution for the service time by using our Algorithm and by Algorithm 3
Table 6.8  Results for the Average Queue Length in queue using our method

List of Figures

Figure 2.1    Queueing process
Figure 2.2    Use of the Erlang for phased service
Figure 6.2.1  The Average Queue Length in queue for the U/U/1 queue
Figure 6.2.2  The Average Queue Length of large bound for the U/U/1 queue
Figure 6.2.3  The Average Queue Length in queue for the Geom/Geom/1 queue
Figure 6.2.4  The Average Queue Length in queue for the Gamma/Geom/1 queue

List of Symbols

$(\beta_1, \gamma_1)$  strict ascending ladder point
$(\bar{\beta}_1, \bar{\gamma}_1)$  weak descending ladder point
$c_k$  convolution of discrete interarrival time and service time
CDF  cumulative distribution function
$C_n$  $n$th customer
$GI/G/1$  single-server queue with general $Q(t)$ and $P(t)$
$U/U/1$  single-server queue where interarrival and service times are uniformly distributed
$Geom/Geom/1$  single-server queue where interarrival and service times are geometrically distributed
$Gamma/Geom/1$  single-server queue where interarrival times have a gamma distribution and service times are geometrically distributed
$H(x)$  distribution function of the first ladder height $\gamma_1$
$J(x)$  distribution function of the first ladder height $\bar{\gamma}_1$
$\rho$  traffic intensity
$\Pr$  probability of an event
$p_k$  discrete distribution of service time
$p(t)$  probability density function of service time
$P(t)$  distribution function of service time
$q_k$  discrete distribution of interarrival time
$q(t)$  probability density function of interarrival time
$Q(t)$  distribution function of interarrival time
$S_n$  service time of customer $C_n$
$\tau$  step size
$T_n$  interarrival time between customers $C_n$ and $C_{n+1}$
$W_n$  waiting time (in queue) of customer $C_n$
$W(t)$  waiting time distribution
$W^{(n)}(t)$  waiting time distribution function of $W_n$
$L_q$  mean queue length in the queue

Acknowledgements

First, I would like to express my utmost indebtedness and gratitude to my supervisor, Dr. Javad Tavakoli, for his careful supervision and guidance in completing this thesis, and for providing me with the opportunity to work in my chosen field of study, Mathematics, and in particular, queueing systems. I would also like to express my sincere appreciation and respect for my honourable committee member, Dr. Winfried K. Grassmann, for his valuable comments and constructive suggestions in preparing this thesis.
Then, I would like to thank all of my colleagues and professors from UBC Okanagan for their support and encouragement. Finally, I am beholden to my husband, Biddut Bhattacharjee, for his constant support, which helped me to become more confident and enabled me to pursue my education in Mathematics, and to complete my thesis for the M.Sc. degree.

Dedication

This work is dedicated to my parents.

Chapter 1 Introduction

1.1 Mathematical Model

A queueing system consists of "customers" arriving at random times to some facility where they receive service of some kind and depart. We use "customer" as a generic term. Queueing theory has a variety of applications in our daily life, such as traffic flow (vehicles, aircraft, people and communications), scheduling (patients in hospitals, jobs on machines), and facility design (banks, post offices, fast food restaurants). Queueing systems are classified according to

1. the input process, the probability distribution of the pattern of arrivals of customers in time;
2. the service distribution, the probability distribution of the random time to serve a customer (or group of customers in the case of batch service); and
3. the queue discipline, the number of servers and the order of customer service.

1. The input process: The input process describes the pattern in which customers arrive at the system. It is given by the distribution of the time between successive arrivals, known as the interarrival time distribution. While a variety of input processes may arise in practice, two simple and frequently occurring types are mathematically tractable. The first is the scheduled input, where customers arrive at fixed times $T, 2T, 3T, \dots$; the second most common model is the "completely random" arrival process, where the times of customer arrivals form a Poisson process. Exponentially distributed interarrival times then correspond to a Poisson arrival process as a special case. The input can be characterized by the arrival rate, which gives the expected number of arrivals per time unit. The number of arriving customers and potential customers for a queueing system can be finite or infinite, and they can arrive individually or in groups. Once customers join the system, a waiting line is formed. It may also happen that customers cannot join the system because it is fully occupied.

2. The service distribution: The service pattern describes the way customers are served. With each customer we associate a service time. We will always assume that the service times of individual customers are independent and identically distributed nonnegative random variables, independent of the arrival process. An important measure describing the service pattern is the service rate, the reciprocal of the expected service time, which gives the expected number of services completed per time unit. The situation in which all service times are of the same fixed duration $D$ is then a special case. The number of servers in a queueing system is an important parameter. A queueing system may have a single stage of service (e.g., a hair styling salon) or multistage service (e.g., a physical examination procedure). If the number of servers is infinite, then all customers can enter the service facility immediately, and no waiting line is formed.

3. The queue discipline: The queueing discipline is the order according to which customers are served.
The most common queue discipline is first come, first served (FCFS), where customers are served in the order in which they arrive; for example, people are served by the cashier at a supermarket following FCFS. The model we consider here is of this type; however, this is certainly not the only possible queue discipline. Another commonly used discipline is last come, first served (LCFS), in which the customer selected for service is the one who entered the system most recently; for example, jobs arriving at a production facility with one or more machining centers are often scheduled on a last-come, first-served basis because of ease of access to stored jobs or in order to meet due date requirements. Customers may also be served in random order (SRO), meaning that the service order is decided randomly; for example, checking the quality of commodities in industry often follows service in random order. The priority discipline is another common process employed in queueing systems. In a priority queue, customers are allowed to be of different types, and both the service discipline and the service time distribution may vary with the customer type.

Queueing models aid the design process by predicting system performance. For example, a queueing model might be used to evaluate the costs and benefits of adding a server to an existing system. The models enable us to calculate system performance measures in terms of their basic quantities. Some important measures of system behaviour are:

1) The probability distribution of the number of customers in the system. Not only do customers in the system often incur costs, but in many systems physical space for waiting customers must be planned for and provided.
2) The utilization of the server(s). Idle servers may incur costs without contributing to system performance.
3) System throughput. The long-run number of customers passing through the system is a direct measure of system performance.
4) Customer waiting times. Long waits for service are annoying in the simplest queueing situations and are directly associated with major costs in many large systems, such as patients awaiting emergency care at a hospital.

The subject of queueing theory originated from studies of telephone conversations. The Danish mathematician A. K. Erlang was the pioneer researcher in this area in the early 1900s. Later, in the 1930s, F. Pollaczek did further research and expanded much of Erlang's work. Moreover, one of the most powerful relationships in queueing theory was developed by John D. C. Little in the early 1960s. Little related the steady-state mean system sizes to the steady-state average customer waiting times. Let

$L$ = the average number of customers in the system,
$\lambda$ = the rate of arrival of customers to the system, and
$W$ = the average time spent by a customer in the system.

The equation $L = \lambda W$ is known as Little's formula; it directly relates two of our important measures of system performance, the mean queue size and the mean customer waiting time in steady state. The total waiting time in the system is the sum of the waiting time before service plus the service time. The busy period is the length of time during which the server is continuously busy serving customers, and the length of time when the server is not serving, or there are no customers in the system, is called the idle period.
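As a quick worked illustration of Little's formula (the numbers here are our own, chosen only for the example): if customers arrive at a rate of $\lambda = 12$ per hour and each spends an average of $W = 0.25$ hours in the system, then $L = \lambda W = 12 \times 0.25 = 3$ customers are in the system on average.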
For a single server queue, the probability of the system being idle is the same as the probability of the server being idle. If we denote the average rate of customers entering the queueing system by $\lambda$ and the average rate of serving customers by $\mu$, then the ratio of the input rate to the service rate is known as the traffic intensity, denoted by $\rho = \lambda/\mu$. This ratio plays a significant role in measuring the performance of a queueing system: the higher the ratio, the longer customers will tend to wait. If $\lambda < \mu$ (i.e., $\rho < 1$), then the queue has a steady-state distribution. In a steady-state situation, $\rho$ is the probability that the server is busy, and therefore it gives the expected number in service. As $\rho$ increases, the length of the busy period also increases. When $\rho > 1$ (i.e., $\lambda > \mu$), the average number of arrivals to the system exceeds the service rate of the system, the queue size never settles down, and no steady state exists. It turns out that for steady-state results to exist, $\rho$ must be less than 1. When $\rho = 1$, unless arrivals and service are deterministic and perfectly scheduled, no steady state exists. The quantity $\rho$ is sometimes referred to as the offered load, since, on average, each customer requires $1/\mu$ time units of service and the average number of customers arriving per unit time is $\lambda$, so that the product $\lambda(1/\mu)$ is the average amount of work arriving into the system per unit time. The limiting behaviours $\rho \to 0$ and $\rho \to 1$ are known as light traffic and heavy traffic intensities, respectively.

A standard shorthand is used in much of the queueing literature for identifying simple queueing models. The shorthand assumes that the arrival times form a renewal process, and the format $A/B/c$ uses $A$ to describe the interarrival time distribution, $B$ to specify the individual customer service time distribution, and $c$ to indicate the number of servers. Some well known queueing models are $M/M/1$, $M/M/\infty$, $M/GI/1$, $GI/M/1$, $E_k/M/1$, $M/D/1$ and so on, which are very common in queueing systems. The symbol $M$ (Markovian or memoryless) is used for the exponential distribution, $E_k$ (Erlang) for the gamma distribution of order $k$, and $D$ for a deterministic distribution, a schedule of arrivals or fixed service times. The symbol $G$ represents a general or arbitrary probability distribution.

1.2 Objective of the Thesis

The use of Markov models in queueing theory is very common because they are appropriate for basic systems and they lend themselves to easy applications. But real-world systems are often so complex and so general that simple Markov and renewal process models do not represent them well. Thus, general queueing models have proved useful in handling emerging complex situations.

In this study, we discuss the steady-state waiting time distribution and the expected queue length of a discrete GI/G/1 queue and then present a numerical method for calculating these. The notation GI stands for a general independent interarrival time distribution, G stands for a general service time distribution, and the number 1 means that there is one server in the system. More precisely, the symbol G represents a general or arbitrary probability distribution; that is, no assumption is made as to the precise form of the distribution, and results in these cases are applicable to any probability distribution. These general-time distributions, however, are required to represent independent and identically distributed random variables.
Moreover, the GI/G/1 queue can be used for both the discrete and the continuous case, which means that the interarrival and service times may be discrete or continuous. As mentioned, we consider discrete-time queues in our work, that is, queues where the times between arrivals and the service times are both integers. In general, when the interarrival and service times are discrete random variables in a general queueing model, it is called a discrete GI/G/1 queue. In discrete-time queues, the time axis is segmented into a sequence of equal intervals (of unit duration), called slots, and arrivals and departures of customers are assumed to take place at the slot boundaries. On the other hand, when the interarrival and service times are continuous random variables, the queue is known as a continuous GI/G/1 queue. The calculation of the waiting time distribution in the continuous GI/G/1 queue is more difficult. In this case, one needs to discretize the distribution functions of the interarrival and service times for the numerical analysis. Some authors have shown that under certain conditions, the continuous GI/G/1 queue can be approximated quite well by a discrete GI/G/1 queue. In our numerical analysis, we use the Gamma/Geom/1 queue, where the interarrival times are continuously distributed.

Our approach is based on Lindley's equation, and the derivations rest on a Wiener-Hopf factorization of random walks using ladder height distributions. This is carried out using a successive approximation method, first suggested by Grassmann [11].

By solving a discrete-time version of Lindley's equation [24], several authors, including Ponstein [28], Konheim [22], Ackroyd [1], Grassmann and Chaudhry [13], Grassmann [11], and Fryer and Winsten [10], also numerically determined the waiting time distribution of the arithmetic GI/G/1 queue, when the queue is bounded on a finite interval. It was mentioned repeatedly [13, 1, 10] that the discrete-time version of Lindley's equation is mathematically equivalent to certain equations arising in bulk queues. We will introduce their methods in the following chapters. A number of alternatives to Lindley's approach have also been suggested. For instance, one can approximate the GI/G/1 queue by the GI/PH/1 queue, an approach originally devised by Neuts [26]. One can also use approximations rather than exact results to find the important measures of the GI/G/1 queue [37, 34]. Such an approximation may be satisfactory in a given situation.

At present, there are several approaches available to solve such queueing problems. We consider first an approach in which one sets up generating functions and uses partial fraction expansions to invert these generating functions by means of the Wiener-Hopf factorization method. Generally, these expansions require one to find the roots of an equation of degree $g + h$, where $g$ and $h$ are parameters taken as the finite bounds of that equation. It turns out that $g$ of the roots of this expression lie inside the unit circle of the complex plane and that $h$ of them lie outside the unit circle. Moreover, the coefficients of the factor whose roots are the $g$ inner roots are equal to the ratios of the steady-state probabilities that the server idles $j$ units of time between a departure and the next arrival, $j = 0, 1, \dots, g$.
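To make the role of these roots concrete, the following is a minimal MATLAB sketch of our own (it is not the code of the appendix, and the two probability mass functions are made up purely for illustration). It builds the distribution of $U = S - T$, forms the polynomial $z^g(1 - U(z))$ of degree $g + h$, and uses MATLAB's roots function to separate the zeros; since $U(1) = 1$, one of the $g$ inner roots always sits at $z = 1$.

% Interarrival pmf q(k) = Pr(T = k), k = 1..4, and service pmf p(k) = Pr(S = k),
% k = 1..3 (made-up numbers; E(T) = 2.7, E(S) = 1.8, so rho = 2/3 < 1).
q = [0.1 0.3 0.4 0.2];
p = [0.4 0.4 0.2];
g = numel(q) - 1;               % with both supports starting at 1, U >= 1 - 4 = -g
h = numel(p) - 1;               % and U <= 3 - 1 = h
u = conv(p, fliplr(q));         % u(m) = Pr(U = m - 1 - g), m = 1, ..., g + h + 1
a = -u;                         % ascending coefficients of z^g*(1 - U(z)) ...
a(g + 1) = a(g + 1) + 1;        % ... where the leading "1" contributes at z^g
r = roots(fliplr(a));           % all g + h zeros (roots() takes descending order)
inner = r(abs(r) <= 1 + 1e-9);  % the g zeros with |z| <= 1, including z = 1
outer = r(abs(r) >  1 + 1e-9);  % the h zeros strictly outside the unit circle

In Konheim's terms (see chapter 3), the $h$ outer zeros are exactly the zeros of $S^+(z)$, while the factor built from the inner zeros carries the idle-time probabilities mentioned above.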
The numerical method consists of first finding an approximation of these probabilities, from which an approximate factor for the $h$ outer roots is obtained in order to find the steady-state probabilities of the waiting time distribution. Following the iterative method given in [12], a numerical method is presented in this study that improves the weighted average to find an approximation for the waiting time distribution of a GI/G/1 queue; it converges faster than the method given in [12]. Applying approximating techniques to these general formulae seems to cause problems when the traffic intensity $\rho$ is close to one. The technique developed here has been applied successfully with traffic intensities of up to 0.99. We also compare this approximation to the other procedures to establish the benefit and effectiveness of our method.

1.3 Outline of the Thesis

The thesis consists of seven chapters including an appendix. In the first chapter, a brief summary of GI/G/1 queues is given and the objective of the thesis is stated. The second chapter states some important definitions on the related topics. The third chapter includes an overview of the literature of different authors on solving the waiting time distribution of the GI/G/1 queue, and different solution techniques are outlined. Chapter 4 explains the Wiener-Hopf factorization method and how to use it to obtain the discrete waiting time distribution. Based on this method and the algorithm given by Grassmann and Jain [12], we also introduce an efficient algorithm, improving the weighted average, to solve for the discrete waiting time distribution. Then, by using Little's formula, we calculate the average queue length of the system. We also use this algorithm for the numerical experimentation. Chapter 5 deals with an analytical interpretation of finding the waiting time for the Geom/Geom/1 queue, which is not based on a finite interval, as an example of the GI/G/1 queue. We give some numerical examples with different distributions, such as uniform, gamma and geometric, in Chapter 6. Chapter 7 summarises the thesis and offers conclusions and some ideas for future research. Finally, MATLAB programs for the numerical experimentation are included in the appendix.

Chapter 2 Preliminaries

Stochastic process

Definition 2.1: A stochastic process is a family (sequence) of random variables $\{X_t : t \in T\}$, where $t$ is a parameter that runs over a suitable index set $T$. (Where convenient, we write $X(t)$ instead of $X_t$.) In a common situation, the index $t$ corresponds to a discrete unit of time and the index set is $T = \{0, 1, 2, \dots\}$.

Examples 2.1:
i) $X_t$ = temperature at time $t \in [0, \infty)$.
ii) $X_t$ = number of students present in the class on day $t \in \{0, 1, 2, 3, 4, 5, 6, 7, 8\}$.

a. If $T$ is countable (or finite), then the process is called a discrete-time process.
b. If $T$ consists of a set of finite or infinite intervals on the time axis, it is a continuous-time process.

Markov process

Definition 2.2: A discrete parameter stochastic process $\{X(t), t = 0, 1, 2, \dots\}$ or a continuous parameter stochastic process $\{X(t), t > 0\}$ is said to be a Markov process if, for any set of time points $t_1 < t_2 < \dots < t_n$ in the index set or time range of the process, the conditional distribution of $X(t_n)$, given the values of $X(t_1), X(t_2), X(t_3), \dots, X(t_{n-1})$, depends only on the immediately preceding value $X(t_{n-1})$; more precisely, for any real numbers $x_1, x_2, x_3, \dots, x_n$,
$$\Pr\{X(t_n) \le x_n \mid X(t_1) = x_1, \dots, X(t_{n-1}) = x_{n-1}\} = \Pr\{X(t_n) \le x_n \mid X(t_{n-1}) = x_{n-1}\}.$$
Thus, the process is memoryless.

Markov Chain

Definition 2.3: A Markov chain is a discrete-time Markov process with a discrete state space which is stationary and satisfies
$$\Pr\{X_{n+1} = j \mid X_n = i\} = p_{ij}.$$
Here, $p_{ij}$ represents the probability that a system in state $i$ will enter state $j$ at the next transition.

Examples 2.2:
- A state may be a certain level of energy of an atom.
- In a public opinion poll, a state may be one of the voter's possible states of mind.
- Weather has two states: wet and dry.

Theorem 2.1: A Markov process is completely defined by its transition probability matrix and the initial state $X_0$.

Steady-state probability

Definition 2.5: Consider a discrete-parameter Markov chain and suppose that $\lim_{n\to\infty} p_{ij}^{(n)} = \pi_j$ (for all $i$), which is independent of $n$. That is, after a long time, the probability that the process is in state $j$, given that it started in state $i$, is independent of the starting state $i$. We call $\{\pi_j\}$ the limiting or steady-state probabilities of the Markov chain. The steady-state or equilibrium distribution of a Markov chain is given by the equations
$$\pi_j = \sum_{i=0}^{\infty} \pi_i p_{ij}, \quad j \ge 0, \qquad \text{and} \qquad \sum_{j=0}^{\infty} \pi_j = 1.$$

Transition probability matrix (Markov matrix):
$$P = (p_{ij}) = \begin{bmatrix} p_{00} & p_{01} & p_{02} & \dots \\ p_{10} & p_{11} & p_{12} & \dots \\ \vdots & \vdots & \vdots & \end{bmatrix}, \qquad p_{ij} \ge 0, \quad i, j = 0, \dots, \infty, \quad \sum_{j=0}^{\infty} p_{ij} = 1 \ \forall i \ \text{[each row's sum = 1]}.$$

Irreducible Markov Chain

Definition 2.6: A Markov chain is said to be irreducible if there is only one class of states, that is, all states communicate with each other. State $j$ is accessible from state $i$ if $p_{ij}^{(n)} > 0$ for some integer $n \ge 0$. In a regular chain, each state is accessible from every other.

Random Walk

Definition 2.7: When we discuss random walks, it is an aid to intuition to speak about the state of the system as the position of a moving "particle". A one-dimensional random walk is a Markov chain whose state space is a finite or infinite subset of the integers, in which the particle, if it is in state $i$, can in a single transition either stay in $i$ or move to one of the neighbouring states $i-1$, $i+1$. If the state space is taken as the nonnegative integers, the transition matrix of a random walk has the form
$$P = \begin{bmatrix} r_0 & p_0 & 0 & 0 & \dots \\ q_1 & r_1 & p_1 & 0 & \dots \\ 0 & q_2 & r_2 & p_2 & \dots \\ \vdots & & \ddots & \ddots & \end{bmatrix},$$
where $p_i > 0$, $q_i > 0$, $r_i \ge 0$, and $q_i + r_i + p_i = 1$ for $i = 1, 2, \dots$, with $p_0 \ge 0$, $r_0 \ge 0$, $r_0 + p_0 = 1$. Specifically, if $X_n = i$ then, for $i \ge 1$,
$$\Pr\{X_{n+1} = i+1 \mid X_n = i\} = p_i, \quad \Pr\{X_{n+1} = i-1 \mid X_n = i\} = q_i, \quad \Pr\{X_{n+1} = i \mid X_n = i\} = r_i,$$
with obvious modifications holding for $i = 0$.

Example 2.3:
i) The path of a person moving randomly one step forward or backward gives us a realization of a random walk.
ii) The fortune of a player engaged in a series of contests is often depicted by a random walk process.

Renewal (counting) process

Definition 2.8: A renewal process $\{N(t), t \ge 0\}$ is a nonnegative integer-valued stochastic process that registers the successive occurrences of an event during the time interval $(0, t]$, where the time durations between consecutive events are positive, independent, identically distributed random variables. Let the successive occurrence times between events be $\{X_k\}_{k=1}^{\infty}$, such that $X_i$ is the elapsed time from the $(i-1)$st event until the occurrence of the $i$th event. We write
$$F(x) = \Pr\{X_k \le x\}, \quad k = 1, 2, 3, \dots,$$
for the common probability distribution of $X_1, X_2, \dots$. A basic stipulation for renewal processes is $F(0) = 0$, signifying that $X_1, X_2, \dots$ are positive random variables.
We refer to
$$S_n = X_1 + X_2 + \dots + X_n, \quad n \ge 1 \quad (S_0 = 0 \text{ by convention}),$$
as the waiting time until the occurrence of the $n$th event. Therefore, in this case, $S_n$ is the sum of $n$ random variables.

Queueing System

Definition 2.9: A queueing system can be described as customers arriving for service, waiting for service if the server is not idle, receiving service, and then leaving the system after being served. Queues are common in everyday life: customers wait to be served in front of service facilities. For example, people wait to use the autoteller in the bank, cars wait to be repaired in an autobody shop, airplanes wait to take off at an airport, machines wait to be repaired in a factory, and people wait to use the phones in phone booths. These are but a few examples among many of waiting lines.

Figure 2.1 Queueing process. [Schematic: arriving customers enter the queue, pass to the service facility, and depart.]

There are six basic characteristics of queueing processes that provide an adequate description of a queueing system: (1) arrival pattern of customers, (2) service pattern of servers, (3) queue discipline, (4) system capacity, (5) number of service channels and (6) number of service stages.

Queueing model

Definition 2.10: A queueing process is a process in which customers arrive at some designated place where a service of some kind is being arranged, for example, at the teller's window in a bank or beside the cashier at a supermarket. It is assumed that the time between arrivals, or interarrival time, and the time that is spent in providing a service for a given customer, are governed by probabilistic laws. In short, this procedure is known as a queueing model. Queueing models assist the design process by predicting system performance; for example, a queueing model might be used to evaluate the costs and benefits of adding a server to an existing system.

A standard shorthand is used in much of the queueing literature for identifying simple queueing models. The shorthand assumes that the arrival times form a renewal process, and the format $A/B/c$ uses $A$ to describe the interarrival distribution, $B$ to specify the individual customer service time distribution, and $c$ to indicate the number of servers. The following are definitions of some of the most popular queueing models.

The M/M/1 queueing model

Definition 2.10.1: The $M/M/1$ model is known as a single-server exponential queueing system. In this case, arrivals follow a Poisson process, service times are exponentially distributed, and there is a single server. In general, $M$ refers mainly to three facts: (1) arrivals are Poisson and the interarrival time distribution is exponential, (2) the service time distribution is also exponential, and (3) thus, the process is memoryless or Markovian. The successive interarrival and service times are continuously distributed, namely exponential. The number of customers in the system at time $t$ forms a birth and death process, where arrivals can be considered as "births" to the system and departures as "deaths". The parameters $\lambda$ and $\mu$ represent the rates of arrivals and of service per unit time.

The M/GI/1 queueing model

Definition 2.10.2: In this model there are Poisson arrivals with rate $\lambda$, but arbitrarily distributed service times, which are i.i.d. with mean $1/\mu$, and the server is single. The analysis proceeds with the help of an embedded Markov chain.

The GI/M/1 queueing model

Definition 2.10.3: In the $GI/M/1$ model, service times have the Markovian property.
From the notation it is apparent that the model is a single-server system with an arbitrary or general distribution function of interarrival times, which are i.i.d. with mean $1/\lambda$, and the service times are i.i.d. exponential random variables.

The M/D/c queue

Definition 2.10.4: This is a multiserver queueing model where customers arrive in a Poisson process with rate $\lambda$, and service times are constant or deterministic.

The M^X/M^Y/c queue

Definition 2.10.5: The $M^X/M^Y/c$ is also a multiserver Markovian queueing model. In a queueing system, if customer arrivals and/or services occur in groups, then the system is known as a bulk queue. Here, $M^X$ means customers arrive in groups of size $X$, where $X$ is a random variable greater than zero, and the arrivals of groups occur in a Poisson process with rate $\lambda$; $M^Y$ means that customers are served in groups of size $Y$ and the service time distribution is exponential with rate $\mu$.

Phase-Type distribution

Definition 2.10.6: A phase-type distribution is a probability distribution that results from a system of one or more interrelated Poisson processes occurring in sequence, or phases. The sequence in which the phases occur may itself be a stochastic process. The distribution can be represented by a random variable describing the time until absorption of a Markov process with one absorbing state. Each of the states of the process represents one of the phases. Some distributions that use the concept of phases are the Erlang, hyperexponential, hypoexponential (or generalized Erlang), and Coxian distributions.

The E_j/E_k/1 model

Definition 2.10.7: The $E_j/E_k/1$ is a single server queueing model with arrival rate $\lambda$ and service rate $\mu$ (i.e., each arrival phase has rate $j\lambda$ and each service phase has rate $k\mu$). Specifically, $E_k$ represents the sum of $k$ i.i.d. exponential random variables with mean $1/(k\mu)$, which is known as the Erlang type-$k$ distribution. The Erlang family provides more flexibility in modeling than the exponential family, which has only one parameter.

Example 2.4: There are some practical examples of the Erlang distribution in our daily life.
i. In a visa office, people have to follow several steps to enter the system.
ii. For a physical examination in a pathology centre, a patient needs to pass through several stages to get the service.

Figure 2.2 Use of the Erlang for phased service. [Schematic: service split into four phases, Step 1 through Step 4, each with mean $1/(4\mu)$.]

The GI/G/1 queueing model

Definition 2.10.8: The symbol $G$ represents a general or arbitrary probability distribution; that is, no assumption is made as to the precise form of the distribution. Results in these cases are applicable to any probability distribution. Thus, by $GI/G/1$, we mean that
I. the interarrival times are i.i.d. with a distribution function $F_T$ having mean $1/\lambda$,
II. the service times, independent of the arrival times, are i.i.d. with a distribution function $F_S$ having mean $1/\mu$,
III. the service is provided by one server,
IV. infinite waiting space is available, and
V. the queue discipline is FCFS.

Waiting time

Definition 2.11: The amount of time a new arrival has to wait until its service begins is known as the waiting time.

Queue length

Definition 2.12: The actual number of customers in the system is called the queue length.

Busy period

Definition 2.13: The length of time during which the server is continuously busy serving customers is called the busy period.
Idle period

Definition 2.14: The length of time when the server is not serving, or there are no customers in the system, is called the idle period.

Probability generating function

Definition 2.15: Suppose
$$P(z) = \sum_{n=0}^{\infty} p_n z^n = p_0 + p_1 z + p_2 z^2 + \dots$$
If the series converges for some range of $z$, then $P(z)$ is called a generating function of the sequence $\{p_n\}$. Let $X$ be a random variable with $\Pr\{X = n\} = p_n$, $n = 0, 1, 2, \dots$, and $\sum_{n=0}^{\infty} p_n = 1$. Then $P(z) = \sum_{n=0}^{\infty} p_n z^n$ is known as the probability generating function (p.g.f.) of $X$.

The probability generating function can be used to obtain moments of a random variable directly. If we take the first derivative of $P(z)$ with respect to $z$ at $z = 1$, we can calculate the first moment: $P'(z) = \sum_{n=1}^{\infty} n p_n z^{n-1}$, and evaluating $P'(z)$ at $z = 1$ gives $P'(1) = \sum_{n=1}^{\infty} n p_n = E[X]$. Moreover, we can determine the variance of a random variable from the probability generating function. In this case, we also need the second derivative of $P(z)$ with respect to $z$ at $z = 1$, since $\operatorname{Var}[X] = E[X^2] - E^2[X]$. Thus,
$$P'(z) = \sum_{n=1}^{\infty} n p_n z^{n-1}, \qquad P''(z) = \sum_{n=2}^{\infty} n(n-1) p_n z^{n-2},$$
and evaluating $P'(z)$ and $P''(z)$ at $z = 1$ gives $P'(1) = \sum_{n=1}^{\infty} n p_n = E[X]$ and $P''(1) = \sum_{n=2}^{\infty} n(n-1) p_n = E[X(X-1)]$. Now $\operatorname{Var}[X]$ can be obtained from $P'(1)$ and $P''(1)$:
$$\operatorname{Var}[X] = (E[X^2] - E[X]) + E[X] - E^2[X] = P''(1) + P'(1) - (P'(1))^2.$$
Generally, the p.g.f. can be used to determine the probabilities $p_0, p_1, \dots$: $P(z)$ is written as a power series, and $p_n$ is the coefficient in front of $z^n$ in the series.

Chapter 3 Review of the Literature

The queueing model GI/G/1 has been of much interest to queueing theorists and practitioners since the 1950s, and the study of this model is becoming more significant, since it has wide applications in computers, communication systems and networks of queues. The most important and much-studied aspect of this model is the steady-state distribution of the actual queue time of a customer. The analysis of the waiting time of a customer has been given by several authors, including Ponstein [28], Konheim [22], Ackroyd [1], Grassmann and Chaudhry [13], Grassmann [11], Grassmann and Jain [12] and Fryer and Winsten [10]. All of these authors numerically determined the waiting time distribution of the arithmetic GI/G/1 queue by solving a discrete-time version of Lindley's equation (Lindley [24]). Before doing so, one can demand that the random variables $\{T_n\}$ and $\{S_n\}$ be discrete and that their distribution functions both be defined on finite intervals. It was pointed out many times that the discrete-time version of Lindley's equation is mathematically equivalent to certain equations arising in bulk queues (Grassmann and Chaudhry [13]; Fryer and Winsten [10]). There are various methods of finding the waiting time distribution in a GI/G/1 queue, and in this chapter we present some of these approaches.

The method presented by Grassmann and Jain [12] is an efficient numerical method for calculating the waiting time and idle time distributions of the arithmetic GI/G/1 queue. The method is based on Wiener-Hopf factorization, and compared to the related methods of others, their method appears to perform very well and is often faster by several orders of magnitude. They suggest three algorithms, which we will introduce in chapter 4.
However, based on the same procedure, our method works better than theirs in numerical experimentation.

Here, we should also mention the work of Konheim [22], who gave an elementary method for calculating the stationary distribution of the waiting time in a GI/G/1 queue. Konheim based his method on generating function techniques, and his solution required only the ability to factor polynomials, which is part of our method as well. Konheim set the relation $S(z) = (1 - U(z))/(1 - z)$, where $1 - U(z)$ is the same as in our Eq. (4.25) and $U(z) = u_{-g} z^{-g} + \dots + u_h z^h$, with the support of $U$ running from $-g$ to $h$. He then factored $S(z)$ into $S^+(z)$ and $S^-(z)$, where $S^+(z)$ contains all the zeros of $S(z)$ outside the unit circle, and $S^-(z)$ contains all other zeros. The purpose of this factorization is to find the roots of $S(z)$ that are outside the unit circle. Moreover, $S^+(z)$ is scaled in such a way that $S^+(1) = 1$. Konheim showed that the generating function of the waiting time distribution, $W(z)$, can be obtained through the equation $W(z) = 1/S^+(z)$, which is essentially our Eq. (4.30). A similar factorization was also suggested by Ackroyd [1]. Konheim basically used the $h$ zeros of $1 - U(z)$ to find $A(z)$ for obtaining the waiting time distribution, where $A(z)$ is a polynomial that has $h$ roots outside the unit circle. In Eq. (4.26), we also mention that $A(z) = \sum_{i=1}^{\infty} \alpha_i z^i$ is the generating function of the distribution of the time for which the server remains busy serving the customers in the system. Some authors, in particular Fryer and Winsten [10], also followed the same procedure as [22] to find $A(z)$.

A number of methods have been suggested to find $A(z)$ iteratively. The earliest such attempt was by Ponstein [28], who solved the waiting time distribution in a discrete GI/G/1 queueing model by a method based upon Markov chains. He defined $u_k = \Pr(U_n = k)$ for $k = -K_1, \dots, K_2$, where $U_n = S_n - T_n$ is the difference between the service time of customer $C_n$ and the interarrival time between the $C_n$ and $C_{n+1}$ customers. In our work, we use $g$ and $h$ for $K_1$ and $K_2$. Ponstein then formed a Markov chain with a one-step transition matrix $Q = [p_{jk}]$, where $p_{jk}$ represents the one-step transition probability that if the queueing system is in state $F_j$ for some time unit, the system will be in state $F_k$ for the next time unit, and $p_{jk}$ is given by $p_{jk} = u_{k-j}$. Here, the system being in state $F_j$ for some time unit means that the $n$th customer waits $j$ units of time before being served, $j = 1, 2, \dots$, or that the server idles $(-j)$ consecutive units of time before a customer arrives, $j = 0, -1, \dots, -K_1$. Then, as a standard procedure in Markov chains, let the steady-state probabilities of the queueing system being in state $F_j$ be $\Sigma = (\pi'_{-K_1}, \pi'_{-K_1+1}, \dots, \pi'_0, \pi'_1, \dots)$. When the
It is also noticed that the main difference between Ponstein’s method and Konheim’s [22] method is: those roots which are inside the unit circle, or inner roots, in Konheim’s method corresponds to those roots which are outside the unit circle, or outer roots, in Ponstein’s method and those outer roots in Konheim’s method corresponds to those inner roots in Ponstein’s method. Powell [29] used a an efficient implementation of Newton’s method to find the zeros of 1 െ ܷሺݖሻ in the case of bulk queues, and in his particular case, Newton’s method may be more efficient than our method. However, in the case of bulk queues, ܷሺݖሻ have a very special form, and Powell’s method can thus only be used to solve very special GI/G/1 queues. Another useful method was suggested by Ackroyd [1], who used complex logarithms to separate the zeros inside and outside the unit circle. This can be done very efficiently, using Fast Fourier Transforms. Grassmann and Chaudhry [13] combined the approach of Neuts, which is known as the matrix-geometric methods, with their method. Neuts’ method can also be used to find ܣሺݖሻ.. In Neuts’ approach, a matrix ܴ of dimension ݄ ൈ ݄ was determined, which satisfied a certain matrix equation. Grassmann and Chaudhry mainly modified the method of Neuts. According to [13], the ߙ௝ were given by only the first column of this matrix ܴ, which reduced the number of operations in their algorithm by a factor of ݄. In order to determine the waiting time in the queue of a GI/G/1 model, they first set up a Markov chain by following the Ponstein method [28]. In their Markov chain, they considered the state spaces as the waiting times rather than the number of customers in the system. However, the number of iterations to obtain ܴ in Neuts’ method was often high, it may be 50 or more, and the number of operations per iteration is on the order ݄ଶ݃. The algorithm that was developed by these 20    authors, is a faster and efficient algorithm to solve many queueing problems efficiently and precisely. Grassmann [11] also developed a method to find the equilibrium distribution of the queue length at a certain epoch for such Markov chains which can be applied to the MX /MY /c queue, the M /GIX /c queue, the discrete GI/G/1 queue, and many other related problems in an efficient way. He numerically solved the steady-state distribution of some finite Markov chain models by using the state reduction method. The problem was solved by factorizing 1 െ ܷሺݖሻ, and thus, this method also can be used to find ܣሺݖሻ. Moreover, the algorithm proposed in [11] can be used for continuous-time Markov processes without any change. However, the author claimed that the approach in [11] is better than the Neuts’ method and he also mentioned that it converged quickly and was numerically stable. He asserted that there was a probabilistic significance in all steps, which made the algorithm easy to understand. For the calculation of ܣሺݖሻ, he also used idea of the inside and outside zeros of the polynomial 1 െ ܷሺݖሻ. As we see, the idea of our method for successive approximation algorithm to investigate the waiting time process of the GI/G/1 queue, was first used in [11]. Whitt [38] showed a method to develop a closed form approximation for the mean steady- state workload or virtual waiting time in a GI/G/1 queue. This approximation can be solved by using the first two moments of service-time distribution and the first three moments plus the density at the origin of the interarrival-time distribution. 
The approximation was based on light and heavy traffic limiting behaviour. Basically, Whitt used S. L. Brumelle's formula,
$$E(Z_\rho) = \rho E(W_\rho) + \frac{\rho (c_s^2 + 1)}{2},$$
to relate the mean workload to the mean waiting time (here $c_s^2$ denotes the squared coefficient of variation of the service time), and K. T. Marshall's formula,
$$E(W_\rho) = \frac{E\big((\rho^{-1} T - S)^2\big)}{2\, E(\rho^{-1} T - S)} - \frac{E(I_\rho^2)}{2\, E(I_\rho)}, \qquad 0 < \rho < 1,$$
to relate the mean waiting time to the first two moments of the idle period. The interpolation between the light and heavy traffic limits was chosen to satisfy differentiability and monotonicity regularity conditions. Finally, the approximation was compared with the exact values for some queueing models. Moreover, according to Whitt, the heavy traffic derivative is more useful than the light traffic derivative for determining the normalized mean workload. Whitt utilized approximations in his method; in our research, however, we are dealing with exact methods. In particular, we can get as close to the true value as desired by using our method.

A discrete-time version of the system GI/G/1/N was analyzed by Hasslinger [15], using a two-component state model at the arrival and departure instants of customers. The equilibrium equations were solved by a polynomial factorization method, and the steady-state distribution of the queue size was then represented as a linear combination of geometric series, whose parameters were evaluated by closed formulae depending on the roots of a characteristic polynomial. By considering modified boundary constraints, systems with finite waiting room or with an exceptional first service in each busy period were also included.

Hasslinger [16] also considered a workload-based approach to the single server queue in a discrete time domain with semi-Markov arrivals (SMP/GI/1). He mainly investigated the SMP/GI/1 queue, extending the GI/G/1 algorithm of [12], which performs a Wiener-Hopf factorization with the help of a rapidly converging iteration. By dividing the busy period into phases, he generalized a computationally attractive algorithm for the discrete-time GI/G/1 queue, and also computed the stationary distributions of the waiting and idle times, as well as the moments of the busy period. Moreover, performance results were given for deterministic servers with autoregressive input, and the output process of a server was modeled by adapting an SMP with a few states to information obtained from busy and idle period analysis.

Rao and Feldman [31] focused on easily computable numerical approximations for the distribution and moments of the steady-state waiting times in a stable GI/G/1 queue; their approximation methodology is based on the theory of Fredholm integral equations. Making comparisons with the competing approaches established by others, they claimed that their methodology was not only more accurate, but also more amenable to obtaining waiting time approximations from operational data. Approximations were also obtained for the distributions of the steady-state idle times and interdeparture times.

Kim and Chaudhry [20] first established a discrete-time version of what is called the distributional Little's Law (DDL): a relation between the stationary distribution of the number of customers in a system (or queue length) and that of the number of slots a customer spends in that system (or waiting time).
Based on this relation, they then presented a simple computational procedure to obtain the queue-length distribution of the discrete-time GI/G/1 queue from its waiting time distribution, which is readily available by various existing methods, such as the method based on the zeros of the so-called characteristic equation (Konheim [22], Chaudhry and Gupta [5]), the iterative method based on the Wiener-Hopf factorization (Grassmann and Jain [12], Hasslinger [16]), and the matrix-analytic method. Using the same procedure, they also obtained the queue-length distribution of the discrete-time multi-server GI/D/c queue in a unified manner.

Thus, there is a great deal of literature dealing with the waiting time distribution of GI/G/1 models. In particular, a number of good algorithms are available for numerically solving the waiting time of the GI/G/1 queue from a discrete-time version of Lindley's equation. Of the algorithms that are competitive, none dominates the others consistently, and the best one depends on the situation. The strengths of the algorithm used in this thesis are that it is easily programmed, generally very efficient, and very robust. These properties suggest that the algorithm we use in our work is an ideal algorithm compared to others.

Chapter 4 Methodology

4.1 Overview

In this chapter, we present the approach to solving Lindley's equation based on Wiener-Hopf factorization. We use the same notation as given in [12] in the following discussion.

The aim of this project is to find the waiting time distribution in equilibrium (i.e., how much time the $(n+1)$th arrival must wait to get into service, given the position of the $n$th customer), and then to calculate the average queue length of the system for the discrete-time GI/G/1 queue. The basic assumptions are as follows. Customers $C_0, C_1, \dots, C_n, \dots$ arrive singly at a service station at epochs $\{\tau_n : 0 \le n < \infty\}$, $(\tau_0 = 0)$, and are served by a single server in the order of their arrivals. In this setting there are four types of random variables:

1. $T_n$ is the interarrival time between the $n$th and $(n+1)$th customers. The interarrival times $\{T_n = \tau_{n+1} - \tau_n : 0 \le n < \infty\}$ are i.i.d. random variables distributed as $T$ with probability distribution $\Pr(T = k) = q_k$.

2. $S_n$ is the service time of the $n$th customer. The service times $\{S_n : 0 \le n < \infty\}$ are i.i.d. random variables distributed as $S$ with probability distribution $\Pr(S = k) = p_k$.

3. $U_n = S_n - T_n$, which can be negative, zero or positive.

4. $W_n$ is the waiting time (in queue) of the $n$th customer.

Customer $C_n$ waits in queue for a time $W_n$, and his service time is $S_n$. Thus, for $C_n$, the time in the system is his waiting time plus his service time, i.e., $W_n + S_n$, and he leaves at the moment $\tau_n + W_n + S_n$. If $C_{n+1}$ arrives before that time, that is, if $\tau_{n+1} < \tau_n + W_n + S_n$, then $C_{n+1}$ has to wait for $\tau_n + W_n + S_n - \tau_{n+1}$. Since $T_n = \tau_{n+1} - \tau_n$, we have
$$W_{n+1} = \tau_n + W_n + S_n - \tau_{n+1} = W_n + S_n - T_n \quad \text{if } W_n + S_n - T_n \ge 0. \tag{4.1}$$
Otherwise,
$$W_{n+1} = 0. \tag{4.2}$$
Moreover, the idle time for the server becomes
$$I_n = -(W_n + S_n - T_n) > 0, \quad \text{if } W_n + S_n - T_n < 0. \tag{4.3}$$
We denote
$$U_n = S_n - T_n, \tag{4.4}$$
which is the difference between the service time of customer $C_n$ and the interarrival time between the $C_n$ and $C_{n+1}$ customers. Therefore, (4.1) and (4.2) can be written in terms of $U_n$:
$$W_{n+1} = \begin{cases} W_n + U_n, & \text{if } W_n + U_n \ge 0, \\ 0, & \text{if } W_n + U_n < 0. \end{cases} \tag{4.5}$$
This relation was first established by Lindley [23]. Using the notation $(x)^+ \equiv \max[0, x]$, (4.5) can be written as $W_{n+1} = (W_n + U_n)^+$.
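The recursion (4.5) is easy to simulate, which provides a useful sanity check on any numerical solution. The following MATLAB fragment is our own illustrative sketch (it is not the code of the appendix); the geometric interarrival and service distributions and all parameter values are invented for the example, chosen so that $\rho = E(S)/E(T) = 0.8 < 1$.

% Monte Carlo illustration of Lindley's recursion W_{n+1} = (W_n + U_n)^+
% for a Geom/Geom/1 queue, with T and S geometric on {1, 2, ...}.
rng(1);                                    % for reproducibility
aT = 0.4;  aS = 0.5;                       % E(T) = 1/aT = 2.5, E(S) = 1/aS = 2
N  = 1e6;                                  % number of simulated customers
T  = ceil(log(rand(N,1)) ./ log(1 - aT));  % geometric samples on {1, 2, ...}
S  = ceil(log(rand(N,1)) ./ log(1 - aS));
W  = zeros(N,1);                           % W(1) = 0: the queue starts empty
for n = 1:N-1
    W(n+1) = max(0, W(n) + S(n) - T(n));   % Lindley's recursion (4.5)
end
Wss = W(1e4+1:end);                        % discard a warm-up period
fprintf('estimated mean wait : %.3f\n', mean(Wss));
fprintf('estimated Pr(W = 0) : %.3f\n', mean(Wss == 0));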
The sequence of random variables $\{T_n\}$, $n \ge 0$, is a sequence of i.i.d. random variables, as is $\{S_n\}$, $n \ge 0$. Consequently, the sequence of random variables $\{U_n\}$, $n \ge 0$, is also a sequence of identically distributed random variables. Moreover, $T_n$ is independent of $T_{n-1}, T_{n-2}, \dots$, and independent of $S_n, S_{n-1}, \dots$, which means that the $U_n$ are also independent random variables. Let $U(x)$ be the distribution function of the random variables $U_n$ (which obviously can take negative values with positive probability, according to their definition). From (4.5), it is not difficult to obtain the following expression, which is commonly known as Lindley's equation:
$$W^{(n+1)}(t) = \begin{cases} \int_{-\infty}^{t} W^{(n)}(t - x)\, dU(x), & 0 \le t < \infty, \\ 0, & \text{otherwise}, \end{cases} \tag{4.6}$$
where $W^{(n)}(t)$ is the waiting time distribution function of $W_n$, i.e., $W^{(n)}(t) = \Pr[W_n \le t]$, and
$$U(x) = \int_0^{\infty} P(y)\, dQ(y - x), \tag{4.7}$$
where $Q(t)$ and $P(t)$ are the distribution functions of $T_n$ and $S_n$, respectively. If $Q(t)$ and $P(t)$ have probability density functions, then we define the probability densities of $Q(t)$ and $P(t)$ respectively by
$$q(t) = \frac{dQ(t)}{dt}, \qquad p(t) = \frac{dP(t)}{dt}.$$
Moreover, in this case (4.7) becomes
$$u(x) = \int_0^{\infty} p(y)\, q(y - x)\, dy = q(-x) * p(x),$$
where $u(x) = dU(x)/dx$ and $*$ denotes the convolution.

If the random variables $T_n$, $S_n$ are discrete, $q_k = \Pr(T_n = k\tau)$ and $p_k = \Pr(S_n = k\tau)$, where $k$ is an integer and $\tau$ is a real number. In this case, $U_n = S_n - T_n$ is also discrete. Moreover, if $c_k = \Pr(U_n = k\tau)$, then the discrete version of (4.7) becomes
$$c_k = \sum_{i=0}^{\infty} q_{-k+i}\, p_i \tag{4.8}$$
(a numerical sketch built on this convolution is given at the opening of section 4.2 below). It was shown by Lindley [23] that a necessary and sufficient condition for a stable system is $E(U_n) < 0$. If we substitute $S_n - T_n$ for $U_n$ according to (4.4), then $E(S_n - T_n) = E(S_n) - E(T_n) < 0$, from which we obtain the usual stability condition for a queueing system, namely that the utilization factor $\rho = E(S_n)/E(T_n) < 1$, where $\rho$ is the traffic intensity. Under this stability condition, there exists a $W(t)$ such that $\lim_{n\to\infty} \Pr[W_n \le t] = W(t)$, and Lindley's equation (4.6) becomes
$$W(t) = \begin{cases} \int_{-\infty}^{t} W(t - x)\, dU(x), & t \ge 0, \\ 0, & t < 0, \end{cases} \tag{4.9}$$
or
$$W(t) = \begin{cases} -\int_0^{\infty} W(x)\, d_x U(t - x), & t \ge 0, \\ 0, & t < 0. \end{cases} \tag{4.10}$$
Note that the waiting time distribution function $W(t)$ depends only on the distribution function $U(x)$, rather than on the individual distribution functions $Q(t)$ and $P(t)$, and that (4.9) and (4.10) are integral equations of the Wiener-Hopf type.

It is further assumed that the $U_n$ are i.i.d. random variables which can only assume the values $-g\delta, (-g+1)\delta, \dots, 0, \delta, 2\delta, \dots, h\delta$, where $g$ and $h$ are integers. Without loss of generality, we take $\delta = 1$; by choosing $\delta$ as the time unit, these values become integers. Our aim is to find the waiting time in equilibrium, which can assume only the values $0, 1, 2, \dots$.

4.2 Wiener-Hopf Factorization Method

In this section, we introduce one of the most important factorization methods, known as the Wiener-Hopf factorization method. As we mentioned in section 4.1, Lindley's integral equation, as given by (4.9) or (4.10), is of Wiener-Hopf type; therefore, using the Wiener-Hopf factorization method becomes the intuitive approach. The Wiener-Hopf factorization ultimately enables us to determine the actual distributions of the limiting waiting and idle times. This factorization also involves the difficult task of locating zeros of a complex function. As will be shown, this method can be used to solve for the waiting time distribution of both continuous GI/G/1 queues and discrete GI/G/1 queues.
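Before introducing the factorization itself, it is instructive to see what plain successive substitution of Lindley's equation (4.6) looks like numerically in the discrete case; the algorithms of [12], and ours, accelerate precisely this computation. The following MATLAB sketch is our own illustration (made-up probability mass functions; it is neither the appendix code nor the algorithm of [12]): it convolves the current waiting time pmf with the $c_k$ of Eq. (4.8), collapses the probability mass below zero onto zero, and repeats until convergence.

% Successive substitution of the discrete Lindley equation (4.6).
q = [0.1 0.3 0.4 0.2];                 % q_k = Pr(T = k), k = 1..4 (made up)
p = [0.4 0.4 0.2];                     % p_k = Pr(S = k), k = 1..3 (made up)
g = numel(q) - 1;                      % the support of U = S - T is -g..h
c = conv(p, fliplr(q));                % c(m) = Pr(U = m - 1 - g), Eq. (4.8)
M = 200;                               % truncation level for the waiting time
w = [1, zeros(1, M)];                  % pmf of W_0: the queue starts empty
for it = 1:10000
    v = conv(w, c);                    % pmf of W_n + U_n, support -g..M+h
    v(g+1) = sum(v(1:g+1));            % (W_n + U_n)^+: mass below 0 goes to 0
    wn = v(g+1 : g+1+M);               % truncate back to the support 0..M
    wn = wn / sum(wn);                 % renormalize the truncated tail away
    if max(abs(wn - w)) < 1e-12, break; end
    w = wn;
end
EW = (0:M) * w.';                      % mean stationary waiting time E(W)
Lq = EW / ((1:numel(q)) * q.');        % Little's formula: L_q = E(W)/E(T)

Substitution of this kind converges more and more slowly as $\rho$ approaches one, which is exactly the regime the factorization-based method described next is designed to handle.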
4.2 Wiener-Hopf Factorization Method

In this section we introduce one of the most important factorization methods, known as the Wiener-Hopf factorization method. As mentioned in section 4.1, Lindley's integral equation, as given by (4.9) or (4.10), is of Wiener-Hopf type; using the Wiener-Hopf factorization method is therefore the natural approach. The Wiener-Hopf factorization ultimately enables us to determine the actual distributions of the limiting waiting and idle times. This factorization also involves the difficult task of locating zeros of a complex function. As will be shown, the method can be used to solve for the waiting time distribution of both continuous and discrete GI/G/1 queues.

In particular, based on the Wiener-Hopf factorization technique, a method was given in [12] to calculate the waiting time distribution of a discrete GI/G/1 queue, and we present their method in detail in this chapter. The powerful tool linking queueing theory and Wiener-Hopf factorization is the theory of random walks. Since our derivations are based on the Wiener-Hopf factorization of random walks, it is first necessary to discuss briefly how the Wiener-Hopf method factors the discrete version of Lindley's equation. In our work, the waiting time is shown to be the maximum of a random walk sequence constrained at zero, and the distribution of a maximum is related to the ladder points and ladder heights of the random walk. Consider a set of random variables $\{X_n\}$. Then the discrete-time process $\{Y_n, n = 0, 1, \ldots\}$, where $Y_0 = 0$ and

$Y_n = X_1 + X_2 + \cdots + X_n = Y_{n-1} + X_n$, $n = 1, 2, \ldots$,

is called a random walk on the real line. It may be thought of as a particle moving from a state $Y_{n-1}$ at stage $n-1$ to $Y_n$ at the next stage by an amount $X_n$. In our case, $W_n$ is the maximum of the partial sums of $U_n, U_{n-1}, \ldots, U_1$. We have also assumed that the queue begins in an empty state, so that $W_0 = 0$; thus $W_n$ is itself driven by a random walk. Recall from section 4.1 that $U_n = S_n - T_n$ is the difference between the service time of customer $C_n$ and the interarrival time between customers $C_n$ and $C_{n+1}$, and that $W_n$ is the waiting time (in queue) of the $n$th customer. One commonly uses the following definitions from [9]:

Definition 4.2.2: Consider the sequence of points $(n, Y_n)$ for $n = 1, 2, \ldots$; the first strict ascending ladder point $(\beta_1, \gamma_1)$ is the first term in this sequence for which $Y_n > 0$, i.e. $\{\beta_1 = n\} = \{Y_1 \le 0, Y_2 \le 0, \ldots, Y_{n-1} \le 0, Y_n > 0\}$ and $\gamma_1 = Y_{\beta_1}$. The variable $\beta_1$ is called the first ladder epoch and $\gamma_1$ the first ladder height.

Definition 4.2.3: Consider the sequence of points $(n, Y_n)$ for $n = 1, 2, \ldots$; the first weak descending ladder point $(\bar{\beta}_1, \bar{\gamma}_1)$ is the first term in the sequence for which $Y_n \le 0$, i.e. $\{\bar{\beta}_1 = n\} = \{Y_1 > 0, Y_2 > 0, \ldots, Y_{n-1} > 0, Y_n \le 0\}$ and $\bar{\gamma}_1 = Y_{\bar{\beta}_1}$.

Definition 4.2.4: A probability distribution function $F$ is defective if $F(\infty) < 1$, and its defect is defined to be $1 - F(\infty)$.

Let $H_n(x)$ and $J_n(x)$ be the joint distribution functions of $(\beta_1, \gamma_1)$ and $(\bar{\beta}_1, \bar{\gamma}_1)$; then $H_n(x) = \Pr\{\beta_1 = n, \gamma_1 \le x\}$ for $x > 0$, and $J_n(x) = \Pr\{\bar{\beta}_1 = n, \bar{\gamma}_1 \le x\}$ for $x \le 0$. The marginal distributions are given by

$\Pr\{\beta_1 = n\} = H_n(\infty)$, $n = 1, 2, \ldots$, and $\Pr\{\gamma_1 \le x\} = \sum_{n=1}^{\infty} H_n(x) = H(x)$, $x > 0$;

$\Pr\{\bar{\beta}_1 = n\} = J_n(\infty)$, $n = 1, 2, \ldots$, and $\Pr\{\bar{\gamma}_1 \le x\} = \sum_{n=1}^{\infty} J_n(x) = J(x)$, $x \le 0$.

The random variables $\beta_1, \gamma_1$ have the same defect $1 - H(\infty)$, and $\bar{\beta}_1, \bar{\gamma}_1$ have the same defect $1 - J(\infty)$. The section of the random walk $(0, Y_{j+1} - Y_j, \ldots, Y_k - Y_j)$, $k \ge j$, following the first ladder point is a probabilistic replica of the whole random walk. Its first ladder point is the second point of the whole random walk with the property $Y_n > Y_0, Y_n > Y_1, \ldots, Y_n > Y_{n-1}$, and it is called the second strict ascending ladder point of the entire random walk; it has the form $(\beta_1 + \beta_2, \gamma_1 + \gamma_2)$. The pairs $(\beta_1, \gamma_1)$ and $(\beta_2, \gamma_2)$ are i.i.d. Similarly, we can define the $r$th ascending ladder point (if it exists), which has the form $(\beta_1 + \beta_2 + \cdots + \beta_r,\ \gamma_1 + \gamma_2 + \cdots + \gamma_r)$, where the pairs $(\beta_k, \gamma_k)$, $k = 1, 2, \ldots, r$, are mutually independent and have the common distribution function $H_n(x)$.
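The ladder quantities are easy to read off a simulated path. The sketch below extracts the first strict ascending and first weak descending ladder points of one sample walk; under negative drift the ascending ladder point may never occur, which is exactly the defectiveness discussed above. The step distribution is an illustrative assumption.

```python
import random

def first_ladder_points(steps):
    """Return (beta1, gamma1) for the first strict ascending ladder point
    (first n with Y_n > 0) and (beta1bar, gamma1bar) for the first weak
    descending ladder point (first n with Y_n <= 0); None if never reached."""
    y, asc, desc = 0, None, None
    for n, x in enumerate(steps, start=1):
        y += x
        if asc is None and y > 0:
            asc = (n, y)
        if desc is None and y <= 0:
            desc = (n, y)
        if asc and desc:
            break
    return asc, desc

rng = random.Random(42)
# Illustrative walk: steps uniform on {-2,-1,0,1}, so the drift is negative
steps = [rng.choice([-2, -1, 0, 1]) for _ in range(1000)]
print(first_ladder_points(steps))
```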
The expected number of ladder points in $(0, x]$, $x \ge 0$, is

$\Psi(x) = \sum_{n=0}^{\infty} H^{*n}(x)$,

where $H^{*0}(x)$ and $\Psi_0(x)$ are defined by $\Psi_0(x) = H^{*0}(x) = 1$ for $x \ge 0$ and $0$ for $x < 0$. Since the strict ascending ladder heights form a renewal process, one obtains from the renewal equation (Feller [9], page 359)

$\Psi(x) = \Psi_0(x) + \int_0^x \Psi(x - y)\,dH(y)$, or $\Psi = \Psi_0 + \Psi * H$. (4.11)

Furthermore, a relation between $\Psi$ and $J$ exists, as given by the following theorem.

Theorem 4.1: If $K(x) = \Pr\{X_k \le x\}$, $k = 0, 1, 2, \ldots$, then

$J(x) = \int_{0^+}^{\infty} K(x - y)\,d\Psi(y)$, $x \le 0$, (4.12)

and

$\Psi(x) = \Psi_0(x) - J(0) + \int_{0^+}^{\infty} K(x - y)\,d\Psi(y)$, $x > 0$. (4.13)

Proof: see [9], chap. XII, section 3.

Since $J(x) = J(0)$ for $x \ge 0$ and $\Psi(x) = 0$ for $x \le 0$, (4.12) and (4.13) can be combined as

$J(x) + \Psi(x) = \Psi_0(x) + \Psi * K(x)$. (4.14)

Equation (4.14) is called the Wiener-Hopf factorization equation. Using the renewal equation (4.11) and convolving (4.14) with $\Psi_0 - H$, we have

$(J + \Psi) * (\Psi_0 - H) = (\Psi_0 + \Psi * K) * (\Psi_0 - H)$,

hence

$J * \Psi_0 - J * H + \Psi * \Psi_0 - \Psi * H = \Psi_0 * \Psi_0 - \Psi_0 * H + \Psi * K * \Psi_0 - \Psi * K * H$,

and finally

$K = H + J - J * H$. (4.15)

Let $K^*(s)$, $H^*(s)$ and $J^*(s)$ be the two-sided Laplace-Stieltjes transforms (LST) of $K(x)$, $H(x)$ and $J(x)$, respectively. Taking the Laplace-Stieltjes transform on both sides of (4.15), one has

$K^*(s) = H^*(s) + J^*(s) - J^*(s)H^*(s)$, (4.16)

which gives

$1 - K^*(s) = \big(1 - H^*(s)\big)\big(1 - J^*(s)\big)$. (4.17)

Definition 4.2.5 (Laplace transform): Let $f(t)$ be a real-valued function defined on the interval $0 \le t < \infty$. The Laplace transform (LT) of $f$ is $\mathcal{L}\{f(t)\} \equiv \bar{f}(s) \equiv \int_0^\infty e^{-st} f(t)\,dt$, where $s$ is a complex variable.

Definition 4.2.6 (Laplace-Stieltjes transform): Let $F(t)$ be a real-valued function defined on the interval $0 \le t < \infty$. The Laplace-Stieltjes transform (LST) of $F(t)$ is $\mathcal{L}^*\{F(t)\} \equiv F^*(s) \equiv \int_0^\infty e^{-st}\,dF(t)$. We take $F(t)$ to be the CDF of a nonnegative random variable $X$, so that $F^*(s) \equiv E[e^{-sX}] = \int_0^\infty e^{-st}\,dF(t)$.

The Laplace transform is useful for solving differential equations and for transforming continuous functions. Two useful properties are the following. First, there is a one-to-one correspondence between probability distributions and their transforms; thus a probability distribution can be uniquely determined from its LST. Second, the LST of a convolution of independent random variables is the product of the LSTs of the individual random variables: if $F^*(s)$ is the LST of $X$ and $G^*(s)$ is the LST of $Y$, then $F^*(s)G^*(s)$ is the LST of $X + Y$, provided $X$ and $Y$ are independent. Because the inverse of the LST is unique, this allows one to determine the distribution of a sum of random variables by inverting the product of the individual LSTs. The LST of a random variable evaluated at $s$ is the same as its moment generating function evaluated at $-s$.

Definition 4.2.7 (Moment generating function, MGF): If $X$ is a random variable with CDF $F(x)$, then $M_X(t) \equiv E[e^{tX}] = \int_0^\infty e^{tx}\,dF(x)$. This is similar to the LST, but with $t$ replacing $-s$; in other words, $M_X(t) = F^*(-t)$. Moment generating functions have properties similar to those of the Laplace transform: there is a one-to-one correspondence between MGFs and probability distributions, and the MGF of a sum of independent random variables is the product of the individual MGFs.

However, as shown in [9], because of (4.3), $J(x)$, perhaps surprisingly, gives the idle time distribution.
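The factorization (4.15) can be checked empirically at the pmf level, where it reads $u_k = h_k + j_k - (h * j)_k$, with $h$ and $j$ the (possibly defective) pmfs of the first strict ascending and first weak descending ladder heights. The following Monte Carlo sketch does this for an assumed step law; the step cap and sampling noise make the agreement approximate only.

```python
import random

def ladder_height_pmfs(support, probs, n_walks=20000, cap=200, seed=7):
    """Monte Carlo estimates of the pmfs of the first strict ascending ladder
    height (defective under negative drift: some walks never rise above 0
    within the step cap) and the first weak descending ladder height."""
    rng = random.Random(seed)
    h, j = {}, {}
    for _ in range(n_walks):
        y, asc, desc = 0, None, None
        for _ in range(cap):
            y += rng.choices(support, probs)[0]
            if asc is None and y > 0:
                asc = y
            if desc is None and y <= 0:
                desc = y
            if asc is not None and desc is not None:
                break
        if asc is not None:
            h[asc] = h.get(asc, 0) + 1
        if desc is not None:
            j[desc] = j.get(desc, 0) + 1
    return ({k: v / n_walks for k, v in h.items()},
            {k: v / n_walks for k, v in j.items()})

support, probs = [-2, -1, 0, 1], [0.25, 0.25, 0.25, 0.25]   # E(X) = -0.5 < 0
h, j = ladder_height_pmfs(support, probs)
# Pmf form of Eq. (4.15): u_k = h_k + j_k - (h * j)_k, * denoting convolution
for k, u_k in zip(support, probs):
    conv = sum(h[m] * j.get(k - m, 0.0) for m in h)
    print(k, u_k, round(h.get(k, 0.0) + j.get(k, 0.0) - conv, 3))
```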
For the waiting time distribution, we have the following result.

Proposition 4.2.1: The waiting time distribution is the same as the distribution of the maximum of the random walk $\{Y_n\}$, and is given by

$W(t) = [1 - H(\infty)] \sum_{n=0}^{\infty} H^{*n}(t) = [1 - H(\infty)]\,\Psi(t)$, (4.18)

where $\Psi(t)$ is completely determined by $H(t)$, because it is the renewal measure generated by $H(t)$.

The Wiener-Hopf factorization (4.15) can be used in both the discrete and the continuous case, but for numerical application we consider a discrete GI/G/1 queue. Equation (4.16) was derived by taking the LST on both sides of (4.15), so the relation (4.16) is the Wiener-Hopf factorization for the continuous case. For discrete analysis, the Z-transform plays the same role that the Laplace transform plays for continuous systems. Thus, applying the Z-transform correspondence to the relation obtained via Laplace transforms in (4.16) yields the Wiener-Hopf factorization for the discrete case.

Definition 4.2.8 (Z-transform): In mathematics and signal processing, the Z-transform converts a discrete-time-domain signal, a sequence of real or complex numbers, into a complex frequency-domain representation. The Z-transform of a sequence $\{x_n\}$ is denoted $Z[\{x_n\}]$ and defined as $Z[\{x_n\}] = X(z) = \sum_{n=-\infty}^{\infty} x_n z^{-n}$, where $z$ is a complex number and $X(z)$ is the Z-transform of $\{x_n\}$.

We already defined the generating function and the probability generating function in definition 2.15 of chapter 2. The generating function is closely related to the one-sided Z-transform, $\sum_{n=0}^{\infty} x_n z^{-n}$. The probability generating function of an integer-valued random variable, on the other hand, is similar to the moment generating function but with $z$ replacing $e^t$, i.e. $X(z) \equiv E[z^X]$. In definition 4.2.7 we defined $M_X(t) \equiv E[e^{tX}] = \int_0^\infty e^{tx}\,dF(x)$; in the discrete setting this reads $X(e^{-t}) = \sum_{m=0}^{\infty} x(m)e^{tm}$, which implies $X(z) = \sum_{m=0}^{\infty} x(m)z^{-m}$ on replacing $e^{-t}$ by $z$, where we assume $m$ takes integer values from $0$ to $\infty$.

According to the definitions of the random walk, the strict ascending ladder heights and the weak descending ladder heights, we construct the following probability distributions:

$u_i = \Pr\{U_k = i\}$, $-g \le i \le h$;
$\alpha_i = \Pr\{\gamma_k = i\}$, $i = 1, 2, \ldots$;
$b_i' = \Pr\{\bar{\gamma}_k = i\}$ for $i \le 0$, and $b_i = b_{-i}' = \Pr\{\bar{\gamma}_k = -i\}$, $i = 0, 1, \ldots$;

and also the following probability generating functions of the random variables $U$, $A$ and $B$, respectively:

$U(z) = \sum_{i=-\infty}^{\infty} u_i z^i$, $A(z) = \sum_{i=1}^{\infty} \alpha_i z^i$, and $B(z) = \sum_{i=0}^{\infty} b_i z^i$.

Here $A$ is the strict ascending ladder height and $B$ is the weak descending ladder height, and $z$ is a complex number. Now, applying the Z-transform correspondence to (4.16), we obtain

$U(z) = A(z) + B(1/z) - A(z)B(1/z)$, (4.19)

which is equivalent to

$1 - U(z) = \big(1 - A(z)\big)\big(1 - B(1/z)\big)$, (4.20)

known as the Wiener-Hopf factorization of $U(z)$ in the discrete case for the GI/G/1 queue. We say more about the use of the factorization (4.20) in the following section. By equating the coefficients of $z^j$, $j = 0, 1, 2, \ldots$, and $z^{-j}$, $j = 1, 2, \ldots$, we get

$u_j = \alpha_j - \sum_{i=-\infty}^{0} b_i' \alpha_{j-i} = \alpha_j - \sum_{i=1}^{\infty} b_{-i}' \alpha_{j+i} - b_0' \alpha_j = \alpha_j - \sum_{i=1}^{\infty} \alpha_{j+i} b_i - b_0 \alpha_j$, (4.21)

or $\alpha_j = \big(u_j + \sum_{i=1}^{\infty} \alpha_{j+i} b_i\big)/(1 - b_0)$, and

$u_{-j} = b_{-j}' - \sum_{i=1}^{\infty} b_{-j-i}' \alpha_i = b_j - \sum_{i=1}^{\infty} b_{j+i} \alpha_i$, or $b_j = u_{-j} + \sum_{i=1}^{\infty} b_{j+i} \alpha_i$. (4.22)
Based on the above derivation of the Wiener-Hopf factorization, the following theorem was proposed in [12].

Theorem 4.2 (Wiener-Hopf factorization): For $\rho < 1$ (where $\rho = \lambda/\mu$) there exist a unique defective distribution given by $\alpha_i = \Pr\{A = i\}$, $i > 0$, and a unique proper distribution given by $b_i = \Pr\{B = i\}$, $i \ge 0$, such that

$b_j = u_{-j} + \sum_{i=1}^{\infty} \alpha_i b_{i+j}$, $j = 0, 1, \ldots, g$, (4.23)

$\alpha_j = \dfrac{u_j + \sum_{i=1}^{\infty} \alpha_{i+j} b_i}{1 - b_0}$, $j = 1, 2, \ldots, h$. (4.24)

The Wiener-Hopf factorization of $U(z)$ is

$1 - U(z) = \big(1 - A(z)\big)\big(1 - B(1/z)\big)$, (4.25)

with

$A(z) = \sum_{i=1}^{\infty} \alpha_i z^i$ and $B(z) = \sum_{i=0}^{\infty} b_i z^i$. (4.26)

Equations (4.23) and (4.24), which represent the Wiener-Hopf factorization for the discrete case, can easily be derived in the same way as (4.21) and (4.22); they can also be derived from [9], page 400. Since we are considering the waiting time distribution of the arithmetic GI/G/1 queue, with the queue bounded on a finite interval, we assume $U(z) = \sum_{i=-g}^{h} u_i z^i$, where we set $u_i = 0$ for $i < -g$ and $i > h$; this convention allows us to take the sum from $-\infty$ to $+\infty$. Moreover, we let $w_i = \Pr\{W = i\}$ be the equilibrium distribution of $W_n$ and let $W(z) = \sum_{i=0}^{\infty} w_i z^i$ denote the probability generating function (p.g.f.) of the random variable $W$. According to the discrete version of Lindley's equation, $w_j$ is given by

$w_j = \sum_{i=-g}^{h} w_{j-i} u_i$, $j > 0$, and $w_0 = \sum_{i=-g}^{0} \sum_{k=0}^{-i} w_k u_i$ (when there is no wait in queue), (4.27)

where we assume $w_k = 0$ for $k < 0$. Theorem 4.2 is helpful in finding the solution of (4.27).

Example 4.2.1: To understand clearly the formation of Lindley's equation in our case, as given by (4.27), consider a transition matrix $M$ of $U$, where the service time and interarrival time are discrete, and take $g = h = 2$:

$M = \begin{pmatrix} u_0 & u_1 & u_2 & 0 & 0 & \cdots \\ u_{-1} & u_0 & u_1 & u_2 & 0 & \cdots \\ u_{-2} & u_{-1} & u_0 & u_1 & u_2 & \cdots \\ 0 & u_{-2} & u_{-1} & u_0 & u_1 & \cdots \\ 0 & 0 & u_{-2} & u_{-1} & u_0 & \cdots \end{pmatrix}$

Here the columns are indexed through the service times and the rows through the interarrival times, and the band entry in row $i$ and column $j$ is $u_{j-i}$. (Strictly, in the full transition matrix of the chain the first column also collects the probability of all jumps that would land below zero, which is what the double sum for $w_0$ in (4.27) expresses.) Suppose $wM = w$ in an equilibrium situation, where $M$ denotes the above transition matrix; then $[w_0, w_1, \ldots]M = [w_0, w_1, \ldots]$. Now, if we use the second column, we get

$w_1 = w_0 u_1 + w_1 u_0 + w_2 u_{-1} + w_3 u_{-2} = \sum_{i=-2}^{2} w_{1-i} u_i$,

which is the same as $w_j$ in equation (4.27) for $j = 1$.
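The equilibrium equation $wM = w$ can be verified numerically. The sketch below builds the transitions of $W' = \max(0, W + U)$ for an assumed $u$ with $g = h = 2$ (state 0 absorbing the jumps below zero), finds $w$ by power iteration on a truncated state space, and checks the interior equation of (4.27) for $j = 1$; the truncation level is our own choice.

```python
# Step distribution u_k, -g <= k <= h, with g = h = 2 (illustrative values)
u = {-2: 0.3, -1: 0.25, 0: 0.2, 1: 0.15, 2: 0.1}
N = 500                                   # truncation level for the state space

def row(i):
    """Transitions out of state i for the chain W' = max(0, W + U); all jumps
    below zero are collected in state 0, matching the w_0 equation of (4.27)."""
    out = {}
    for k, uk in u.items():
        s = max(0, i + k)
        out[s] = out.get(s, 0.0) + uk
    return out

M = [row(i) for i in range(N)]
w = [1.0] + [0.0] * (N - 1)               # start empty: W_0 = 0
for _ in range(2000):                     # power iteration: w <- w M
    nxt = [0.0] * N
    for i, wi in enumerate(w):
        if wi:
            for s, pp in M[i].items():
                if s < N:
                    nxt[s] += wi * pp
    w = nxt

# Check the interior equation of (4.27): w_j = sum_i w_{j-i} u_i for j = 1
rhs = sum(w[1 - i] * ui for i, ui in u.items() if 0 <= 1 - i < N)
print(round(w[1], 6), round(rhs, 6))
```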
4.3 Our Explanation for using the Wiener-Hopf Factorization Method in $1 - U(z)$

Consider an expression for $U(z)$ as a power series for $-g \le i \le h$, expanded after applying the Z-transform:

$U(z) = u_{-g} z^{g} + \cdots + u_{-2} z^{2} + u_{-1} z + u_0 + u_1 z^{-1} + u_2 z^{-2} + \cdots + u_h z^{-h}$. (4.28)

If we substitute $1/z$ for $z$ in (4.28), we get

$U(1/z) = u_{-g} z^{-g} + \cdots + u_{-2} z^{-2} + u_{-1} z^{-1} + u_0 + u_1 z + u_2 z^{2} + \cdots + u_h z^{h}$.

The series in (4.28) is a Laurent series, containing finitely many negative and positive powers of $z$, and $U(z)$ converges for $0 < |z| \le 1$. Moreover, if we write $U(1/z) = R(z)$, then $R(z) = \sum_{i=-g}^{h} r_i z^i$, where $R(z)$ has the same form as a probability generating function. From (4.19) we have $U(z) = A(z) + B(1/z) - A(z)B(1/z)$, the Wiener-Hopf factorization of $U(z)$ in the discrete case for the GI/G/1 queue, and factoring this relation gives (4.20), $1 - U(z) = (1 - A(z))(1 - B(1/z))$. The main reason for factoring (4.19) is to obtain two separate expressions, $A(z)$ and $B(1/z)$, on the right-hand side. Notably, it is factored in such a way that $1 - B(1/z)$ contains the constant and positive powers of $z$, while $1 - A(z)$ contains the negative powers of $z$. Clearly (4.28) is equivalent to

$1 - U(z) = 1 - u_{-g} z^{g} - \cdots - u_{-1} z - u_0 - u_1 z^{-1} - \cdots - u_h z^{-h}$,

and (4.20) can then be expressed as

$1 - u_{-g} z^{g} - \cdots - u_{-1} z - u_0 - u_1 z^{-1} - \cdots - u_h z^{-h} = \big(1 - \alpha_1 z^{-1} - \alpha_2 z^{-2} - \cdots - \alpha_h z^{-h}\big)\big(1 - b_0 - b_1 z - b_2 z^2 - \cdots - b_g z^g\big)$, (4.29)

where in this convention $U(z) = \sum_{i=-g}^{h} u_i z^{-i}$, $A(z) = \sum_{i=1}^{h} \alpha_i z^{-i}$ and $B(z) = \sum_{i=0}^{g} b_i z^{-i}$.

Previously we discussed how to pass from a moment generating function in the continuous case to the discrete case using the Z-transform, defining $X(z) = \sum_{n=0}^{\infty} x_n z^{-n}$, so that clearly $X(1/z) = \sum_{n=0}^{\infty} x_n z^n$. Notice that in the moment generating function $M_X(t) \equiv E[e^{tX}]$, if $t < 0$ then $e^t$ lies inside the unit circle, and if $t > 0$ it lies outside. Furthermore, a probability generating function converges inside the unit circle, and therefore $X(1/z) = \sum x_n z^n$ converges inside the unit circle, while the Z-transform $X(z) = \sum x_n z^{-n}$ converges outside it; indeed, a standard property of the Z-transform is that the transform of a causal sequence converges outside the unit circle.

The degree of $1 - U(z)$, as a Laurent polynomial, is $g + h$, so it has $g + h$ zeros. Of the two factors on the right-hand side of (4.29), $A(z)$ has the form of a Z-transform, so it converges outside the unit circle, and $1 - A(z)$ contributes the $h$ zeros in $\{z : |z| > 1\}$, i.e. the $h$ zeros lying outside the unit circle. On the other hand, $B(1/z)$ converges inside the unit circle, and $1 - B(1/z)$ contributes the $g$ zeros in $\{z : |z| \le 1\}$, i.e. the $g$ zeros lying inside or on the unit circle. Therefore, by using the Wiener-Hopf factorization in (4.29), we can also easily locate the zeros inside and outside the unit circle.

The Wiener-Hopf factorization given by Theorem 4.2 has probabilistic interpretations: $A$ is the strict ascending ladder height and $B$ the weak descending ladder height; moreover, $\alpha_i$ relates to the probability that the server remains busy (there is always a queue) and $b_j$ to the probability that the server remains idle. It is very interesting that the $\alpha_i$ and $b_j$ in (4.21) and (4.22) also have probabilistic interpretations, as indicated in [11] and [13]; for example, $\alpha_i$ can be interpreted as the probability that the system goes from state $j - i$ to state $j$ without visiting any state $k < j$ in between. The Wiener-Hopf factorization method is therefore significant in research on queueing systems. Moreover, equations (4.23) and (4.24), the solutions of the Wiener-Hopf factorization for the discrete case, can easily be derived from (4.25) by equating the coefficients of $z^j$, $j = 1, 2, \ldots, h$, and $z^{-j}$, $j = 0, 1, \ldots, g$, expanding (4.25) in p.g.f. form as follows:

Table 4.1 Results of equating the coefficients of $z^j$ and $z^{-j}$ of Eq. (4.25)

| Power of $z$ | L.H.S. | R.H.S. |
|---|---|---|
| constant | $1 - u_0$ | $(1 - b_0) + \sum_{i=1}^{\infty} \alpha_i b_i$ |
| $z^{-j}$ | $-u_{-j}$ | $\sum_{i=1}^{\infty} \alpha_i b_{i+j} - b_j$ |
| $z^{j}$ | $-u_j$ | $-(1 - b_0)\alpha_j + \sum_{i=1}^{\infty} \alpha_{i+j} b_i$ |

Therefore, equating the coefficients of $z^{-j}$ gives $-u_{-j} = \sum_{i=1}^{\infty} \alpha_i b_{i+j} - b_j$, which implies $b_j = u_{-j} + \sum_{i=1}^{\infty} \alpha_i b_{i+j}$, $j = 0, 1, \ldots, g$; and equating the coefficients of $z^j$ gives $-u_j = -(1 - b_0)\alpha_j + \sum_{i=1}^{\infty} \alpha_{i+j} b_i$, which implies $\alpha_j = \big(u_j + \sum_{i=1}^{\infty} \alpha_{i+j} b_i\big)/(1 - b_0)$, $j = 1, 2, \ldots, h$.
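The coefficient identities of Table 4.1 are pure algebra and can be verified mechanically: pick any nonnegative coefficients $\alpha_i$ and $b_j$, form $u$ from the product (4.25), and check that (4.23) and (4.24) recover them. The values below are arbitrary illustrative numbers, not a real queue.

```python
import numpy as np

# Illustrative factor coefficients (not derived from a queue): h = 3, g = 2
alpha = {1: 0.30, 2: 0.15, 3: 0.05}          # defective: sum < 1
b     = {0: 0.40, 1: 0.35, 2: 0.25}          # proper:    sum = 1
h, g = max(alpha), max(b)

A = np.zeros(h + 1); A[0] = 1.0              # coefficients of 1 - A(z), powers 0..h
for i, a in alpha.items():
    A[i] = -a
B = np.zeros(g + 1); B[0] = 1.0 - b[0]       # B[j] = coeff of z^{-j} in 1 - B(1/z)
for j in range(1, g + 1):
    B[j] = -b[j]

prod = np.convolve(A, B[::-1])               # powers -g..h of (1 - A(z))(1 - B(1/z))
u = {k: -prod[k + g] for k in range(-g, h + 1)}
u[0] = 1.0 - prod[g]                         # the product equals 1 - U(z)

# The identities of Table 4.1 recover b_j and alpha_j from u:
for j in range(g + 1):                       # Eq. (4.23)
    print("b_%d" % j, b[j], u[-j] + sum(alpha[i] * b.get(i + j, 0) for i in alpha))
for j in range(1, h + 1):                    # Eq. (4.24)
    print("a_%d" % j, alpha[j],
          (u[j] + sum(alpha.get(i + j, 0) * b[i] for i in range(1, g + 1))) / (1 - b[0]))
```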
Example 4.3.1: To clarify the solution of the Wiener-Hopf factorization, let us work through an example, representing the product in (4.25) in matrix form with $g = 2$ and $h = 3$. Writing the terms of $1 - B(1/z)$ across the top and the terms of $1 - A(z)$ down the side, the products are:

| $\times$ | $1 - b_0$ | $-b_1 z^{-1}$ | $-b_2 z^{-2}$ |
|---|---|---|---|
| $1$ | $(1 - b_0)$ | $-b_1 z^{-1}$ | $-b_2 z^{-2}$ |
| $-\alpha_1 z$ | $-\alpha_1(1 - b_0)z$ | $\alpha_1 b_1$ | $\alpha_1 b_2 z^{-1}$ |
| $-\alpha_2 z^2$ | $-\alpha_2(1 - b_0)z^2$ | $\alpha_2 b_1 z$ | $\alpha_2 b_2$ |
| $-\alpha_3 z^3$ | $-\alpha_3(1 - b_0)z^3$ | $\alpha_3 b_1 z^2$ | $\alpha_3 b_2 z$ |

Here $\alpha_j$, $j = 1, 2, 3$, are the coefficients of $z^j$ and $b_j$, $j = 0, 1, 2$, the coefficients of $z^{-j}$. To get $1 - u_0$, we add all the $z^0$ cells, which lie along a diagonal of the above array; these are the constant part of (4.25). Moreover, we get (4.24) for $j = 1$ by collecting the coefficients of $z^1$ and adding them together, and (4.23) for $j = 1$ by adding all coefficients of $z^{-1}$. Once the $\alpha_i$ are found, the waiting time distribution can be obtained without difficulty, as the following theorem shows; we give our own explanation to prove it.

Theorem 4.3: The probability generating function (p.g.f.) $W(z)$ is given by

$W(z) = w_0 / (1 - A(z))$, (4.30)

where $w_i = 0$ for $i < 0$,

$w_0 = 1 - \alpha_1 - \alpha_2 - \cdots - \alpha_h$, (4.31)

$w_i = \sum_{j=1}^{h} w_{i-j} \alpha_j$, $i > 0$. (4.32)

Proof: Eq. (4.32) is the discrete version of (4.18), and can be found directly by applying the Wiener-Hopf method to solve Lindley's equation. Furthermore, (4.30) and (4.31) can be derived from (4.32) by using the p.g.f. $W(z)$ and the fact that $\sum_{i=0}^{\infty} w_i = 1$. The derivation of (4.30) is as follows:

$W(z) = \sum_{i=0}^{\infty} w_i z^i = w_0 + \sum_{i=1}^{\infty} w_i z^i$
$= w_0 + \sum_{i=1}^{\infty} \big(\sum_{j=1}^{h} w_{i-j}\alpha_j\big) z^i$  [since $w_i = \sum_{j=1}^{h} w_{i-j}\alpha_j$]
$= w_0 + \sum_{j=1}^{h} \alpha_j \sum_{i=1}^{\infty} w_{i-j} z^i$
$= w_0 + \sum_{j=1}^{h} \alpha_j \sum_{k=0}^{\infty} w_k z^{k+j}$  [replacing $i - j$ by $k$]
$= w_0 + \sum_{j=1}^{h} \alpha_j z^j \big(\sum_{k=0}^{\infty} w_k z^k\big) = w_0 + A(z)W(z)$.

Rearranging, $W(z)\big(1 - A(z)\big) = w_0$, and thus $W(z) = w_0/(1 - A(z))$, which is our Eq. (4.30).

We know the p.g.f. of $W$ is $W(z) = \sum_{i=0}^{\infty} w_i z^i = w_0 + w_1 z + \cdots + w_h z^h + \cdots$, where $W(z)$ is analytic in $|z| < 1$; thus the series converges for $|z| \le 1$. To interpret the form of (4.30), write $W(z)\big(1 - U(z)\big) = V(z)$, where $V(z)$ is whatever function satisfies this equation. Therefore

$W(z) = \dfrac{V(z)}{1 - U(z)} = \dfrac{V(z)}{\big(1 - A(z)\big)\big(1 - B(1/z)\big)}$.

Since $W(z)$ on the left-hand side is analytic inside the unit circle, the right-hand side must be analytic there too. A rational function is analytic wherever its denominator is nonzero, and we know the zeros of $1 - B(1/z)$ in the denominator lie inside the unit circle. On the other hand, $V(z)$ is assumed to be a polynomial with the same zeros inside the unit circle as $1 - B(1/z)$, so we may cancel the common zeros from numerator and denominator; what remains of the numerator is treated as a constant term, $w_0$. Ultimately we get $W(z) = w_0/(1 - A(z))$, which is clearly analytic in $|z| < 1$. Therefore one needs the $\alpha_i$ to calculate the waiting time distribution, as $\alpha_i$ relates to the probability that the system remains busy.

Definition 4.3.1 (Analytic function): A single-valued function $f(z)$ which is differentiable at $z = z_0$ is said to be analytic at the point $z = z_0$.

By taking the first derivative of (4.30) with respect to $z$ and evaluating at $z = 1$, we can easily find the expected waiting time:

$W(z) = \dfrac{w_0}{1 - \sum_{i=1}^{h}\alpha_i z^i} = w_0\big(1 - \sum\alpha_i z^i\big)^{-1}$,

$W'(z) = w_0\Big[-\big(1 - \sum\alpha_i z^i\big)^{-2}\Big]\Big[-\sum i\alpha_i z^{i-1}\Big] = \dfrac{w_0 \sum i\alpha_i z^{i-1}}{\big(1 - \sum\alpha_i z^i\big)^2}$.

Putting $z = 1$ and using $w_0 = 1 - \sum_{i=1}^{h}\alpha_i$, we get

$W'(1) = E(W) = \dfrac{w_0 \sum i\alpha_i}{w_0^2} = \dfrac{\sum i\alpha_i}{w_0}$.

Thus $E(W) = \sum_i i\alpha_i / w_0$.
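Given the $\alpha_i$, the recursion (4.31)-(4.32) is immediate to evaluate. The sketch below also confirms numerically that $\sum w_i \to 1$ and that the direct mean matches $E(W) = \sum i\alpha_i / w_0$; the $\alpha$ values are illustrative assumptions.

```python
# Illustrative defective ladder-height pmf with h = 3 (sum of alpha_i < 1)
alpha = {1: 0.30, 2: 0.15, 3: 0.05}

w0 = 1.0 - sum(alpha.values())                 # Eq. (4.31)
w = [w0]
for i in range(1, 400):                        # Eq. (4.32), truncated tail
    w.append(sum(w[i - j] * alpha[j] for j in alpha if i - j >= 0))

print("total mass:", sum(w))                   # -> 1 as the truncation grows
EW_direct = sum(i * wi for i, wi in enumerate(w))
EW_formula = sum(i * a for i, a in alpha.items()) / w0   # E(W) = sum i*alpha_i / w0
print(EW_direct, EW_formula)
```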
To obtain the variance of the waiting time distribution, we again consider the p.g.f. of $W$, $W(z) = \sum_{i=0}^{\infty} w_i z^i$. Taking the first derivative with respect to $z$ and evaluating at $z = 1$, we obtain

$W'(z) = \sum_{i=1}^{\infty} i w_i z^{i-1}$ and $W'(1) = E[W] = \sum_{i=1}^{\infty} i w_i$.

For the variance we need the second derivative of $W(z)$ evaluated at $z = 1$, since $\mathrm{Var}[X] = E[X^2] - E^2[X]$:

$W''(z) = \sum_{i=2}^{\infty} i(i-1) w_i z^{i-2}$ and $W''(1) = E[W(W-1)] = \sum_{i=2}^{\infty} i(i-1) w_i$.

Therefore

$\mathrm{Var}(W) = \big(E[W^2] - E[W]\big) + E[W] - E^2(W) = \sum_{i=2}^{\infty} i(i-1)w_i + \sum_{i=1}^{\infty} i w_i - E^2(W) = \sum_{i=1}^{\infty} i^2 w_i - E^2(W)$.

Finally, we note that $B$ represents the idle time, and the moments of the idle time follow by straightforward calculation. From (4.26), $B(z) = \sum_{i=0}^{\infty} b_i z^i$; differentiating with respect to $z$ gives $B'(z) = \sum_{i=1}^{\infty} i b_i z^{i-1}$, and the first moment of the idle time, at $z = 1$, is $E(B) = B'(1) = \sum_{i=1}^{\infty} i b_i$.

4.4 Wiener-Hopf Factorization by Successive Substitution

In this section we show a method to determine the $\alpha_i$ and $b_j$ from (4.23) and (4.24). In doing so, one may conceivably obtain a wrong factorization; hence criteria for deciding whether or not a given factorization is the Wiener-Hopf one become important. In [12], the following conditions are shown to be sufficient for a given factorization to be the correct Wiener-Hopf factorization.

Theorem 4.4 (Sufficient conditions for the Wiener-Hopf factorization):

$\alpha_i \ge 0$, $i = 1, 2, \ldots$ (4.33)
$b_j \ge 0$, $j = 0, 1, 2, \ldots$ (4.34)
$\sum_{j=0}^{\infty} b_j = 1$. (4.35)

Using (4.35), set $S = 1 - b_0 = \sum_{j=1}^{\infty} b_j$ and notice that $a_i = \alpha_i(1 - b_0)$. If we take $a_0 = b_0$, $a_i = \alpha_i(1 - b_0)$ and $S = 1 - b_0 = 1 - a_0$, and use these substitutions in (4.21) and (4.22), we get two new forms of the Wiener-Hopf equations:

$a_j = u_j + \Big(\sum_{i=1}^{\infty} a_{i+j} b_i\Big)\big/S$, $j \ge 0$, (4.36)

and

$b_j = u_{-j} + \Big(\sum_{i=1}^{\infty} a_i b_{i+j}\Big)\big/S$, $j \ge 0$. (4.37)

The recursions (4.36) and (4.37) are the starting point of Algorithm 1 in [12]. In [12], (4.21) and (4.22) are combined to give the following three algorithms for obtaining the $\alpha_i$ and $b_j$. Here we give a proof for Algorithm 1 that the solution obtained in this way always converges to the Wiener-Hopf factorization.

Algorithm 1 by [12]:
1. $a_i^0 = 0$, $i = 1, 2, \ldots, h$; $b_j^0 = 0$, $j = 0, 1, 2, \ldots, g$.
2. For $m = 0, 1, 2, \ldots$ do the following until $\max\big(|a_i^m - a_i^{m+1}|\big) < \epsilon$:
2.1. $b_j^{m+1} = u_{-j} + \sum_{i=1}^{\infty} a_i^m b_{i+j}^m \big/ (1 - b_0^m)$, $j = 0, 1, \ldots, g$.
2.2. $a_j^{m+1} = u_j + \sum_{i=1}^{\infty} a_{i+j}^m b_i^m \big/ (1 - b_0^m)$, $j = 1, 2, \ldots, h$.

Proof: We want to prove that the values $a_j^m$ and $b_j^m$ of Algorithm 1 converge to the Wiener-Hopf factorization. Set up two sequences $\{a_i^m\}$ and $\{b_j^m\}$ as follows: for $j = 0, 1, \ldots, g$ and $i = 1, 2, \ldots, h$ it is given that $b_j^0 = 0$ and $a_i^0 = 0$; define

$b_j^{m+1} = u_{-j} + \sum_{i=1}^{\infty} a_i^m b_{i+j}^m \big/ (1 - b_0^m)$ and $a_i^{m+1} = u_i + \sum_{k=1}^{\infty} a_{k+i}^m b_k^m \big/ (1 - b_0^m)$,

which operate recursively in $m$. Let $a_i$ and $b_j$ be the solutions of the Wiener-Hopf factorization, and suppose $a_i^m \le a_i$ for all $m$. Then

$a_i^{m+1} = u_i + \sum_{k=1}^{\infty} a_{k+i}^m b_k^m \big/(1 - b_0^m) \le u_i + \sum_{k=1}^{\infty} a_{k+i} b_k \big/(1 - b_0) = a_i$  [from Eq. (4.36)].

Thus $a_i^{m+1} \le a_i$ for all $m$; and, as shown below, $a_i^{m+1} \ge a_i^m$. (4.38)

Similarly, for $\{b_j^m\}$, if $b_j^m \le b_j$ for all $m$, then

$b_j^{m+1} = u_{-j} + \sum_{i=1}^{\infty} a_i^m b_{i+j}^m \big/(1 - b_0^m) \le u_{-j} + \sum_{i=1}^{\infty} \dfrac{a_i b_{i+j}}{1 - b_0} = b_j$.

Correspondingly, for the $a_i$ we also have $a_i^{m+2} = u_i + \sum_{k=1}^{\infty} a_{k+i}^{m+1} b_k^{m+1} \big/(1 - b_0^{m+1})$, which implies $a_i^{m+2} \ge a_i^{m+1}$. (4.39)

Note: if $b_0^m \le b_0$, then $-b_0^m \ge -b_0$, i.e. $1 - b_0^m \ge 1 - b_0$, so $1/(1 - b_0^m) \le 1/(1 - b_0)$.

We now prove that $a_i^{m+1} \ge a_i^m$. This is clearly true for $m = 0$, since $a_i^1 = u_i \ge 0 = a_i^0$ [as is also clear from step 1 and the first iteration]. We then use induction for all $m$: our statement is $S(m): a_i^{m+1} \ge a_i^m$. For $m = 1$ we have $a_i^2 \ge a_i^1$, so the statement also holds there. Now, if $a_i^{m+1} \ge a_i^m$, then clearly $a_i^{m+2} \ge a_i^{m+1}$, since each iterate is a nondecreasing function of the previous values; in the same way we can show that $b_j^{m+1} \ge b_j^m$ implies $b_j^{m+2} \ge b_j^{m+1}$. Thus the statement is true by the principle of induction, from which it follows that $a_i^m$ is bounded and monotonic with respect to $m$; similarly, $b_j^m$ is also bounded and monotonic. So the sequences $\{a_i^m\}$ and $\{b_j^m\}$ converge to the Wiener-Hopf solution.
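A minimal implementation of Algorithm 1 follows, assuming a finite-support $u$ whose values are chosen only for illustration; the printed $\sum b_j \approx 1$ confirms that the limit satisfies (4.35).

```python
def algorithm1(u, g, h, eps=1e-10, max_iter=10000):
    """Successive substitution (Algorithm 1 of [12]) on Eqs. (4.36)-(4.37).
    u is a dict with u[k] = Pr(U = k), -g <= k <= h. Returns alpha_i and b_j;
    recall a_i = alpha_i * (1 - b_0)."""
    a = {i: 0.0 for i in range(1, h + 1)}
    b = {j: 0.0 for j in range(0, g + 1)}
    for _ in range(max_iter):
        s = 1.0 - b[0]
        b_new = {j: u.get(-j, 0.0) +
                    sum(a[i] * b.get(i + j, 0.0) for i in a) / s for j in b}
        a_new = {i: u.get(i, 0.0) +
                    sum(a.get(k + i, 0.0) * b[k] for k in range(1, g + 1)) / s
                 for i in a}
        done = max(abs(a_new[i] - a[i]) for i in a) < eps
        a, b = a_new, b_new
        if done:
            break
    alpha = {i: a[i] / (1.0 - b[0]) for i in a}
    return alpha, b

u = {-2: 0.3, -1: 0.25, 0: 0.2, 1: 0.15, 2: 0.1}    # illustrative, E(U) < 0
alpha, b = algorithm1(u, g=2, h=2)
print(alpha, b)
print("sum b =", sum(b.values()))                    # -> 1, i.e. Eq. (4.35) holds
```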
Algorithm 2 by [12]:
1. $\alpha_i^0 = 0$, $i = 1, 2, \ldots, h$.
2. For $m = 0, 1, 2, \ldots$ do the following until $\max\big(|\alpha_i^m - \alpha_i^{m+1}|\big) < \epsilon$:
2.1. $b_j^m = u_{-j} + \sum_{i=1}^{\infty} \alpha_i^{m-1} b_{j+i}^m$, $j = g, g-1, \ldots, 0$.
2.2. $S^m = \sum_{j=1}^{\infty} b_j^m$.
2.3. $\alpha_j^m = \big(u_j + \sum_{i=1}^{\infty} \alpha_{i+j}^m b_i^m\big)\big/S^m$, $j = h, h-1, \ldots, 1$.

In Algorithm 2 of [12], the sequence $\{\alpha_j^m\}$ is used instead of $\{a_i^m\}$, with $S^m = \sum_{j=1}^{\infty} b_j^m$ in step 2.2 used to obtain $\alpha_j^m$. For this algorithm, [12] assume that $\alpha_j^m$ and $b_j^m$ converge to the Wiener-Hopf factorization, and verify that Algorithm 2 always satisfies (4.33), (4.34) and (4.35). However, they did not show theoretically that $\alpha_j^m$ remains bounded; we derive a simple proof that it is indeed bounded, as follows. Step 2.1 ensures that $b_j^m > u_{-j}$, which implies $S^m > \sum_{i=1}^{\infty} u_{-i}$. From step 2.3, as already shown in [12],

$\alpha_j^m = \dfrac{u_j}{S^m} + \sum_{i=1}^{\infty} \alpha_{i+j}^m\,\dfrac{b_i^m}{\sum_{k=1}^{\infty} b_k^m} < \dfrac{u_j}{\sum_{i=1}^{\infty} u_{-i}} + \max_{k>j}\{\alpha_k^m\}$.

To see that this inequality is true, let $L = \max_{k>j}\{\alpha_k^m\}$. Then $\alpha_{i+j}^m \le L$ [since $i + j > j$], and clearly

$\sum_{i=1}^{\infty} \alpha_{i+j}^m\,\dfrac{b_i^m}{\sum_k b_k^m} \le \sum_{i=1}^{\infty} L\,\dfrac{b_i^m}{\sum_k b_k^m} = L\,\dfrac{\sum_i b_i^m}{\sum_k b_k^m} = L = \max_{k>j}\{\alpha_k^m\}$.

The following algorithm (Algorithm 3 in [12]) combines some of the features of Algorithms 1 and 2. In [12] it was not proved theoretically that the resulting sequences converge, nor that the limit obtains the Wiener-Hopf factorization.

Algorithm 3 by [12]:
1. $\alpha_i^0 = 0$, $i = 1, 2, \ldots, h$.
2. For $m = 0, 1, 2, \ldots$ do the following until $\max\big(|\alpha_i^m - \alpha_i^{m+1}|\big) < \epsilon$ for given $\epsilon > 0$:
2.1. $b_j^m = u_{-j} + \sum_{i=1}^{\infty} \alpha_i^{m-1} b_{j+i}^m$, $j = g, g-1, \ldots, 0$.
2.2. $S^m = \big(1 - b_0^m + \sum_{j=1}^{g} b_j^m\big)\big/2$.
2.3. $\alpha_j^m = \big(u_j + \sum_{i=1}^{\infty} \alpha_{i+j}^m b_i^m\big)\big/S^m$, $j = h, h-1, \ldots, 1$.

Like [12], we could not prove mathematically that Algorithm 3 converges to the Wiener-Hopf factorization. Moreover, we have the following important theorem from [12], to whose proof we add our own explanations for clarity.

Theorem 4.5: There are exactly two solutions satisfying (4.23), (4.24), (4.33) and (4.34). Of these two solutions, one satisfies (4.35), whereas the other satisfies

$\sum_{i=1}^{\infty} \alpha_i = A(1) = 1$. (4.40)

Proof: First it is shown that there are two factorizations satisfying (4.23), (4.24), (4.33) and (4.34). Let $\alpha_i^{(1)}$ and $b_j^{(1)}$ be the Wiener-Hopf solution, and let $A_1(z)$ and $B_1(z)$ be the corresponding p.g.f.s. We now generate a second solution, $\alpha_i^{(2)}$, $b_j^{(2)}$ with p.g.f.s $A_2(z)$ and $B_2(z)$, as follows. Because $A_1(z)$ is the p.g.f. of a defective distribution, $A_1(1) < 1$; and for $z > 0$, $A_1(z)$ is monotonically increasing with $A_1(\infty) = \infty$.

Note: we know $A_1(z) = \sum_{i=1}^{\infty} \alpha_i^{(1)} z^i$ and $A_1(1) = \sum_i \alpha_i^{(1)} < 1$. Since $A_1(0) = 0$, $A_1(z)$ increases monotonically for $z > 0$; moreover, $1 - A_1(z)$ is decreasing with $1 - A_1(0) = 1$, hence $1 - A_1(z)$ has exactly one positive root.
Consequently, $1 - A_1(z)$ has exactly one positive zero $z = z_0$, with $z_0 > 1$. Using this zero, together with the zero $z = 1$ of $1 - B_1(1/z)$, one can write (4.25) as

$1 - U(z) = \big((1 - z/z_0)A^*(z)\big)\big((1 - 1/z)B^*(1/z)\big)$, (4.41)

since $1 - A_1(z) = (z_0 - z)\bar{A}(z) = (1 - z/z_0)\big(z_0\bar{A}(z)\big) = (1 - z/z_0)A^*(z)$ and, analogously, $1 - B_1(1/z) = (1 - 1/z)B^*(1/z)$, where $A^*(z)$ and $B^*(z)$ are polynomials of the form

$A^*(z) = 1 + \alpha_1^* z + \alpha_2^* z^2 + \cdots + \alpha_{h-1}^* z^{h-1}$,
$B^*(z) = (1 - b_0) + b_1^* z + b_2^* z^2 + \cdots + b_{g-1}^* z^{g-1}$.

Here the constant terms $1$ and $(1 - b_0)$ of $A^*(z)$ and $B^*(z)$ arise in both cases from factoring the polynomials $1 - A_1(z)$ and $1 - B_1(1/z)$, respectively. Since $1 - A_1(z) = (1 - z/z_0)A^*(z)$, one has $1 - \sum_{i=1}^{h} \alpha_i^{(1)} z^i = (1 - z/z_0)\sum_{i=0}^{h-1} \alpha_i^* z^i$ (with $\alpha_0^* = 1$). Expanding both sides,

$1 - \alpha_1^{(1)} z - \alpha_2^{(1)} z^2 - \cdots - \alpha_h^{(1)} z^h = \big(1 + \alpha_1^* z + \cdots + \alpha_{h-1}^* z^{h-1}\big) - \dfrac{1}{z_0}\big(z + \alpha_1^* z^2 + \cdots + \alpha_{h-1}^* z^h\big)$,

and equating the coefficients of $z^1, z^2, \ldots, z^h$ we finally get

$-\alpha_i^{(1)} = \alpha_i^* - \alpha_{i-1}^*/z_0$, $i = 1, 2, \ldots$ (4.42)

Equation (4.42), together with $\alpha_h^* = 0$, implies that the $\alpha_i^*$, $i > 0$, are nonnegative and nonincreasing in $i$, since we subtract $\alpha_{i-1}^*/z_0$ from $\alpha_i^*$; this can easily be shown by complete induction. Similarly, the $b_j^*$ satisfy

$-b_j^{(1)} = b_j^* - b_{j-1}^*$. (4.43)

The $b_j^*$ are also nonnegative; in fact, $b_j^* = 1 - b_0^{(1)} - b_1^{(1)} - \cdots - b_j^{(1)} \ge 0$ (because of (4.33) and (4.34)). Since

$(1 - z/z_0)(1 - 1/z) = \dfrac{1}{z_0}(1 - z)(1 - z_0/z)$,

equation (4.25) can also be written as

$1 - U(z) = z_0^{-1}(1 - z)A^*(z)(1 - z_0/z)B^*(1/z) = \big(1 - A_2(z)\big)\big(1 - B_2(1/z)\big)$,

where

$1 - A_2(z) = (1 - z)A^*(z)$, (4.44)
$1 - B_2(1/z) = (1 - z_0/z)B^*(1/z)\big/z_0$. (4.45)

Hence $A_2(z)$ and $B_2(z)$ define a factorization of $1 - U(z)$, and it remains to show that all the coefficients $\alpha_i^{(2)}$ and $b_j^{(2)}$ are nonnegative. Using (4.42) and (4.44), one has

$-\alpha_i^{(2)} = \alpha_i^* - \alpha_{i-1}^* < \alpha_i^* - \alpha_{i-1}^*/z_0 = -\alpha_i^{(1)} \le 0$.

Similarly, from (4.45) one finds $-b_j^{(2)} = b_j^* - z_0 b_{j-1}^* < b_j^* - b_{j-1}^* = -b_j^{(1)}$. We can therefore conclude that

$\alpha_i^{(2)} > \alpha_i^{(1)} \ge 0$ and $b_j^{(2)} > b_j^{(1)} \ge 0$. (4.46)

Thus it is clear from the above discussion that there are two factorizations satisfying (4.23), (4.24), (4.33) and (4.34). We now show that there are no other factorizations satisfying these conditions. To do this, we classify the zeros of $1 - U(z)$; using (4.41), they divide into three groups: the first group consists of the two positive zeros $z = 1$ and $z = z_0$, the second group contains all zeros of $A^*(z)$, and the third group contains all zeros of $B^*(1/z)$. No zero of $A^*(z)$ can satisfy $|z| < z_0$; this is because all zeros of $A^*(z)$ are also zeros of $1 - A_1(z)$, which follows from (4.41). Since $A_1(z)$ increases from $A_1(0) = 0$ and reaches $1$ at $z = z_0$ (where $1 - A_1(z_0) = (1 - z_0/z_0)A^*(z_0) = 0$, i.e. $A_1(z_0) = 1$), we have

$A_1(z) < 1$, $0 \le z < z_0$. (4.47)

Since $A_1(z)$ has only positive coefficients, $|A_1(z)| \le A_1(|z|)$, and because of (4.47), $A_1(|z|) < 1$ for $|z| < z_0$. Hence $1 - A_1(z)$ cannot have any zeros inside the circle of radius $z_0$, and as a consequence all zeros of $A^*(z)$ must lie outside this circle.
A similar argument shows that all zeros of $B^*(z)$ must lie outside the unit circle, which means that the zeros of $B^*(1/z)$ must lie inside the unit circle. If $1 - A(z)$ is to be a factor of $1 - U(z)$, the only two positive zeros available to this factor are $z = z_0$ and $z = 1$. Suppose we let $z = z_0$ be the unique positive zero of $1 - A(z)$. In that case one can show, by essentially the same arguments as before, that all other zeros of $1 - A(z)$ have modulus greater than $z_0$, i.e. $|z| > z_0$; moreover, for complex $z$, $|A(z)| \le A(|z|) < 1$, which implies there are no zeros with $|z| < z_0$, where $z_0$ is real. Hence $1 - A(z)$ can only include zeros of $A^*(z)$, and since a similar condition can be shown to hold for $B(z)$, it follows that $1 - A(z)$ must include all the zeros of $A^*(z)$, that is, $A(z) = A_1(z)$. In the same fashion, it can be shown that $A(z)$ must equal $A_2(z)$ if $z = 1$ is chosen as the unique positive zero of $1 - A(z)$. Therefore, according to which of the two available zeros $z = z_0$ and $z = 1$ is assigned to $A(z)$, and similarly for $B(z)$, there are no other factorizations that can satisfy (4.23), (4.24), (4.33) and (4.34). This completes the proof of Theorem 4.5.

Chapter 5 Analytical Comparison of the GI/G/1 Queue with the Geom/Geom/1 Queue

5.1 Overview

As we know, the notation GI stands for a general independent interarrival time distribution, G stands for a general service time distribution, and the number 1 means that there is one server in the system. More precisely, the symbol G represents a general or arbitrary probability distribution: no assumption is made about the precise form of the distribution, and results in these cases apply to any probability distribution. In our work, however, we are interested in the arithmetic GI/G/1 queue, where the interarrival and service times are discrete random variables and their distribution functions are both defined on finite intervals. So far, in chapters 3 and 4, we have discussed several methods for determining the waiting time distribution of a discrete GI/G/1 queue. We are now interested in using the Geom/Geom/1 queue, as an example of a GI/G/1 queue, to explore what happens to the waiting time distribution when the interval is infinite; this is the topic of this chapter. To do so, let the Geom/Geom/1 queue first be defined. Geom/Geom/1 is a single-server queue in which interarrival and service times are geometrically distributed. In this setting, the time axis is segmented into a sequence of time intervals (slots) of unit duration, but the times are unbounded, as with the exponential distribution. Moreover, the Geom/Geom/1 queue is known to be the discrete analogue of the M/M/1 queue.

First of all, we should be clear about the difference between the M/M/1 queue and the Geom/Geom/1 queue. The M/M/1 queue has exponential interarrival times, exponential service times, and a single server. In general, M refers to three facts: (1) the customers arrive according to a Poisson process, so the interarrival time distribution is exponential; (2) the service time distribution is also exponential; (3) the process is therefore memoryless, or Markovian. Since the successive interarrival and service times are continuously distributed and memoryless, they are necessarily exponential.
However, we have already seen that, for the arithmetic GI/G/1 queue based on a finite interval, $g$ of the roots of the polynomial $1 - U(z) = (1 - A(z))(1 - B(1/z))$ lie inside or on the unit circle of the complex plane, and $h$ of them lie outside the unit circle. If, on the other hand, we use the same polynomial for a Geom/Geom/1 queue, whose interarrival and service times have infinite support, the degree of this polynomial goes to infinity, and the number of roots must be infinite. To investigate this, we first postulate that the waiting time of the Geom/Geom/1 queue is geometrically distributed.

5.2 The Waiting Time Distribution for the Geom/Geom/1 Queue

In this section we examine whether or not the waiting time of the Geom/Geom/1 queue is strictly geometric. We use the same notation as in chapter 4: $A_n$ is the arrival time of the $n$th customer $C_n$, $T_n$ is the interarrival time, i.e. $A_{n+1} = A_n + T_n$, $S_n$ is the service time of $C_n$, and $W_n$ is the waiting time of $C_n$. Then we have

$W_{n+1} = D_n - A_{n+1} = D_n - A_n - T_n$,

where $D_n$ is the departure time of $C_n$ and $D_n = A_n + W_n + S_n$. Hence

$W_{n+1} = A_n + W_n + S_n - A_n - T_n = W_n + S_n - T_n$. (5.1)

Since we are dealing with the Geom/Geom/1 queue, $S_n$ and $T_n$ are both geometrically distributed, and we assume $S_n > 0$ and $T_n > 0$. The probability mass functions of $S_n$ and $T_n$ are

$\Pr(S_n = i) = p^i(1 - p)$, $i = 1, 2, \ldots$, and $\Pr(T_n = i) = q^i(1 - q)$, $i = 1, 2, \ldots$,

respectively, where $p$ and $q$ are the corresponding parameters. As before, the difference between the service time of customer $C_n$ and the interarrival time between $C_n$ and $C_{n+1}$ is $U_n = S_n - T_n$. We now find $\Pr(U_n = i)$ for the Geom/Geom/1 queue. To do so, set $U_n = i$ when $S_n = k + i$ and $T_n = k$. Thus

$\Pr(U_n = i) = \sum_{k=0}^{\infty} \Pr(S_n = k + i)\Pr(T_n = k) = \sum_{k=0}^{\infty} p^{k+i}(1-p)\,q^k(1-q) = p^i(1-p)(1-q)\sum_{k=0}^{\infty}(pq)^k = \dfrac{p^i(1-p)(1-q)}{1 - pq}$. (5.2)

For $i < 0$, we can simply exchange $p$ and $q$ to get

$\Pr(U_n = -i) = \dfrac{q^i(1-p)(1-q)}{1 - pq}$. (5.3)

Now (5.1) allows us to find a relation for the distribution of $W_n$ for the Geom/Geom/1 queue:

$\Pr(W_{n+1} = j) = \sum_{k=0}^{\infty} \Pr(W_n = k)\Pr(U_n = j - k)$, $j > 0$. (5.4)

We then suppose the distribution of $W_n$ is geometric with unknown parameter $x$:

$\Pr(W_{n+1} = j) = \Pr(W_n = 0)\cdot x^j = \pi_0 x^j = \Pr(W_n = j)$, (5.5)

because in equilibrium $W_n$ and $W_{n+1}$ eventually have the same distribution. Using (5.5) in (5.4), we get $\pi_0 x^j = \sum_{k=0}^{\infty} \pi_0 x^k \Pr(U_n = j - k)$; cancelling $\pi_0$ from both sides yields

$x^j = \sum_{k=0}^{j} x^k \Pr(U_n = j - k) + \sum_{k=j+1}^{\infty} x^k \Pr(U_n = j - k) = \Big[\sum_{k=0}^{j} x^k p^{j-k} + \sum_{k=j+1}^{\infty} x^k q^{k-j}\Big]\cdot\dfrac{(1-p)(1-q)}{1 - pq}$.

Therefore

$x^j = \dfrac{(1-p)(1-q)}{1-pq}\Big[p^j\cdot\dfrac{1 - (x/p)^{j+1}}{1 - x/p} + \dfrac{q\,x^{j+1}}{1 - xq}\Big]$. (5.6)

We now need to find the value of $x$ in order to know the waiting time for the Geom/Geom/1 queue. Simplifying (5.6), and using $p^j\big(1 - (x/p)^{j+1}\big)/(1 - x/p) = (p^{j+1} - x^{j+1})/(p - x)$, we get

$x^j = \dfrac{(1-p)(1-q)}{1-pq}\Big[\dfrac{p^{j+1} - x^{j+1}}{p - x} + \dfrac{q\,x^{j+1}}{1 - xq}\Big] = \dfrac{(1-p)(1-q)}{1-pq}\Big[x^{j+1}\Big(\dfrac{q}{1 - xq} - \dfrac{1}{p - x}\Big) + \dfrac{p^{j+1}}{p - x}\Big]$,

and since $\dfrac{q}{1-xq} - \dfrac{1}{p-x} = \dfrac{q(p-x) - (1-xq)}{(1-xq)(p-x)} = \dfrac{pq - 1}{(1-xq)(p-x)}$,

$x^j = \dfrac{(1-p)(1-q)}{1-pq}\Big[x^{j+1}\cdot\dfrac{pq-1}{(1-xq)(p-x)} + \dfrac{p^{j+1}}{p-x}\Big]$.

Multiplying out, and using $\dfrac{(1-p)(1-q)}{1-pq}(pq - 1) = -(1-p)(1-q)$, we get

$x^j = -(1-p)(1-q)\cdot\dfrac{x^{j+1}}{(1-xq)(p-x)} + \dfrac{(1-p)(1-q)}{1-pq}\cdot\dfrac{p^{j+1}}{p-x}$,

i.e.

$(p - x)x^j = p^{j+1}\cdot\dfrac{(1-p)(1-q)}{1-pq} - x^{j+1}\cdot\dfrac{(1-p)(1-q)}{1-xq}$,

i.e.

$(p - x)x^j + (1-p)(1-q)\cdot\dfrac{x^{j+1}}{1-xq} = (1-p)(1-q)\cdot\dfrac{p^{j+1}}{1-pq}$,

i.e.

$x^j\Big[(p - x) + (1-p)(1-q)\cdot\dfrac{x}{1-xq}\Big] = (1-p)(1-q)\cdot\dfrac{p^{j+1}}{1-pq}$,

i.e.

$x^j\Big[\dfrac{(p-x)(1-xq) + (1-p)(1-q)x}{1-xq}\Big] = (1-p)(1-q)\cdot\dfrac{p^{j+1}}{1-pq}$,

i.e.

$x^j\Big[\dfrac{p + x^2 q - x(p+q)}{1-xq}\Big] = (1-p)(1-q)\cdot\dfrac{p^{j+1}}{1-pq}$. (5.7)

If we let $x = 1$ in (5.7), the bracketed numerator works out to $(p + q - p - q) = 0$, while the right-hand side $(1-p)(1-q)p^{j+1}/(1-pq)$ is nonzero for finite $j$; and letting $x = 0$ in (5.7) does not give a solution either. However, as $j \to \infty$ the right-hand side vanishes (since $p^{j+1} \to 0$), so asymptotically $x$ must satisfy $x^2 q - x(p+q) + p = 0$. Solving for $x$,

$x = \dfrac{p + q \pm \sqrt{(p+q)^2 - 4pq}}{2q} = \dfrac{p + q \pm (p - q)}{2q} = \dfrac{p}{q}\ \text{or}\ 1$. (5.8)

Thus the waiting time distribution of the Geom/Geom/1 queue, the discrete analogue of the M/M/1 queue, is asymptotically geometric with $x = p/q$.
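Both conclusions are easy to check numerically: the quadratic in (5.8) has roots $\{p/q, 1\}$, and a simulated Geom/Geom/1 queue shows tail ratios $\Pr(W = j+1)/\Pr(W = j)$ close to $p/q$. The sketch below samples from the standard geometric pmf $(1-a)a^{i-1}$, $i \ge 1$, whose increment distribution coincides with (5.2)-(5.3); sampling noise makes the larger-$j$ ratios approximate only.

```python
import math, random

p, q = 0.2, 0.6                 # service parameter p < q, so the queue is stable
# Roots of q x^2 - (p + q) x + p = 0, cf. Eq. (5.8): expected {p/q, 1}
disc = math.sqrt((p + q) ** 2 - 4 * p * q)
print(sorted([(p + q - disc) / (2 * q), (p + q + disc) / (2 * q)]))  # [1/3, 1.0]

def geom(rng, a):
    """Sample i >= 1 with Pr(i) = (1 - a) * a**(i - 1), by inverse transform."""
    return 1 + int(math.log(1.0 - rng.random()) / math.log(a))

rng = random.Random(0)
w, counts = 0, {}
for _ in range(500000):
    counts[w] = counts.get(w, 0) + 1
    w = max(0, w + geom(rng, p) - geom(rng, q))   # Lindley step, U_n = S_n - T_n
for j in range(1, 6):           # tail ratio w_{j+1}/w_j should approach p/q
    print(j, counts.get(j + 1, 0) / counts.get(j, 1))
```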
Chapter 6 Numerical Experiment

6.1 Numerical Results and Examples

A GI/G/1 queue is frequently used in the performance evaluation of manufacturing, telecommunications and computer systems, and the steady-state waiting time is an important measure of the effectiveness of this model. Closed-form expressions for the distribution and moments of the steady-state waiting time are available for particular cases of this model; in many cases, however, the exact analysis has been difficult. The numerical approximations presented in this chapter are intended to bridge this gap. In this section, we use the techniques discussed in the previous chapters to solve for the steady-state waiting time and the average queue length of the discrete-time GI/G/1 queue. We use the Uniform, Geometric and Gamma distributions to calculate the computed waiting time distribution, and then compare the results with the closed-form approximation formula in order to determine the effectiveness of our method. We now find the waiting time distribution of a discrete GI/G/1 queue with the method described in chapter 4. We developed the following algorithm for this purpose:

Our Algorithm:
1. $\alpha_i^0 = 0$, $i = 1, 2, \ldots, h$.
2. For $m = 0, 1, 2, \ldots$ do the following until $\max\big(|\alpha_i^m - \alpha_i^{m+1}|\big) < \epsilon$:
2.1. $b_j^m = u_{-j} + \sum_{i=1}^{\infty} \alpha_i^{m-1} b_{j+i}^m$, $j = g, g-1, \ldots, 0$.
2.2. For $0 \le t \le 1$, $S^m = (1 - t)\big(1 - b_0^m\big) + t\sum_{j=1}^{g} b_j^m$.
2.3. $\alpha_j^m = \big(u_j + \sum_{i=1}^{\infty} \alpha_{i+j}^m b_i^m\big)\big/S^m$, $j = h, h-1, \ldots, 1$.

Our Algorithm is identical to Algorithm 3 in [12] except for step 2.2, where we use $S^m = (1-t)(1 - b_0^m) + t\sum_{j=1}^{g} b_j^m$ for $0 \le t \le 1$. In Algorithm 3, Grassmann and Jain [12] took $t = 1/2$ as the weight in step 2.2, whereas we consider $(1-t)$ and $t$ as the weights of $(1 - b_0^m)$ and $\sum_{j=1}^{g} b_j^m$, respectively, instead of $1/2$. We use 100 different values of $t$ between 0 and 1, with $t$ increasing by 0.01, and can thus determine at which values of $t$ the convergence is faster, apart from $t = 1/2$. On the other hand, we increase the value of $m$ from 0 up to the value at which the condition $\max\big(|\alpha_i^m - \alpha_i^{m+1}|\big) < \epsilon$ is fulfilled; for each value of $m$, we repeat the calculation using our Algorithm. Furthermore, the approximations of the expected waiting time and the average queue length can be obtained at each value of $t$, and the number of iterations can easily be recorded. Naturally, we take the minimum number of iterations over the values of $t$, which is always less than or equal to the number of iterations of Algorithm 3 in [12].
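The following is a sketch of the iteration under the assumption of a small illustrative $u$; it is not the thesis's actual program. The weight $t$ enters only in step 2.2, and $E(W)$ then follows from Theorem 4.3; the interarrival mean used for Little's formula at the end is a hypothetical value for this toy input.

```python
def weighted_wh(u, g, h, t=0.5, eps=1e-6, max_iter=500):
    """Sketch of Our Algorithm: Algorithm 3 of [12] with the weighted average
    S^m = (1-t)(1 - b_0^m) + t * sum_{j=1..g} b_j^m in step 2.2."""
    alpha = {i: 0.0 for i in range(1, h + 1)}
    b = {j: 0.0 for j in range(0, g + 1)}
    for m in range(max_iter):
        for j in range(g, -1, -1):                         # step 2.1
            b[j] = u.get(-j, 0.0) + sum(alpha[i] * b.get(i + j, 0.0)
                                        for i in alpha)
        s = (1 - t) * (1 - b[0]) + t * sum(b[j] for j in range(1, g + 1))
        new = {}
        for j in range(h, 0, -1):                          # step 2.3
            new[j] = (u.get(j, 0.0) + sum(new.get(i + j, 0.0) * b[i]
                                          for i in range(1, g + 1))) / s
        diff = max(abs(new[i] - alpha[i]) for i in alpha)
        alpha = new
        if diff < eps:
            break
    return alpha, b, m + 1

u = {-2: 0.3, -1: 0.25, 0: 0.2, 1: 0.15, 2: 0.1}           # illustrative input
alpha, b, iters = weighted_wh(u, g=2, h=2, t=0.5)
w0 = 1.0 - sum(alpha.values())
EW = sum(i * a for i, a in alpha.items()) / w0             # Theorem 4.3
ET = 2.05                  # hypothetical mean interarrival time for this example
print(iters, EW, EW / ET)  # Little's formula: L_q = lambda * W_q, lambda = 1/E(T)
```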
In addition, it is possible to measure the total time required to converge for each value of $t$ using our method, and thus to find the time required at those values of $t$ where we get the minimum number of iterations. In this way, the best value of $t$ can be determined so as to reach an optimal situation. We can also record the actual number of iterations needed to obtain $\alpha_j$ and $b_j$. In all our numerical experiments, our Algorithm always converges to the correct solution, and most of the time its convergence is faster than that of Algorithm 3. Analytically, however, there is so far no proof that our Algorithm converges, and even if it does, no proof that it obtains the Wiener-Hopf factorization. For our numerical calculations, the precision $\epsilon$ of our Algorithm is set to $10^{-6}$, and it is possible to solve problems with $h$ and $g$ above 2000 using our Algorithm. Several values of $\rho$ are used, and the numbers of iterations are also reported. Then, using Little's formula ($L_q = \lambda \cdot W_q$), we can easily calculate the approximate average queue length, which we call $L_{q_1}$; thus $L_{q_1} = \lambda \cdot W_{q_1}$.

Uniform Distributions: A random variable $X$ is said to have a discrete uniform distribution on the integers $1, 2, \ldots, N$ if its probability mass function has the form

$f(x) = 1/N$, $x = 1, 2, \ldots, N$, with $E(X) = \dfrac{N+1}{2}$ and $\mathrm{Var}(X) = \dfrac{(N+1)(N-1)}{12}$.

In our case, $\Pr(T_n = i) = 1/N$, $i = 1, 2, \ldots, N$, and $\Pr(S_n = i) = 1/N$, $i = 1, 2, \ldots, N$. The following tables give results for some queues with uniform interarrival and service times. In all cases, the minimal service time is 1, as is the minimal interarrival time; the maximum interarrival and service times vary, as shown in Tables 6.1-6.4. Table 6.1 contains the results obtained by Algorithm 3 of [12], as well as the results obtained by our method. In all cases we get a better approximation of $E(W)$, and in most cases the number of iterations of our method is smaller than theirs.

Table 6.1 Results of using Uniform Distributions by our Algorithm and by Algorithm 3 [12]

| Case | Max(T) | Max(S) | $\rho$ | Algorithm 3 $E(W)$ | Iter. | Our method $E(W)$ | Iter. |
|---|---|---|---|---|---|---|---|
| 1 | 50 | 25 | 0.510 | 3.720 | 5 | 3.7199 | 4 |
| 2 | 50 | 40 | 0.804 | 24.9 | 5 | 24.8725 | 5 |
| 3 | 49 | 48 | 0.980 | 381.0 | 6 | 380.9675 | 5 |
| 4 | 66 | 65 | 0.985 | 700.1 | 6 | 700.0980 | 5 |
| 5 | 99 | 98 | 0.990 | 1596.6 | 6 | 1594.4171 | 5 |

Note: "Iter." denotes the number of iterations in all the tables of chapter 6.

Table 6.2 gives the approximations of the average queue length in queue calculated by our method; we also include the optimal values of $t$ and the time (sec) required for the minimum number of iterations.

Table 6.2 Results for the Average Queue Length in queue using our method

| Case | Max(T) | Max(S) | Queue Length $L_q$ | Optimum values of $t$ | Best value of $t$ & required time (sec) for minimum iter. |
|---|---|---|---|---|---|
| 1 | 50 | 25 | 0.1459 | 0.22 to 0.42 | 0.42 (time=0.0014) |
| 2 | 50 | 40 | 0.9754 | 0.43 to 0.62 | 0.60 (time=0.0023) |
| 3 | 49 | 48 | 15.2387 | 0.57 to 0.67 | 0.60 (time=0.0025) |
| 4 | 66 | 65 | 20.8984 | 0.53, 0.54, 0.56 to 0.67 | 0.60 (time=0.0034) |
| 5 | 99 | 98 | 31.8883 | 0.53 to 0.67 | 0.55 (time=0.0053) |

In Tables 6.1 and 6.2, there are more than 10 values of $t$ between 0 and 1 at which we have the optimal situation. In each case, however, we choose the best approximation among the results at the optimal points according to the minimum number of iterations, where the time required for convergence is also shortest.
For example, in our first case, the minimum number of iterations is 4 for $t = 0.22$ to $t = 0.42$, and the shortest time required for this number of iterations among these values of $t$ is 0.0014 sec, attained at $t = 0.22, 0.25, 0.26, 0.27, 0.30, 0.31, 0.32, 0.35, 0.36, 0.37, 0.41$ and $0.42$. We could therefore choose any of these values as the best value of $t$; we have chosen 0.42 as the best value of $t$ for the first case in Table 6.2. We also noticed that in some cases we obtain the same approximation at all 100 points of $t$, but the numbers of iterations are not the same.

Tables 6.3 and 6.4 show that with our algorithm we can easily compute the parameters with $h$ and $g$ exceeding 2000. In the last two rows we worked with $h$ and $g$ exceeding 2000 (marked "new data"), and therefore we do not have results for Algorithm 3. In most cases, however, the numbers of iterations are similar to those of Algorithm 3.

Table 6.3 Results for Uniform Distributions with Large Bounds by our Algorithm and by Algorithm 3 [12]

| Max(T) | Max(S) | $\rho$ | Algorithm 3 $E(W)$ | Iter. | Our method $E(W)$ | Iter. |
|---|---|---|---|---|---|---|
| 200 | 180 | 0.9 | 260.4 | 5 | 260.4448 (at t=0.50) | 4 |
| 500 | 450 | 0.9 | 651.1 | 4 | 651.0762 (at t=0.55) | 4 |
| 1000 | 900 | 0.9 | 1302.2 | 4 | 1302.097 (at t=0.63) | 4 |
| 2000 | 1800 | 0.9 | 2604.5 | 4 | 2602.44 (at t=0.65) | 4 |
| 2700 | 2400 | 0.9 | new data | n/a | 3072.3 (at t=0.65) | 4 |
| 3000 | 2700 | 0.9 | new data | n/a | 3901.3 (at t=0.66) | 4 |

Table 6.4 Results for the Average Queue Length in queue using our method

| Case | Max(T) | Max(S) | Queue Length $L_q$ | Optimum values of $t$ | Best value of $t$ & required time (sec) for minimum iter. |
|---|---|---|---|---|---|
| 1 | 200 | 180 | 2.5915 | 0.50 | 0.50 (time=0.0099) |
| 2 | 500 | 450 | 2.5991 (at t=0.55) | 0.49 to 0.51 & 0.55 to 0.62 | 0.55 (time=0.0355) |
| 3 | 1000 | 900 | 2.6016 (at t=0.63) | 0.48 to 0.63 | 0.63 (time=0.0882) |
| 4 | 2000 | 1800 | 2.6011 (at t=0.65) | 0.47 to 0.65 | 0.65 (time=0.2801) |
| 5 | 2700 | 2400 | 2.2749 (at t=0.65) | 0.45 to 0.65 | 0.65 (time=0.5037) |
| 6 | 3000 | 2700 | 2.6000 (at t=0.66) | 0.46 to 0.66 | 0.66 (time=0.5848) |

In Tables 6.3 and 6.4, we list the optimal values of $t$, i.e. those at which the minimum number of iterations is needed for convergence; we then select the better approximation of $E(W)$ and $L_q$ from the set of optimal points of $t$ at which the total number of iterations also takes less time. It is apparent that, moving to the right from the point 0.50, we usually obtain the optimal situation; in the first case, however, 0.50 is the only point at which we have the minimum number of iterations and can get the required approximation.

Geometric Distributions: The following tables (Table 6.5, Table 6.6) contain the results for the GI/G/1 queue when the geometric distribution is used for both the interarrival and service times. In this case, we have
$\Pr(T_n = i) = (1-q)q^{i-1}$, $i = 1, 2, \ldots$; $E(T_n) = \dfrac{1}{1-q}$; $\mathrm{Var}(T_n) = \dfrac{q}{(1-q)^2}$;

$\Pr(S_n = i) = (1-p)p^{i-1}$, $i = 1, 2, \ldots$; $E(S_n) = \dfrac{1}{1-p}$; $\mathrm{Var}(S_n) = \dfrac{p}{(1-p)^2}$.

Our Algorithm does not work directly in this case, because the service times and the interarrival times are both unbounded in the above situation; hence we need to truncate them. For the purpose of computation, one has to truncate the distribution functions, both to avoid possible overflow and to define them on a finite range. The truncated geometric distributions of the interarrival time and the service time are

$\Pr(T_n = i) = (1-q)q^{i-1}\big/(1 - q^m)$, $i = 1, 2, \ldots, m$, and $\Pr(S_n = i) = (1-p)p^{i-1}\big/(1 - p^m)$, $i = 1, 2, \ldots, m$,

with

$E(T_n) = \dfrac{1 - (m+1)q^m + mq^{m+1}}{(1-q)(1-q^m)}$; $E(S_n) = \dfrac{1 - (m+1)p^m + mp^{m+1}}{(1-p)(1-p^m)}$;

$\mathrm{Var}(T_n) = \dfrac{1 - 2q - (m+1)^2 q^m + (m^2 + 2m + 2)q^{m+1}}{(1-q)(1-q^m)}$; $\mathrm{Var}(S_n) = \dfrac{1 - 2p - (m+1)^2 p^m + (m^2 + 2m + 2)p^{m+1}}{(1-p)(1-p^m)}$.

We consider different values of the parameters $q$ and $p$; for example, letting $q = 0.6$ for the interarrival time and $p = 0.2$ for the service time gives approximately $\rho = 0.5$. Table 6.5 compares some queues with geometric interarrival and service times. In all cases the minimal service time is 1, as is the minimal interarrival time; the maximum service and interarrival times vary as shown below. The first row of Table 6.5 compares the result obtained in [12] with ours, and the approximation obtained in our case is better than theirs. For the remaining cases, we ran Algorithm 3 of [12] ourselves to compare with our results, as these were not shown in [12]. In all cases the number of iterations is less than for Algorithm 3, which is itself considered fast.

Table 6.5 Results of using Truncated Geometric Distributions by our Algorithm and by Algorithm 3 [12]

| Max(T) | Max(S) | $\rho$ | Algorithm 3 $E(W)$ | Iter. | Our method $E(W)$ | Iter. |
|---|---|---|---|---|---|---|
| 10 | 10 | 0.51 | 0.2636 | 4 | 0.2537 | 3 |
| 20 | 20 | 0.61 | 0.3952 | 4 | 0.3952 | 3 |
| 50 | 50 | 0.75 | 0.7500 | 6 | 0.7500 | 3 |
| 65 | 65 | 0.88 | 1.7500 | 9 | 1.7500 | 4 |
| 99 | 99 | 0.99 | 10.8764 | 15 | 10.8764 | 4 |

Table 6.6 contains the results of the same cases used in Table 6.5 and gives the average queue length in queue by our method; we also include the optimum values of $t$ and the time (sec) required for the minimum number of iterations.

Table 6.6 Results for the Average Queue Length in queue using our method

| Case | Max(T) | Max(S) | $\rho$ and parameters | Queue Length $L_q$ | Optimum values of $t$ | Best value of $t$ & required time (sec) for minimum iter. |
|---|---|---|---|---|---|---|
| 1 | 10 | 10 | 0.512 (q=0.6, p=0.2; λ=0.4, μ=0.8) | 0.1040 | 0.46 to 0.48 | 0.46 (time=0.0004) |
| 2 | 20 | 20 | 0.612 (q=0.51, p=0.2; λ=0.49, μ=0.8) | 0.1936 | 0.55 & 0.56 | 0.55 (time=0.0045) |
| 3 | 50 | 50 | 0.75 (q=0.4, p=0.2; λ=0.6, μ=0.8) | 0.4500 | only at 0.66 | 0.66 (time=0.0015) |
| 4 | 65 | 65 | 0.88 (q=0.3, p=0.2; λ=0.7, μ=0.8) | 1.2250 | 0.74 to 0.77 | 0.76 (time=0.0026) |
| 5 | 99 | 99 | 0.99 (q=0.12, p=0.11; λ=0.88, μ=0.89) | 9.5712 | 0.90 & 0.91 | 0.90 (time=0.0043) |

In Tables 6.5 and 6.6 we get the same approximations at all optimum points for the Geom/Geom/1 queue. For the first four cases we obtained the same results for $E(W)$ and $L_q$ at all 100 values of $t$, but for case 5 the results vary over the different points of $t$. We also include in Table 6.6 the values of the parameters of the interarrival time distribution, $q$, and the service time distribution, $p$, which help determine $\lambda$ and $\mu$ from the truncation formulas for $E(T_n)$ and $E(S_n)$. In addition, Table 6.6 reports the results where $t$ is optimal, together with the best value of $t$, at which the minimum number of iterations takes the shortest time; in case 3 we have an optimal situation only at $t = 0.66$, which is therefore the best value of $t$.

Note: It makes no difference to the waiting-time distribution whether one starts the interarrival and service times at 0 or at 1, since the waiting time depends only on the difference between the service and interarrival times; however, it does affect $\lambda$, $\mu$ and $\rho$ significantly, particularly when $p$ and $q$ are not close to 1. This is easily checked, because starting with 1 rather than with 0 increases the expected interarrival time and the expected service time by 1.
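The truncation formulas are easy to validate against a direct summation; below is a small sketch with an illustrative $q$ and cutoff $m$.

```python
q, m = 0.6, 10                                   # illustrative parameter and cutoff

pmf = [(1 - q) * q ** (i - 1) / (1 - q ** m) for i in range(1, m + 1)]
print("total mass:", sum(pmf))                   # -> 1.0 after truncation

mean_direct = sum(i * pi for i, pi in enumerate(pmf, start=1))
mean_formula = (1 - (m + 1) * q ** m + m * q ** (m + 1)) / ((1 - q) * (1 - q ** m))
print(mean_direct, mean_formula)                 # the two agree
```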
Gamma distributions: A continuous random variable $X$ is said to have a gamma distribution with parameters $k > 0$ and $\theta > 0$ if it has a pdf of the form

$f(x; \theta, k) = \dfrac{1}{\theta^k \Gamma(k)}\,x^{k-1} e^{-x/\theta}$ for $x > 0$, and $f(x; \theta, k) = 0$ for $x \le 0$.

CDF: if $X \sim \Gamma(\theta, n)$, where $n$ is a positive integer, the CDF can be written as

$F(x; \theta, n) = 1 - \sum_{j=0}^{n-1} \dfrac{(x/\theta)^j}{j!}\,e^{-x/\theta}$; $E(X) = k\theta$, $\mathrm{Var}(X) = k\theta^2$.

Gamma/Geom/1 queue: Table 6.7 below contains the results for the GI/G/1 queue with a gamma distribution for the interarrival time and a truncated geometric distribution for the service time. So far, we have discussed several methods for solving the waiting time distribution of a discrete GI/G/1 queue; but can we obtain the waiting time distribution of a GI/G/1 queue if the given interarrival time and service time distributions are continuous? Since all computers are digital machines, the logical way to obtain the waiting time distribution of a continuous GI/G/1 queue is first to discretize the given distribution functions, replacing each by a sequence of discrete distribution values. It has been suggested by Kimura [18] that, to discretize the distribution functions of the interarrival time, $Q(t)$, and the service time, $P(t)$, in a GI/G/1 queue, we let the probability distribution of the interarrival times $\{T_n\}$ be $q_i = Q_i - Q_{i-1}$ and that of the service times $\{S_n\}$ be $p_i = P_i - P_{i-1}$; e.g. $p_1 = \Pr(0 \le X \le 0.1) = F(0.1) - F(0)$, $p_2 = \Pr(0.1 \le X \le 0.2) = F(0.2) - F(0.1)$, and so on.

In this case we compare the results of our method against Algorithm 3 [12], though this was not done in [12]; it is our new idea to show how well the gamma distribution works for the interarrival times when the service times follow the truncated geometric distribution. Since the service time has the truncated geometric distribution, we use the same formulas as in the previous discussion for the probability mass function, mean and variance. The minimal time for both service and interarrival is 1, and the maximal times vary as in Tables 6.5 and 6.6. In the case of the gamma distribution for the interarrival time, we have not truncated it, even though it is not compactly supported (not bounded); instead, in each case we take the maximum interarrival time as its upper bound. Moreover, we consider different values of the parameters $k$ and $\theta$ to obtain $\lambda$, as well as different values of $p$ to obtain $\mu$. In all cases, the results for $E(W)$ obtained by Algorithm 3 are consistent with ours. We also calculate the average queue length in queue, shown in Table 6.8.

Note: we write $k = A$ and $\theta = B$ in our computer program.
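For integer $k$, the CDF above can be coded directly, giving a simple discretization in the spirit of Kimura's suggestion. In this sketch the renormalization of the truncated tail is our own choice, and the grid and cutoff are illustrative.

```python
import math

def erlang_cdf(x, theta, n):
    """F(x; theta, n) = 1 - exp(-x/theta) * sum_{j<n} (x/theta)^j / j!, x > 0."""
    if x <= 0:
        return 0.0
    return 1.0 - math.exp(-x / theta) * sum((x / theta) ** j / math.factorial(j)
                                            for j in range(n))

def discretize(theta, n, grid=1.0, upper=10):
    """q_i = Q(i*grid) - Q((i-1)*grid) for i = 1..upper; renormalized so the
    truncated pmf sums to one (folding in the tail is our own assumption)."""
    pmf = [erlang_cdf(i * grid, theta, n) - erlang_cdf((i - 1) * grid, theta, n)
           for i in range(1, upper + 1)]
    total = sum(pmf)
    return [v / total for v in pmf]

# Parameters as in case 1 of Table 6.7: k = A = 2, theta = B = 1.25, Max(T) = 10
q = discretize(theta=1.25, n=2, grid=1.0, upper=10)
print([round(v, 4) for v in q])
print("discretized mean:", sum(i * v for i, v in enumerate(q, start=1)))
```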
Table 6.7 Results of using a Gamma distribution for the interarrival time and a truncated Geometric distribution for the service time, by our Algorithm and by Algorithm 3 [12]

| Case | Max(T) | Max(S) | $\rho$ and parameters | Algorithm 3 $E(W)$ | Iter. | Our method $E(W)$ | Iter. |
|---|---|---|---|---|---|---|---|
| 1 | 10 | 10 | 0.5 (p=0.2, A=2, B=1.25) | 0.1611 | 4 | 0.1601 | 3 |
| 2 | 20 | 20 | 0.61 (p=0.2, A=2, B=1.02) | 0.2397 | 4 | 0.2429 | 3 |
| 3 | 50 | 50 | 0.75 (p=0.2, A=2, B=0.83) | 0.3621 | 5 | 0.3821 | 3 |
| 4 | 65 | 65 | 0.88 (p=0.2, A=2, B=0.71) | 0.4896 | 6 | 0.5653 | 3 |
| 5 | 99 | 99 | 0.99 (p=0.2, A=2, B=0.63) | 0.600 | 6 | 0.7970 | 3 |

Table 6.8 Results for the Average Queue Length in queue using our method

| Max(T) | Max(S) | $\rho$ | Queue Length $L_q$ | Optimum values of $t$ | Best value of $t$ & required time (sec) for minimum iter. |
|---|---|---|---|---|---|
| 10 | 10 | 0.5 | 0.0640 | 0.45 to 0.50 | 0.45 (time=0.0003) |
| 20 | 20 | 0.61 | 0.1191 | 0.55 to 0.58 | 0.58 (time=0.0005) |
| 50 | 50 | 0.75 | 0.2302 | 0.65, 0.66 | 0.65 (time=0.0037) |
| 65 | 65 | 0.88 | 0.3981 | 0.72 | 0.72 (time=0.0051) |
| 99 | 99 | 0.99 | 0.6325 | 0.77 | 0.77 (time=0.0086) |

In Tables 6.7 and 6.8 we see several optimal values of $t$ between 0 and 1 in the first cases, while in the last cases we get only one optimal point of $t$. We choose the best approximation among those points of $t$ at which we get the lowest values of $E(W)$ and $L_q$ for the minimum number of iterations, and we also record the time required for the minimum number of iterations. Moreover, in all cases the number of iterations of our method is less than that of Algorithm 3, which is itself considered fast.

The above output shows that, in most cases, the best values of $t$ for the U/U/1, Geom/Geom/1 and Gamma/Geom/1 queues are found by moving to the right from the point 0.50, where our method usually attains the optimal situation. Moreover, we always obtain a better approximation of $E(W)$, and our execution times are shorter compared to Algorithm 3. Thus, of the two algorithms (ours and Algorithm 3), ours consistently performs better.

6.2 Charts for the Queue Lengths

In Figures 6.2.1 to 6.2.4 we present charts of the average queue length in queue, $L_q$, against the traffic intensity $\rho$ for the U/U/1 queue (Figures 6.2.1 and 6.2.2), the Geom/Geom/1 queue (Figure 6.2.3), and the Gamma/Geom/1 queue (Figure 6.2.4).

[Figure 6.2.1: The Average Queue Length in queue for the U/U/1 queue, based on the data in Tables 6.1 and 6.2.]

[Figure 6.2.2: The Average Queue Length of large bound for the U/U/1 queue, based on the data in Table 6.3.]

[Figure 6.2.3: The Average Queue Length in queue for the Geom/Geom/1 queue, based on the data in Table 6.4.]

[Figure 6.2.4: The Average Queue Length for the Gamma/Geom/1 queue, based on the data in Table 6.7.]

Chapter 7 Conclusion and Recommendations for Future Work

7.1 Conclusions

In this thesis we have discussed some of the popular methods for finding the waiting time distribution of the arithmetic GI/G/1 queue. In particular, the methods given by Grassmann [11], Grassmann and Jain [12], Konheim [22] and Ponstein [28] may be the best ways of obtaining the discrete waiting time distribution; other techniques for finding the waiting time distribution were also mentioned. Furthermore, based on the theory of random walks and the Wiener-Hopf factorization method, Grassmann and Jain [12] gave an efficient algorithm, which we described in some detail. We also explained the reasons for applying the Wiener-Hopf factorization method to $1 - U(z)$. Moreover, we added our own proof for Algorithm 1 of [12], showing why it is monotonic and converges to the Wiener-Hopf factorization, and our own explanation for Algorithm 2 of [12], showing that $\alpha_i^m$ is bounded. In addition, we thoroughly discussed the preliminary ideas related to our topic.
Chapter 7
Conclusion and Recommendations for Future Work

7.1 Conclusions

In this thesis, we have discussed some of the popular methods for finding the waiting time distribution of the arithmetic GI/G/1 queue. In particular, the methods given by Grassmann [11], Grassmann and Jain [12], Konheim [22], and Ponstein [28] may be the best ways of obtaining the discrete waiting time distribution. Other techniques for solving the waiting time problem were also mentioned. Furthermore, based on the theory of random walks and the Wiener-Hopf factorization method, Grassmann and Jain [12] gave an efficient algorithm, which we described in some detail. We also explained the reasons for using the Wiener-Hopf factorization method for 1 − U(z). Moreover, we added our proofs for Algorithm 1 of [12], where we showed why it is monotonic and converges to the Wiener-Hopf factorization. Furthermore, we added our explanation for Algorithm 2 of [12] to show that α_i^m is bounded. In addition, we thoroughly discussed the preliminary ideas related to our topic.

We also indicated the various probabilistic interpretations in the above-mentioned methods and, more importantly, we demonstrated that all of these techniques [11, 12, 22, 28] are fully equivalent. Moreover, we discussed an analytical interpretation to investigate the waiting time distribution of the Geom/Geom/1 queue, which is based on an infinite interval, as an example of the arithmetic GI/G/1 queue. Thereby, we proved that the waiting time distribution of the Geom/Geom/1 queue is asymptotically geometric.

The most important contribution of this thesis is that, based on Grassmann and Jain's [12] method, we developed a new algorithm by improving the weighted average, and we showed how one can implement this technique to find the waiting time distribution in a discrete GI/G/1 queue. We also calculated the average queue length in queue by using Little's formula. Finally, we gave some numerical examples to conclude this thesis. First, we implemented the method in a numerical test for the U/U/1 queue, and it worked out nicely. We also applied it to the Geom/Geom/1 queue, where we truncated the distribution; we briefly discussed the truncation approach and showed the convergence of the waiting time distribution in the truncated case. Furthermore, our method can be used not only for discrete input distribution functions but also for continuous ones. Discretizing a continuous distribution function into a discrete sequence is a critical step in the practical computation of queueing models. For the Gamma/Geom/1 queue, we used the Gamma distribution for the interarrival time, and we also discussed the technique of discretizing the gamma distribution function. Moreover, we experimented with it numerically and found that very precise results are achievable. It can be concluded from the above discussion that the method presented in this research is numerically stable, effective, simple, and robust.

7.2 Future Works

We experimented numerically with our method using the Uniform and Geometric distributions for the interarrival and service times, and also with the gamma distribution for the interarrival time. In the future, we could extend our method in various ways to develop the concepts of this work. For example, we could try to develop our method to handle the Geom/Gamma/1 queue, which is not currently possible with it. We could also compare the different values of t with other distributions, such as the Poisson, Normal, and Log-Normal, and then compare these distributions so that one can easily determine which distribution works best for finding the best value of t by our method. Furthermore, we could try to find a better t for both the discrete and continuous cases by using the hyperbola concept. Additionally, we could explore the roots α_i and b_j of 1 − U(z) by using the Geom/Geom/1 queue, which is based on an infinite interval, as an example of the arithmetic GI/G/1 queue.
Bibliography

[1] ACKROYD, M. H., 1980. Computing the Waiting Time Distribution for the G/G/1 Queue by Signal Processing Methods. IEEE Trans. Commun. COM-28, 52-58.
[2] ACKROYD, M. H., 1984. Stationary and Cyclostationary Finite Buffer Behaviour Computation via Levinson's Method. AT&T Bell Lab. Tech. J. 63, 2159-2170.
[3] ALFA, A. S., and LI, W., 2001. Matrix-Geometric Analysis of the Discrete Time GI/G/1 System. Stochastic Models, 17, 541-554.
[4] ALFA, A. S., and XUE, J., 2007. Efficient Computations for the Discrete GI/G/1 System. INFORMS J. on Computing, 19, 480-484.
[5] CHAUDHRY, M. L., and GUPTA, U. C., 2001. Computing Waiting-Time Probabilities in the Discrete-Time Queue GI^X/G/1. Perform. Eval. 43, 123-131.
[6] CHAUDHRY, M. L., and TEMPLETON, J. G. C., 1983. A First Course on Bulk Queues. Wiley, New York.
[7] CHAUDHRY, M. L., JAIN, J. L., and TEMPLETON, J. G. C., 1987. Numerical Analysis for Bulk-Arrival Queueing Systems: Root Finding and Steady-State Probabilities in GI^r/M/1 Queues. Ann. Opns. Res. 8, 307-320.
[8] CHAUDHRY, M. L., AGARWAL, M., and TEMPLETON, J. G. C., 1992. Exact and Approximate Numerical Solutions of Steady-State Distributions Arising in the Queue GI/G/1. Queueing Systems, 10, 105-152.
[9] FELLER, W., 1971. An Introduction to Probability Theory and Its Applications, Vol. 2, Ed. 2. John Wiley & Sons, New York.
[10] FRYER, M. J., and WINSTEN, C. B., 1986. An Algorithm to Compute the Equilibrium Distribution of a One-Dimensional Bounded Random Walk. Opns. Res. 34, 449-454.
[11] GRASSMANN, W. K., 1985. The Factorization of Queueing Equations and Their Interpretation. J. Opnl. Res. Soc. 36, 1041-1050.
[12] GRASSMANN, W. K., and JAIN, J. L., 1989. Numerical Solutions of the Waiting Time Distribution and Idle Time Distribution of the Arithmetic GI/G/1 Queue. Opns. Res. 37, 141-150.
[13] GRASSMANN, W. K., and CHAUDHRY, M. L., 1982. A New Method to Solve Steady State Queueing Equations. Naval Res. Logist. Quart. 29, 461-473.
[14] GROSS, D., HARRIS, C. M., SHORTLE, J. F., and THOMPSON, J. M., 2008. Fundamentals of Queueing Theory, Ed. 4. John Wiley & Sons, New Jersey.
[15] HASSLINGER, G., 1995. A Polynomial Factorization Approach to the Discrete Time GI/G/1/N Queue Size Distribution. Performance Evaluation, 23, 217-240.
[16] HASSLINGER, G., 2000. Waiting Time, Busy Periods and Output Models of a Server Analyzed via Wiener-Hopf Factorization. Performance Evaluation, 40, 3-26.
[17] HWANG, G. U., CHOI, B. D., and KIM, J. K., 2002. The Waiting Time Analysis of a Discrete-Time Queue with Arrivals as a Discrete Autoregressive Process of Order 1. J. of Appl. Prob. 39, 619-629.
[18] JAIN, J. L., MOHANTY, S. G., and BOHM, W., 2007. A Course on Queueing Models, Ed. 1. Chapman & Hall/CRC, New York.
[19] KIMURA, T., 1985. An Analytical Interpretation of the Discrete Approximation for the GI/GI/1 Queue. Optimization, 2, 285-296.
[20] KIM, N. K., and CHAUDHRY, M. L., 2008. The Use of the Distributional Little's Law in the Computational Analysis of the Discrete-Time GI/G/1 and DI/D/c Queues. Performance Evaluation, 65, 3-9.
[21] KOBAYASHI, H., 1983. Discrete-Time Queueing Systems. In Probability Theory and Queueing Models, G. Louchard and G. Latouche (eds.). Academic Press, New York.
[22] KONHEIM, A. G., 1975. An Elementary Solution of the Queueing System G/G/1. SIAM J. Comput. 4, 540-545.
[23] LI, S. Q., 1991. A General Solution Technique for Discrete Queueing Analysis of Multimedia Traffic on ATM. IEEE Trans. Commun. 39, 1115-1132.
[24] LINDLEY, D. V., 1952. The Theory of Queues with a Single Server. Proc. Cambridge Philos. Soc. 48, 277-289.
[25] LINWONG, P., KATO, N., and NEMOTO, Y., 2004. A Polynomial Factorization Approach for the Discrete Time GI^X/G/1/k Queue. Methodology & Computing in Appl. Prob. 6, 277-291.
[26] NEUTS, M. F., 1981. Matrix-Geometric Solutions in Stochastic Models. Johns Hopkins University Press, Baltimore.
[27] OTT, T. J., 1987. On the Stationary Waiting-Time Distribution in the GI/G/1 Queue, I: Transform Methods and Almost-Phase-Type Distributions. Advances in Appl. Prob. 19, 240-265.
[28] PONSTEIN, J., 1974. Theory and Solution of a Discrete Queueing Problem. Statist. Neerland. 20, 139-152.
[29] POWELL, W. B., 1986. Analysis of Vehicle Holding and Cancellation Strategies in Bulk Arrival, Bulk Service Queues. Trans. Sci. 20, 352-377.
[30] PRABHU, N. U., 1980. Stochastic Storage Processes. Springer, New York.
[31] RAO, B. V., and FELDMAN, R. M., 1999. Numerical Approximation for the Steady-State Waiting Time in a GI/G/1 Queue. Queueing Systems, 31, 25-42.
[32] RIEGER, E. S., and HASSLINGER, G., 1994. An Analytical Solution for the Discrete Time Single Server System with Semi-Markovian Arrivals. Queueing Systems, 18, 69-105.
[33] ROSS, S. M., 2000. Introduction to Probability Models, Ed. 6. Harcourt Asia, India.
[34] SHANTHIKUMAR, J. G., and BUZACOTT, J. A., 1980. On the Approximations of the Single Server Queue. Int. J. Prod. Res. 18, 761-773.
[35] SHENG, P., 1988. The Waiting Time Distribution in Queueing Systems. MSc dissertation, Graduate School of Mathematics, University of Saskatchewan, Saskatoon, Saskatchewan.
[36] TAYLOR, H. M., and KARLIN, S. An Introduction to Stochastic Modeling. Academic Press, Florida.
[37] WHITT, W., 1982. The Marshall and Stoyan Bounds for IMRL/G/1 Queues Are Tight. Opns. Res. Lett. 1, 209-213.
[38] WHITT, W., 1988. An Interpolation Approximation for the Mean Workload in a GI/G/1 Queue. Opns. Res. 37, 936-952.
[39] ZHOU, W. H., 2005. Performance Analysis of Discrete-Time Queue GI/G/1 with Negative Arrivals. Appl. Math. & Computation, 170, 1349-1355.

Appendix

Program Listing

1. % THIS IS THE PROGRAM TO SOLVE THE U/U/1 WAITING TIME DISTRIBUTION PROBLEM
clear;
Max_S=25; Max_T=50;                               % bounds on service and interarrival times
P_S(1:Max_S)=1/(Max_S); P_T(1:Max_T)=1/(Max_T);   % discrete uniform distributions
UJ(1:Max_S-1)=0; U_J(1:Max_T)=0;
% UJ(k) = Pr(S - T = k) for k > 0; U_J(k+1) = Pr(S - T = -k) for k >= 0
for k=1:Max_S-1
    for i=1+k:Max_S
        PST=P_S(i)*P_T(i-k); UJ(k)=UJ(k)+PST;
    end
end
for k=0:Max_T-1
    if k<=Max_T-Max_S
        for i=k:Max_S+k-1
            P_ST=P_S(i+1-k)*P_T(i+1); U_J(k+1)=U_J(k+1)+P_ST;
        end
    else
        for i=k:Max_T-1
            P_ST=P_S(i+1-k)*P_T(i+1); U_J(k+1)=U_J(k+1)+P_ST;
        end
    end
end
g=Max_T-1; h=Max_S-1;
Epsl=1e-6; Count_t=1; TIME=[];
for t=0:0.01:1                                    % scan the weight t of the weighted average
    tic; alfa(1:h,1)=0; m=1;
    b(1:g+1,m)=U_J(1:g+1)'; S(1)=0;
    m=2; b(1:g+1,m)=U_J(1:g+1)';
    S(m)=((1-t)*(1-b(1,m))+t*sum(b(2:g+1,m)));    % weighted-average normalization
    alfa(1:h+h,m)=0;
    for j=h:-1:1
        alfa(j,m)=(UJ(j)+alfa(j+1:h+j,m)'*b(2:h+1,m))/S(m);
    end
    Max_dif=max(abs(alfa(1:h,m-1)-alfa(1:h,m)));
    while Max_dif>Epsl   % successive substitution until convergence ('>' restored; operator lost in extraction)
        m=m+1; M(Count_t)=m;
        b(1:g+h+1,m)=0;
        for j=g:-1:0
            b(j+1,m)=U_J(j+1)+alfa(1:h,m-1)'*b(j+2:j+1+h,m);
        end
        S(m)=((1-t)*(1-b(1,m))+t*sum(b(2:g+1,m)));
        alfa(1:h+h,m)=0;
        for j=h:-1:1
            alfa(j,m)=(UJ(j)+alfa(j+1:h+j,m)'*b(2:h+1,m))/S(m);
        end
        Max_dif=max(abs(alfa(1:h,m-1)-alfa(1:h,m)));
    end
    A123=1:h; w0=1-sum(alfa(1:h,m)); Exp_w(Count_t)=(A123*alfa(1:h,m))/w0;
    Time_exp=toc; TIME(Count_t)=Time_exp;
    A321=fliplr(alfa(1:h,m)');
    W=[w0 zeros(1,h)];                            % waiting time distribution, W(1) = Pr(W = 0)
    for i=1:h
        W(i+1)=[A321(h-i+1:h) zeros(1,h-i+1)]*W';
    end
    EXP_w(Count_t)=A123*W(2:h+1)';
    variance(Count_t)=((A123.^2)*W(2:h+1)')-EXP_w(Count_t)^2;
    %variance(Count_t)=(A123.^2)*alfa(1:h,m)/w0+(Exp_w(Count_t))^2;
    Count_t=Count_t+1;
end
Opt_m=min(M);                                     % fewest iterations over all t
Exp_w(find(M<=Opt_m));                            % E(W) at the best values of t
%variance=(A123.^2)*alfa(1:h,m)/w0-(Exp_w).^2;
rho=(Max_S+1)/(Max_T+1);
lamda=1/((1+Max_T)/2);
L=lamda*Exp_w;
Miu=1/((1+Max_S)/2);
Var_T=((Max_T)^2-1)/12;
Var_S=((Max_S)^2-1)/12;
SCV_T=(lamda)^2*Var_T;
SCV_S=(Miu)^2*Var_S;
% COMPARING WITH THE FOLLOWING APPROXIMATE FORMULA
W_q=((SCV_T+SCV_S)/2)*(rho/(1-rho)*(1/Miu));
L_q=lamda*W_q;
2. % THIS IS THE PROGRAM TO SOLVE THE Geom/Geom/1 WAITING TIME DISTRIBUTION PROBLEM
clear;
Max_S=65; Max_T=65; p=0.2; q=0.3;
for ai=1:Max_S
    P_S(ai)=((1-p)*p^(ai-1))/(1-p^(Max_S));       % truncated geometric service times
end
for ai=1:Max_T
    P_T(ai)=((1-q)*q^(ai-1))/(1-q^(Max_T));       % truncated geometric interarrival times
end
UJ(1:Max_S-1)=0; U_J(1:Max_T)=0;
for k=1:Max_S-1
    for i=1+k:Max_S
        PST=P_S(i)*P_T(i-k); UJ(k)=UJ(k)+PST;
    end
end
for k=0:Max_T-1
    if k<=Max_T-Max_S
        for i=k:Max_S+k-1
            P_ST=P_S(i+1-k)*P_T(i+1); U_J(k+1)=U_J(k+1)+P_ST;
        end
    else
        for i=k:Max_T-1
            P_ST=P_S(i+1-k)*P_T(i+1); U_J(k+1)=U_J(k+1)+P_ST;
        end
    end
end
g=Max_T-1; h=Max_S-1;
Epsl=1e-6; Count_t=1; TIME=[];
for t=0:0.01:1
    tic; alfa(1:h,1)=0; m=1;
    b(1:g+1,m)=U_J(1:g+1)'; S(1)=0;
    m=2; b(1:g+1,m)=U_J(1:g+1)';
    S(m)=((1-t)*(1-b(1,m))+t*sum(b(2:g+1,m)));
    alfa(1:h+h,m)=0;
    for j=h:-1:1
        alfa(j,m)=(UJ(j)+alfa(j+1:h+j,m)'*b(2:h+1,m))/S(m);
    end
    Max_dif=max(abs(alfa(1:h,m-1)-alfa(1:h,m)));
    while Max_dif>Epsl   % successive substitution until convergence ('>' restored; operator lost in extraction)
        m=m+1; M(Count_t)=m;
        b(1:g+h+1,m)=0;
        for j=g:-1:0
            b(j+1,m)=U_J(j+1)+alfa(1:h,m-1)'*b(j+2:j+1+h,m);
        end
        S(m)=((1-t)*(1-b(1,m))+t*sum(b(2:g+1,m)));
        alfa(1:h+h,m)=0;
        for j=h:-1:1
            alfa(j,m)=(UJ(j)+alfa(j+1:h+j,m)'*b(2:h+1,m))/S(m);
        end
        Max_dif=max(abs(alfa(1:h,m-1)-alfa(1:h,m)));
    end
    A123=1:h; w0=1-sum(alfa(1:h,m)); Exp_w(Count_t)=(A123*alfa(1:h,m))/w0;
    Time_exp=toc; TIME(Count_t)=Time_exp;
    A321=fliplr(alfa(1:h,m)');
    W=[w0 zeros(1,h)];
    for i=1:h
        W(i+1)=[A321(h-i+1:h) zeros(1,h-i+1)]*W';
    end
    EXP_w(Count_t)=A123*W(2:h+1)';
    variance(Count_t)=((A123.^2)*W(2:h+1)')-EXP_w(Count_t)^2;
    %variance(Count_t)=(A123.^2)*alfa(1:h,m)/w0+(Exp_w(Count_t))^2;
    Count_t=Count_t+1;
end
Opt_m=min(M)
Exp_w(find(M<=Opt_m))
%variance=(Exp_w).^2 + (A123.^2)*alfa(1:h,m)/w0;
lamda=((1-q^(Max_T))*(1-q))/(1-(Max_T+1)*q^Max_T+Max_T*q^(Max_T+1));
L=lamda*Exp_w;
Miu=((1-p^Max_S)*(1-p))/(1-(Max_S+1)*p^(Max_S)+Max_S*p^(Max_S+1));
rho=lamda/Miu;
Var_T=(1-2*q-(Max_T+1)^2*(q^(Max_T))+((Max_T)^2+2*Max_T+2)*q^(Max_T+1))/((1-q^Max_T)*(1-q));
Var_S=(1-2*p-(Max_S+1)^2*(p^(Max_S))+((Max_S)^2+2*Max_S+2)*p^(Max_S+1))/((1-p^Max_S)*(1-p));
SCV_T=(lamda)^2*Var_T;
SCV_S=(Miu)^2*Var_S;
% COMPARING WITH THE FOLLOWING APPROXIMATE FORMULA
W_q=((SCV_T+SCV_S)/2)*(rho/(1-rho)*(1/Miu));
L_q=lamda*W_q;
3. % THIS IS THE PROGRAM TO SOLVE THE Gamma/Geom/1 WAITING TIME DISTRIBUTION PROBLEM
clear;
Max_S=10; Max_T=10; p=0.2; A=2; B=1.25;           % gamma shape A = k and scale B = theta
for ai=1:Max_S
    P_S(ai)=((1-p)*p^(ai-1))/(1-p^(Max_S));       % truncated geometric service times
end
% Alternative discretization of the gamma interarrival CDF (kept for reference):
% for ai=1:Max_T
%     if ai==1
%         P_T(1)=gamcdf(1,A,B);
%     else
%         P_T(ai)=gamcdf(ai,A,B)-P_T(ai-1);
%     end
% end
for ai=1:Max_T
    % GMCDF=0;
    % for N=0:1:A-1
    %     GCDF=((ai/B)^N)*(exp(-ai/B))/factorial(N);
    %     GMCDF=GMCDF+GCDF;
    % end
    % Gama_CDF=1-GMCDF;
    % P_T(ai)=Gama_CDF-F0;
    % F0=Gama_CDF;
    % N=0:A-1;
    % Secndprt=((ai/B).^N)./factorial(N);
    % P_T(ai+1)=1-exp(-ai/B)*sum(Secndprt)-P_T(ai);
    P_T(ai)=((B^-A)*(ai^(A-1))*exp(-ai/B))/factorial(A-1);   % gamma pdf at integer points
end
UJ(1:Max_S-1)=0; U_J(1:Max_T)=0;
for k=1:Max_S-1
    for i=1+k:Max_S
        PST=P_S(i)*P_T(i-k); UJ(k)=UJ(k)+PST;
    end
end
for k=0:Max_T-1
    if k<=Max_T-Max_S
        for i=k:Max_S+k-1
            P_ST=P_S(i+1-k)*P_T(i+1); U_J(k+1)=U_J(k+1)+P_ST;
        end
    else
        for i=k:Max_T-1
            P_ST=P_S(i+1-k)*P_T(i+1); U_J(k+1)=U_J(k+1)+P_ST;
        end
    end
end
g=Max_T-1; h=Max_S-1;
Epsl=1e-6; Count_t=1; TIME=[];
for t=0:0.01:1
    tic; alfa(1:h,1)=0; m=1;
    b(1:g+1,m)=U_J(1:g+1)'; S(1)=0;
    m=2; b(1:g+1,m)=U_J(1:g+1)';
    S(m)=((1-t)*(1-b(1,m))+t*sum(b(2:g+1,m)));
    alfa(1:h+h,m)=0;
    for j=h:-1:1
        alfa(j,m)=(UJ(j)+alfa(j+1:h+j,m)'*b(2:h+1,m))/S(m);
    end
    Max_dif=max(abs(alfa(1:h,m-1)-alfa(1:h,m)));
    while Max_dif>Epsl   % successive substitution until convergence ('>' restored; operator lost in extraction)
        m=m+1; M(Count_t)=m;
        b(1:g+h+1,m)=0;
        for j=g:-1:0
            b(j+1,m)=U_J(j+1)+alfa(1:h,m-1)'*b(j+2:j+1+h,m);
        end
        S(m)=((1-t)*(1-b(1,m))+t*sum(b(2:g+1,m)));
        alfa(1:h+h,m)=0;
        for j=h:-1:1
            alfa(j,m)=(UJ(j)+alfa(j+1:h+j,m)'*b(2:h+1,m))/S(m);
        end
        Max_dif=max(abs(alfa(1:h,m-1)-alfa(1:h,m)));
    end
    A123=1:h; w0=1-sum(alfa(1:h,m)); Exp_w(Count_t)=(A123*alfa(1:h,m))/w0;
    Time_exp=toc; TIME(Count_t)=Time_exp;
    A321=fliplr(alfa(1:h,m)');
    W=[w0 zeros(1,h)];
    for i=1:h
        W(i+1)=[A321(h-i+1:h) zeros(1,h-i+1)]*W';
    end
    EXP_w(Count_t)=A123*W(2:h+1)';
    variance(Count_t)=((A123.^2)*W(2:h+1)')-EXP_w(Count_t)^2;
    %variance(Count_t)=(A123.^2)*alfa(1:h,m)/w0+(Exp_w(Count_t))^2;
    Count_t=Count_t+1;
end
Opt_m=min(M)
Exp_w(find(M<=Opt_m));
%variance=(Exp_w).^2 + (A123.^2)*alfa(1:h,m)/w0;
lamda=1/(A*B);                                    % mean gamma interarrival time is A*B
L=(lamda)*(Exp_w);
Miu=((1-p^Max_S)*(1-p))/(1-(Max_S+1)*p^(Max_S)+Max_S*p^(Max_S+1));
rho=lamda/Miu
Var_T=A*B^2;
Var_S=(1-2*p-(Max_S+1)^2*(p^(Max_S))+((Max_S)^2+2*Max_S+2)*p^(Max_S+1))/((1-p^Max_S)*(1-p));
SCV_T=1/A;
SCV_S=(Miu)^2*(Var_S);
% COMPARING WITH THE FOLLOWING APPROXIMATE FORMULA
W_q=((SCV_T+SCV_S)/2)*(rho/(1-rho)*(1/Miu));
L_q=lamda*W_q;
"""@en ;
edm:hasType "Thesis/Dissertation"@en ;
vivo:dateIssued "2012-05"@en ;
edm:isShownAt "10.14288/1.0072458"@en ;
dcterms:language "eng"@en ;
ns0:degreeDiscipline "Mathematics"@en ;
edm:provider "Vancouver : University of British Columbia Library"@en ;
dcterms:publisher "University of British Columbia"@en ;
dcterms:rights "Attribution-NonCommercial-NoDerivatives 4.0 International"@en ;
ns0:rightsURI "http://creativecommons.org/licenses/by-nc-nd/4.0/"@en ;
ns0:scholarLevel "Graduate"@en ;
dcterms:title "A better numerical approach for finding the steady-state waiting time and the average queue length of a system for the arithmetic GI/G/1 queue"@en ;
dcterms:type "Text"@en ;
ns0:identifierURI "http://hdl.handle.net/2429/39784"@en .