UBC Theses and Dissertations

Active queue management techniques to improve Quality of Service for real-time flows in third generation wireless networks. Chen, Jian. 2002.

ACTIVE QUEUE MANAGEMENT TECHNIQUES TO IMPROVE QUALITY OF SERVICE FOR REAL-TIME FLOWS IN THIRD GENERATION WIRELESS NETWORKS

by

JIAN CHEN
B.E., Shanghai Jiao Tong University, China, 1995

A THESIS SUBMITTED IN PARTIAL FULFILMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF APPLIED SCIENCE in THE FACULTY OF GRADUATE STUDIES, Department of Electrical and Computer Engineering

We accept this thesis as conforming to the required standard

THE UNIVERSITY OF BRITISH COLUMBIA
October 2002
© Jian Chen, 2002

In presenting this thesis in partial fulfilment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the head of my department or by his or her representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

Department of Electrical and Computer Engineering
The University of British Columbia
Vancouver, Canada

Abstract

In this thesis, we analyze the behavior of real-time flows with hard time deadlines that use link layer retransmission to overcome wireless errors, and propose two new mechanisms to improve these flows' end-to-end Quality-of-Service (QoS) over a wireless network.

Wireless channels have the characteristic that link quality varies according to propagation conditions. For real-time flows with hard time deadlines, link layer retransmissions over a wireless network, necessitated by fluctuations in link quality, may result in a high number of packets being dropped as deadlines expire. The expired packets waste network resources and lead to long queuing delays for subsequent packets.
After analyzing the characteristics of the Radio Link Control (RLC) layer of the General Packet Radio Service (GPRS)/Universal Mobile Telecommunication Services (UMTS) network, we have developed a new set of mechanisms to minimize expiration packet drops in favour of overflow-like packet drops. We propose to use active queue management to limit the transmission queue length, hence the queuing delay, thus eliminating expiration packet drops. This allows the buffer and wireless bandwidth, which would otherwise be wasted by expiring packets, to be released earlier for other packets. We apply this mechanism to the radio link control layer in GPRS/UMTS wireless networks. The effectiveness of the proposed mechanism is verified by simulations.

Furthermore, we extend a similar idea from the radio link control layer to the whole GPRS/UMTS domain. We adapt Early Regulation of Unresponsive Flows (ERUF) to the characteristics of wireless channels employing link layer retransmissions. We propose to regulate congested flows at the DiffServ domain ingress edge nodes of the GPRS/UMTS core network, dropping undeliverable packets earlier to release shared network resources for other flows. We also present simulation results to show that this new Wireless Early Regulation of Unresponsive Flows (WERUF) scheme can significantly improve the overall end-to-end quality of service.

Table of Contents

Abstract  ii
Table of Contents  iv
List of Tables  vi
List of Figures  vii
Acknowledgments  x

Chapter 1  Introduction  1
  1.1 Background Knowledge  3
    1.1.1 General Architecture of GPRS Domain  3
    1.1.2 Some Mechanisms for QoS  5
  1.2 Motivation  6
  1.3 Contributions  8
  1.4 Outline of the Thesis  9

Chapter 2  Overview of Previous Work  10
  2.1 End-to-end QoS in GPRS/UMTS Architecture  10
  2.2 GPRS/UMTS Core Network Mechanisms for QoS  12
    2.2.1 IETF QoS Architectures  12
    2.2.2 Mapping between GPRS/UMTS and DiffServ  16
    2.2.3 Active Queue Management  17
  2.3 Mechanisms at Wireless Access Nodes for QoS  21
    2.3.1 Radio Link Control (RLC) Layer in GPRS/UMTS  21
    2.3.2 Other Mechanisms at Wireless Access Nodes  24

Chapter 3  Applying AQM to Link Layer Buffers for Real-time Traffic  27
  3.1 Analysis of Real-time Flows over RLC with AM  28
  3.2 Simulation Results  34
    3.2.1 Two-slope Piece-wise Linear AQM Function II  35
    3.2.2 Comparison of Different AQM Functions with Varying Traffic Load  40
    3.2.3 AQM Functions with Different Error Rates  45
  3.3 Summary  48

Chapter 4  Applying WERUF to RLC Queue  50
  4.1 Early Regulation in GPRS/UMTS Network  50
  4.2 Simulation Results  54
    4.2.1 Real-time Flow and TCP Flows  55
      4.2.1.1 One Real-time Flow in a Noisy Link and Multiple TCP Flows in Clean Links with Equal RTT  55
      4.2.1.2 One Real-time Flow in a Noisy Link and Multiple TCP Flows in Clean Links with Different RTT  60
    4.2.2 Real-time Flows Over Different Link Conditions  62
      4.2.2.1 Two Real-time Flows in Different Links  62
      4.2.2.2 Multiple Real-time Flows in Different Wireless Links  67
  4.3 Summary  68

Chapter 5  Conclusions and Future Work  70

Bibliography  73
Appendix A. List of Abbreviations and Acronyms  78
Appendix B. Simulation Models in OPNET 8.1  82

List of Tables

Table 3.1  Parameters for Simulation of AQM in RLC layer  35
Table 4.1  Common Parameters for Simulation of WERUF  55
Table B.1  OPNET modeling domains  82

List of Figures

Figure 1.1  Overview of the GPRS packet domain architecture
Figure 2.1  Protocol stacks of the UMTS user plane  10
Figure 2.2  GPRS/UMTS QoS architecture  11
Figure 2.3  Mapping of the DiffServ domain and the GPRS/UMTS network  16
Figure 2.4  General algorithm for RED gateways  19
Figure 2.5  RED and Gentle-RED marking/dropping rate vs. average queue length  20
Figure 2.6  Acknowledged Mode in the Radio Link Control layer (window size = 8)  22
Figure 3.1  Gaussian distributions with means M_A and M_B and variances σ²_A and σ²_B, where M_A < M_B and σ²_A < σ²_B  31
Figure 3.2  Gaussian expiration drop rate and AQM approximations  32
Figure 3.3  Simulation model for AQM in the link layer buffer  34
Figure 3.4  BER, transmission queue length, and throughput without AQM  36
Figure 3.5  Different packet drops without AQM  37
Figure 3.6  BER, transmission queue length, and throughput with AQM  38
Figure 3.7  Different packet drops with AQM  39
Figure 3.8  Throughput vs. traffic load with three schemes  41
Figure 3.9  End-to-end delay vs. traffic load with three schemes  41
Figure 3.10  Average transmission queue length with three schemes  42
Figure 3.11  Different packet drops vs. traffic load without AQM  43
Figure 3.12  Different packet drops vs. traffic load with AQM function II  44
Figure 3.13  Different packet drops vs. traffic load with AQM function III  45
Figure 3.14  Packet drop rate with different long-run packet error rates  46
Figure 3.15  Throughput vs. long-term packet error rates P_B  47
Figure 3.16  End-to-end delay vs. long-term packet error rates P_B  48
Figure 4.1  The general concept behind Wireless Early Regulation of Unresponsive Flows (WERUF)  51
Figure 4.2  Network topology of one real-time plus three TCP flows  56
Figure 4.3  Real-time flow throughput vs. offered load  57
Figure 4.4  End-to-end delay of the real-time flow vs. offered load  57
Figure 4.5  Total TCP throughput vs. real-time traffic offered load  58
Figure 4.6  Real-time flow packet drops vs. offered load without WERUF  59
Figure 4.7  Real-time flow packet drops vs. offered load with WERUF  59
Figure 4.8  Network topology of one real-time and five TCP flows with different Internet latencies  60
Figure 4.9  TCP throughput per flow vs. Internet latency  61
Figure 4.10  Network topology of two real-time flows  62
Figure 4.11  Throughput of the low drop precedence real-time flow in a noisy link vs. offered load  63
Figure 4.12  Throughput of the high drop precedence real-time flow in a clean link vs. offered load  64
Figure 4.13  End-to-end delay of the low drop precedence real-time flow in a noisy link vs. offered load  64
Figure 4.14  End-to-end delay of the high drop precedence real-time flow in a clean link vs. offered load  65
Figure 4.15  Packet drops of the low drop precedence real-time flow in a noisy link vs. offered load without WERUF  66
Figure 4.16  Packet drops of the low drop precedence real-time flow in a noisy link vs. offered load with WERUF  66
Figure 4.17  Network topology of multiple real-time flows in noisy and clean links  67
Figure 4.18  Total throughput in noisy and clean links vs. number of noisy links

Acknowledgments

I would like to extend my sincere gratitude to my supervisor, Dr. Victor Leung, who has been a constant source of inspiration, support and guidance. No less, I would like to thank the helpful staff of the Department of Electrical and Computer Engineering at the University of British Columbia.

This research project was sponsored by Motorola Canada Ltd. in Richmond and the Canadian Natural Sciences and Engineering Research Council, to whom I also express my thanks.

Chapter 1  Introduction

The Internet has seen phenomenal growth in the past few years.
Recent research has begun to focus on the provision of Internet service over wireless networks, which is widely expected to be the next growth area of the Internet. In order to transmit data packets over air interfaces, factors such as inferior transmission links, insufficient spectrum and limited capacity must be considered. An error control mechanism for the data link layer that maintains stable performance under high error rates is especially needed for wireless data transmission.

The Internet Protocol (IP) is not designed to offer quality guarantees to applications, as it only provides a best-effort data delivery service. However, the Internet Engineering Task Force (IETF) [1] has proposed Integrated Services (IntServ) [2] [3] [4], as well as Differentiated Services (DiffServ) [5], to offer diverse service levels rather than just the single best-effort class. The IntServ model uses explicit resource reservation for every flow requesting Quality-of-Service (QoS). The DiffServ model, on the other hand, relies on the prioritization of some flows over others.

The Universal Mobile Telecommunications System (UMTS) architecture [6], as defined by the 3rd Generation Partnership Project (3GPP), provides the building blocks for the vast deployment of the next-generation mobile Internet. UMTS promises to support a wide range of applications with different QoS profiles. The technical specification [7] defines four QoS classes targeting conversational, streaming, interactive and background traffic. The first two classes are defined for real-time applications. The Conversational class serves traffic with the most stringent transfer delay and delay variation requirements, such as Voice over IP. The Streaming class serves real-time traffic with relatively loose delay requirements, such as streaming video. The latter two QoS classes typically serve traffic without bounded transfer delay requirements.
The most popular type of such traffic is the Transmission Control Protocol (TCP) [8]. The promised QoS diversity is adequate to support future mobile Internet applications, but realizing it relies on proper inter-working of UMTS and IP networks. The DiffServ architecture is one of the mechanisms suggested for use in the UMTS core network (CN) to offer interoperability at the QoS level.

The provision of end-to-end QoS depends not only on the core network, but also on the wireless links. Different resource management schemes must be deployed at the wireless-wireline interface nodes to cater to all QoS requirements. Our proposed idea is one such scheme.

We focus our efforts on link layer retransmission schemes to overcome wireless errors, especially for real-time applications. Wireless links are error-prone channels, and much effort has been devoted to this problem. The two main classes of techniques proposed are Forward Error Correction (FEC), and link layer retransmission of lost packets in response to Automatic Repeat Request (ARQ) messages [9] [10]. They can be implemented separately, or combined to obtain the best performance. For example, in a hybrid ARQ scheme [11], when the number of errors in a received packet is lower than the error-correcting capacity of the configured FEC code, the packet is accepted; otherwise it is discarded and ARQ is triggered to request a retransmission.

As suggested by the 3GPP specification [7], for real-time applications with low and stringent transfer delay requirements, FEC is normally used as the only error-correcting scheme because ARQ may be too time-consuming. Conversational class traffic falls into this category.
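The accept/retransmit decision of the hybrid ARQ scheme just described can be sketched as follows. This is a minimal illustration, not an implementation from the thesis; the function name and the error-count model are hypothetical, and the error-correcting capacity is simply treated as a symbol-count threshold of the assumed FEC code.

```python
def hybrid_arq_receive(detected_errors: int, fec_capacity: int) -> str:
    """Decide the fate of a received packet in a hybrid ARQ scheme.

    If the number of detected symbol errors is within the FEC code's
    error-correcting capacity, the packet is corrected and accepted;
    otherwise it is discarded and a retransmission is requested via ARQ.
    """
    if detected_errors <= fec_capacity:
        return "accept"        # FEC can correct the errors
    return "retransmit"        # trigger ARQ for this packet

# Hypothetical example: a code that can correct up to 3 symbol errors.
assert hybrid_arq_receive(detected_errors=2, fec_capacity=3) == "accept"
assert hybrid_arq_receive(detected_errors=5, fec_capacity=3) == "retransmit"
```

Each retransmission adds at least one radio round trip of delay, which is why this scheme suits the Streaming class rather than the Conversational class.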
For real-time traffic whose transfer delay requirements are relatively lax, such as Streaming class applications, ARQ or hybrid ARQ can be used to reduce the bit error rate. We concentrate on the latter category in our research work.

This chapter briefly introduces some background information, then presents the motivation and an overview of our concept. Further details are given in the following chapters.

1.1 Background Knowledge

1.1.1 General Architecture of GPRS Domain

General Packet Radio Service (GPRS) [12] was originally proposed for Global System for Mobile communications (GSM) data service. 3GPP adopts it as the common standard packet domain core network for UMTS.

GPRS uses a packet-mode technique to transfer high-speed and low-speed data and signalling in an efficient manner. It is designed to support transfers ranging from intermittent and bursty data through to occasional transmission of large volumes of data, and it supports QoS among other features.

Figure 1.1 presents the interfaces and network nodes relating to the topics of this thesis.

[Figure 1.1: Overview of the GPRS packet domain architecture.]

As the figure shows, the GPRS backbone can serve both UMTS and GSM access networks. GPRS introduces two new network nodes in the Public Land Mobile Network (PLMN). The Serving GPRS Support Node (SGSN) keeps track of the individual Mobile Station's (MS) location and performs security functions and access control. The SGSN is connected to the GSM base station system through the Gb interface and/or to the UMTS Radio Access Network through the Iu interface. The Gateway GSN (GGSN) provides interworking with external packet-switched networks, and is connected with SGSNs via an IP-based packet domain PLMN backbone network.
The meanings of the other elements and interfaces are explained below.

Network nodes:

•  MS - Mobile Station. In this thesis we use this term, or the word "mobile", to refer to both the Mobile Terminal (MT) and the Terminal Equipment (TE) at the end-user terminal site.

•  UTRAN - UMTS Terrestrial Radio Access Network (UMTS only).

•  BSS - Base Station System (GSM only).

•  PDN - Packet Data Network. We refer here to an external data network (the Internet or others), either with QoS guarantees or just best effort.

•  TE - Terminal Equipment. We refer here to an Internet host or another terminal communicating with the MS.

Interfaces:

•  Iu and Gb - The interfaces connecting the SGSN with the UTRAN in UMTS and with the BSS in GSM, respectively.

•  Gn - The interface between two GSNs within a PLMN.

•  Uu and Um - The wireless interfaces between mobile stations (MS) and the UMTS and GSM fixed network parts, respectively.

From Figure 1.1, we can see that an SGSN can connect to more than one GGSN, even if they are in different PLMNs; conversely, each GGSN can also serve more than one SGSN. Other network elements and interfaces are specified in the technical documents; the figure does not show them because they are unrelated to our topic. The link layer retransmission scheme for overcoming wireless errors resides at the wireless interfaces Um and Uu.

1.1.2 Some Mechanisms for QoS

The mechanisms introduced below motivate our idea; the following chapters discuss them in detail.

Random Early Detection (RED) [13] is one of the most popular Active Queue Management (AQM) methods. It actively drops or marks packets in the aggregate of flows at core routers with a probability proportional to the average queue length.
Originally, this method targeted TCP/IP networks, in which a single marked or dropped packet is sufficient to signal the presence of congestion to the transport-layer protocol. The emphasis on avoiding global synchronization, which results when many connections reduce their windows at the same time, is particularly relevant in a network with TCP [8], where each connection goes through Slow-Start, reducing its window to one in response to a dropped packet. With the growing number of Internet applications relying on unresponsive protocols with little or no end-to-end congestion control, RED has also been proposed for use in core routers to avoid unfairness and congestion collapse in the Internet [14] [15]. The general idea is similar to the TCP case, except for a new function that measures each unresponsive flow's link share so that "misbehaving" flows can be identified and regulated. The regulation may be implemented by differentially scheduling packets from the identified flows, or by dropping their packets at the router.

Reference [16] proposes extending this idea as Early Regulation of Unresponsive Flows (ERUF). The general concept deals with cases where unresponsive flows encounter network congestion at certain nodes along the end-to-end path, so that some packets are destined to be dropped. The author of [16] argues that dropping such packets at an earlier node (before they reach the shared core routers) along the same path frees shared network resources for other flows, under the assumption that the network resources assigned to certain traffic aggregates are fixed. The author proposes a notification scheme to convey the congestion status from the core routers, and an effective regulation method applied at earlier nodes upon such signalling.

The regulation method relies on shaping elements for flow control. Such elements also exist in the DiffServ architecture [17].
There are many other functional elements in DiffServ; we introduce them in the next chapter. The main advantage of the DiffServ model is the minimal state information needed in the backbone: the aggregation of traffic performed in the DiffServ architecture ensures high scalability.

1.2 Motivation

Wireless links in a PLMN are subject to noticeable variations in received signal strength due to local variations in terrain, buildings, and foliage. To ensure high reliability of data communications, it is therefore necessary to resort to an error-control method that eliminates transmission errors caused by channel noise. As mentioned above, ARQ and FEC are the main methods used for this purpose.

Most real-time applications have hard time deadlines: if packets in a real-time flow cannot reach their destinations before their deadlines, they become useless. When wireless links experience a high bit error rate (BER), many packets accumulate in the link layer transmission buffer, or queue, residing at the wireless-wireline interface nodes. Note that, in this thesis, we assume that packets in the transmission buffer form a first-in-first-out queue, except for some higher-priority packets carrying control information; the term "queue length" therefore also refers to buffer occupancy. If the queue grows long enough, the end-to-end latency is dominated by the queuing delay, which leads to many real-time packets being dropped due to deadline expiration. In this case, packet expiration caused by queuing delay becomes a critical factor degrading overall performance.

Such phenomena inspire the application of AQM to the transmission buffer. Intuitively, packets in real-time flows expire with a probability related to the queue length: longer queues generate more queuing delay and reduce the margin to the delivery deadline. Actively dropping packets with probabilities analogous to this "natural" expiry drop rate releases network resources earlier, thereby reducing the queuing delay for subsequent packets. With shorter queuing delays, the expiry probability of the remaining packets is largely reduced. If the active drop rate is kept lower than the "natural" one, we can guarantee better system performance.

1.3 Contributions

With this motivation, we set out to show the effectiveness of our idea both by analysis and
The importance of applying appropriate buffer management schemes to the link layer transmission buffer has not been widely realized. We hope our work can attract more industrial and academic research interest to this area.  1.4 Outline of the Thesis In Chapter 2, we introduce some previous work relating to end-to-end QoS in the G P R S /  8  Chapter 1 Introduction  U M T S wireless network. First, an overview of G P R S / U M T S network architecture is discussed. Second, we show the mechanisms for the provision of end-to-end QoS over G P R S / U M T S core networks. T h i r d , we introduce the techniques used i n G P R S / U M T S wireless access nodes catering to the end-to-end Q o S requirement. Chapter 3 analyzes the behavior o f l i n k layer transmissions i n a G P R S / U M T S specification, and derives the mathematical expression o f the real-time packets expiration rate. This chapter also presents our idea of applying A Q M to the link layer transmission queue, and the simulation results that verify its effectiveness. Chapter 4 extends the idea o f E R U F by applying it to the link layer transmission queue i n wireless access nodes and the G P R S / U M T S core network in combination so that the overall performance can be improved, and demonstrates its effectiveness by simulation. Chapter 5 gives our conclusion and future work.  9  Chapter 2 Overview of Previous Work  Chapter 2 Overview of Previous Work In this chapter, we introduce some previous work relating to the provision of end-to-end QoS in the GPRS/UMTS network.  2.1 End-to-end QoS in GPRS/UMTS Architecture 3GPP defines four QoS classes: Conversational, Streaming, Interactive and Background. Each QoS class is specified by a QoS profile associated with the Packet Data Protocol (PDP) context.  UMTS definition divides the protocol stacks into a Control Plane and a User Plane. 
The User Plane is the layered protocol structure providing the user information transfer and the associated transfer control procedures such as flow control and error recovery. The Control Plane supplies support to the User Plane. We focus on the User Plane protocol stacks illustrated in Figure 2.1. The protocols relating to our research work are shaded in the figure, while others are  IP/ PPP  ~~iiv  PPP PDCP  PDCP  GTP-U GTP-U  GTP-U  GTP-U UDP/  UDP/  RLC  UDP/ II'  UDP/  MAC  MAC  AAL5  AAL5  L2  L2  ATM  ATM  Ll  Ll  Ll  MS Figure 2.1  Ll  1'  IP  IP  UTRAN  SGSN  Protocol stacks of the U M T S user plane.  10  GGSN  Chapter 2 Overview of Previous Work  ignored in this thesis.  User Datagram Protocol/Internet Protocol (UDP/IP) [8] are the backbone network protocols used for routing user data and control signalling. The QoS management functions are also located at this layer. The Radio Link Control (RLC) [18] layer protocol provides logical link control over the radio interface. The link layer retransmission is one of its working modes.  In order to achieve the differentiation of the four classes' end-to-end QoS, both the core network and the radio access network must be involved. Compare Figure 2.2 with Figure 2.1. In  UMTS MT  UTRAN  CN lu EDGE NODE  End-to-End Service  Figure 2.2  GPRS/UMTS QoS architecture.  11  CN Gateway  Chapter 2 Overview of Previous Work  Figure 2.2, the Terminal Equipment (TE) and M o b i l e Terminal ( M T ) form the M o b i l e Station (MS) in Figure 2.1. The C N Iu edge node and C N Gateway correspond to the S G S N and G G S N in Figure 2.1, respectively. In this thesis, we elaborate on the U M T S Bearer Service to improve the end-to-end QoS. The other bearer services under the U M T S Bearer Service provide the necessary lower layer functions.  
2.2 GPRS/UMTS Core Network Mechanisms for QoS

The 3GPP technical specification [7] suggests implementing the IntServ or DiffServ architectures in the GPRS/UMTS core network to realize QoS management. We introduce these IETF mechanisms in this section, with emphasis on DiffServ and its mapping to the GPRS/UMTS domain. The DiffServ architecture includes many functional elements; we emphasize Active Queue Management (AQM) at the droppers, because AQM is the most important buffer management scheme at the core routers, and it is what we shall apply to the data link layer transmission buffer at wireless-wireline interface nodes in this thesis.

2.2.1 IETF QoS Architectures

The existing Internet service (namely, the best-effort service of IP) cannot satisfy the QoS requirements of emerging multimedia applications, primarily because of variable queuing delays and packet loss during network congestion. There has been a significant amount of work in the past decade on extending the Internet architecture and protocols to provide QoS support for multimedia applications. This has led to the development of a number of service models and mechanisms suggested by the IETF.

The IntServ model is proposed as an extension to support real-time applications. The key here is to provide some control over end-to-end packet delays in order to meet real-time QoS. The fundamental assumption of the IntServ model is that resources (e.g., bandwidth and buffer) must be explicitly managed for each real-time flow. This requires routers to reserve resources in order to provide specific QoS for packet streams, or flows, which in turn requires flow-specific state in each router. Such mechanisms usually need to maintain state, manage buffers, and/or perform packet scheduling on a per-flow basis. This complexity may prevent them from being cost-effectively implemented and widely deployed.
The DiffServ architecture is proposed in [5] not only to supply QoS differentiation, but also to achieve scalability.

The DiffServ architecture is based on a simple model where traffic entering a network is classified, and possibly conditioned, at the boundaries of the network [19] [20], and assigned to different behavior aggregates (BAs), with each BA identified by a single DiffServ codepoint (DSCP). Users request a specific performance level on a packet-by-packet basis by marking the DiffServ field of each packet with a specific value. This value specifies the per-hop behavior (PHB) to be allotted to the packet within the provider's network. Within the core of the network, packets are forwarded according to the PHB associated with the DSCP.

Sophisticated classification, metering, marking, policing, and shaping operations only need to be implemented at network boundaries or hosts. Multi-field (MF) classifiers select packets based on the content of some arbitrary number of header fields, typically some combination of source address, destination address, DS field, protocol ID, source port and destination port. Markers are devices that perform marking. Shapers are devices that perform shaping, which refers to the process of delaying packets within a traffic stream to cause it to conform to some defined traffic profile. Metering is the process of measuring the temporal properties (e.g., rate) of a traffic stream selected by a classifier. Policing is the process of discarding packets (by a dropper) within a traffic stream in accordance with the state of a corresponding meter enforcing a traffic profile. Network resources are allocated to traffic streams by service provisioning policies, which govern how traffic is marked and conditioned upon entry to a DiffServ-capable network, and how this traffic is forwarded within that network. A wide variety of services can be implemented on top of these building blocks.
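As a rough illustration of how a meter and a dropper cooperate to police a traffic profile, consider a single-rate token-bucket policer. This is a minimal sketch under illustrative assumptions, not a mechanism taken from the thesis or the specifications: the rate and bucket size are arbitrary, timestamps are passed in explicitly for clarity, and a real conditioner might re-mark out-of-profile packets to a higher drop precedence instead of dropping them.

```python
class TokenBucketPolicer:
    """Single-rate token-bucket meter feeding a dropper: packets within
    the profile are forwarded as in-profile; out-of-profile packets are
    policed (here, reported for dropping)."""

    def __init__(self, rate_bytes_per_s: float, bucket_bytes: int):
        self.rate = rate_bytes_per_s      # token fill rate
        self.capacity = bucket_bytes      # maximum burst size
        self.tokens = float(bucket_bytes)
        self.last = 0.0                   # time of last packet, in seconds

    def police(self, pkt_len: int, now: float) -> str:
        # Refill tokens for the elapsed time, capped at the bucket size.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if pkt_len <= self.tokens:
            self.tokens -= pkt_len
            return "in-profile"           # conforms to the traffic profile
        return "out-of-profile"           # candidate for dropping/re-marking

# Illustrative profile: 1000 bytes/s sustained rate, 1500-byte burst.
p = TokenBucketPolicer(rate_bytes_per_s=1000.0, bucket_bytes=1500)
assert p.police(1500, now=0.0) == "in-profile"       # burst allowance used
assert p.police(1500, now=0.5) == "out-of-profile"   # only 500 tokens refilled
```

The bucket depth controls how large a burst is tolerated before policing kicks in, which is exactly the short-term/long-term distinction the AF discussion below relies on.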
Such traffic control functions at hosts, or at access or boundary routers, are generically called traffic conditioning.

A salient feature of the DiffServ framework is its scalability, which allows it to be deployed in very large networks. This scalability is achieved by forcing much of the complexity out of the core of the network into boundary devices, which process smaller volumes of traffic and fewer flows, and by offering services for aggregated traffic rather than on a per-microflow basis. That is, complex traffic classification functions are only implemented at network boundary nodes; inside the core network, PHBs are applied to aggregates of traffic which have been appropriately marked using the DiffServ field in the IP headers. PHBs are defined to permit a reasonably granular means of allocating buffer and bandwidth resources at each node among competing traffic streams. Per-application-flow or per-user forwarding state need not be maintained within the core of the network.

In this thesis, we assume one of the IETF-suggested PHBs, Assured Forwarding (AF) [21], is implemented in the core network of the GPRS/UMTS domain. The AF PHB group constitutes a means for a provider DiffServ domain to offer different levels of forwarding assurance for IP packets received from a customer DiffServ domain. Four AF classes are defined, where each AF class in each DiffServ node is allocated a certain amount of forwarding resources (buffer space and bandwidth). IP packets that seek to use the services provided by the AF PHB group are assigned by the customer or the provider DiffServ domain into one or more of these AF classes according to the services that the customer has subscribed to. Within each AF class, IP packets are marked (again by the customer or the provider DiffServ domain) with one of three possible drop precedence values.
In cases of congestion, the drop precedence of a packet determines the relative importance of the packet within the AF class. A congested DiffServ node tries to protect packets with a lower drop precedence value from being lost by preferentially discarding packets with a higher drop precedence value.

In a DiffServ node, the level of forwarding assurance for an IP packet thus depends on (1) the amount of forwarding resources allocated to the AF class that the packet belongs to, (2) the current load of the AF class, and (3) in case of congestion within the class, the drop precedence of the packet.

In order for a user to receive DiffServ from an Internet service provider (ISP), it must have a service-level agreement (SLA) with its ISP. An SLA basically specifies the service classes supported and the amount of traffic allowed in each class. Users can mark the DiffServ fields of individual packets to indicate the desired service at hosts, or have them marked by the access or boundary routers. At the ingress of the ISP network, packets are classified, policed, and possibly shaped. The classification, policing, and shaping rules used at the ingress routers are derived from the SLAs. When a packet enters one domain from another, its DS field may be re-marked, as determined by the SLA between the two domains.

2.2.2 Mapping between GPRS/UMTS and DiffServ

Ref. [7] illustrates the QoS management functions in GPRS/UMTS. The traffic conditioning functions reside in the core network gateways and the UTRAN. It is therefore reasonable to treat the GPRS/UMTS network as a special DiffServ domain. In this domain, the GGSN and the UTRANs can be regarded as the DiffServ edge nodes. The GGSN connects the GPRS network to the Internet through wired links, while the UTRANs connect the GPRS network to mobile users through wireless links.
Similarly, the SGSN can be considered the core router. Figure 2.3 presents the mapping between these two architectures.

Figure 2.3  Mapping of the DiffServ domain and the GPRS/UMTS network.

It is worth mentioning that the traffic conditioners at the GGSN must be implemented explicitly, while at the UTRAN they can be either implemented explicitly or realized by wireless link resource control functions. The traffic conditioners in the GGSN include functional elements such as MF classifiers, meters, markers, shapers and droppers. Because the SGSN is treated as the core router, only droppers are implemented there.

The droppers are used for buffer management as well as for differentiated QoS. An AF implementation must detect and respond to long-term congestion within each class by dropping packets, while handling short-term congestion (packet bursts) by queueing packets. This implies the presence of a smoothing or filtering function that monitors the instantaneous congestion level and computes a smoothed congestion level. The dropping algorithm uses this smoothed congestion level to determine when packets should be discarded. The most widely used dropper scheme is the RED algorithm, which we introduce below.

2.2.3 Active Queue Management

Active Queue Management (AQM) was originally proposed to handle TCP-like flows. The TCP transport protocol detects congestion only after a packet has been dropped at the gateway. However, it is clearly undesirable to have large queues (possibly on the order of a delay-bandwidth product) that are full much of the time; this would significantly increase the average delay in the network.
Therefore, with increasingly high-speed networks, it is increasingly important to have mechanisms that keep throughput high but average queue sizes low. The RED gateway is designed for a network where a single dropped packet is sufficient to signal the presence of congestion to the transport-layer protocol.

The RED gateway calculates the average queue size avg (in this thesis, the unit of queue length or buffer occupancy is always packets) using a low-pass filter with an exponentially weighted moving average,

avg = (1 - w_q) * avg + w_q * q    (2.1)

where w_q is the weight, normally 0.002 as [22] proposes, and q is the current queue size.

The average queue size avg is compared to two thresholds, a minimal threshold min_th and a maximal threshold max_th. When the average queue size is less than the minimal threshold, no packets are marked. When the average queue size is greater than the maximal threshold, every arriving packet is marked. If marked packets are in fact dropped, or if all source nodes are cooperative, this ensures that the average queue size does not significantly exceed the maximum threshold. As avg varies from min_th to max_th, the packet-marking probability p_a varies linearly from 0 to max_p:

p_b = max_p * (avg - min_th) / (max_th - min_th)
p_a = p_b / (1 - count * p_b)    (2.2)

where count is the number of packets since the last marked packet.

Ref. [23] proposed Random Early Detection with IN-and-OUT (RIO). It is a simple but effective scheme to supply differentiated services. The general idea is to mark the packets that comply with the SLA as IN packets, and the others as OUT packets, at ingress edge nodes. After that, different sets of min_th, max_th and max_p are applied to the IN and OUT packet queues so as to generate different packet drop/mark precedences.
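Equations (2.1) and (2.2) translate directly into code. The sketch below uses the suggested w_q = 0.002 and max_p = 0.1; the threshold values in the example call are illustrative assumptions of ours.

```python
def red_update_avg(avg, q, w_q=0.002):
    """Exponentially weighted moving average of the queue size, Eq. (2.1)."""
    return (1 - w_q) * avg + w_q * q

def red_mark_prob(avg, count, min_th, max_th, max_p=0.1):
    """Marking probability p_a for min_th <= avg < max_th, Eq. (2.2).

    count is the number of packets since the last marked packet; as it
    grows, p_a increases, spreading marks more evenly over arrivals.
    """
    p_b = max_p * (avg - min_th) / (max_th - min_th)
    return p_b / (1 - count * p_b)

# Example: average queue halfway between illustrative thresholds 20 and 40.
avg = 30.0
print(red_mark_prob(avg, count=0, min_th=20, max_th=40))  # p_a = p_b = 0.05
```

Note that with count = 10 the same avg gives p_a = 0.05 / (1 - 0.5) = 0.1, twice the base probability, which is exactly the uniformizing effect the count term is designed to produce.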
The general algorithm for the RED gateway is presented in Figure 2.4. We use the term "mark" here because TCP flows can carry the congestion information back to the source, so their packets may not need to be dropped immediately.

for each packet arrival:
    calculate the average queue size avg
    if min_th <= avg < max_th:
        calculate probability p_a
        with probability p_a:
            mark the arriving packet
    else if max_th <= avg:
        mark the arriving packet

Figure 2.4  General algorithm for RED gateways.

The need for and advantages of limiting unresponsive best-effort traffic have been demonstrated in [15]. Unresponsive flows are flows that do not use end-to-end congestion control, and in particular those that do not reduce their load on the network when subjected to packet drops/marks. Without knowledge of the congestion ahead, unresponsive traffic sources keep sending packets, even though many data packets are actually undeliverable due to packet expiration and possible overflow. These undeliverable packets waste shared network resources. To solve this problem, the Early Regulation of Unresponsive Flows (ERUF) scheme has been proposed [16]. This concept dictates that when unresponsive flows encounter network congestion at certain nodes over the end-to-end path, and cause some packets to miss their delivery deadlines or to be dropped due to buffer overflow, dropping them at an earlier node frees the shared network resources for other flows. Source quench packets are generated by the RED algorithm and sent from the congestion point to the earlier nodes. The earlier nodes regulate the bandwidth in a multiplicative-decrease/additive-increase manner over a round trip time (RTT) estimate.

To obtain better performance, gentle-RED [24] was proposed for application to high-speed networks.
The difference between RED and gentle-RED is their marking rate variation curves, displayed in Figure 2.5. The p_b values of the two schemes are equal when the average queue length is less than max_th.

Figure 2.5  RED and Gentle-RED marking/dropping rate vs. average queue length.

Once avg crosses max_th, normal RED makes p_b jump to 1.0 immediately, while gentle-RED continues to increase p_b with a slope leading the marking rate curve to 1.0 at the point avg = 2 * max_th. Again, gentle-RED was originally proposed for TCP-like traffic, which is responsive to single packet drops/marks. In this thesis, we make the necessary modifications to handle unresponsive traffic.

2.3 Mechanisms at Wireless Access Nodes for QoS

In order to offer end-to-end QoS guarantees and differentiation to end users, not only the core network, but also the wireless access nodes of GPRS/UMTS must be taken into account. Much previous research has been done on the mechanisms at the access nodes. FEC and link layer ARQ are used to combat the high BER of wireless links. Different scheduling and resource management schemes have been proposed for the sake of provisioning QoS, or utilizing resources (normally bandwidth and buffer) efficiently. In this section, we focus on link layer ARQ and resource management, and present general knowledge of the other mechanisms.

2.3.1 Radio Link Control (RLC) Layer in GPRS/UMTS

Wireless links are error-prone, with relatively high and fluctuating bit-error rates (BERs). Different methods have been developed to mitigate transmission errors so as to deliver data packets to mobile users with a guaranteed low BER. As mentioned above, the two main classes of techniques are FEC coding and link layer retransmission of lost packets using ARQ protocols.
For those real-time applications with low and stringent transfer delay requirements, FEC is normally used as the only error mitigation scheme, because ARQ may consume too much time. For real-time traffic whose transfer delay requirements are relatively lax, for example, most applications in the Streaming class, ARQ or hybrid ARQ can be used. In our research, we assume the FEC code is fixed during the lifetime of a connection.

In GPRS/UMTS networks, link layer retransmissions occur at the RLC layer residing at the wire-wireless interface nodes, that is, the Radio Network Controller (RNC) in UMTS networks. Our investigation focuses on the Radio Link Control (RLC) layer in Acknowledged Mode (AM) (Figure 2.6). It is essentially a sliding-window / multiple-reject ARQ.

Figure 2.6  Acknowledged Mode in the Radio Link Control layer (window size = 8).

In the model used in our research, we assume that the receiver must deliver in-sequence packets to the higher layers.

The RLC protocol performs segmentation and reassembly. After segmentation, the higher layer data segments are encapsulated into the RLC layer's Protocol Data Units (PDUs). The RLC layer has a buffer holding newly arriving PDUs and outstanding PDUs which have been sent but not yet acknowledged (ACKed). A PDU is deleted from this buffer if it is ACKed by a status report from the receiver, or if it has been transmitted more than a certain number of times.
Senders can also actively set a "polling bit" in the PDU header to request a status report from the receiver. This action can be triggered in different ways, for example, by a timer time-out, by a window-based rule, or when the last PDU remains in the buffer, and so on. We carefully model the "Poll_PDU"-based and timer-based trigger functions. The former uses a value set by the system: every time a sender sends out Poll_PDU PDUs (including newly transmitted and retransmitted PDUs), the polling bit is set in the next PDU's header. The latter is simply a timer that periodically triggers the polling.

The receiver reports its status by sending a report using Super-Fields (SUFIs). Several situations can trigger the receiver to send a status report. For example, if the sender sends a polling request, a status report is triggered. Another situation is when the receiver detects one or more PDUs missing. A third way is for the receiver to transmit status reports periodically to the sender. A fourth trigger occurs if not all PDUs requested for retransmission have been received before the Estimated PDU Counter (EPC) has expired. Finally, either peer entity can trigger the RESET function to reset the wireless connection when it detects an extremely abnormal situation on the wireless link, for example, a very long idle time or other phenomena indicating high error rates.

If the transmission queue is full, an arriving Acknowledged Mode Data (AMD) PDU gets dropped. Otherwise, it joins the tail of the transmission queue to await its first transmission. Once it moves to the front, the RLC stores a copy of it in the retransmission buffer prior to passing it to the lower layer for transmission.

The receiver accepts the "good" PDUs and discards PDUs with more errors than the FEC code can correct.
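The two modeled polling triggers on the sender side can be sketched as follows. This is a simplification of the 3GPP-defined behavior, and the class and attribute names are ours.

```python
class PollTrigger:
    """Sketch of the two modeled polling triggers: the polling bit is set
    after every Poll_PDU sent PDUs (new or retransmitted), or when a
    periodic timer fires (simplified model, names are ours)."""

    def __init__(self, poll_pdu, timer_period, now=0.0):
        self.poll_pdu = poll_pdu          # counter threshold (Poll_PDU)
        self.timer_period = timer_period  # seconds between timer polls
        self.sent_since_poll = 0
        self.next_timeout = now + timer_period

    def on_pdu_sent(self, now):
        """Called for every PDU sent; True means set the polling bit next."""
        self.sent_since_poll += 1
        poll = (self.sent_since_poll >= self.poll_pdu) or (now >= self.next_timeout)
        if poll:
            self.sent_since_poll = 0
            self.next_timeout = now + self.timer_period
        return poll

trig = PollTrigger(poll_pdu=4, timer_period=0.1)
flags = [trig.on_pdu_sent(now=0.01 * i) for i in range(1, 9)]
print(flags)  # polling bit requested after every 4th PDU here
```

With a slower sender the timer branch dominates instead: if fewer than Poll_PDU PDUs are sent within timer_period, the time-out alone triggers the poll.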
It generates status reports to request the sender to resend lost PDUs that have not expired. RLC defines a multiple-reject ARQ scheme, in which every status report contains not only NAKs but also ACKs. Status reports can be sent as dedicated control PDUs or piggybacked in data PDUs over the return link. The control PDUs are put into the transmission queue with higher priority. In Figure 2.6, we assume these actions are incorporated in the "PDU handler" unit.

The sender keeps sending PDUs in its transmission queue until it receives a status report or reaches the sending window boundary. Once a status report arrives, the RLC takes two actions according to the ACKs and NAKs received: it deletes ACKed PDUs from the retransmission buffer, and copies NAKed PDUs from the retransmission buffer to the transmission queue with higher priority than the new (first-time transmission) data PDUs. A retransmitted PDU gets dropped when its number of transmissions reaches a pre-configured maximal value, which is called MaxDAT in the 3GPP document [18].

2.3.2 Other Mechanisms at Wireless Access Nodes

The wireless-wireline interface nodes may deploy a number of resource management mechanisms, including traffic scheduling, call admission control (CAC) / bandwidth reservation, and so on. These mechanisms are not defined in the 3GPP documents; instead, they are implementation dependent.

CAC and bandwidth reservation are used to address performance degradation due to mobile handoff. When a mobile changes cells, the new cell may not be able to guarantee enough network resources to maintain the same QoS as the old cell, so the mobile may experience performance degradation during the handoff period [25][26].
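The sender-side handling of a status report described above, deleting ACKed PDUs, requeueing NAKed PDUs with priority, and dropping at MaxDAT, can be sketched as follows. The data structures are our simplification for illustration, not the 3GPP-specified formats.

```python
def handle_status_report(retx_buffer, tx_queue, acks, naks, tx_count, max_dat):
    """Sketch of sender-side status-report handling (simplified model).

    retx_buffer: dict seq -> PDU copy awaiting acknowledgement
    tx_queue:    list of PDUs; retransmissions go to the front (higher priority)
    tx_count:    dict seq -> number of transmissions so far
    Returns the sequence numbers dropped for reaching max_dat.
    """
    dropped = []
    for seq in acks:                      # ACKed PDUs leave the system
        retx_buffer.pop(seq, None)
    for seq in naks:
        if seq not in retx_buffer:
            continue
        if tx_count[seq] >= max_dat:      # give up: MaxDAT reached
            retx_buffer.pop(seq)
            dropped.append(seq)
        else:                             # requeue ahead of first-time PDUs
            tx_queue.insert(0, retx_buffer[seq])
    return dropped

buf = {1: "pdu1", 2: "pdu2", 3: "pdu3"}
queue = ["new_pdu"]
drops = handle_status_report(buf, queue, acks=[1], naks=[2, 3],
                             tx_count={2: 5, 3: 1}, max_dat=5)
print(drops, queue)  # PDU 2 dropped (MaxDAT); PDU 3 requeued ahead of new data
```

Note that a NAKed PDU is copied, not moved: the copy stays in the retransmission buffer until a later ACK arrives or MaxDAT is reached, matching the behavior described above.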
CAC and bandwidth reservation seek to estimate the amount of resources needed by incoming mobiles, and make admission decisions to reduce the performance degradation or handoff dropping rates. Another kind of bandwidth reservation occurs when multiple users try to access the same medium.

Packet scheduling at the wireless-wireline interface nodes is important for QoS provision. Much research has been done in this area, for example, Weighted Fair Queuing (WFQ) [27] variants such as Effort-Limited Fair (ELF) scheduling [28] and Idealized Wireless Fair-Queuing (IWFQ) [29], the extended Earliest-Due-Date (EDD) / Delay-Earliest-Due-Date (Delay-EDD) for wireless [30], and so on. From the viewpoint of implementation, the EDD mechanisms need to check every packet's time stamp, which makes them hard to realize in practice. The WFQ variants guarantee delay bounds through weights tightly coupled to a reserved service rate, so they are relatively easier to implement. However, determining a weight that matches a specific delay requirement is still quite complex, because a number of factors such as propagation delay, queuing delay, packet arrival rate, and so on, have to be accounted for.

In order to simplify our research model, we assume Media Access Control (MAC) protocols work perfectly to eliminate the negative impact of all of the above-mentioned competition for resources. We assume that both the assigned bandwidth (or service data rate) of a real-time flow and the time consumed by the MAC layer are constant during the lifetime of a connection. The impact of the above factors on our proposed mechanisms can be further investigated in the future.

The buffer management issue at wireless-wireline interface nodes has not attracted much research interest.
So far, researchers have assumed that, at the link layer, drop-tail is used as the default queue management scheme. Under the drop-tail assumption, [31][32][33] estimate the effects of throughput and the buffer-overflow dropping process on buffer capacity when link layer ARQ is employed, but they neither consider the delay of packets, nor target the buffer management issue from the viewpoint of active queue management.

Our analysis in the next chapter shows that if a real-time flow with a hard time deadline uses link layer ARQ, then when the transmission queue grows too long due to retransmissions caused by wireless errors, the queuing delay dominates the other delays, causing a great number of expired packet drops.

Our proposal is that AQM be applied not only to traffic aggregates at the GPRS/UMTS core routers, but also to the link layer buffer at the wireless access nodes, namely, the DiffServ egress edge nodes, to control the queuing delay caused by wireless errors.

In the following chapters, we analyze the behavior of the link layer ARQ, and derive the necessary mathematical expression relating the packet expiration rate to the queue length with link layer ARQ. We also show that introducing AQM into the link layer transmission buffer significantly reduces packet dropping due to expiry, and improves the end-to-end QoS.

Chapter 3 Applying AQM to Link Layer Buffers for Real-time Traffic

In Chapter 2, we observed that the possible causes of a real-time flow PDU drop are (1) the number of transmissions exceeding MaxDAT, (2) transmission queue overflow, and (3) AMD PDU expiry.
The first cause is fully determined by the transient wireless bit error rate (BER), which is not a focus of this thesis. The second and third causes are also due to wireless link quality degradation, but each can be eliminated, at the expense of the other, by appropriately setting the transmission queue capacity. If we set the capacity to a small enough value, expirations disappear, but overflow is very likely to occur. If the queue capacity is set to a large enough value, the situation reverses. This gives us a chance to improve the end-to-end performance of real-time flows.

Among the above causes of packet drops, we believe that dropping real-time packets through overflow has more advantages than dropping them by expiration. With overflow, a newly arriving AMD PDU never gets a chance to consume buffer space and wireless bandwidth at the interface node. Dropping a packet this way does not affect the transfer of other packets in the same flow or of other flows sharing the wireless channel. In contrast, such is not the case for expiration drops, as those packets have to be transmitted, possibly several times, before the expiration drop occurs. We propose to use active queue management, such as Random Early Detection (RED), to drop those packets that are likely to expire at a much earlier point in the end-to-end connection, using an overflow-like mechanism. Because the packets are dropped before being enqueued to the buffer, they need not consume any network resources. However, if packets are dropped at a rate higher than the natural dropping rate due to expiry, the system's performance degrades unnecessarily. To avoid this problem, we need an analytic expression of the natural drop rate due to expiry, and must adjust our active drop rate to be lower than, but close to, it.

3.1 Analysis of Real-time Flows over RLC with AM

We assume a real-time flow with effective service data rate R. Each link layer packet with fixed size l takes l/R seconds for one transmission.
In an extreme case, where the wireless link is clean enough to transmit all PDUs successfully on the first attempt, the queue length beyond which a newly arriving packet is guaranteed to expire is approximated by

L_max = floor((T - t1 - t2) * R / l)    (3.1)

where T is the maximum allowable transfer delay of the real-time application, t1 is the latency caused by the Internet, and t2 is the sum of the one-way wireless propagation delay and the processing/scheduling delays at both ends. When a new packet arrives at the transmission queue, if the observed queue length is longer than L_max, then even if the wireless link is clean, this packet is still dropped due to expiration. A queue capacity larger than L_max is clearly not useful.

If the wireless link is so noisy that every PDU has to be transmitted N times before being accepted or dropped, where N is the maximum allowable number of transmissions, every packet spends approximately

t3 = N * l / R    (3.2)

in the transmission queue waiting for its transmissions to complete, during which time the following packets have to wait.
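With illustrative parameter values (ours, not the simulation settings used later), (3.1) and the per-packet head-of-line time of (3.2), taken here as N transmissions of l/R seconds each, can be evaluated as:

```python
import math

# Illustrative parameters (assumptions for this sketch): 250 ms deadline,
# 50 ms Internet latency, 10 ms wireless propagation and processing delay,
# R = 1 Mbit/s, l = 4000-bit PDUs, at most N = 5 transmissions per PDU.
T, t1, t2 = 0.25, 0.05, 0.01        # seconds
R, l, N = 1_000_000, 4000, 5        # bits/s, bits, count

L_max = math.floor((T - t1 - t2) * R / l)   # Eq. (3.1): clean-link upper bound
t3 = N * l / R                              # Eq. (3.2): head-of-line time per packet
print(L_max, t3)  # 47 packets; 0.02 s
```

The clean-link bound of 47 packets shrinks quickly once every PDU needs N transmissions, since each packet then holds the head of the queue N times longer.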
We propose to actively drop some packets at the R L C transmission queue in accordance with the above observation, so that buffer length is limited to avoid packet expirations, and bandwidth can be released earlier to improve utilization.  The relationship between the average packet delay and the average queue length has been analyzed in [33] [34]. Here, we derive the relationship between the expiration probability of realtime packets and queue length.  Assuming the wireless link has a long-term average packet loss rate of P due to transmisB  sion errors, we can approximate the wireless link with an error-free data link, one that has a vari-  29  Chapter 3 Applying AQM to Link Layer Buffers for Real-time Traffic  able data rate ranging between [R/N, R], with an average data rate of p = (1 - Pg)R. Note that this approximation does not change (3.1) - (3.4) above. The transmission time for each packet is thus i n d e p e n d e n t o f e a c h others, w i t h a mean m = Up and v a r i a n c e o  2  upper bounded by  {max(l/p - l/R, I N/R - / / p ) } . 2  The independence here is extremely important for further analysis. We assume the expired real-time flow packets are dropped only at the application layer of mobile terminals, w h i c h is opposite to E D D or D e l a y - E D D as we mentioned in the previous chapter. Because the R L C and the lower layers do not retain any timing information regarding real-time applications, they are unable to determine whether a packet has expired or not. In fact, timing information is normally stored i n upper layer packets that have been encapsulated into the R L C layer P D U s . E v e n the R L C layer manages to retrieve timing information (which would break through the protocol layering rules), it is very unlikely that every R L C layer P D U ' s delay time is recorded for its expiry status to be determined. 
When packets are never dropped at the RLC or the lower layers in the interface nodes due to expiry, the transmission time of any packet never affects the transmission time of other packets, so we can consider their transmission times to be independent of each other.

If a newly arrived packet observes n packets in the queue, L_min < n < L_max, its queuing delay becomes a new random variable V, which is the sum of the previous n independent random variables. When L_min is large enough, this sum of n packet transmission times approaches a Gaussian distribution according to the central-limit theorem, with mean M = m·n and variance σ'² = n·σ².

Now, with the Gaussian distributed random variable V, we can analyze the relationship between the expiration drop rate p and the observed queue length n. When a packet arrives at time τ1, its deadline is computed as τ2 = τ1 + (T - t1). Since V is a Gaussian variable, the probability of expiration p is

p = Pr{τ1 + V > τ2} = Q((τ2 - τ1 - M) / σ')    (3.5)

where Q(·) is the well-known Q function giving the tail area of a Gaussian distribution, as shown by the shaded area in Figure 3.1.

Figure 3.1  Gaussian distributions with means M_A and M_B and variances σ'_A and σ'_B, where M_A < M_B and σ'_A < σ'_B.

In Figure 3.1, we draw the Gaussian probability density functions for n = A and n = B, where A < B. When the value of n is A, both the corresponding mean and standard deviation (M and σ') are small, so the shaded area is also small. When n = B, its M and σ' increase, and thus the shaded area increases as well. This result matches the intuitive expectation stated earlier.

Formula (3.5) is extremely important for the scheme we propose in this thesis.
It is the natural drop rate caused by expiration. Our active drop rate must not be higher than this value over a wide range of queue lengths, but must come as close to it as possible.

From (3.5) we can obtain the Gaussian curve I in Figure 3.2, which is the relationship between the expiration drop probability p and the observed queue length n.

Figure 3.2  Gaussian expiration drop rate and AQM approximations.

To obtain this result, we fix the long-term packet error rate at P_B = 15%, and the queue capacity at L_max. To smooth the changes in p, the observed queue length n here is actually the calculated average queue size using a low-pass filter with an exponentially weighted moving average weight w_q [13].

Note that the above result is an approximation, as the possible loss of some dedicated control PDUs carrying status reports has been ignored. These losses introduce some extra queuing delay that causes the actual Gaussian curve I to be slightly higher than shown in Figure 3.2. Because the dedicated control PDUs are much smaller than the data PDUs, for simplicity we ignore this additional delay.

Actively dropping packets with the exact probability obtained above is difficult to realize, due to computational complexity. Instead, we propose to use a simpler scheme, similar to gentle-RED, to approximate the expiration rate. In Figure 3.2 we present two piece-wise linear AQM functions, II and III, to approximate the Gaussian curve. We set these two approximation curves to be lower than the Gaussian one over most of the range of the observed queue length, so as to guarantee that the number of packets actively dropped is less than the number of natural expiration drops.

Clearly, the better the approximation is, the better the performance should be. Therefore, function III yields the best result, as shown in the next section.
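For reference, the exact probability in (3.5) can be evaluated numerically with the standard-library complementary error function, since Q(x) = 0.5·erfc(x/√2). The parameter values below are illustrative assumptions of ours, not the simulation settings.

```python
import math

def q_function(x):
    """Tail probability Q(x) of the standard Gaussian: 0.5 * erfc(x / sqrt(2))."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def expiry_prob(n, m, var, T, t1):
    """Eq. (3.5): expiration probability for a packet observing n queued
    packets, given per-packet service-time mean m and variance var."""
    M = n * m                     # mean of the queuing delay V
    sigma = math.sqrt(n * var)    # standard deviation of V
    return q_function((T - t1 - M) / sigma)

# Illustrative values (assumptions): 200 ms budget after the Internet
# latency, 2 ms mean service time, 4e-6 s^2 service-time variance.
for n in (60, 90, 99):
    print(n, expiry_prob(n, m=0.002, var=4e-6, T=0.25, t1=0.05))
```

Running this shows the probability rising steeply with n, which is precisely the Gaussian curve I behavior: negligible near L_min, climbing rapidly as the mean queuing delay n·m approaches the remaining delay budget T - t1.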
However, in reality, it is hard to measure the Gaussian curve because of varying factors. We consider the two-slope, piece-wise linear function II, defined in (3.6), to be the easiest to implement, as it can be uniquely fixed by L_max and L_min, while the other parameters are chosen somewhat arbitrarily. We can see that this simple scheme works well enough in controlling the queue length and improving system performance.

p = 0                                                                               if n <= L_min
p = max_p · (n - L_min) / ((L_min + L_max)/2 - L_min)                               if L_min < n <= (L_min + L_max)/2    (3.6)
p = max_p + (1 - max_p) · (n - (L_min + L_max)/2) / (L_max - (L_min + L_max)/2)     if n > (L_min + L_max)/2

3.2 Simulation Results

The simplified system simulation model shown in Figure 3.3 has been built and exercised in the OPNET 8.1 environment.

Figure 3.3  Simulation model for AQM in the link layer buffer.

The RLC layer protocols described above are carefully modeled. We simulate the behavior of link layer ARQ with polling functions and the other conditions that can trigger the generation of a status report, introduced in Section 2.3.1. For the status report, we modeled the LIST Super-Fields (SUFI) [18] to carry the NAK and ACK information.

The wireless channel model in OPNET 8.1 takes into account multiple access interference, background noise and multipath fading; thus, it forms a more comprehensive error model than the Gilbert-Elliott model [35][36]. To focus the performance evaluations on the management of the downlink transmission queue, we make the simplifying assumption that the wireless uplink and the wireline Internet links are error-free.
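The two-slope function II can be sketched in code as follows: the drop probability rises from 0 at L_min to max_p at the midpoint (L_min + L_max)/2, and then to 1.0 at L_max. The example call assumes L_min = 40, L_max = 210 and max_p = 0.1, matching the simulation parameters.

```python
def two_slope_drop_prob(n, L_min, L_max, max_p=0.1):
    """Two-slope piece-wise linear AQM function II (sketch): 0 below L_min,
    max_p at the midpoint (L_min + L_max)/2, and 1.0 at L_max."""
    mid = (L_min + L_max) / 2
    if n <= L_min:
        return 0.0
    if n <= mid:                                   # first, shallow slope
        return max_p * (n - L_min) / (mid - L_min)
    if n < L_max:                                  # second, steep slope
        return max_p + (1 - max_p) * (n - mid) / (L_max - mid)
    return 1.0

for n in (40, 125, 210):
    print(n, two_slope_drop_prob(n, 40, 210))  # 0.0, 0.1, 1.0
```

As in RED, n here would be the smoothed average queue length rather than the instantaneous one, so short bursts do not trigger active drops.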
We assume that the arrivals of real-time packets follow a Poisson distribution, that packets have a constant length, and that the unit of queue length in the figures given below is the number of packets. Thus, the traffic load varies in inverse proportion to the average inter-arrival time of real-time packets.

3.2.1 Two-slope Piece-wise Linear AQM Function II

In this section, we take the two-slope piece-wise linear approximation AQM function II as an illustrative example of the effects of the scheme. The parameters for the wireless link and the real-time application are given in Table 3.1.

Table 3.1  Parameters for simulation of AQM in the RLC layer.

    Parameter                                                   Value
    Effective wireless data rate R                              1024 kbps
    Max. number of transmissions MaxDAT                         5
    L_min                                                       40 packets (AMD PDUs)
    L_max                                                       210 packets (AMD PDUs)
    Maximum allowable end-to-end delay T                        250 ms
    Propagation delay in Internet & GPRS/UMTS core network      50 ms
    Bandwidth of Internet link                                  2048 kbps
    Average long-term packet error rate P_B                     15%

For the RED parameters such as w_q and max_p, we use the values suggested in [23]: w_q = 0.002 and max_p = 0.1.

Figures 3.4 - 3.7 correspond to the two-slope, piece-wise linear AQM function II in (3.6) with 90% offered load. In this thesis, the term "offered load" refers to the ratio of the traffic generating rate of a specific traffic source to the effective service rate of the corresponding wireless link. From Figure 3.4 we can see the relationship between the wireless BER, the throughput, and the average transmission queue length (in packets) as functions of time, without AQM.

Figure 3.4  BER, transmission queue length, and throughput without AQM.
When the queue length becomes large under a high wireless bit error rate, the corresponding throughput drops sharply. We must note that wireless error does not directly cause packet loss, because of the link layer ARQ recovery mechanism. However, the high wireless bit error rate causes frequent retransmission at the link layer and large queuing delays, leading to expiration drops. Figure 3.5 shows this clearly.

Figure 3.5  Different packet drops without AQM.

We can see that there are three types of packet drops when no AQM is applied: expiration, transmission time exceeding the maximum value (MaxDAT), and overflow. Among them, the expiration packet drops account for the largest share. When the queue length or wireless BER increases, the number of all three types of packet drops increases as well, but expiration drops are the most harmful to system performance, due to their large share of total packet drops. Figure 3.5 shows how serious the problem of expiration can become when the system load is high.

Applying the AQM scheme in (3.6) to constrain the queue length changes the situation dramatically. Figure 3.6 shows the relationship between the BER, queue length (in packets), and throughput over time, with AQM function II.

Figure 3.6  BER, transmission queue length, and throughput with AQM.
We can see that the average queue length is effectively controlled near the middle point (L_min + L_max)/2, and that the throughput curve does not fluctuate much over time, which is a better situation than in Figure 3.4. The simulations also show that the maximum queue length increases with the BER and the traffic arrival rate, as presented later.

Figure 3.7 shows the reasons behind the improvements.

Figure 3.7  Different packet drops with AQM.

One of the main differences between Figure 3.5 and Figure 3.7 is that the types of packet drops have changed. With AQM, all overflow and most expiration drops are eliminated, replaced by a new kind of packet drop: active random drops. The figure shows that the number of active random drops is much smaller than the number of expiration drops in Figure 3.5, but still dominates the other types of packet drops. This demonstrates the effectiveness of AQM in improving system throughput without degrading the overall packet drop rate. Note that the latter condition is an important criterion in maintaining the quality of service for real-time applications, as mentioned above.

3.2.2 Comparison of Different AQM Functions with Varying Traffic Load

In this section, we present the effects of the two AQM functions II and III with different traffic loads. We fix all other parameters as in Table 3.1, and vary the traffic load from 50% to 100%.

The AQM function III has been presented in Figure 3.2.
We set the two turning points, corresponding to drop rates of 0.1 and 0.9 in the AQM function III, as 100 and 180, with the parameters listed in Table 3.1. Figure 3.8 and Figure 3.9 show the throughput and end-to-end delay of these dropping functions.

Under heavy traffic (traffic load exceeding 70%), the AQM schemes yield great improvements to the overall throughput and end-to-end delay. As expected, the AQM function III gives better performance than function II, thanks to its better approximation of the Gaussian curve.

Figure 3.8  Throughput vs. traffic load with three schemes.

Figure 3.9  End-to-end delay vs. traffic load with three schemes.

Figure 3.10 shows the moving-averaged queue length of the above three dropping schemes with varying traffic loads. Here, the average queue length increases with the traffic load when the BER is constant.

Figure 3.10  Average transmission queue length with three schemes.

Figure 3.11, Figure 3.12 and Figure 3.13 show the different types of packet loss with varying traffic loads, without AQM, and with the different AQM functions (II and III). Owing to the rule that the active packet drop rate is less than the Gaussian drop rate, the total number of drops with AQM functions II and III is much less than that without AQM.

It is apparent that when traffic is light, the queue length is not long enough to trigger the AQM (see Figure 3.10), and the expiration packet drop rate is small enough not to damage system performance. Therefore the performance in both scenarios, with and without AQM, is similar. With an increasing traffic load, both the expiration drop rate and the queue length grow.
When the traffic load exceeds 70%, the advantages of our proposed AQM schemes become quite obvious. Under such conditions, without AQM, the wireless link is saturated by an ever increasing number of retransmissions, resulting in a reduction of throughput as the traffic load increases, as shown in Figure 3.8. As the transmission queue length approaches L_max, the expiration drop rate becomes the dominant (Figure 3.11) and most harmful factor to system performance, in both throughput and end-to-end delay. With AQM, we can control the queue length and limit it to a lower value, thus eliminating expiration packet drops. Since the AQM releases resources that would otherwise be used ineffectively by expired packets, it is also able to mitigate throughput saturation, as shown in Figure 3.8, where the throughput curve for the AQM continues to increase up to a traffic load of 100%.

Figure 3.11  Different packet drops vs. traffic load without AQM.

From these figures we can also see that the better the AQM functions approximate the Gaussian curve, the better they can constrain the queue length, and the better the performance becomes. When we compare Figure 3.12 to Figure 3.13, we see that the AQM function III does a better job of eliminating expiration packet drops, and it outperforms the AQM function II. In Figure 3.12 some expiration packet drops still remain, while in Figure 3.13, none do. This complete conversion from expiration drops to active random drops is critical to improving the end-to-end Quality-of-Service in this scenario.

Figure 3.12  Different packet drops vs. traffic load with AQM function II.
As we can see from Figure 3.2, the AQM function II can be uniquely fixed by L_min and L_max, while the AQM function III lacks a middle point from which to determine the turning points, and requires more accurate information regarding the factors affecting the Gaussian curve for its approximation. If we have all the necessary information to derive the Gaussian curve, we can easily determine an AQM function with a good approximation. In fact, any AQM function can be chosen to approximate the Gaussian curve, if the computing capacity is powerful enough.

Figure 3.13  Different packet drops vs. traffic load with AQM function III.

One of the parameters most difficult to determine, but deeply affecting the Gaussian drop rate, is the long-run wireless packet error rate P_B. In the next section we show results using AQM functions to approximate the different Gaussian curves caused by different BERs.

3.2.3 AQM Functions with Different Error Rates

Different wireless packet error rates over the long term (during transmission periods) lead to different Gaussian drop rates, as illustrated in Figure 3.14 below.

Figure 3.14  Packet drop rate with different long-run packet error rates.

The Gaussian curves vary with the long-term packet error rate in a trend whereby a larger P_B moves the Gaussian curve closer to L_min. Understandably, a larger P_B leads to more retransmissions and a longer queue, thus causing a higher probability of real-time packet expiration.
The wisest way to choose an AQM function is to sample the error rate for a short period at the beginning of the transmission, use the sampled values to determine an initial Gaussian curve, and then derive an appropriate AQM function to approximate it. Because such short periods may not generate reliable enough sample values, the Gaussian curve and the AQM function may need to be further adjusted during the following transmission period. The basic rule, however, that the AQM drop rate be no higher than the Gaussian drop rate over most queue length ranges, must be adhered to.

Every AQM function may perform well within a certain queue length range, even if it does not accurately approximate the Gaussian curve. Below we show the effects of the AQM functions II and III of Figure 3.2 with different error rates.

Again, we use the parameters listed in Table 3.1. The packet error rate P_B ranges from 5% to 35%. The traffic load is constant at 80%. The system performance of the AQM functions and of the non-AQM (Gaussian) cases with a varying long-term packet error rate P_B is presented in Figure 3.15 and Figure 3.16.

Figure 3.15  Throughput vs. long-term packet error rate P_B.

When the long-term packet error rate P_B varies within a certain range (around 15%), both AQM functions achieve very good performance. If the error rate grows to higher values, as in Figure 3.14, the Gaussian curve moves closer to L_min. Here, the effect of the AQM function II gradually weakens because the gap between it and the Gaussian curve widens, while the AQM function III still performs well because it remains closer to the Gaussian curves. When P_B is small, the average queue length rarely grows to a large enough value, thus the AQM rarely takes effect, and the results with and without AQM are similar.
This shows the importance of accurately measuring the Gaussian curve and carefully choosing a good AQM function. Also, in order to get the best AQM function, the long-term packet error rate must be measured before the AQM function is chosen, and further adjustments may be needed during the following transmission period.

Figure 3.16  End-to-end delay vs. long-term packet error rate P_B.

In this simulation scenario, the RESET function of the RLC layer in GPRS/UMTS is not considered. If P_B grows to a high value, the wireless link is very likely to be reset by this function.

3.3 Summary

In this chapter, we analyze the behavior of the 3GPP GPRS/UMTS RLC layer's transmission queue in Acknowledged Mode, and derive a novel expression for the relationship between the average queue length and the drop rate due to expiry of real-time flows with hard time deadlines. We prove that this drop rate can be approximated as a Gaussian curve of the average queue length. With this approximation, we propose to actively drop those packets that are likely to expire, probabilistically, following piece-wise linear functions, before they are enqueued. This action guarantees no QoS degradation by keeping the active drop rate lower than the Gaussian drop rate in every case, and on this basis improves overall performance when the traffic load or the wireless error rate is high.

Through simulations, we prove the effectiveness of the mechanism. We also prove that the better the active drop rate curve approximates the Gaussian curve, the greater the improvement.
Chapter 4 Applying WERUF to RLC Queue

We not only apply the idea of dropping real-time packets earlier at the wireless-wireline interface nodes, but also extend it to the whole GPRS/UMTS domain in this chapter.

The reasons for, and advantages of, limiting unresponsive best-effort traffic have been proven in [15][16]. The idea behind ERUF is that when unresponsive flows encounter network congestion at certain nodes over the end-to-end path, causing some packets to miss their delivery deadlines or overflow, dropping them at an earlier node frees shared network resources for other flows. We apply a similar idea in GPRS/UMTS wireless networks. In these networks, although the shaping functions at the edge nodes (both wireless and fixed sites) limit the volume of unresponsive flows, the problem of these flows unnecessarily using up valuable network resources is exacerbated by the costliness of wireless bandwidth. Furthermore, propagation-induced variability of transmission quality over wireless links causes congestion at the wire-wireless interface nodes, that is, the RNCs in UMTS networks. We adapt the ERUF scheme to address these problems. The new scheme is called Wireless Early Regulation of Unresponsive Flows (WERUF). Ref. [14] also proposes an algorithm to identify flows that are not TCP-friendly for regulation. In our case, however, there is no need for identification, because the link layer transmission queue length is the best indicator of the flows to be regulated.

4.1 Early Regulation in GPRS/UMTS Network

Figure 4.1 demonstrates the general idea of WERUF. Leaky Bucket 1 (LB1) and Leaky Bucket 2 (LB2) can be realized by the DiffServ traffic conditioners and the wireless channel resource management functions, respectively.
When packets accumulate at the RNC, due to RLC retransmissions caused by wireless errors, signals are sent back to the DiffServ edge node, namely the GGSN in the GPRS domain, to notify it to shrink the token rate of the corresponding traffic shaper (the token bucket LB2). With this operation, the shared network resources in the forwarding nodes (or core routers), such as the SGSNs, are released for other flows. Because the packets accumulating at the RNC may expire, as discussed in the last chapter, we generate the congestion signal with a lower probability and drop these packets at the GGSN. In this way, we actually convert the expiration packet drops into overflow-like packet drops at the DiffServ ingress edge node, and at the same time guarantee that no performance degradation will occur for the constrained flows. It is worth noting that the packets in the RNC are AMD PDUs, while the packets in the GGSN are IP packets; they are handled by different layers.

Figure 4.1  The general concept behind Wireless Early Regulation of Unresponsive Flows (WERUF).

Again, we need to analyze the behavior of the RLC layer retransmission queue length as in the previous chapter. Equations (3.1) - (3.5) still apply, except that the meanings of some variables change. In this chapter we are concerned with finding an algorithm that caters to the QoS requirements of the UMTS bearer service, and making improvements.

Focusing on the GPRS/UMTS domain, we assume the latency caused by the Internet, external to the GPRS/UMTS domain, is a fixed value. The variable T in (3.1) and (3.4) becomes the transfer delay over the UMTS bearer service. According to [7], T <= 250 ms. In the simulation we use 250 ms for simplicity.
As a result, L_min and L_max in these equations become the minimal and maximal transmission queue lengths that can satisfy the transfer delay requirement of the UMTS bearer service, instead of the end-to-end delay requirement of real-time applications. We also assume that no PDU is dropped in the RLC layer due to expiry, so that the independence of each PDU's transmission time is still maintained. Another change is that t1 in these equations becomes the latency caused by the GPRS/UMTS core network, excluding the Internet. The equations still hold true despite the changes listed.

Further, we use OPNET 8.1 to model the DiffServ functional elements in the GPRS/UMTS routers, as shown in Figure 2.3. In the GGSN, we build the Multi-field classifier, the traffic shaper, the multiplexer and the dropper. In the SGSN we build only a dropper. These droppers are RIO droppers handling IN and OUT packets. They strictly follow the IETF document [21] to implement the Assured Forwarding (AF) architecture accurately.

One of the bases of this scheme is that every flow must be independent, except at the core routers. Therefore we assume the Complete Partitioning (CP) [37] policy is used at the GGSN for each flow shaped by a leaky bucket. At the RNC, if no explicit traffic conditioner is implemented and the shaping relies on wireless resource management, some assumptions must be clarified. If the flows are multiplexed in the same transmission buffer, our scheme can apply either when each flow goes through a different RNC served by the same core network router, or when the CP policy is also used at the RNC for every flow. In order to separate the flows, a Multi-field classifier resides at the RNC (Figure 2.3). Whether there are one or more RNCs in the simulation, the network topology figures below show only one RNC for simplicity.
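The RIO dropping mentioned above applies two RED curves: a mild one driven by the average queue of IN packets, and a more aggressive one driven by the total average queue. A sketch, with purely illustrative thresholds (not values from this thesis):

```python
def red_prob(avg, min_th, max_th, max_p):
    """Classic RED: linear drop probability between min_th and max_th."""
    if avg < min_th:
        return 0.0
    if avg >= max_th:
        return 1.0
    return max_p * (avg - min_th) / (max_th - min_th)

def rio_prob(avg_in, avg_total, packet_is_in):
    """RIO dropper: IN packets see mild thresholds on the average queue
    of IN packets only; OUT packets see aggressive thresholds on the
    total average queue, so OUT traffic is shed first under congestion.
    Threshold values below are illustrative."""
    if packet_is_in:
        return red_prob(avg_in, 40, 70, 0.02)
    return red_prob(avg_total, 10, 40, 0.10)
```

Because the OUT curve starts dropping at a much shorter queue, out-of-profile packets are discarded well before in-profile packets are touched, which is the essence of the Assured Forwarding drop precedences.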
Source quenches employ Internet Control Message Protocol (ICMP) [38] packets, sent to the GGSN to adjust the shaper's token rate for the flow (here we consider per-flow shaping only). These ICMP packets are generated according to the AQM functions described in the last chapter. In the GGSN shapers, the token rate adjustment is made by multiplicative decrease / additive increase. When an ICMP packet is received, and the token rate has not already been cut within the current estimated end-to-end round trip time (RTT), the new token rate is calculated by

new rate = max{guaranteed token rate, current token rate / 2}    (4.1)

where the guaranteed data rate is pre-configured by the system [7]. This cut-by-RTT scheme protects the regulator from being rapidly driven down by a sequence of source quenches before the source has had a chance to respond to the congestion.

The end-to-end RTT is difficult to measure for real-time traffic, due to the lack of return-link acknowledgments. [14] proposes using twice the propagation delay on the next link towards the congested router to approximate the RTT, but this proves unfeasible here due to the extra delay caused by link layer retransmissions. Over "lossy" wireless links, the queuing delay dominates the propagation delay, and when this type of estimate is made at the DiffServ ingress edge node, the GGSN, there is no easy way to predict the queuing delay at the egress edge node, such as the RNC. To resolve this problem, we consider the 3GPP-defined maximum transfer delay of the UMTS bearer service, T, to be a reasonable estimate for real-time applications, for several reasons. First, in most cases, the latency introduced by the external Internet is much less than the latency in the GPRS/UMTS domain with noisy wireless links, so we can assume the delay in the GPRS/UMTS domain represents the total trip time.
Second, if the one-trip delay in the GPRS/UMTS network for all real-time applications is uniformly distributed, the mean value of this delay is likely to be located in the middle of [0, 250 ms]; therefore, using 250 ms to approximate the round trip delay is quite reasonable.

The token rate recovery mechanism proposed in [16] increases the bandwidth of the rate-reduced shaper associated with a flow by one average packet size for every RTT estimate that passes without the arrival of a source quench for the flow, until the token rate recovers to its original value. This scheme recovers the token rate too slowly, however, and does not work well in wireless networks. Congestion caused by wireless errors normally has a much smaller time scale than congestion periods in wired networks, so the edge-node token rate should preferably not stay at a low level for an extended period of time; otherwise, if the wireless link has recovered but the token rate has not, the throughput degrades unnecessarily. Our recovery rule is that after a token rate reduction, the token rate increases by one average packet size every time a new downlink packet arrives at the token bucket, until the token rate has recovered to its initial value. Our simulation results prove the effectiveness of this mechanism.

As mentioned before, we use a Complete Partitioning (CP) policy to manage the buffer of the GGSN for each token bucket. The buffer size is set to a small value so that new packets start dropping earlier once the token rate is reduced.

4.2 Simulation Results

We use the same OPNET 8.1 RLC models as in the last chapter, together with the new models of the GSNs. Some parameters common to all of the simulations below are listed in Table 4.1. All of the following simulation scenarios include some background traffic in the core network.
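The GGSN-side regulation exercised in these simulations (the cut-by-RTT halving of (4.1), floored at the guaranteed rate, plus the per-packet additive recovery of Section 4.1) can be sketched as follows, with hypothetical class and field names:

```python
class WerufShaper:
    """Sketch of a per-flow GGSN token-rate regulator (Section 4.1).

    A source quench halves the token rate, but never below the
    guaranteed rate and at most once per estimated RTT (cut-by-RTT);
    every downlink packet arrival then adds back one average packet
    size until the initial rate is restored.
    """

    def __init__(self, initial_rate, guaranteed_rate,
                 rtt_estimate=0.250, avg_pkt_bits=8000):
        self.initial_rate = initial_rate   # bps
        self.rate = initial_rate
        self.guaranteed = guaranteed_rate  # bps, pre-configured [7]
        self.rtt = rtt_estimate            # 250 ms UMTS transfer delay bound
        self.avg_pkt_bits = avg_pkt_bits   # illustrative average packet size
        self.last_cut = float('-inf')

    def on_source_quench(self, now):
        # Cut-by-RTT: ignore quenches arriving within one RTT of the last cut.
        if now - self.last_cut >= self.rtt:
            self.rate = max(self.guaranteed, self.rate / 2.0)  # (4.1)
            self.last_cut = now

    def on_packet_arrival(self):
        # Additive recovery: one average packet size per downlink arrival.
        self.rate = min(self.initial_rate, self.rate + self.avg_pkt_bits)
```

For instance, a 1 Mbps shaper with a 256 Kbps guarantee is cut to 500 Kbps by the first quench, floored at 256 Kbps by a later one, and then climbs back toward 1 Mbps as downlink packets arrive.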
ICMP packets are generated by the RNC with probabilities determined by the AQM function II.

Table 4.1 Common Parameters for Simulation of WERUF.

Parameters                                                    Value
Max. number of transmissions MaxDAT                           3
Packet error rate in noisy link                               15%
Packet error rate in clean link                               0.2%
Lmin                                                          30 packets (AMD PDUs)
Lmax                                                          180 packets (AMD PDUs)
Propagation delay in the GPRS/UMTS core network               1.0 ms
Wireless effective service data rate for each real-time flow  1 Mbps
Wireless effective service data rate for each TCP flow        384 Kbps

The token rate adjustment and RTT estimation methods at the GGSN, upon receiving an ICMP packet, are exactly the same as described in Section 4.1. The term "offered load" in the scenarios given below refers to the offered load of every real-time flow; for example, a 90% offered load means every real-time flow has a 90% offered traffic load, no matter what its wireless link condition. We also let the wireless links be independent of each other; the wireless errors are caused by a separate interference source.

4.2.1 Real-time Flow and TCP Flows

Real-time flows and TCP flows normally have different timing requirements. In the following two scenarios, we apply WERUF to real-time flows only, because TCP flows do not have any expired packet drops.

4.2.1.1 One Real-time Flow in a Noisy Link and Multiple TCP Flows in Clean Links with Equal RTT

In this scenario, M1, M2, M3 and M4 are downloading traffic from H1, H2, H3 and H4, respectively (Figure 4.2). The M1-H1 pair is transferring real-time traffic, experiencing high
The other pairs are running long-lived F T P applications, and have maximal token rates of up to 384 K bps with clean wireless links, but do not have guaranteed rates. The total core network bandwidth is limited to 1 M bps. The external Internet introduces a 30 ms delay to a l l hosts. We set the real-time packets with a higher priority over T C P packets in the core network to cater to their different delay requirements. In this network topology, the S G S N acts as the core router i n the DiffServ domain.  Figures 4.3 and 4.4 show that after applying W E R U F , the throughput of the real-time flow in the noisy wireless link improves significantly, when its offered load increases o f more than 60%. Further, the end-to-end transfer delay reduces to less than 6 5 % . Figure 4.5 shows the improvement to T C P flows. W h e n the real-time f l o w ' s offered load is higher than 60%, the improvement for T C P flows also increases significantly, due to the network resources in the core network being released by the real-time flow.  56  Chapter 4 Applying WERUF to RLC Queue  ,4-  5 5 0 0 0  - © -+-•  -.  Without W E R U F With W E R U F  J O CO CD  5 0 0 0 0  h  -e~  Z3 &r  4 5 0 0 0 h  Z3  O  4 0 0 0 0 1 -  o cu 3 5 0 0 0 CO  cu  on  3 0 0 0 0 0.5  0.6  0.7  0.8  0.9  Offered load Figure 4.3  Real-time flow throughput vs. offered load.  1 1 0  With - © -  4 0 0.5  W E R U F  Without  0.6  W E R U F  0.7  0.8  0 . 9  Offered load Figure 4.4  End-to-end delay of the real-time flow vs. offered load.  We can see that improvements only occur when the real-time traffic flow is heavy. In this case, the noisy wireless link is close to saturation and many real-time P D U s accumulate in the  57  Chapter 4 Applying WERUF to RLC Queue  3 0 0 0 0  0" 0.5  1 0 . 6  • 0 . 7  1 0 . 8  X 0 . 9  O f f e r e d l o a d of t h e real-time flow  Figure 4.5  Total TCP throughput vs. 
real-time traffic offered load  R N C , the average transmission queue length is long enough to trigger the W E R U F . O n the other hand, there is much competition for the shared bandwidth/buffer i n the core network, thus the saved resources due to W E R U F can make a positive impact on overall performance.  Next, we investigate what causes to the improvements. In our simulation, the real-time packet drops due to transmission time exceeding M a x D A T are almost equal in both cases, so they are not presented in the figures. Figure 4.6 shows that without W E R U F , most packets are dropped due to expiration, but with W E R U F applied, packets drop due to overflow i n the G G S N only, while none drop due to expiry (Figure 4.7). More importantly, the total packet drops in Figure 4.7 is less than the total packet drops in Figure 4.6. This results from our strategy of making the active packet drop rate (actually the I C M P packets', generating rates following the A Q M function II here) less than the natural packet drop rate. The fewer expired drops results i n more efficient resource utilization, discussed in Chapter 3.  58  Chapter 4 Applying WERUF to RLC Queue  35000  | T o t a l packets d r o p s D Expired packet d r o p s J O v e r f l o w packet d r o p s at RNC  30000 CO §"25000 ~CD - g 20000 CO Q_  M—  °15000 CD 10000  50001-  0.5  0.6  0.7  Offered load Figure 4.6  0.8  0.9  Real-timeflowpacket drops vs. offered load without WERUF.  25000 • • Total packet drops [_ I O v e r f l o w p a c k e t d r o p s at Expired packet drops  G G S N  20000 CO Q. O "O "CD  15000  O CO Q_  o  »CD  10000  5000 h  0.5  0.6  11 0.7  0.8  0.9  Offered load Figure 4.7  Real-time flow packet drops vs. offered load with WERUF.  This effect is similar to the packet drops described in the last chapter, except we also  59  Chapter 4 Applying WERUF to RLC Queue  improve the performance of l o w priority T C P flows by early regulation. 
The undeliverable packets are dropped at the GGSN because of the overflow caused by the smaller token rate and the limited buffer space. This mechanism has a drop-tail effect; thus, it is likely to drop packets continuously, contrary to the AQM schemes discussed in Chapter 3. Different real-time applications may favour different dropping disciplines; this topic would be interesting for future research.

4.2.1.2 One Real-time Flow in a Noisy Link and Multiple TCP Flows in Clean Links with Different RTT

In this scenario, the Internet latency of the real-time flow (the M1-H1 pair) is 30 ms, while the latencies from H2, H3, ..., H6 to the GGSN are 5 ms, 20 ms, 80 ms, 160 ms and 250 ms, respectively (Figure 4.8). We set the real-time traffic load to be constant at 80%.

Figure 4.8  Network topology of one real-time and five TCP flows with different Internet latencies.

Again, the real-time flow has a higher priority in the core network, but suffers from a noisy wireless link, while the TCP flows, with lower priority in the core network, are sent over clean wireless links. Their token rates in the GGSN and wireless data rates in the RNC are the same as in Section 4.2.1.1. The total core network bandwidth is 2 Mbps.

The simulations yield throughputs of 43357 bytes/s and 51719 bytes/s for the real-time flow without WERUF and with WERUF, respectively - an improvement of better than 19% for WERUF. The total packet drop rate with WERUF is also less than the packet expiry rate without WERUF. The TCP throughput with different Internet latencies is shown in Figure 4.9. Here the improvement favors the TCP flows with smaller Internet latency.

Figure 4.9  TCP throughput per flow vs. Internet latency.
Because TCP uses ACKs to "clock out" new segments [8], flows with shorter RTTs occupy more of the core network bandwidth given up by the real-time flow under WERUF.

4.2.2 Real-time Flows Over Different Link Conditions

These scenarios show cases where multiple real-time flows compete for limited core network bandwidth. WERUF is applied to their transmission queues at the RNCs.

4.2.2.1 Two Real-time Flows in Different Links

In this scenario, only two real-time flows are investigated (Figure 4.10). These two real-time flows both get a 1 Mbps token rate in the GGSN, and the same Internet latency of 30 ms. The flow in the noisy link gets a higher priority than the one in the clean link; that is, their packets in the core network are handled with different drop precedences. The purpose of this setting is to show the effects of WERUF more easily: if the two flows had the same drop precedence when competing for core network resources, most packets would be dropped by the RIO dropper in the core network, and the surviving packets arriving at the RNC would not be dense enough to accumulate a queue long enough to trigger WERUF.

Figure 4.10  Network topology of two real-time flows.

The other parameters are the same as in Table 4.1.

From Figures 4.11 to 4.14, we can see that WERUF not only significantly improves the performance of the low drop precedence real-time flow in the noisy link, but also the performance of the high drop precedence flow in the clean link, under heavy load conditions.
From Figures 4.11 and 4.13, we can see that without WERUF, when the traffic load becomes heavy, the low drop precedence flow in the noisy link approaches saturation; that is, from 70% to 90% offered load, the throughput hardly increases while the end-to-end delay grows. But with WERUF, its throughput continues to increase with the offered load, and the incremental end-to-end delay is less than that without WERUF.

Figure 4.11 Throughput of the low drop precedence real-time flow in a noisy link vs. offered load.

Figure 4.12 Throughput of the high drop precedence real-time flow in a clean link vs. offered load.

Figure 4.13 End-to-end delay of the low drop precedence real-time flow in a noisy link vs. offered load.

Figure 4.14 End-to-end delay of the high drop precedence real-time flow in a clean link vs. offered load.

Without WERUF, the performance of the high drop precedence flow in the clean link degrades when the total traffic load increases (Figure 4.12 and Figure 4.14), because packets of
W i t h W E R U F , these undeliverable packets are dropped earlier at G G S N , network resources are utilized more efficiently, and the performance of this high drop precedence flow significantly improves.  Figure 4.15 and Figure 4.16 show the packet drops of the low drop precedence flow in the noisy link. The total number of packet drops in these two figures does not include those drops made due to transmission times exceeding M a x D A T , which are roughly equal in both cases for the same P and offered load. B  Obviously, W E R U F converts the expired packet drops and overflow at the R N C to the overflow packet drops at the G G S N . Earlier packet drops prevent undeliverable packets from  65  Chapter 4 Applying WERUF to RLC Queue  80000  • • I I  70000  «/) O"  Total packet drops I Expired packet drops I O v e r f l o w p a c k e t d r o p s at R N C  60000  T3 a>  o ns  50000  ° ~  4 0 0 0 0  _g  30000  20000  10000  0.5  0.6  0.7  0.8  0.9  Offered load Figure 4.15 Packet drops of the low drop precedence real-time flow in a noisy link vs. offered load without WERUF.  45000  Total packet drops Expired packet drops O v e r f l o w p a c k e t d r o p s at  40000  Q.  g  "3  o ro  G G S N  35000  30000  25000  CL.  J O  20000  -  15000  -  10000  -  5000  -  o  0.5  0.6  0.7  0.8  0.9  Offered load Figure 4.16 Packet drops of the low drop precedence real-time flow in a noisy link vs. offered load with WERUF. occupying the limited core network resources, and improve overall performance. Another  66  Chapter 4 Applying WERUF to RLC Queue  phenomenon worth mentioning is that for this flow (with low drop precedence, and over the noisy link) the total packet drops i n W E R U F are less than without W E R U F , w h i c h guarantees no observable QoS loss due to the introduction of W E R U F from end users' viewpoint.  4.2.2.2 Multiple Real-time Flows in Different Wireless Links The network topology of this scenario is illustrated i n Figure 4.17 below. 
Figure 4.17 Network topology of multiple real-time flows in noisy and clean links.

Five real-time flows come from five Internet sources, with the same Internet latency of 30 ms. The maximum token rate of each of them is 1 Mbps, and the guaranteed token rate is 256 Kbps. The other parameters are listed in Table 4.1.

We set the offered load of these flows to 90% and vary the number of noisy links from 1 to 4. As in the last scenario, in the core network we assign a low drop precedence to the flows experiencing noisy wireless links, and a high drop precedence to the flows experiencing clean links. The total throughput of the flows in noisy and in clean links is measured. The result is demonstrated in Figure 4.18.
Figure 4.18 Total throughput in noisy and clean links vs. number of noisy links.

In this figure, the number of noisy links increases from left to right (1 to 4). Clearly, WERUF improves the performance not only in the noisy links but also in the clean ones.

The trend of the improvement is interesting. The improvement for the clean link flows gradually shrinks with the growth of the number of noisy links: as the number of flows over clean links decreases, the traffic load of these flows becomes too light to fully utilize the core network resources made available by the constrained flows over noisy links. On the other hand, the improvement for the noisy link flows grows with the number of noisy links, because early regulation is triggered in more and more flows' transmission queues, showing a greater and greater positive effect.

4.3 Summary

In this chapter, we extend the idea of active packet drops discussed in the last chapter from the wireless-wireline interface nodes to the whole GPRS/UMTS domain through Wireless Early Regulation of Unresponsive Flows (WERUF). We utilize DiffServ functional elements at the edge nodes to restrict real-time flows with hard time deadlines, so as to release shared core network resources to other flows. These specific real-time flows are identified by the transmission queue length at the wireless-wireline interface nodes, with the quench probability generated by the piece-wise linear AQM functions. We also propose some important rules for the rate adjustment at the edge nodes to make the scheme work efficiently. In this way, the congestion caused by wireless errors at the interface nodes can be propagated back to the edge nodes.

Simulations were carried out over a wide spectrum of scenarios. The results show that WERUF can not only significantly improve the QoS of congested real-time flows experiencing high traffic load and wireless error rates, but can also improve the QoS of other flows when shared core network resources are scarce.

Chapter 5 Conclusions and Future Work

In this thesis, we have analyzed the relationship between the real-time expired packet drop rate and the transmission queue length when link layer retransmissions are used to recover packet losses over the wireless link.
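A toy sketch of that relationship: with link layer retransmission, each PDU occupies the link for an expected 1/(1 - P_B) transmissions at block error rate P_B, so a packet joining a queue of length L waits roughly L times that mean service time and may expire before it is ever sent. All parameter values below are illustrative assumptions, not the thesis's simulation parameters:

```python
# Toy model relating queue length to expired-packet drops under link
# layer retransmission. The number of transmissions per PDU is taken as
# geometric, so the mean service time is tx_time / (1 - p_b) for block
# error rate p_b. All parameter values are illustrative.

def expected_wait_ms(queue_len, p_b, tx_time_ms):
    """Expected waiting time of a packet joining the transmission queue."""
    mean_service_ms = tx_time_ms / (1.0 - p_b)  # mean time to deliver one PDU
    return queue_len * mean_service_ms

def admit(queue_len, p_b, tx_time_ms, deadline_ms):
    """Early-drop test: admit only packets likely to meet their deadline."""
    return expected_wait_ms(queue_len, p_b, tx_time_ms) <= deadline_ms
```

A long queue over a noisy link therefore translates directly into expired packet drops, which motivates dropping such packets at enqueue time instead of letting them expire after consuming network resources.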
We propose a novel active queue management method for the transmission queue that actively drops potentially expiring packets before they are enqueued, in order to free network resources and control transmission queue lengths effectively. Using the RLC layer defined for UMTS by the 3GPP as an example, we have presented simulation results showing that the proposed AQM method is effective in reducing queuing delays and eliminating expiration packet drops. This improves overall system performance by increasing throughput and reducing end-to-end delay. Both analysis and simulations have been used to demonstrate the effectiveness of this scheme.

In a similar vein, we extend source quenches to the DiffServ ingress edge nodes, namely the Gateway GPRS Support Node (GGSN) in the GPRS/UMTS domain, to prevent undeliverable packets from consuming shared network resources in the core network, thus utilizing network resources more efficiently and improving overall system performance. We have proposed a new mechanism in this thesis, Wireless Early Regulation of Unresponsive Flows (WERUF), which converts the undeliverable expired packet drops into overflow packet drops at the packet ingress nodes, freeing network resources for other flows. This mechanism is a variation of active queue management. Simulation results have been presented to show that this scheme can efficiently improve the end-to-end QoS of GPRS/UMTS networks. It achieves its highest efficiency when the real-time traffic load is heavy and the core network resources are limited. We believe the mechanism can be applied to any wireless communication system with link layer retransmissions and shaped traffic.

The drawbacks of these mechanisms include the extra computing power and uplink bandwidth needed for the source quenches.
We minimize computing complexity by choosing simple active queue management functions at the wireless-wireline interface node, and by using a simple token rate operation at the edge nodes. However, the impact of these actions on computing power is still unknown and needs further investigation. We believe the extra uplink bandwidth consumed by the source quenches has a minor effect, because in GPRS/UMTS the uplink traffic is normally light, so the uplink core network bandwidth is underused in most cases. The source quenches can also be treated as flow control signals; it is therefore possible to implement them in control plane protocols, whose delivery can be guaranteed by the system [12].

While this thesis does not distinguish between the different types of packets that may exist in real-time flows, such differentiation may be needed, because in many real-time applications some types of packets carry more important information than others. In these cases, Random Early Detection with IN-and-OUT (RIO) [25] can be used as a basis for our AQM method, at the wireless-wireline interface nodes, to implement the differentiation. Furthermore, the best way to apply AQM effectively to a mix of different real-time applications remains to be investigated.

As we mentioned in the previous chapter, WERUF drops packets in a drop-tail mode. The side-effects of this dropping discipline on some real-time applications may need further analysis. Again, for those packets carrying more important information than others, effective mechanisms will need to be developed to avoid inappropriate drops.

Another question that may prove interesting to research is what impact resource management schemes have on our proposed mechanism.
Different bandwidth reservation methods used by scheduling, CAC and MAC eventually affect the resources assigned to a single flow, which results in varying service rates and times. In this thesis we have ignored these impacts for the sake of clarity, so that we could focus on the presentation of our idea. Further analysis is needed, however, to show their effect.

In this thesis, we have presented the problem of the absence of buffer management schemes at the link layer transmission queue, and a novel solution to it. We hope our work attracts attention to this area and helps resolve these important problems.

Bibliography

[1] The Internet Engineering Task Force, http://www.ietf.org.

[2] J. Wroclawski, "The Use of RSVP with IETF Integrated Services", RFC 2210, http://www.ietf.org.

[3] J. Wroclawski, "Specification of the Controlled-Load Network Element Service", RFC 2211, http://www.ietf.org.

[4] S. Shenker, C. Partridge and R. Guerin, "Specification of Guaranteed Quality of Service", RFC 2212, http://www.ietf.org.

[5] S. Blake, D. Black, M. Carlson, E. Davies, Z. Wang and W. Weiss, "An Architecture for Differentiated Services", RFC 2475, http://www.ietf.org.

[6] 3G TS 22.100 V3.7.0 (2001-10), "UMTS phase 1 Release 99", http://www.3gpp.org.

[7] 3G TS 23.107 V5.3.0 (2002-01), "QoS Concept and Architecture", http://www.3gpp.org.

[8] W. R. Stevens, TCP/IP Illustrated, Vol. 1: The Protocols. Addison-Wesley, 1994.

[9] J. B. Cain and D. N. McGregor, "A Recommended Error Control Architecture for ATM Networks with Wireless Links", IEEE Journal on Selected Areas in Communications, vol. 15, pp. 16-28, Jan. 1997.

[10] N. Guo and S. D. Morgera, "Frequency-Hopped ARQ for Wireless Data Services", IEEE Journal on Selected Areas in Communications, vol. 12, pp. 1324-1337, Oct. 1994.

[11] J. J. Spilker, Digital Communications by Satellite.
Prentice-Hall, 1977.

[12] 3GPP TS 23.060 V3.10.0 (2002-01), "General Packet Radio Service (GPRS); Service description; Stage 2 (Release 1999)", http://www.3gpp.org.

[13] S. Floyd and V. Jacobson, "Random Early Detection Gateways for Congestion Avoidance", IEEE/ACM Transactions on Networking, vol. 1, pp. 397-413, Aug. 1993.

[14] S. Floyd and K. Fall, "Router Mechanisms to Support End-to-end Congestion Control", http://www.icir.org/floyd/papers/collapse.ps.

[15] S. Floyd and K. Fall, "Promoting the Use of End-to-end Congestion Control in the Internet", IEEE/ACM Transactions on Networking, vol. 7, no. 4, pp. 458-472, Aug. 1999.

[16] A. Acharya and A. Rangarajan, "ERUF: Early Regulation of Unresponsive Best-effort Traffic", in Proc. IEEE ICNP'99, Oct. 1999.

[17] K. Nichols, V. Jacobson and L. Zhang, "A Two-bit Differentiated Services Architecture for the Internet", Internet Draft, ftp://ftp.ee.lbl.gov/papers/dsarch.pdf.

[18] 3GPP TS 25.322 V4.2.0 (2001-09), "RLC Protocol Specification (Release 4)", http://www.3gpp.org.

[19] K. Bala, I. Cidon and K. Sohraby, "Congestion Control for High Speed Packet Switched Networks", in Proc. INFOCOM'90, vol. 2, pp. 520-526, San Francisco, Apr. 1990.

[20] I. Stoica, S. Shenker and H. Zhang, "Core-Stateless Fair Queueing: Achieving Approximately Fair Bandwidth Allocations in High Speed Networks", in Proc. ACM SIGCOMM'98, Vancouver, Sep. 1998.

[21] J. Heinanen, F. Baker, W. Weiss and J. Wroclawski, "Assured Forwarding PHB Group", RFC 2597, http://www.ietf.org.

[22] S. Floyd, "Discussion of Setting Parameters", http://www.icir.org/floyd/REDparameters.txt.

[23] D. D. Clark and W. Fang, "Explicit Allocation of Best-Effort Packet Delivery Service", IEEE/ACM Transactions on Networking, vol. 6, pp. 362-373, Aug. 1998.

[24] V. Rosolen, O. Bonaventure and G.
Leduc, "A RED Discard Strategy for ATM Networks and its Performance Evaluation with TCP/IP Traffic", ACM Computer Communication Review, vol. 29, no. 3, Jul. 1999.

[25] C. C. Chao and W. Chen, "Connection admission control for mobile multiple-class personal communications networks", IEEE Journal on Selected Areas in Communications, vol. 15, pp. 1618-1626, Oct. 1997.

[26] M. Naghshineh and M. Schwartz, "Distributed call admission control in mobile/wireless networks", IEEE Journal on Selected Areas in Communications, vol. 14, pp. 711-717, May 1996.

[27] A. K. Parekh and R. G. Gallager, "A generalized processor sharing approach to flow control in integrated services networks: the single-node case", IEEE/ACM Transactions on Networking, vol. 1, pp. 344-357, Jun. 1993.

[28] D. A. Eckhardt and P. Steenkiste, "Effort-limited fair (ELF) scheduling for wireless networks", in Proc. INFOCOM'00, vol. 3, pp. 1097-1106, Israel, Mar. 2000.

[29] S. Lu, V. Bharghavan and R. Srikant, "Fair scheduling in wireless packet networks", IEEE/ACM Transactions on Networking, vol. 7, pp. 473-489, Aug. 1999.

[30] S. L. Tsao, "Extending earliest-due-date scheduling algorithms for wireless networks with location-dependent errors", in Proc. Vehicular Technology Conference, vol. 1, pp. 223-228, Boston, Sep. 2000.

[31] M. Zorzi, "Packet dropping statistics of a data-link protocol for wireless local communications", in Proc. IEEE 6th International Conference on Universal Personal Communications, vol. 2, pp. 536-540, San Diego, Oct. 1997.

[32] Y. Uooyeol, P. Seongsoo and P. S. Min, "Performance analysis of data transmission in WCDMA system", in Proc. IEEE 54th Vehicular Technology Conference, vol. 3, pp. 1584-1588, Atlantic City, Oct. 2001.

[33] R.
Fantacci, "Queuing Analysis of the Selective Repeat Automatic Repeat Request Protocol for Wireless Packet Networks", IEEE Transactions on Vehicular Technology, vol. 45, pp. 258-264, May 1996.

[34] E. Chan and X. Hong, "Analytical Model for an Assured Forwarding Differentiated Service over Wireless Links", IEE Proc. on Communications, vol. 148, pp. 19-23, Feb. 2001.

[35] E. N. Gilbert, "Capacity of a Burst-noise Channel", Bell System Technical Journal, vol. 39, pp. 1253-1265, Sep. 1960.

[36] E. O. Elliott, "Estimates of Error Rates for Codes on Burst-noise Channels", Bell System Technical Journal, vol. 42, pp. 1977-1997, Sep. 1963.

[37] S. Bhagwat, D. Tipper, K. Balakrishnan and A. Mahapatra, "Comparative Evaluation of Output Buffer Management Schemes in ATM Networks", in Proc. IEEE ICC'94, New Orleans, May 1994.

[38] J. Postel, "Internet Control Message Protocol", RFC 777, http://www.ietf.org.

Appendix A. List of Abbreviations and Acronyms

3GPP: 3rd Generation Partnership Project
ACK: ACKnowledge
AF: Assured Forwarding
AM: Acknowledge Mode
AMD: Acknowledged Mode Data
AQM: Active Queue Management
ARQ: Automatic Repeat Request
BA: Behavior Aggregates
BER: Bit Error Rate
BSS: Base Station System
CAC: Call Admission Control
CN: Core Network
CP: Complete Partitioning
CS: Complete Sharing
DiffServ: Differentiated Services
DSCP: DiffServ CodePoint
EDD: Earliest-Due-Date
ELF: Effort-Limited Fair
EPC: Estimated PDU Counter
ERUF: Early Regulation of Unresponsive Flows
FEC: Forward Error Correction
GGSN: Gateway GPRS Support Node
GPRS: General Packet Radio Service
GSM: Global System for Mobile Communications
ICMP: Internet Control Message Protocol
ISP: Internet Service Provider
IP: Internet Protocol
IETF: Internet Engineering Task Force
IWFQ: Idealized Wireless Fair-Queuing
IntServ: Integrated Services
LB: Leaky Bucket
MAC: Media Access Control
MF: Multi-Field
MS: Mobile Station
MT: Mobile Terminal
NAK: Negative Acknowledge
PDN: Packet Data Network
PDP: Packet Data Protocol
PDU: Protocol Data Unit
PHB: Per-Hop Behavior
PLMN: Public Land Mobile Network
QoS: Quality of Service
RED: Random Early Detection
RIO: Random Early Detection with IN-and-OUT
RLC: Radio Link Control
RNC: Radio Network Controller
SGSN: Serving GPRS Support Node
SLA: Service Level Agreement
SUFI: Super-Fields
TCP: Transmission Control Protocol
TE: Terminal Equipment
UMTS: Universal Mobile Telecommunications System
UTRAN: UMTS Terrestrial Radio Access Network
WERUF: Wireless Early Regulation of Unresponsive Flows
WFQ: Weighted Fair Queuing

Appendix B. Simulation models in OPNET 8.1

OPNET provides a comprehensive development environment supporting the modeling of communication networks and distributed systems. It is a discrete-event driven simulator. There are three modeling domains (Table B.1) in OPNET: the Network, Node and Process modeling environments. These three modeling domains span all the hierarchical levels of a model.

Table B.1 OPNET modeling domains.
Domain    Editor    Modeling Focus
Network   Project   Network topology described in terms of subnetworks, nodes, links, and geographical context.
Node      Node      Node internal architecture described in terms of functional elements and data flow between them.
Process   Process   Behavior of processes (protocols, algorithms, applications), specified using finite state machines and an extended high-level language.

The topology of a system consists of both an inventory of its devices and the communications links between them. The network domain refers to these devices as nodes, and provides several types of links that can be defined to connect them for the purpose of sending information between them. Groups of nodes and links can be used to form subnetworks, and subnetworks can in turn contain lower-level subnetworks to form unlimited hierarchies. Some system topologies can be entirely represented in the node domain by interconnecting the node-level building blocks, called modules. In most cases, the network domain is used to represent high-level system topology because of its direct support for unlimited-depth hierarchies, sophisticated communication links, and measured physical layout.