ACTIVE QUEUE MANAGEMENT TECHNIQUES TO IMPROVE QUALITY OF SERVICE FOR REAL-TIME FLOWS IN THIRD GENERATION WIRELESS NETWORKS

by

JIAN CHEN
B.E., Shanghai Jiao Tong University, China, 1995

A THESIS SUBMITTED IN PARTIAL FULFILMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF APPLIED SCIENCE in THE FACULTY OF GRADUATE STUDIES, Department of Electrical and Computer Engineering

We accept this thesis as conforming to the required standard

THE UNIVERSITY OF BRITISH COLUMBIA
October 2002
© Jian Chen, 2002

In presenting this thesis in partial fulfilment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the head of my department or by his or her representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

Abstract

In this thesis, we analyze the behavior of real-time flows with hard time deadlines that use link layer retransmission to overcome wireless errors, and propose two new mechanisms to improve these flows' end-to-end Quality-of-Service (QoS) over a wireless network.

Wireless channels have the characteristic that link quality varies according to propagation conditions. For real-time flows with hard time deadlines, link layer retransmissions over a wireless network, necessitated by fluctuations in link quality, may result in a high number of packets being dropped as deadlines expire. The expired packets waste network resources and lead to long queuing delays for subsequent packets. After analyzing the characteristics of the Radio Link Control (RLC) layer of the General Packet Radio Service (GPRS)/Universal Mobile Telecommunication Services (UMTS) network, we have developed a new set of mechanisms to minimize expiration packet drops in favour of overflow-like packet drops. We propose to use active queue management to limit the transmission queue length, and hence the queuing delay, thus eliminating expiration packet drops. This allows the buffer space and wireless bandwidth that would otherwise be wasted by expiring packets to be released earlier for other packets. We apply this mechanism to the radio link control layer in GPRS/UMTS wireless networks, and verify its effectiveness by simulation.

Furthermore, we extend a similar idea from the radio link control layer to the whole GPRS/UMTS domain. We adapt Early Regulation of Unresponsive Flows (ERUF) to the characteristics of wireless channels employing link layer retransmissions. We propose to regulate congested flows at the DiffServ domain ingress edge nodes of the GPRS/UMTS core network, and to drop undeliverable packets earlier so as to release shared network resources for other flows. We also present simulation results showing that this new Wireless Early Regulation of Unresponsive Flows (WERUF) scheme can significantly improve the overall end-to-end quality-of-service.

Table of Contents

Abstract
Table of Contents
List of Tables
List of Figures
Acknowledgments
Chapter 1 Introduction
  1.1 Background Knowledge
    1.1.1 General Architecture of GPRS Domain
    1.1.2 Some Mechanisms for QoS
  1.2 Motivation
  1.3 Contributions
  1.4 Outline of the Thesis
Chapter 2 Overview of Previous Work
  2.1 End-to-end QoS in GPRS/UMTS Architecture
  2.2 GPRS/UMTS Core Network Mechanisms for QoS
    2.2.1 IETF QoS Architectures
    2.2.2 Mapping between GPRS/UMTS and DiffServ
    2.2.3 Active Queue Management
  2.3 Mechanisms at Wireless Access Nodes for QoS
    2.3.1 Radio Link Control (RLC) Layer in GPRS/UMTS
    2.3.2 Other Mechanisms at Wireless Access Nodes
Chapter 3 Applying AQM to Link Layer Buffers for Real-time Traffic
  3.1 Analysis of Real-time Flows over RLC with AM
  3.2 Simulation Results
    3.2.1 Two-slope Piece-wise Linear AQM Function II
    3.2.2 Comparison of Different AQM Functions with Varying Traffic Load
    3.2.3 AQM Functions with Different Error Rates
  3.3 Summary
Chapter 4 Applying WERUF to RLC Queue
  4.1 Early Regulation in GPRS/UMTS Network
  4.2 Simulation Results
    4.2.1 Real-time Flow and TCP Flows
      4.2.1.1 One Real-time Flow in a Noisy Link and Multiple TCP Flows in Clean Links with Equal RTT
      4.2.1.2 One Real-time Flow in a Noisy Link and Multiple TCP Flows in Clean Links with Different RTT
    4.2.2 Real-time Flows Over Different Link Conditions
      4.2.2.1 Two Real-time Flows in Different Links
      4.2.2.2 Multiple Real-time Flows in Different Wireless Links
  4.3 Summary
Chapter 5 Conclusions and Future Work
Bibliography
Appendix A. List of Abbreviations and Acronyms
Appendix B. Simulation Models in OPNET 8.1

List of Tables

Table 3.1 Parameters for Simulation of AQM in RLC layer
Table 4.1 Common Parameters for Simulation of WERUF
Table B.1 OPNET modeling domains

List of Figures

Figure 1.1 Overview of the GPRS packet domain architecture
Figure 2.1 Protocol stacks of the UMTS user plane
Figure 2.2 GPRS/UMTS QoS architecture
Figure 2.3 Mapping of the DiffServ domain and the GPRS/UMTS network
Figure 2.4 General algorithm for RED gateways
Figure 2.5 RED and Gentle-RED marking/dropping rate vs. average queue length
Figure 2.6 Acknowledged Mode in the Radio Link Control layer (window size = 8)
Figure 3.1 Gaussian distribution with means M_A and M_B and variances σ'_A and σ'_B, where M_A < M_B and σ'_A < σ'_B
Figure 3.2 Gaussian expiration drop rate and AQM approximations
Figure 3.3 Simulation model for AQM in the link layer buffer
Figure 3.4 BER, transmission queue length, and throughput without AQM
Figure 3.5 Different packet drops without AQM
Figure 3.6 BER, transmission queue length, and throughput with AQM
Figure 3.7 Different packet drops with AQM
Figure 3.8 Throughput vs. traffic load with three schemes
Figure 3.9 End-to-end delay vs. traffic load with three schemes
Figure 3.10 Average transmission queue length with three schemes
Figure 3.11 Different packet drops vs. traffic load without AQM
Figure 3.12 Different packet drops vs. traffic load with AQM function II
Figure 3.13 Different packet drops vs. traffic load with AQM function III
Figure 3.14 Packet drop rate with different long-run packet error rates
Figure 3.15 Throughput vs. long-term packet error rates P_B
Figure 3.16 End-to-end delay vs. long-term packet error rates P_B
Figure 4.1 The general concept behind Wireless Early Regulation of Unresponsive Flows (WERUF)
Figure 4.2 Network topology of one real-time plus three TCP flows
Figure 4.3 Real-time flow throughput vs. offered load
Figure 4.4 End-to-end delay of the real-time flow vs. offered load
Figure 4.5 Total TCP throughput vs. real-time traffic offered load
Figure 4.6 Real-time flow packet drops vs. offered load without WERUF
Figure 4.7 Real-time flow packet drops vs. offered load with WERUF
Figure 4.8 Network topology of one real-time and five TCP flows with different Internet latencies
Figure 4.9 TCP throughput per flow vs. Internet latency
Figure 4.10 Network topology of two real-time flows
Figure 4.11 Throughput of the low drop precedence real-time flow in a noisy link vs. offered load
Figure 4.12 Throughput of the high drop precedence real-time flow in a clean link vs. offered load
Figure 4.13 End-to-end delay of the low drop precedence real-time flow in a noisy link vs. offered load
Figure 4.14 End-to-end delay of the high drop precedence real-time flow in a clean link vs. offered load
Figure 4.15 Packet drops of the low drop precedence real-time flow in a noisy link vs. offered load without WERUF
Figure 4.16 Packet drops of the low drop precedence real-time flow in a noisy link vs. offered load with WERUF
Figure 4.17 Network topology of multiple real-time flows in noisy and clean links
Figure 4.18 Total throughput in noisy and clean links vs. number of noisy links

Acknowledgments

I would like to extend my sincere gratitude to my supervisor, Dr. Victor Leung, who has been a constant source of inspiration, support and guidance. No less, I would like to thank the helpful staff of the Department of Electrical and Computer Engineering at the University of British Columbia. Motorola Canada Ltd. in Richmond and the Canadian Natural Sciences and Engineering Research Council sponsored this research project; I would also like to express my thanks to them.

Chapter 1 Introduction

The Internet has seen phenomenal growth in the past few years. Recent research has begun to focus on the provision of Internet service in wireless networks, which is widely expected to be the next growth area for the Internet. In order to transmit data packets over air interfaces, factors such as inferior transmission links, insufficient spectrum and limited capacity must be considered. An error control mechanism for the data link layer that maintains stable performance under high error rates is especially needed for wireless data transmission.

The Internet Protocol (IP) is not designed to offer quality guarantees to applications, as it only provides a best-effort data delivery service. However, the Internet Engineering Task Force (IETF) [1] has proposed Integrated Services (IntServ) [2] [3] [4], as well as Differentiated Services (DiffServ) [5], to offer diverse service levels rather than just the single best-effort class. The IntServ model uses explicit resource reservation for every flow requesting Quality-of-Service (QoS); the DiffServ model, on the other hand, relies on the prioritization of some flows over others. The Universal Mobile Telecommunications System (UMTS) architecture [6], as defined by the 3rd Generation Partnership Project (3GPP), provides the building blocks for the vast deployment of the next generation mobile Internet.
UMTS promises to support a wide range of applications with different QoS profiles. The technical specification [7] defines four QoS classes targeting conversational, streaming, interactive and background traffic. The first two classes are defined for real-time applications. The Conversational class serves traffic with the most stringent transfer delay and delay variation requirements, such as Voice over IP. The Streaming class serves real-time traffic with relatively loose delay requirements, such as streaming video. The latter two QoS classes typically serve traffic without bounded transfer delay requirements; the most popular type of such traffic is carried by the Transmission Control Protocol (TCP) [8].

The promised QoS diversity is adequate to support future mobile Internet applications, but realizing it relies on the proper inter-working of UMTS and IP networks. The DiffServ architecture is one of the mechanisms suggested for use in the UMTS core network (CN) to offer interoperability at the QoS level. The provision of end-to-end QoS depends not only on the core network, but also on the wireless links. Different resource management schemes must be deployed at the wireless-wireline interface nodes to cater to all QoS requirements, and our proposal is one such scheme. We focus our efforts on link layer retransmission schemes that overcome wireless errors, especially for real-time applications.

Wireless links are error-prone channels, and much effort has been devoted to this problem. The two main classes of techniques proposed are Forward Error Correction (FEC), and link layer retransmission of lost packets in response to Automatic Repeat Request (ARQ) messages [9] [10]. They can be implemented separately, or combined for the best performance. For example, in a hybrid ARQ scheme [11], when the number of errors in a received packet is lower than the error-correcting capacity of the configured FEC code, the packet is accepted; otherwise it is discarded and ARQ is triggered to request a retransmission. As suggested by the 3GPP specification [7], for real-time applications with low and stringent transfer delay requirements, FEC is normally used as the only error correcting scheme because ARQ may be too time-consuming; Conversational class traffic falls into this category. For real-time traffic whose transfer delay requirements are relatively lax, such as Streaming class applications, ARQ or hybrid ARQ can be used to reduce the bit error rate. We concentrate on the latter in our research.

This chapter briefly introduces some background information, then presents the motivation and an overview of our concept. Further details are given in the following chapters.

1.1 Background Knowledge

1.1.1 General Architecture of GPRS Domain

General Packet Radio Service (GPRS) [12] was originally proposed for Global System for Mobile communication (GSM) data service, and 3GPP adopts it as the common standard packet domain core network for UMTS. GPRS uses a packet-mode technique to transfer high-speed and low-speed data and signalling in an efficient manner. It is designed to support transfers ranging from intermittent and bursty data through to occasional transmission of large volumes of data, and it supports QoS among other features. Figure 1.1 presents the interfaces and network nodes relating to the topics of this thesis.
[Figure 1.1 Overview of the GPRS packet domain architecture.]

As we can see, the GPRS backbone can serve both UMTS and GSM access networks. GPRS introduces two new network nodes in the Public Land Mobile Network (PLMN). The Serving GPRS Support Node (SGSN) keeps track of the individual Mobile Station's (MS) location and performs security functions and access control. The SGSN is connected to the GSM base station system through the Gb interface and/or to the UMTS Radio Access Network through the Iu interface. The Gateway GSN (GGSN) provides interworking with external packet-switched networks, and is connected with SGSNs via an IP-based packet domain PLMN backbone network. The other elements and interfaces are explained below.

Network nodes:
• MS - Mobile Station. In this thesis we use this term or the word "mobile" to refer to both the Mobile Terminal (MT) and Terminal Equipment (TE) at the end user terminal site.
• UTRAN - UMTS Terrestrial Radio Access Network (UMTS only).
• BSS - Base Station System (GSM only).
• PDN - Packet Data Network. We refer here to an external data network (the Internet or others), either with QoS guarantees, or just best effort.
• TE - Terminal Equipment. We refer here to an Internet host or another terminal communicating with the MS.

Interfaces:
• Iu and Gb - The interfaces connecting the SGSN with the UTRAN in UMTS and with the BSS in GSM, respectively.
• Gn - Interface between two GSNs within a PLMN.
• Uu and Um - The wireless interfaces between mobile stations (MS) and the UMTS and GSM fixed network parts, respectively.

From Figure 1.1, we can see that an SGSN can connect to more than one GGSN even if they are in different PLMNs; each GGSN can also serve more than one SGSN. Other network elements and interfaces are specified in the technical documents; the figure does not show them because they are unrelated to our topic. The link layer retransmission scheme for overcoming wireless errors resides in the wireless interfaces Um and Uu.

1.1.2 Some Mechanisms for QoS

The mechanisms introduced below demonstrate the impetus behind our idea; the following chapters discuss them in detail. Random Early Detection (RED) [13] is one of the most popular Active Queue Management (AQM) methods. It actively drops/marks packets in the aggregation of flows at core routers with a probability proportional to the average queue length. This method originally targeted the TCP/IP network, in which a single marked or dropped packet is sufficient to signal the presence of congestion to the transport-layer protocol. The emphasis on avoiding global synchronization, which results when many connections reduce their windows at the same time, is particularly relevant in a network with TCP [8], where each connection goes through Slow-Start, reducing the window to one in response to a dropped packet. With the growing number of Internet applications relying on unresponsive protocols, with little or no end-to-end congestion control, RED has also been proposed for use in core routers to avoid unfairness and congestion collapse in the Internet [14] [15]. The general idea is similar to the TCP case, except that new functionality measures each unresponsive flow's link share so that "misbehaved" flows can be identified and regulated.
The regulation may be implemented by differentially scheduling the packets from the identified flows, or by dropping the packets from those flows at the router. Reference [16] proposed to extend this idea as Early Regulation of Unresponsive Flows (ERUF). The general concept deals with cases in which unresponsive flows encounter network congestion at certain nodes on the end-to-end path, destining some packets to be dropped. The author of [16] argues that dropping such packets at an earlier node (before they reach the shared core routers) on the same path frees shared network resources for other flows, under the assumption that the network resources assigned to certain traffic aggregates are fixed. The author proposes a notification scheme that conveys the congestion status at the core routers back to the earlier nodes, and an effective regulation method applied at the earlier nodes upon such signalling. The regulation method relies on shaping elements for flow control. Such elements also exist in the DiffServ architecture [17], along with many other functional elements, which we introduce in the next chapter. The main advantage of the DiffServ model is the minimal state information needed in the backbone: the aggregation of traffic performed in the DiffServ architecture ensures high scalability.

1.2 Motivation

Because wireless links in a PLMN are subject to noticeable variations in received signal strength due to local variations in terrain, buildings, and foliage, ensuring highly reliable data communications requires an error-control method to eliminate transmission errors caused by channel noise. As mentioned above, ARQ and FEC are the main methods used for this purpose.

Most real-time applications have hard time deadlines: if packets in a real-time flow cannot reach their destinations before their deadlines, they become useless. When wireless links experience a high bit error rate (BER), many packets accumulate in the link layer transmission buffer, or queue, residing at the wireless-wireline interface nodes. Note that, in this thesis, we assume that packets in the transmission buffer form a first-in-first-out queue, except for some higher priority packets carrying control information; the term "queue length" therefore also refers to buffer occupancy. If the queue grows long enough, the end-to-end latency is dominated by the queuing delay, which leads to many real-time packets being dropped due to deadline expiration. In this case, packet expiration caused by queuing delay becomes a critical factor degrading overall performance.

Such phenomena inspire the application of AQM to the transmission buffer. Intuitively, packets in real-time flows expire with a probability related to the queue length: longer queues generate more queuing delay, and reduce the margin to the delivery deadline. Actively dropping packets with probabilities analogous to this "natural" dropping rate due to expiry releases network resources earlier, thereby reducing the queuing delay for following packets. With shorter queuing delays, the expiry probability of the remaining packets is largely reduced. If the active drop rate is kept lower than the "natural" one, we can guarantee better system performance.

1.3 Contributions

With this motivation, we set out to show the effectiveness of our idea both by analysis and
There are many aspects to be resolved in the implementation of such a new mechanism. They are presented one by one in the following chapters. The main goals of our research work are: • Based on the G P R S / U M T S models and an analysis of transmission queue behavior, mathematically express of the relation between transmission queue length and pack- ets' natural drop rate due to expiry in real-time flows with hard deadlines. With the analysis results, demonstrate by simulation of the effectiveness of applying A Q M to drop real-time packets earlier to release network resources in wireless access nodes. • A further proposal and demonstration that the E R U F can also be applied to the trans- mission queue at wireless access nodes so that, not only can network resources at these nodes be freed earlier for other packets in the same flow, but also the network resourc- es in the shared core network can be released earlier for other flows. The mechanism is realized by the DiffServ functional elements in G P R S / U M T S core network. A s far as we know, no publication has combined the link layer transmission queue with the A Q M . Also no one has yet used link layer transmission buffer occupancy as an indicator to signal the G P R S / U M T S core network regarding the congestion status caused by wireless errors. The importance of applying appropriate buffer management schemes to the l ink layer transmission buffer has not been widely realized. We hope our work can attract more industrial and academic research interest to this area. 1.4 Outline of the Thesis In Chapter 2, we introduce some previous work relating to end-to-end QoS in the G P R S / 8 Chapter 1 Introduction U M T S wireless network. First, an overview of G P R S / U M T S network architecture is discussed. Second, we show the mechanisms for the provision of end-to-end QoS over G P R S / U M T S core networks. Th i rd , we introduce the techniques used in G P R S / U M T S wireless access nodes catering to the end-to-end QoS requirement. Chapter 3 analyzes the behavior of l ink layer transmissions in a G P R S / U M T S specification, and derives the mathematical expression of the real-time packets expiration rate. This chapter also presents our idea of applying A Q M to the link layer transmission queue, and the simulation results that verify its effectiveness. Chapter 4 extends the idea of E R U F by applying it to the l ink layer transmission queue in wireless access nodes and the G P R S / U M T S core network in combination so that the overall performance can be improved, and demonstrates its effectiveness by simulation. Chapter 5 gives our conclusion and future work. 9 Chapter 2 Overview of Previous Work Chapter 2 Overview of Previous Work In this chapter, we introduce some previous work relating to the provision of end-to-end QoS in the GPRS/UMTS network. 2.1 End-to-end QoS in GPRS/UMTS Architecture 3GPP defines four QoS classes: Conversational, Streaming, Interactive and Background. Each QoS class is specified by a QoS profile associated with the Packet Data Protocol (PDP) context. UMTS definition divides the protocol stacks into a Control Plane and a User Plane. The User Plane is the layered protocol structure providing the user information transfer and the associ- ated transfer control procedures such as flow control and error recovery. The Control Plane supplies support to the User Plane. We focus on the User Plane protocol stacks illustrated in Figure 2.1. 
The protocols relating to our research work are shaded in the figure, while the others are ignored in this thesis.

[Figure 2.1 Protocol stacks of the UMTS user plane (MS, UTRAN, SGSN, GGSN).]

User Datagram Protocol/Internet Protocol (UDP/IP) [8] are the backbone network protocols used for routing user data and control signalling; the QoS management functions are also located at this layer. The Radio Link Control (RLC) [18] layer protocol provides logical link control over the radio interface, and link layer retransmission is one of its working modes. In order to achieve the differentiation of the four classes' end-to-end QoS, both the core network and the radio access network must be involved. Compare Figure 2.2 with Figure 2.1.

[Figure 2.2 GPRS/UMTS QoS architecture (TE, MT, UTRAN, CN Iu edge node, CN Gateway; end-to-end service).]

In Figure 2.2, the Terminal Equipment (TE) and Mobile Terminal (MT) form the Mobile Station (MS) of Figure 2.1, while the CN Iu edge node and CN Gateway correspond to the SGSN and GGSN in Figure 2.1, respectively. In this thesis, we elaborate on the UMTS Bearer Service to improve the end-to-end QoS; the other bearer services under the UMTS Bearer Service provide the necessary lower layer functions.

2.2 GPRS/UMTS Core Network Mechanisms for QoS

The 3GPP technical specification [7] suggests the implementation of the IntServ or DiffServ architectures in the GPRS/UMTS core network to realize QoS management. We introduce these IETF mechanisms in this section, with emphasis on DiffServ and its mapping to the GPRS/UMTS domain. The DiffServ architecture includes many functional elements. We emphasize Active Queue Management (AQM) at the droppers, because AQM is the most important buffer management scheme at the core routers, and it is the scheme we shall apply to the data link layer transmission buffer at wireless-wireline interface nodes in this thesis.

2.2.1 IETF QoS Architectures

The existing Internet service (namely, the best-effort service of IP) cannot satisfy the QoS requirements of emerging multimedia applications, primarily because of variable queuing delays and packet loss during network congestion. There has been a significant amount of work in the past decade on extending the Internet architecture and protocols to provide QoS support for multimedia applications. This has led to the development of a number of service models and mechanisms suggested by the IETF.

The IntServ model is proposed as an extension to support real-time applications. The key
The DiffServ architecture is proposed in [5] to not only supply the QoS differentiation, but also to achieve scalability. The Diffserv architecture is based on a simple model where traffic entering a network is classified and possibly conditioned at the boundaries of the network [19] [20], and assigned to different behavior aggregates (BAs), with each BA being identified by a single DiffServ codepoint (DSCP). Users request a specific performance level on a packet-by-packet basis, by marking the DiffServ field of each packet with a specific value. This value specifies the per-hop behavior (PHB) to be allotted to the packet within the provider's network. Within the core of the network, packets are forwarded according to the PHB associated with the DSCP. Sophisticated classification, metering, marking, policing, and shaping operations only need to be implemented at network boundaries or hosts. Multi-field (MF) classifiers select packets based on the content of some arbitrary number of header fields, typically some combina- tion of source address, destination address, DS field, protocol ID, source port and destination port. Markers are devices that perform marking. Shapers are devices that perform shaping, which refers to the process of delaying packets within a traffic stream to cause it to conform to some defined 13 Chapter 2 Overview of Previous Work traffic profile. Metering is the process of measuring the temporal properties (e.g., rate) of a traffic stream selected by a classifier. Policing is the process of discarding packets (by a dropper) within a traffic stream in accordance with the state of a corresponding meter enforcing a traffic profile. Network resources are allocated to traffic streams by service provisioning policies which govern how traffic is marked and conditioned upon entry to a DiffServ-capable network, and how this traffic is forwarded within that network. A wide variety of services can be implemented on top of these building blocks. Such traffic control functions at hosts, or access or boundary routers are generically called traffic conditioning. A salient feature of the Dif fServ framework is its scalabili ty, which a l lows it to be deployed in very large networks. This scalability is achieved by forcing much complexity out of the core of the network into boundary devices which process smaller volumes of traffic and fewer flows, and by offering services for aggregated traffic rather than on a per-microflow basis. That is, complex traffic classification functions are only implemented at network boundary nodes; inside the core network, P H B s are applied to aggregates of traffic which have been appropriately marked using the Diffserv field in the IP headers. P H B s are defined to permit a reasonably granular means of allocating buffer and bandwidth resources at each node among competing traffic streams. Per- application flow or per-user forwarding state need not be maintained within the core of the network. In this thesis, we assume one of the IETF-suggested P H B s , Assured Forwarding ( A F ) [21], is implemented in the core network of the G P R S / U M T S domain. The Assured Forwarding (AF) P H B group constitutes a means for a provider DiffServ domain to offer different levels of forwarding assurances for IP packets received from a customer DiffServ domain. Four A F classes 14 Chapter 2 Overview of Previous Work are defined, where each A F class in each DiffServ node is allocated a certain amount of forward- ing resources (buffer space and bandwidth). 
IP packets that seek to use the services provided by the AF PHB group are assigned by the customer or the provider DiffServ domain into one or more of these AF classes according to the services that the customer has subscribed to. Within each AF class, IP packets are marked (again by the customer or the provider DiffServ domain) with one of three possible drop precedence values. In case of congestion, the drop precedence of a packet determines the relative importance of the packet within the AF class: a congested DiffServ node tries to protect packets with a lower drop precedence value from being lost by preferentially discarding packets with a higher drop precedence value. In a DiffServ node, the level of forwarding assurance of an IP packet thus depends on (1) the amount of forwarding resources allocated to the AF class that the packet belongs to, (2) the current load of the AF class, and (3) in case of congestion within the class, the drop precedence of the packet.

In order for a user to receive DiffServ from an Internet service provider (ISP), it must have a service-level agreement (SLA) with its ISP. An SLA basically specifies the service classes supported and the amount of traffic allowed in each class. Users can mark the DiffServ fields of individual packets to indicate the desired service at the hosts, or have them marked by the access or boundary routers. At the ingress of the ISP network, packets are classified, policed, and possibly shaped; the classification, policing, and shaping rules used at the ingress routers are derived from the SLAs. When a packet enters one domain from another, its DS field may be re-marked, as determined by the SLA between the two domains.
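As a concrete illustration of the AF structure described above, the following sketch decodes the standard AF codepoints of RFC 2597 into their class and drop precedence. This is a minimal Python sketch for illustration only; the function name and dictionary are ours, not part of any cited specification.

    # Minimal sketch: decoding RFC 2597 Assured Forwarding codepoints.
    # The 6-bit DSCP is laid out as "cccdd0", where ccc is the AF class
    # (1..4) and dd is the drop precedence (1..3).

    AF_DSCPS = {
        10: "AF11", 12: "AF12", 14: "AF13",
        18: "AF21", 20: "AF22", 22: "AF23",
        26: "AF31", 28: "AF32", 30: "AF33",
        34: "AF41", 36: "AF42", 38: "AF43",
    }

    def af_class_and_precedence(dscp):
        # Returns (AF class, drop precedence) for an AF codepoint.
        if dscp not in AF_DSCPS:
            raise ValueError("DSCP %d is not an AF codepoint" % dscp)
        return dscp >> 3, (dscp >> 1) & 0b11

    # Example: DSCP 22 is AF23, i.e. class 2 with drop precedence 3, so
    # under congestion it is discarded before AF21 or AF22 packets.
    assert af_class_and_precedence(22) == (2, 3)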
This implies the presence of a smoothing or filtering function that monitors the instantaneous congestion level and computes a smoothed congestion level. The dropping algorithm uses this smoothed conges- tion level to determine when packets should be discarded. The most popularly used dropper scheme is the R E D we introduce below. 2.2.3 Active Queue Management Act ive Queue Management ( A Q M ) was originally proposed to handle T C P - l i k e flows. The T C P transport protocol detects congestion only after a packet has been dropped at the gateway. However, it is clearly undesirable to have large queues (possibly on the order of a delay- bandwidth product) that are full much of the time; this would significantly increase the average delay in the network. Therefore, with increasingly high-speed networks, it is increasingly important to have mechanisms that keep throughput high but average queue sizes low. The R E D gateway is designed for a network where a single dropped packet is sufficient to signal the presence of congestion to the transport-layer protocol. The R E D gateway calculates the average queue size avg (in this thesis, the unit of queue length or buffer occupancy is always in packets), using a low-pass filter with an exponential 17 Chapter 2 Overview of Previous Work weighted moving average, avg = (l - wq) avg + wqq (2.1) where the wq is the weight, which is normally 0.002 as [22] proposes, and q is the current queue size. The average queue size avg is compared to two thresholds, a minimal threshold minth and a maximal threshold maxth. When the average queue size is less than the minimal threshold, no packets are marked. When the average queue size is greater than the maximal threshold, every arriving packet is marked. If marked packets are in fact dropped, or i f all source nodes are cooper- ative, this ensures that the average queue size does not significantly exceed the maximum thresh- old. A s avg varies from mintn to maxth, the packet-marking probability pa varies linearly from 0 to maxp, Ph = maxp (avg - mintn) / (maxtn - minth) (2.2) Pa=Pb/(1 - c o u n t *Pb) where count is the packets since the last marked packet. Ref. [23] proposed Random Early Detection with IN-and-OUT (RIO). It is a simple but effective scheme to supply differentiated services. The general idea is to mark the packets that comply with the S L A as I N packets, otherwise as O U T packets, at ingress edge nodes. After that different sets of maxp, minp and maxp are applied to the I N and O U T packet queues so as to generate different packet drop/mark precedences. 18 Chapter 2 Overview of Previous Work The general algorithm for the R E D gateway is presented in Figure 2.4. We use the term "mark" here because the T C P flows can carry the congestion information back to the source, so their packets may not need to be dropped immediately. for each packet arrival calculate the average queue size avg if minth < avg < maxtft calculate probability pa with probability pa: mark the arriving packet else if maxfo <. avg mark the arriving packet Figure 2.4 General algorithm for RED gateways. The need for and advantages of limiting unresponsive best-effort traffic has been proven in [15]. Unresponsive flows are flows that do not use end-to-end congestion control, and in partic- ular those that do not reduce their load on the network when subjected to packet drops/marks. 
Without knowledge of the congestion ahead, unresponsive traffic sources keep sending packets, whereas many data packets are actually undeliverable due to packet expiration and possible overflow. These undeliverable packets waste shared network resources. To solve the problem, the Early Regulation of Unresponsive Flows (ERUF) scheme has been proposed [16]. This concept dictates that when unresponsive flows encounter network congestion at certain nodes over the end-to-end path, and cause some packets to miss their delivery deadlines or to be dropped due to buffer overflow, dropping them at an earlier node frees the shared network resources for other 19 Chapter 2 Overview of Previous Work flows. Source quench packets are generated by the R E D algorithm, and sent from the congestion point to the earlier nodes. The earlier nodes regulate the bandwidth in a multiplicative-decrease/ additive-increase manner over a round trip time (RTT) estimate. In order to get better performance, gentle-RED [24] was proposed for application to the high-speed network. The difference between the R E D and gent le-RED is their marking rate variation curves, which are displayed in Figure 2.5. The pb a r e equal when the average queue R E D y / j e n t l e - R E D I' 1 — 1 • 1 I minth maxth 2 * maxth Figure 2.5 RED and Gentle-RED marking/dropping rate vs. average queue length. length is less than maxth. Once the avg crosses maxth, the normal R E D makes Pb 1.0 immediately, while the gentle-RED continues to increase pb with a slope leading the marking rate curve to 1.0 at the point of avg = 2* maxth. Again, the gentle-RED is originally proposed for TCP- l ike traffic, 20 Chapter 2 Overview of Previous Work which is responsive to single packet drops/marks. In this thesis, we make necessary modifications to handle the unresponsive traffic. 2.3 Mechanisms at Wireless Access Nodes for QoS In order to offer end-to-end QoS guarantees and differentiation to end users, not only the core network, but also the wireless access nodes of G P R S / U M T S must be taken into account. M u c h previous research work has been done on the mechanisms at the access nodes. F E C and link layer A R Q are used to attack the high B E R of wireless links. Different scheduling and resource management schemes are proposed for the sake of provisioning the QoS, or uti l izing resources (normally bandwidth and buffer) efficiently. In this section, we focus on the l ink layer A R Q and resource management, and present general knowledge of other mechanisms. 2.3.1 Radio Link Control (RLC) Layer in GPRS/UMTS Wireless links are error-prone with relatively high and fluctuating bit-error rates (BERs) . Different methods have been developed to mitigate transmission errors so as to deliver data packets to mobile users with guaranteed low B E R . A s we mentioned above, two main classes of techniques are F E C coding and link layer retransmissions of lost packets using A R Q protocols. For those real-time applications with low and stringent transfer delay requirements, F E C is normally used as the only error mitigation scheme, because A R Q may consume too much time. For real-time traffic where transfer delay requirements are relatively lax, for example, most applications in the Streaming class, A R Q or hybrid A R Q can be used. In our research, we assume the F E C code is fixed during the lifetime of a connection. 
In the G P R S / U M T S networks, link layer retransmissions occur at the R L C layer residing 21 Chapter 2 Overview of Previous Work at the wire-wireless interface nodes, that is, the Radio Network Controller ( R N C ) in U M T S networks. Our investigation focuses on the Radio L i n k Control ( R L C ) layer in Acknowledge Mode ( A M ) (Figure 2.6). It is actually a kind of sliding-window / multiple-reject A R Q . In the Upper layer New PDUs Transmission queue I Copfc Send PDUs R,bamissioj5!eanACKe[1PDlJsmbuffer BuTer Move NAKed PDUs to Transmission queue ACK NAK Datapackets PDU handler Transmitting side . 7 - 2 , - - M I - -t-w-f--ta Air interface 1 2 t t 3,4,5 6 7,8,1,2 3 Receiving side Figure 2.6 Acknowledged Mode in Radio Link Control layer (window size = 8). model used in our research, we assume that the receiver must deliver the in-sequence packets to the higher layers. R L C protocol does the segmentation and reassembly. After segmentation, the higher layer data segments are encapsulated into the R L C layer's Protocol Data Units (PDU). The R L C layer has a buffer holding newly arriving PDUs and outstanding P D U s which have been sent but not been Acknowledged ( A C K ) . A P D U is deleted from this buffer i f it is A C K e d by a status report 22 Chapter 2 Overview of Previous Work from the receiver, or transmitted more than a certain number of times. Senders can also actively set a "polling bit" in the P D U header to request a status report from receivers. This action can be triggered in different ways, for example, by a timer time-out, or using a window based rule, or when the last P D U remained in buffer, and so on. We carefully model the "Poll_PDU"-based and the timer-based trigger functions. The former is a value set by the system. Every time a sender sends out P o l l _ P D U PDUs (including newly transmitted and retransmitted PDUs) , a polling bit is set on the next P D U ' s header. The latter is merely a timer that periodically triggers the polling. The receiver reports on status by sending a report using Super-Fields (SUFI). There are several situations that can trigger the receiver to send a status report. For example, i f the sender sends a polling request, a status report is triggered. Another situation is i f the receiver detects one or more P D U missing. A third way is i f the receiver transmits status reports periodically to the sender. A forth trigger occurs i f not all P D U s requested for retransmission have been received before the Estimated P D U Counter (EPC) has expired. Finally, either peer entity can trigger the R E S E T function to reset the wireless connection when it detects an extremely abnormal situation on the wireless link, for example, a very long idle time or other phenomena, indicating high error rates. If the transmission queue is full, an arriving Acknowledged Mode Data ( A M D ) P D U gets dropped. Otherwise, it joins the tail of the transmission queue waiting for its first transmission. Once it moves to the front, the R L C stores a copy of it in the retransmission buffer prior to passing it to the lower layer for transmission. The receiver accepts the "good" PDUs and discards P D U s with more errors than the F E C code can correct. It generates status reports to request the sender to resend the lost P D U s that have 23 Chapter 2 Overview of Previous Work not expired. R L C defines the multiple reject A R Q scheme, in which every status report not only contains N A K s , but also A C K s . 
Status reports can be sent as dedicated control P D U s or piggybacked in data P D U s over the return link. The control P D U s are put into the transmission queue with higher priority. In Figure 2.6, we assume these actions are incorporated in the " P D U handler" unit. The sender keeps sending P D U s in its transmission queue until it receives a status report or reaches the sending window boundary. Once a status report arrives, the R L C takes two actions according to the A C K s and N A K s received: delete A C K e d P D U s from the retransmission buffer, and copy N A K e d P D U s from the retransmission buffer to the transmission queue with higher priority than the new (first time transmission) data P D U s . This retransmitted P D U gets dropped when its number of transmissions reaches a pre-configured maximal value, which is cal led M a x D A T in the 3GPP document [18]. 2.3.2 Other Mechanisms at Wireless Access Nodes The wireless-wireline interface nodes may deploy a number of resource management mechanisms, including traffic scheduling, call admission control ( C A C ) / bandwidth reservation, and so on. These mechanisms are not defined in the 3 G P P documents, instead, they are implementation dependent. C A C and bandwidth reservation are used to address performance degradation due to mobile handoff. When a mobile changes cells, the new cell may not guarantee supplying enough network resources to keep the same QoS as the old cel l , so the mobile may experience perfor- mance degradation during the handoff period [25] [26]. C A C and bandwidth reservation seek to estimate the amount of resources needed by the incoming mobiles, and make admission decisions 24 Chapter 2 Overview of Previous Work to reduce the performance degradation or handoff dropping rates. Another kind of bandwidth reservation occurs when multiple users try to get access to the same media. Packet scheduling at the wireless-wireline interface nodes is important for QoS provision. M u c h research work has been done in this area, for example, the Weighted Fair Queuing (WFQ) [27] variances such as Effort-Limited Fair (ELF) scheduling [28] and the Idealized Wireless Fair- Queuing (IWFQ)[29], the extended Earliest-Due-Date (EDD) / Delay-Earliest-Due-Date (Delay- E D D ) for wireless [30], and so on. From the viewpoint of implementation, the E D D mechanisms need to check every packet's time stamp, thus it is hard to realize in reality. The W F Q variances guarantee delay bounds on weight tightly coupled to a preserved service rate, so it is relatively easier to implement. However, to determine a weight matching a specific delay requirement is still quite complex, because a number of factors such as propagation delay, queuing delay, packet arrival rate, and so on, have to be counted for. In order to s impli fy our research model , we assume M e d i a Access Con t ro l ( M A C ) protocols work perfectly to eliminate the negative impact of all of the above mentioned competi- tion for resources. We make the assumption that both the assigned bandwidth (or the service data rate) of a real-time flow and the time consumed by the M A C layer are constant during the lifetime of a connection. The impact of the above factors on our proposed mechanisms can be further investigated in the future. The buffer management issue at wireless-wireline interface nodes has not attracted much research interest. So far, al l researchers assume that, at the l ink layer, drop-tail is used as the default queue management scheme. 
Under the drop-tail assumption, [31] [32] [33] estimate the effects of throughput and the buffer overflow dropping process on buffer capacity when the l ink 25 Chapter 2 Overview of Previous Work layer A R Q is employed, but they neither consider the delay of packets, nor target the buffer management issue from the viewpoint of active queue management. ine Our analysis in the next chapter shows that i f a real-time flow with a hard time deadl uses link layer A R Q , when the transmission queue length grows too long due to retransmissions caused by wireless error, the queuing delay dominates the other delays, causing a great deal of expired packet drops. Our proposal is that A Q M not only be applied to traffic aggregates at the G P R S / U M T S core routers, but also to the link layer buffer at the wireless access nodes, namely, the DiffServ egress edge nodes, to control the queuing delay caused by wireless errors. In the following chapters, we analyze the behavior of the link layer A R Q , and derive the necessary mathematical expression of the packet expiration rate on queue length with the link layer A R Q . We also prove that introducing A Q M into the l ink layer transmission buffer signifi- cantly reduce packet dropping due to expiry, and improves the end-to-end QoS. 26 Chapter 3 Applying AQM to Link Layer Buffers for Real-time Traffic Chapter 3 Applying AQM to Link Layer Buffers for Real- time Traffic In Chapter 2, we observed that the possible causes of a real-time flow P D U drop are (1), the number of transmissions exceeding M a x D A T , (2), transmission queue overflow, and (3), packet A M D P D U expiry. The first point is fully determined by the transient wireless bit error rate (BER) , which is not a focus of this thesis. Points two and three are also caused by wireless l ink quality degrada- tion, but they can be eliminated by appropriately setting the transmission queue capacity. If we set the capacity to a small enough value, expirations disappear, but overflow is very likely to occur. If the queue capacity is set to a large enough value, the situation reverses. This gives us a chance to improve the end-to-end performance of real-time flows. In the above cases causing packet drops, we believe the dropping of real-time packets through overflow has more advantages than by expiration. With overflow, a newly arriving A M D P D U does not have a chance to consume buffer space and wireless bandwidth at the interface node. Dropping a packet this way does not affect the transfer of other packets in the same flow and other flows sharing the wireless channel. In contrast, such is not the case for expiration drops, as the packets have to be sent, possibly several times, until they are received - before expiration dropping occurs. We propose to use active queue management, such as Random Early Detection ( R E D ) , to drop those packets that are l ikely to expire at a much earlier point in the end-to-end connection, using an overflow-like mechanism. Because the packets are dropped before being enqueued to the buffer, they need not consume any network resources. However, i f the packets are dropped at a rate higher than the natural dropping rate due to expiry, the system's performance 27 Chapter 3 Applying AQM to Link Layer Buffers for Real-time Traffic degrades unnecessarily. In order to avoid this problem, we need an analytic expression of the natural drop rate due to expiry, and to adjust our active drop rate to be lower than but close to it. 
3.1 Analysis of Real-time Flows over RLC with A M We assume a real-time flow with effective service data rate R. Each link layer packet with fixed size / takes l/R seconds for one transmission. In an extreme case, such as where the wireless link is clean enough to transmit all PDUs successfully the first time, the upper bound of the trans- mission queue length that guarantees expiration packet drop is approximated thus, where T is the maximum allowable transfer delay of the real-time application, tl is the latency caused by the Internet, and tl is the sum of the one-way wireless propagation delay and process- ing/scheduling delays at both ends. When a new packet arrives at the transmission queue, i f the observed queue length is longer than Lmax, then even i f the wireless l ink is clean, this packet is still dropped due to expiration. A queue capacity larger than Lmax is clearly not useful. If the wireless link is so noisy that every P D U has to be transmitted N times before being accepted or dropped, where N is the maximum allowable number of transmission, every packet approximately spends in the transmission queue waiting for its transmissions to complete, during which following pack- ets have to wait. Lmax _ UT-t\-t2)-R~ I (3.1) (3.2) 28 Chapter 3 Applying AQM to Link Layer Buffers for Real-time Traffic On the other hand, in this situation every packet also has to spend approximately t4 = (N-\)-t2-2 (3.3) time in the retransmission buffer waiting for an A C K or N A K to come from the receiver, during which time it does not have any impact on the transmission of other packets. The factor 2 takes into account round trip delays. We assume that the delay t2 is exactly the same on both forward and backward paths. Thus, the lower bound of the queue length that guarantees no packet expira- tion is Lmin = T-tl - ? 3 - f 4 t3 (3.4) If the observed queue length falls into the range of [Lmin, Lmax], there is a certain probabil- ity that a newly arrived P D U w i l l be dropped due to expiration. Intuitively, we expect that a longer queue causes a larger queuing delay, and hence a higher probability of packet expiration. We propose to actively drop some packets at the R L C transmission queue in accordance with the above observation, so that buffer length is limited to avoid packet expirations, and bandwidth can be released earlier to improve utilization. The relationship between the average packet delay and the average queue length has been analyzed in [33] [34]. Here, we derive the relationship between the expiration probability of real- time packets and queue length. Assuming the wireless link has a long-term average packet loss rate of PB due to transmis- sion errors, we can approximate the wireless link with an error-free data link, one that has a vari- 29 Chapter 3 Applying AQM to Link Layer Buffers for Real-time Traffic able data rate ranging between [R/N, R], with an average data rate of p = (1 - Pg)R. Note that this approximation does not change (3.1) - (3.4) above. The transmission time for each packet is thus independent o f each others, w i t h a mean m = Up and var iance o 2 upper bounded by {max(l/p - l/R, I N/R - / / p ) } 2 . The independence here is extremely important for further analysis. We assume the expired real-time flow packets are dropped only at the application layer of mobile terminals, which is opposite to E D D or De lay -EDD as we mentioned in the previous chapter. 
Because the R L C and the lower layers do not retain any timing information regarding real-time applications, they are unable to determine whether a packet has expired or not. In fact, timing information is normally stored in upper layer packets that have been encapsulated into the R L C layer P D U s . Even the R L C layer manages to retrieve timing information (which would break through the protocol lay- ering rules), it is very unlikely that every R L C layer P D U ' s delay time is recorded for its expiry status to be determined. When packets are never dropped at the R L C or the lower layers in the interface nodes due to expiry, the transmission time of any packet never affects the transmission time of other packets, so we can consider their transmission time to be independent of each other. If a newly arrived packet observes n packets in the queue, Lmin <n< Lmax, its queuing delay becomes a new random variable V, which is the sum of the previous n independent random variables. When Lmin is large enough, this sum of n packet transmission times forms a Gaussian distr ibut ion according to the central- l imit theorem, wi th mean M = mxn and variance ,2 2 o = n x o . 30 Chapter 3 Applying AQM to Link Layer Buffers for Real-time Traffic N o w with the Gaussian distributed random variable V, we can analyze the relationship between the expiration drop rate p and the observed queue length n. When a packet arrives at time T 1 ( its deadline is computed as %2 = + (T-ti). Since V i s a Gaussian variable, the probabil- ity of expiration p is ,x2 - Ms p = P r{x 1 + v > x 2 } = Q [—r-) (3.5) where Q(.) is the well-known Q function giving the tail area of a Gaussian distribution, as shown by the shaded area in Figure 3.1. In Figure 3.1, we draw the Gaussian probabi l i ty density Tj MA MB x 2 Figure 3.1 Gaussian distribution with means MA and MB, variances a ' A and a ' B , where MA < MB and ° ' A < ° ' B . functions for rc = A and n = B, where A < B. When the value of n is A, both the corresponding 31 Chapter 3 Applying AQM to Link Layer Buffers for Real-time Traffic mean and variance (M and o') are small, so the shaded area is also small. When n = B, its M and a' increase, thus the shaded area increases as well . This result matches the intuitive expectation earlier. Formula (3.5) is extremely important for the scheme we propose in this thesis. It is the nat- ural drop rate caused by expiration. Our active drop rate must not be higher than its value in a wide range of queue lengths, but must come as close to it as possible. F r o m (3.5) we can get the Gaussian curve I in Figure 3.2, which is the relationship O b s e r v e d q u e u e length Figure 3.2 Gaussian expiration drop rate and AQM approximations. between the expiration drop probability p and the observed queue length n. To obtain this result, we fix the long-term packet error rate at PB = 15%, and queue capacity to Lmax. To smooth the 32 Chapter 3 Applying AQM to Link Layer Buffers for Real-time Traffic changes in p, the observed queue length n here is actually made to be the calculated average queue size using a low-pass filter with an exponential weighted moving average wq [13]. Note that the above result is an approximation as the possible loss of some dedicated con- trol P D U s for status reports has been ignored. These losses introduce some extra queuing delay that causes the actual Gaussian curve I to be slight higher than shown in Figure 3.2. 
Actively dropping packets with the exact probability obtained above is difficult to realize, due to its computing complexity. Instead, we propose to use a simpler scheme, similar to gentle-RED, to approximate the expiration rate. In Figure 3.2 we present two piece-wise linear AQM functions, II and III, to approximate the Gaussian curve. We set these two approximation curves to be lower than the Gaussian one over most of the range of the observed queue length, so as to guarantee that the number of packets actively dropped is less than the number of natural expiration drops. Clearly, the better the approximation, the better the performance should be; therefore function III yields the best results, as shown in the next section. However, in reality, it is hard to measure the Gaussian curve because of varying factors. We consider the two-slope, piece-wise linear function II, defined in (3.6), to be the easiest to implement, as it can be uniquely fixed by Lmax and Lmin, while the other parameters are chosen somewhat arbitrarily. We can see that this simple scheme works well enough in controlling queue length and improving system performance.

    p = 0                                                               if n ≤ Lmin
    p = maxp · (n - Lmin) / ((Lmax - Lmin)/2)                           if Lmin < n ≤ (Lmin + Lmax)/2    (3.6)
    p = maxp + (1 - maxp) · (n - (Lmin + Lmax)/2) / ((Lmax - Lmin)/2)   if n > (Lmin + Lmax)/2

3.2 Simulation Results

The simplified system simulation model shown in Figure 3.3 has been built and exercised in the OPNET 8.1 environment.

Figure 3.3 Simulation model for AQM in the link layer buffer.

The RLC layer protocols are carefully modeled. We simulate the behavior of link layer ARQ with polling functions and the other conditions that can trigger the generation of a status report, as introduced in Section 2.3.1. For the status reports, we modeled the LIST Super-Fields (SUFI) [18] to carry the NAK and ACK information.

The wireless channel model in OPNET 8.1 takes into account multiple access interference, background noise and multipath fading; it thus forms a more comprehensive error model than the Gilbert-Elliott model [35][36]. To focus the performance evaluations on the management of the downlink transmission queue, we make the simplifying assumption that the wireless uplink and the wireline Internet links are error-free. We assume that the arrivals of real-time packets follow a Poisson distribution, that packets have a constant length, and that the unit of queue length in the figures given below is the number of packets. The traffic load thus varies inversely with the average inter-arrival time of the real-time packets.

3.2.1 Two-slope Piece-wise Linear AQM Function II

In this section, we take the two-slope piece-wise linear approximation AQM function II as an illustrative example of the effects of the scheme. The parameters for the wireless link and the real-time application are given in Table 3.1.

Table 3.1 Parameters for Simulation of AQM in RLC layer.
    Parameter                                                Value
    Effective wireless data rate R                           1024 Kbps
    Max. number of transmissions MaxDAT                      5
    Lmin                                                     40 packets (AMD PDUs)
    Lmax                                                     210 packets (AMD PDUs)
    Maximum allowable end-to-end delay T                     250 ms
    Propagation delay in Internet & GPRS/UMTS core network   50 ms
    Bandwidth of Internet link                               2048 Kbps
    Average long-term packet error rate PB                   15%
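Under the definitions above, function II of (3.6), together with the EWMA queue averaging, can be sketched as follows. The class and method names are ours; the defaults follow Table 3.1 and the RED settings quoted in the next paragraph, and this is only an illustrative sketch, not the thesis's OPNET implementation:

    import random

    class TwoSlopeAqm:
        """Sketch of AQM function II in (3.6), with RED-style EWMA averaging [13]."""

        def __init__(self, lmin=40, lmax=210, maxp=0.1, wq=0.002):
            self.lmin, self.lmax = lmin, lmax
            self.mid = (lmin + lmax) / 2.0
            self.maxp, self.wq = maxp, wq
            self.avg = 0.0          # low-pass filtered queue length n

        def observe(self, queue_len):
            # Exponential weighted moving average of the instantaneous queue length.
            self.avg = (1.0 - self.wq) * self.avg + self.wq * queue_len

        def drop_probability(self):
            n = self.avg
            if n <= self.lmin:
                return 0.0
            if n <= self.mid:
                # First slope: rises from 0 to maxp over [Lmin, (Lmin + Lmax)/2].
                return self.maxp * (n - self.lmin) / (self.mid - self.lmin)
            # Second slope: rises from maxp towards 1 over [(Lmin + Lmax)/2, Lmax].
            p = self.maxp + (1.0 - self.maxp) * (n - self.mid) / (self.lmax - self.mid)
            return min(p, 1.0)

        def should_drop(self, queue_len):
            # Called once per arriving PDU, before it is enqueued.
            self.observe(queue_len)
            return random.random() < self.drop_probability()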
To get the values of the RED parameters wq and maxp, we set them as suggested in [23]: wq = 0.002 and maxp = 0.1.

Figures 3.4 through 3.7 are obtained with the two-slope, piece-wise linear AQM function II in (3.6) under a 90% offered load. In this thesis, the term "offered load" refers to the ratio of the traffic generation rate of a specific traffic source to the effective service rate of the corresponding wireless link. Figure 3.4 shows the relationship between the wireless BER, the throughput, and the average transmission queue length (in packets) as functions of time, without AQM.

Figure 3.4 BER, transmission queue length, and throughput without AQM.

When the queue length becomes large under a high wireless bit error rate, the corresponding throughput drops sharply. We must note that wireless errors do not directly cause packet loss, because of the link layer ARQ recovery mechanism. However, a high wireless bit error rate causes frequent retransmissions at the link layer and large queuing delays, leading to expiration drops. Figure 3.5 shows this clearly.

Figure 3.5 Different packet drops without AQM.

We can see that there are three types of packet drops when no AQM is applied: expiration, the number of transmissions exceeding the maximum value (MaxDAT), and overflow. Among them, the expiration packet drops account for the largest share. When the queue length or the wireless BER increases, the number of all three types of packet drops increases as well, but expiration drops are the most harmful to system performance, due to their large share of the total packet drops. Figure 3.5 shows how serious the problem of expiration can become when the system load is high.

Applying the AQM scheme in (3.6) to constrain the queue length changes the situation dramatically. Figure 3.6 shows the relationship between the BER, queue length (in packets), and throughput over time, with AQM function II.

Figure 3.6 BER, transmission queue length, and throughput with AQM.

We can see that the average queue length is effectively controlled near the middle point (Lmin + Lmax)/2, and that the throughput curve does not fluctuate much over time, a better situation than in Figure 3.4.
The simulations also show that the maximum queue length increases with the BER and the traffic arrival rate, as presented later. Figure 3.7 shows the reasons behind the improvements.

Figure 3.7 Different packet drops with AQM.

We can see that one of the main differences between Figure 3.5 and Figure 3.7 is that the types of packet drops have changed. With AQM, all overflow drops and most expiration drops are eliminated, replaced by a new kind of packet drop: active random drops. The figure shows that the number of active random drops is much smaller than the number of expiration drops in Figure 3.5, but still dominates the other types of packet drops. This demonstrates the effectiveness of AQM in improving system throughput without degrading the overall packet drop rate. Note that the latter condition is an important criterion in maintaining the quality of service of real-time applications, as mentioned above.

3.2.2 Comparison of Different AQM Functions with Varying Traffic Load

In this section, we present the effect of the two AQM functions II and III under different traffic loads. We fix all other parameters as in Table 3.1, and vary the traffic load from 50% to 100%. The AQM function III has been presented in Figure 3.2; with the parameters listed in Table 3.1, we set its two turning points, corresponding to the drop rates 0.1 and 0.9, at queue lengths 100 and 180. Figure 3.8 and Figure 3.9 show the throughput and end-to-end delay of these dropping functions.

Figure 3.8 Throughput vs. traffic load with three schemes.

Figure 3.9 End-to-end delay vs. traffic load with three schemes.

Under heavy traffic (traffic load exceeding 70%), the AQM schemes yield great improvements in both overall throughput and end-to-end delay. As expected, AQM function III gives better performance than function II, thanks to its better approximation of the Gaussian curve. Figure 3.10 shows the moving-averaged queue length of the above three dropping schemes with varying traffic loads; the average queue length increases with the traffic load when the BER is constant.

Figure 3.10 Average transmission queue length with three schemes.

Figures 3.11, 3.12 and 3.13 show the different types of packet loss under varying traffic loads: without AQM, and with the two AQM functions II and III. Owing to the rule that the active packet drop rate is kept below the Gaussian drop rate, the total number of drops with AQM functions II and III is much smaller than without AQM. It is apparent that when traffic is light, the queue length is not long enough to trigger the AQM (see Figure 3.10), and the expiration packet drop rate is small enough not to damage system performance; therefore, the performance in both scenarios, with and without AQM, is similar. With an increasing traffic load, both the expiration drop rate and the queue length grow.
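Function III is specified in the text only by its two turning points (drop rates 0.1 and 0.9 at queue lengths 100 and 180); a general piecewise-linear interpolation through those points reproduces the curve. The end breakpoints at Lmin = 40 and Lmax = 210 in the sketch below are our own choice, not stated in the thesis:

    def aqm_function_iii(n, points=((40, 0.0), (100, 0.1), (180, 0.9), (210, 1.0))):
        """Piecewise-linear drop probability through (queue length, probability) points."""
        if n <= points[0][0]:
            return 0.0
        for (x0, p0), (x1, p1) in zip(points, points[1:]):
            if n <= x1:
                return p0 + (p1 - p0) * (n - x0) / (x1 - x0)
        return 1.0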
When the traffic load exceeds 70%, the advantages of our proposed AQM schemes become quite obvious. Under such conditions, without AQM, the wireless link is saturated by an ever increasing number of retransmissions, resulting in a reduction of throughput as the traffic load increases, as shown in Figure 3.8. As the transmission queue length approaches Lmax, the expiration drop rate becomes the dominant (Figure 3.11) and most harmful factor for system performance, in both throughput and end-to-end delay. With AQM, we can control the queue length and limit it to a lower value, thus eliminating expiration packet drops. Since the AQM releases resources that would otherwise be used ineffectively by expired packets, it is also able to mitigate throughput saturation, as shown in Figure 3.8, where the throughput curve with AQM continues to increase up to a traffic load of 100%.

Figure 3.11 Different packet drops vs. traffic load without AQM.

From these figures we can also see that the better the AQM functions approximate the Gaussian curve, the better they can constrain the queue length, and the better the performance becomes. When we compare Figure 3.12 to Figure 3.13, we see that AQM function III does a better job of eliminating expiration packet drops, and it outperforms AQM function II: in Figure 3.12 some expiration packet drops still remain, while in Figure 3.13 none do. This complete conversion from expiration drops to active random drops is critical to improving the end-to-end Quality-of-Service in this scenario.

Figure 3.12 Different packet drops vs. traffic load with AQM function II.

Figure 3.13 Different packet drops vs. traffic load with AQM function III.

As we can see from Figure 3.2, AQM function II can be uniquely fixed by Lmin and Lmax, while AQM function III lacks a middle point from which to determine its turning points, and requires more accurate information regarding the factors affecting the Gaussian curve it approximates. If we have all the necessary information to derive the Gaussian curve, we can easily determine an AQM function with a good approximation; in fact, any AQM function can be chosen to approximate the Gaussian curve, if the computing capacity is powerful enough.

One of the parameters most difficult to determine, but most deeply affecting the Gaussian drop rate, is the long-run wireless packet error rate PB. In the next section we show results using AQM functions to approximate the different Gaussian curves caused by different BERs.

3.2.3 AQM Functions with Different Error Rates

Different long-term wireless packet error rates (over transmission periods) lead to different Gaussian drop rates, as illustrated in Figure 3.14 below.

Figure 3.14 Packet drop rate with different long-run packet error rates.

The Gaussian curves vary with the long-term packet error rate, in a trend where a larger PB moves the Gaussian curve closer to Lmin. Understandably, a larger PB leads to more retransmissions and longer queue lengths, thus causing a higher probability of real-time packet expiration.
The wisest way to choose an AQM function is to sample the error rate for a short period at the beginning of the transmission, use the sampled values to determine an initial Gaussian curve, and then derive an appropriate AQM function to approximate it. Because such short periods may not generate reliable enough samples, the Gaussian curve and the AQM function may need to be further adjusted during the following transmission period. The basic rule, however, that the AQM drop rate must not be higher than the Gaussian drop rate over most of the queue length range, must be adhered to. Every AQM function may perform well within a certain queue length range, even if it does not accurately approximate the Gaussian curve. Below we show the effect of the AQM functions II and III of Figure 3.2 with different error rates.

Again, we use the parameters listed in Table 3.1. The packet error rate PB ranges from 5% to 35%, and the traffic load is constant at 80%. The system performance of the AQM functions and of the non-AQM (Gaussian) case, with a varying long-term packet error rate PB, is presented in Figure 3.15 and Figure 3.16.

Figure 3.15 Throughput vs. long-term packet error rate PB.

Figure 3.16 End-to-end delay vs. long-term packet error rate PB.

When the long-term packet error rate PB varies within a certain range (around 15%), both AQM functions achieve very good performance. If the error rate grows to higher values, as in Figure 3.14, the Gaussian curve moves closer to Lmin. Here, the effect of AQM function II gradually weakens, because the gap between it and the Gaussian curve widens, while AQM function III still performs well because it remains closer to the Gaussian curves. When PB is small, the average queue length rarely grows to a large enough value, so the AQM rarely takes effect, and the results with and without AQM are similar. This shows the importance of accurately measuring the Gaussian curve and carefully choosing a good AQM function. Also, in order to get the best AQM function, the long-term packet error rate must be measured before the AQM function is chosen, and further adjustments may be needed during the following transmission period.

In this simulation scenario, the RESET function of the RLC layer in GPRS/UMTS is not considered. If PB grows to a high value, the wireless link is very likely to be reset by this function.
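The adaptation procedure just described can be sketched as below. The `measure_error_rate` helper is hypothetical, and the specific mapping from the sampled PB to the curve's onset point is our own assumption: it only mirrors the trend of Figure 3.14, where a larger PB pulls the curve toward Lmin.

    def breakpoints_for_error_rate(pb, lmin=40, lmax=210, maxp=0.1):
        """Derive piecewise-linear AQM breakpoints from a sampled error rate PB.

        A larger PB shifts the point where active drops begin from near the
        midpoint (clean link) toward Lmin (very noisy link). The linear shift
        below is an assumption of this sketch, not a rule from the thesis.
        """
        severity = min(pb / 0.35, 1.0)          # 35% is the worst rate simulated
        onset = lmin + (1.0 - severity) * (lmax - lmin) / 2.0
        knee = (onset + lmax) / 2.0
        return ((onset, 0.0), (knee, maxp), (lmax, 1.0))

    # Sampled at the start of a transmission and re-sampled periodically, e.g.:
    #   pb = measure_error_rate(window=5.0)          # hypothetical helper
    #   points = breakpoints_for_error_rate(pb)      # feed to aqm_function_iii()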
3.3 Summary

In this chapter, we analyze the behavior of the 3GPP GPRS/UMTS RLC layer's transmission queue in Acknowledged Mode, and derive a novel expression for the relationship between the average queue length and the drop rate due to expiry of real-time flows with hard time deadlines. We show that this drop rate can be approximated as a Gaussian curve over the average queue length. With this approximation, we propose to actively drop those packets that are likely to expire, probabilistically, following piece-wise linear functions, before they are enqueued. This action guarantees no QoS degradation, by keeping the active drop rate lower than the Gaussian drop rate in every case, and on this basis improves overall performance when the traffic load or the wireless error rate is high.

Through simulations, we prove the effectiveness of the mechanism. We also show that the better the active drop rate curve approximates the Gaussian curve, the greater the improvement.

Chapter 4 Applying WERUF to RLC Queue

In this chapter, we not only apply the idea of dropping real-time packets earlier at the wireless-wireline interface nodes, but also extend it to the whole GPRS/UMTS domain. The reasons for and advantages of limiting unresponsive best-effort traffic have been established in [15][16]. The idea behind ERUF is that when unresponsive flows encounter network congestion at certain nodes along the end-to-end path, causing some packets to miss their delivery deadlines or overflow, dropping them at an earlier node frees shared network resources for other flows.

We apply a similar idea in GPRS/UMTS wireless networks. In these networks, although the shaping functions at the edge nodes (both wireless and fixed sites) limit the volume of unresponsive flows, the problem of these flows unnecessarily using up valuable network resources is exacerbated by the costliness of wireless bandwidth. Furthermore, propagation-induced variability of transmission quality over wireless links causes congestion at the wire-wireless interface nodes, that is, the RNCs in UMTS networks. We adapt the ERUF scheme to address these problems; the new scheme is called Wireless Early Regulation of Unresponsive Flows (WERUF). Ref. [14] also proposes an algorithm to identify flows that are not TCP-friendly, for regulation. In our case, however, there is no need for identification, because the link layer transmission queue length is the best indicator of the flows to be regulated.

4.1 Early Regulation in GPRS/UMTS Network

Figure 4.1 demonstrates the general idea of WERUF.

Figure 4.1 The general concept behind Wireless Early Regulation of Unresponsive Flows (WERUF).

Leaky Bucket 1 (LB1) and Leaky Bucket 2 (LB2) can be realized by the DiffServ traffic conditioners and the wireless channel resource management functions, respectively. When packets accumulate at the RNC, due to RLC retransmissions caused by wireless errors, signals are sent back to the DiffServ edge node, namely the GGSN in the GPRS domain, to notify it to shrink the token rate of the corresponding traffic shaper (the token bucket LB2). With this operation, the shared network resources in the forwarding nodes (or core routers), such as the SGSNs, are released for other flows. Because the packets accumulating at the RNC may expire, as discussed in the last chapter, we generate the congestion signal with a probability lower than the natural expiration drop rate, and drop these packets at the GGSN. In this way, we effectively convert the expiration packet drops into overflow-like packet drops at the DiffServ ingress edge node, and at the same time guarantee that no performance degradation will occur for the constrained flows. It is worth noting that the packets at the RNC are AMD PDUs, while the packets at the GGSN are IP packets; they are handled by different layers.

Again, we need to analyze the behavior of the RLC layer transmission queue length, as in the previous chapter.
Equations (3.1) - (3.5) still apply, except that the meanings of some variables change. In this chapter we are concerned with finding an algorithm that caters to the QoS requirements of the UMTS bearer service, and with making improvements there. Focusing on the GPRS/UMTS domain, we assume the latency caused by the Internet, external to the GPRS/UMTS domain, is a fixed value. The variable T in (3.1) and (3.4) becomes the transfer delay budget of the UMTS bearer service; according to [7], T ≤ 250 ms, and in the simulations we use 250 ms for simplicity. As a result, Lmin and Lmax in these equations become the minimal and maximal transmission queue lengths that can satisfy the transfer delay requirement of the UMTS bearer service, instead of the end-to-end delay requirement of real-time applications. We also assume that no PDU is dropped in the RLC layer due to expiry, so that the independence of each PDU's transmission time is still maintained. Another change is that t1 in these equations becomes the latency caused by the GPRS/UMTS core network, excluding the Internet. The equations still hold true despite these changes.

Further, we use OPNET 8.1 to model the DiffServ functional elements in the GPRS/UMTS routers, as shown in Figure 2.3. In the GGSN, we build the Multi-field classifier, the traffic shaper, the multiplexer and the dropper; in the SGSN we build only a dropper. These droppers are RIO droppers handling IN and OUT packets. They strictly follow the IETF document [21] to implement the Assured Forwarding (AF) architecture accurately.

One of the bases of this scheme is that every flow must be independent, except at the core routers. Therefore we assume the Complete Partitioning (CP) [37] policy is used at the GGSN for each flow shaped by the Leaky Bucket. At the RNC, if no explicit traffic conditioner is implemented and the shaping relies on wireless resource management, some assumptions must be clarified. If the flows are multiplexed into the same transmission buffer, our scheme applies either to cases where each flow goes through a different RNC served by the same core network router, or to cases where the CP policy is also used at the RNC for every flow. In order to separate the flows, a Multi-field Classifier resides at the RNC (Figure 2.3). Whether there are one or more RNCs in a simulation, the network topology figures below show only one RNC for simplicity.

Source quenches employ Internet Control Message Protocol (ICMP) [38] packets, sent to the GGSN to adjust the shaper's token rate for the flow (here we consider per-flow shaping only). These ICMP packets are generated according to the AQM functions described in the last chapter. In the GGSN shapers, the token rate adjustment follows a multiplicative-decrease/additive-increase rule. When an ICMP packet is received, and the token rate has not already been cut within the current estimated end-to-end round trip time (RTT), the new token rate is calculated as

    new rate = max{guaranteed token rate, current token rate / 2}    (4.1)

where the guaranteed data rate is pre-configured by the system [7]; the guaranteed rate thus acts as a floor below which the shaper is never driven. This cut-by-RTT scheme protects the regulator from being rapidly driven down by a sequence of source quenches before the source has had a chance to respond to the congestion.
The end-to-end RTT is difficult to measure for real-time traffic, due to the lack of return-path acknowledgments. [14] proposes using twice the propagation delay on the next link towards the congested router to approximate the RTT, but this proves unfeasible here, due to the extra delay caused by link layer retransmissions: over "lossy" wireless links, the queuing delay dominates the propagation delay. When this type of estimate is made at the DiffServ ingress edge node, the GGSN, there is no easy way to predict the queuing delay at an egress edge node such as the RNC. To resolve this problem, we consider the 3GPP-defined maximum transfer delay of the UMTS bearer service, T, to be a reasonable RTT estimate for real-time applications, for two reasons. First, in most cases the latency introduced by the external Internet is much less than the latency in the GPRS/UMTS domain with noisy wireless links, so we can let the delay in the GPRS/UMTS domain represent the total trip time. Second, if the one-way delay in the GPRS/UMTS network for all real-time applications is uniformly distributed, the mean of this delay is likely to lie in the middle of [0, 250 ms]. Therefore, using 250 ms to approximate the round trip delay is quite reasonable.

The token rate recovery mechanism proposed in [16] increases the bandwidth of the rate-reduced shaper associated with a flow by one average packet size for every RTT estimate that passes without the arrival of a source quench for the flow, until the token rate recovers to its original value. This scheme recovers the token rate too slowly, however, and does not work well in wireless networks. For congestion caused by wireless errors, which normally has a much smaller time scale than congestion periods in wired networks, the edge-node token rate should preferably not stay at a low level for an extended period of time; otherwise, if the wireless link has recovered but the token rate has not, throughput degrades unnecessarily. Our recovery rule is that after a token rate reduction, the token rate increases by one average packet size every time a new downlink packet arrives at the token bucket, until the token rate has recovered to its initial value. Our simulation results prove the effectiveness of this mechanism.

As mentioned before, we use the Complete Partitioning (CP) policy to manage the buffer of the GGSN for each token bucket. The buffer size is set to a small value so that new packets start dropping earlier, once the token rate is reduced.
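Putting together the cut rule (4.1), the one-cut-per-RTT guard, and the per-arrival recovery rule, a per-flow regulator at the GGSN might look like the sketch below. The class and method names are ours, time.monotonic stands in for the router clock, and the 250 ms RTT estimate follows the discussion above:

    import time

    class TokenRateRegulator:
        """Sketch of WERUF per-flow token rate control at the GGSN."""

        def __init__(self, initial_rate_bps, guaranteed_rate_bps,
                     avg_packet_bytes, rtt_estimate_s=0.250):
            self.initial_rate = initial_rate_bps
            self.guaranteed_rate = guaranteed_rate_bps
            self.rate = initial_rate_bps
            self.step = avg_packet_bytes * 8       # one average packet, in bits
            self.rtt = rtt_estimate_s              # 250 ms UMTS transfer delay bound
            self.last_cut = float("-inf")

        def on_source_quench(self, now=None):
            # ICMP source quench from the RNC: apply (4.1), at most once per RTT,
            # never cutting below the pre-configured guaranteed rate.
            now = time.monotonic() if now is None else now
            if now - self.last_cut >= self.rtt:
                self.rate = max(self.guaranteed_rate, self.rate / 2.0)
                self.last_cut = now

        def on_downlink_packet(self):
            # Recovery rule: grow by one average packet size per arriving
            # downlink packet until the initial token rate is restored.
            if self.rate < self.initial_rate:
                self.rate = min(self.initial_rate, self.rate + self.step)

For the real-time flows simulated below, initial_rate_bps would correspond to the 1 Mbps maximal token rate and guaranteed_rate_bps to the 256 Kbps guaranteed rate.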
4.2 Simulation Results

We use the same OPNET 8.1 RLC models as in the last chapter, together with the new models of the GSNs. The parameters commonly used in all of the simulations below are listed in Table 4.1. All of the following simulation scenarios include some background traffic in the core network, and the ICMP source quench packets are generated by the RNC with probabilities determined by AQM function II.

Table 4.1 Common Parameters for Simulation of WERUF.
    Parameter                                                      Value
    Max. number of transmissions MaxDAT                            3
    Packet error rate in noisy link                                15%
    Packet error rate in clean link                                0.2%
    Lmin                                                           30 packets (AMD PDUs)
    Lmax                                                           180 packets (AMD PDUs)
    Propagation delay in the GPRS/UMTS core network                1.0 ms
    Wireless effective service data rate for each real-time flow   1 Mbps
    Wireless effective service data rate for each TCP flow         384 Kbps

The token rate adjustment and RTT estimation methods at the GGSN, upon receiving an ICMP packet, are exactly as described in Section 4.1. The term "offered load" in the scenarios given below refers to the offered load of every real-time flow; for example, a 90% offered load means every real-time flow has a 90% offered traffic load, regardless of its wireless link condition. We also let the wireless links be independent of each other; the wireless errors are caused by a separate interference source.

4.2.1 Real-time Flow and TCP Flows

Real-time flows and TCP flows normally have different timing requirements. In the following two scenarios, we apply WERUF to the real-time flows only, because TCP flows do not have any expired packet drops.

4.2.1.1 One Real-time Flow in a Noisy Link and Multiple TCP Flows in Clean Links with Equal RTT

In this scenario, M1, M2, M3 and M4 are downloading traffic from H1, H2, H3 and H4, respectively (Figure 4.2).

Figure 4.2 Network topology of one real-time plus three TCP flows.

The M1-H1 pair is transferring real-time traffic, experiences a high BER in the wireless link, and has a maximal token rate of up to 1 Mbps as well as a 256 Kbps guaranteed rate. The other pairs are running long-lived FTP applications over clean wireless links, with maximal token rates of up to 384 Kbps but no guaranteed rates. The total core network bandwidth is limited to 1 Mbps. The external Internet introduces a 30 ms delay for all hosts. We give the real-time packets higher priority than the TCP packets in the core network, to cater to their different delay requirements. In this network topology, the SGSN acts as the core router in the DiffServ domain.

Figures 4.3 and 4.4 show that after applying WERUF, the throughput of the real-time flow over the noisy wireless link improves significantly once its offered load exceeds 60%. Further, the end-to-end transfer delay is reduced to less than 65% of its value without WERUF. Figure 4.5 shows the improvement to the TCP flows: when the real-time flow's offered load is higher than 60%, the improvement for the TCP flows also increases significantly, due to the core network resources released by the real-time flow.

Figure 4.3 Real-time flow throughput vs. offered load.

Figure 4.4 End-to-end delay of the real-time flow vs. offered load.

Figure 4.5 Total TCP throughput vs. real-time traffic offered load.

We can see that improvements only occur when the real-time traffic flow is heavy. In this case, the noisy wireless link is close to saturation and many real-time PDUs accumulate in the RNC, where the average transmission queue length is long enough to trigger the WERUF.
On the other hand, there is much competition for the shared bandwidth and buffers in the core network, so the resources saved by WERUF can make a positive impact on overall performance.

Next, we investigate what causes the improvements. In our simulation, the real-time packet drops due to the number of transmissions exceeding MaxDAT are almost equal in both cases, so they are not presented in the figures. Figure 4.6 shows that without WERUF, most packets are dropped due to expiration; with WERUF applied, packets are dropped due to overflow at the GGSN only, and none are dropped due to expiry (Figure 4.7). More importantly, the total number of packet drops in Figure 4.7 is less than that in Figure 4.6. This results from our strategy of keeping the active packet drop rate (here, the ICMP packet generation rate following AQM function II) below the natural packet drop rate. The reduction in expiration drops results in the more efficient resource utilization discussed in Chapter 3.

Figure 4.6 Real-time flow packet drops vs. offered load without WERUF.

Figure 4.7 Real-time flow packet drops vs. offered load with WERUF.

This effect is similar to the packet drop conversion described in the last chapter, except that here we also improve the performance of the low priority TCP flows by early regulation. The undeliverable packets are dropped at the GGSN because of the overflow caused by the smaller token rate and the limited buffer space. This mechanism has a drop-tail effect, and is thus likely to drop packets in consecutive runs, contrary to the AQM schemes discussed in Chapter 3. Different real-time applications may favour different dropping disciplines; this topic would be interesting for future research.

4.2.1.2 One Real-time Flow in a Noisy Link and Multiple TCP Flows in Clean Links with Different RTT

In this scenario, the Internet latency of the real-time flow (M1-H1 pair) is 30 ms, while the latencies from H2, H3, ..., H6 to the GGSN are 5 ms, 20 ms, 80 ms, 160 ms and 250 ms, respectively (Figure 4.8). We set the real-time traffic load to be constant at 80%.

Figure 4.8 Network topology of one real-time and five TCP flows with different Internet latencies.

Again, the real-time flow has a higher priority in the core network but suffers from a noisy wireless link, while the TCP flows, with lower priority in the core network, are sent over clean wireless links. Their token rates at the GGSN and wireless data rates at the RNC are the same as in Section 4.2.1.1. The total core network bandwidth is 2 Mbps.

The simulations yield throughputs of 43357 bytes/s and 51719 bytes/s for the real-time flow without WERUF and with WERUF, respectively - an improvement of better than 19% with WERUF. The total packet drop rate with WERUF is also less than the packet expiry rate without WERUF.
TCP throughput with different Internet latencies is shown in Figure 4.9.

Figure 4.9 TCP throughput per flow vs. Internet latency.

Here the improvement favors TCP flows with smaller Internet latency. Because TCP uses ACKs to "clock out" new segments [8], the flows with shorter RTTs occupy more of the core network bandwidth given up by the real-time flow under WERUF.

4.2.2 Real-time Flows Over Different Link Conditions

These scenarios show cases in which multiple real-time flows compete for limited core network bandwidth. WERUF is applied to their transmission queues at the RNCs.

4.2.2.1 Two Real-time Flows in Different Links

In this scenario, only two real-time flows are investigated (Figure 4.10).

Figure 4.10 Network topology of two real-time flows.

Both real-time flows get a 1 Mbps token rate in the GGSN and the same Internet latency of 30 ms. The flow in the noisy link gets a higher priority than the one in the clean link; that is, their packets in the core network are handled with different drop precedences. The purpose of this setting is to show the effects of WERUF more easily: if the two flows had the same drop precedence while competing for core network resources, most packets would be dropped by the RIO dropper in the core network, and the surviving packets arriving at the RNC would not be dense enough to build up a long enough queue to trigger the WERUF. The other parameters are the same as in Table 4.1.

From Figures 4.11 to 4.14, we can see that under heavy load, WERUF not only significantly improves the performance of the low drop precedence real-time flow in the noisy link, but also that of the high drop precedence flow in the clean link.

Figure 4.11 Throughput of the low drop precedence real-time flow in a noisy link vs. offered load.

Figure 4.12 Throughput of the high drop precedence real-time flow in a clean link vs. offered load.

Figure 4.13 End-to-end delay of the low drop precedence real-time flow in a noisy link vs. offered load.

From Figures 4.11 and 4.13, we can see that without WERUF, when the traffic load becomes heavy, the low drop precedence flow in the noisy link approaches saturation; that is, from 70% to 90% offered load, the throughput hardly increases while the end-to-end delay grows. But with WERUF, its throughput continues to increase with the offered load, and the incremental end-to-end delay is less than that without WERUF.

Without WERUF, the performance of the high drop precedence flow in the clean link
degrades as the total traffic load increases (Figure 4.12 and Figure 4.14), because packets of the low drop precedence flow use up the shared resources of the core network. Many of these packets are actually undeliverable due to expiration, so network resources are wasted. With WERUF, these undeliverable packets are dropped earlier, at the GGSN; network resources are utilized more efficiently, and the performance of the high drop precedence flow improves significantly.

Figure 4.14 End-to-end delay of the high drop precedence real-time flow in a clean link vs. offered load.

Figure 4.15 and Figure 4.16 show the packet drops of the low drop precedence flow in the noisy link. The total number of packet drops in these two figures does not include the drops due to the number of transmissions exceeding MaxDAT, which are roughly equal in both cases for the same PB and offered load.

Figure 4.15 Packet drops of the low drop precedence real-time flow in a noisy link vs. offered load without WERUF.

Figure 4.16 Packet drops of the low drop precedence real-time flow in a noisy link vs. offered load with WERUF.

Obviously, WERUF converts the expired packet drops and the overflow drops at the RNC into overflow packet drops at the GGSN. These earlier packet drops prevent undeliverable packets from occupying the limited core network resources, and improve overall performance. Another phenomenon worth mentioning is that for this flow (with low drop precedence, over the noisy link), the total number of packet drops with WERUF is less than without WERUF, which guarantees no observable QoS loss, from the end users' viewpoint, due to the introduction of WERUF.

4.2.2.2 Multiple Real-time Flows in Different Wireless Links

The network topology of this scenario is illustrated in Figure 4.17 below.

Figure 4.17 Network topology of multiple real-time flows in noisy and clean links.

Five real-time flows come from five Internet sources, with the same Internet latency of 30 ms. The maximum token rate of each flow is 1 Mbps, and the guaranteed token rate is 256 Kbps. The other parameters are listed in Table 4.1. We set the offered load of these flows to 90%, and vary the number of noisy links from 1 to 4. As in the last scenario, in the core network we assign a low drop precedence to the flows experiencing noisy wireless links, and a high drop precedence to the flows experiencing clean links. The total throughput of the flows in the noisy and the clean links is measured; the result is shown in Figure 4.18.

Figure 4.18 Total throughput in noisy and clean links vs. number of noisy links.
C l e a r l y , W E R U F improves the performance not on ly i n noisy l inks but also i n c lean ones. T h e t r end o f the i m p r o v e m e n t is i n t e r e s t i ng . T h e i m p r o v e m e n t o f c l e a n l i n k f l o w s gradual ly shrinks w i t h the growth o f the number o f noisy l i nks . A s the number o f f lows over c lean l i n k s decreases, the traffic l oad on these f lows becomes too l igh t to f u l l y u t i l i z e core ne twork resources made avai lable by constrained f lows over noisy l i nks . O n the other hand, the improve - ment o f n o i s y l i n k f l o w s g r o w s w i t h the number o f n o i s y l i n k s , because ea r ly r e g u l a t i o n is t r iggered i n more and more f l o w s ' t ransmiss ion queues, s h o w i n g a greater and greater pos i t ive effect. 4.3 Summary In this chapter, we extend the idea o f active packet drops discussed i n the last chapter f rom 68 Chapter 4 Applying WERUF to RLC Queue the wireless-wireline interface nodes to the whole G P R S / U M T S domain by Wireless Ear ly Regulation of Unresponsive Flows ( W E R U F ) . We utilize DiffServ functional elements at the edge nodes to restrict real-time flows with hard time deadlines, so as to release shared core network resources to other flows. These specific real-time flows are identified by the transmission queue length at the wireless-wireline interface nodes with the probability generated by the piece-wise linear A Q M functions. We also propose some important rules for the rate adjustment at the edge nodes to make the scheme work efficiently. In this way the congestion caused by wireless error at the interface nodes can be propagated to the edge nodes. Simulations are excised in a wide spectrum of scenarios. The results prove the W E R U F can not only significantly improve the QoS of congested real-time flows experiencing high traffic load and wireless error rates, but can also improve the QoS of other flows when shared core network resources are scarce. 69 Chapter 5 Conclusions and Future Work Chapter 5 Conclusions and Future Work In this thesis, we have analyzed the relationship between the real-time expired packet drop rate and the transmission queue length, when link layer retransmissions are used to recover packet losses over the wireless link. We propose a novel active queue management method for the trans- mission queue that actively drops potentially expiring packets before they are enqueued, in order to free network resources and control transmission queue lengths effectively. Using the RLC layer, defined in UMTS as specified by the 3GPP as an example, we have presented simulation results to show that the proposed AQM method is effective in reducing queuing delays and elimi- nating expiration packet drops. This improves overall system performance by increasing through- put and reducing end-to-end delay. Analysis and simulations have been made to prove the effectiveness of this scheme. In a similar vein, we extend source quenches to DiffServ ingress edge nodes, namely the Gateway GPRS Support Node (GGSN) in the GPRS/UMTS domain, to prevent undeliverable packets from consuming shared network resources in core networks, thus utilizing network resources more efficiently and improving overall system performance. 
We have also proposed a new mechanism in this thesis, Wireless Early Regulation of Unresponsive Flows (WERUF), which converts undeliverable expired packet drops into overflow packet drops at the packet ingress nodes, freeing network resources for other flows. This mechanism is a variation of active queue management. Simulation results have been presented to show that this scheme can efficiently improve the end-to-end QoS of GPRS/UMTS networks. It achieves its highest efficiency when the real-time traffic load is heavy and core network resources are limited. We believe the mechanism can be applied to any wireless communication system with link layer retransmissions and shaped traffic.

The drawbacks of these mechanisms include the extra computing power and uplink bandwidth needed for the source quenches. We minimize computing complexity by choosing simple active queue management functions at the wireless-wireline interface node, and by using a simple token rate operation at the edge nodes. However, the impact of such actions on computing power is still unknown and needs further investigation. We believe the extra uplink bandwidth consumed by the source quenches has a minor effect, because in GPRS/UMTS the uplink traffic is normally light, and thus the core network bandwidth of the uplink is redundant in most cases. The source quenches can also be treated as flow control signals; therefore, it is possible to implement them in control plane protocols, which can be guaranteed by the system [12].

While this thesis does not distinguish between the different types of packets that may exist in real-time flows, there may be a need for such differentiation, because in many real-time applications some types of packets carry more important information than others. In these cases, Random Early Detection with IN-and-OUT (RIO) [25] can be used as a basis for our AQM method at the wireless-wireline interface nodes to implement differentiation. Furthermore, the best way to apply AQM effectively to a mix of different real-time applications remains to be investigated.

As mentioned in the previous chapter, WERUF drops packets in a drop-tail mode. The side-effects of this dropping discipline on some real-time applications may need further analysis. Again, for those packets carrying more important information than others, effective mechanisms will need to be developed to avoid inappropriate drops.

Another question that may prove interesting to research is what impact resource management schemes have on our proposed mechanism. Different bandwidth reservation methods used by scheduling, CAC and MAC eventually affect the resources assigned to a single flow, which results in varying service rates and times. In this thesis we have ignored these impacts for the sake of clarity, so that we can focus on the presentation of our idea. Further analysis is needed, however, to show their effect.

In this thesis, we have presented the problem of the absence of buffer management schemes at the link layer transmission queue, and a novel solution to it. We hope our work attracts attention to this area so that these important problems can be resolved.

Bibliography

[1] The Internet Engineering Task Force, http://www.ietf.org.

[2] J. Wroclawski, "The Use of RSVP with IETF Integrated Services", RFC 2210, http://www.ietf.org.
Wroclawski, "Specification of the Controlled-Load Network Element Service.", R F C 2211, http://www.ietf.org. [4] S. Shenker, C . Partridge and R. Guerin, "Specification of Guaranteed Quality of Service.", R F C 2212, http://www.ietf.org. [5] S. Blake, D . Black, M . Carlson, E . Davies, Z . Wang and W. Weiss, " A n Architecture for Differentiated Services", R F C 2475, http://www.ietf.org. [6] 3 G TS 22.100 V3.7.0 (2001-10), " U M T S phase 1 Release 99", http://www.3gpp.org. [7] 3 G TS 23.107 v5.3.0 (2002-01), "QoS Concept and Architecture" , http://www.3gpp.org. [8] W. R. Stevens: TCP/ IP Illustrated, Vol . 1: The Protocols. Addison-Wesley, 1994. [9] J. B . Cain and D . N . McGregor, " A Recommended Error Control Architecture for A T M Networks with Wireless Links" , I E E E Journal on Selected Areas in Communications, vol . 15, pp.16-28,Jan. 1997. [10] N . Guo and S. D . Morgera, "Frequency-Hopped A R Q for Wireless Data Services", I E E E Journal on Selected Areas in Communications, vol . 12, pp. 1324 -1337, Oct. 1994. 73 Bibliography [11] J. J. Spilker: Digital communications by satellite. Prentice-Hall, c l977. [12] 3GPP TS 23.060 V3.10.0 (2002-01), "General Packet Radio Service (GPRS); Service description; Stage 2 (Release 1999)", http://www.3gpp.org. [13] S. Floyd and V. Jacobson, "Random Early Detection Gateways for Congestion Avoidance", I E E E / A C M Transactions on Networking, vol. 1, pp. 397-413, Aug . 1993. [14] S. F loyd and K . Fal l , "Router Mechanisms to Support End-to-end Congestion Control", http://www.icir.org/floyd/papers/collapse.ps. [15] S. Floyd and K . Fal l , "Promoting the Use of End-to-end Congestion Control in the Internet", I E E E / A C M Trans, on Networking, vol. 7 no. 4 , pp. 458 -472, Aug . 1999. [16] A . Acharya and A . Rangarajan, " E R U F : Early Regulation of Unresponsive Best-effort Traffic", in Proc. I E E E ICNP '99 , Oct. 1999. [17] K . Nichols, V. Jacobson, and L . Zhang, " A Two-bit Differentiated Services Architecture for the Internet", Internet Draft, ftp://ftp.ee.lbl.gov/papers/dsarch.pdf. [18] 3GPP TS 25.322 V4.2.0 (2001-09), " R L C Protocol Specification (Release 4)", http:// www.3gpp.org. [19] K . Bala, I. Cidon and K . Sohraby, "Congestion Control for High Speed Packet Switched Networks", in Proc. I N F O C O M ' 9 0 , vol. 2, pp. 520-526, San Francisco, Apr. 1990. 74 Bibliography [20] I. Stoica, S. Shenker and H . Zhang, "Core-Stateless Fair Queueing: Achieving Approximately Fair Bandwidth Allocations in High Speed Networks", in Proc. A C M S I G C O M M ' 9 8 , Vancouver, Sep. 1998. [21] J. Heinanen, F. Baker, W. Weiss and J. Wroclawski, " Assured Forwarding P H B Group.", R F C 2597, http://www.ietf.org. [22] S. Floyd, "Discussion of Setting Parameters", http://www.icir.org/floyd/REDparameters.txt. [23] D . D . Clark, and W. Fang, "Explicit Allocation of Best-Effort Packet Delivery Service", I E E E / A C M Transactions on Networking, vol . 6, pp. 362 -373, Aug . 1998. [24] V. Rosolen, O. Bonaventure and G. Leduc, " A R E D Discard Strategy for A T M Networks and its Performance Evaluation with TCP/IP Traffic", A C M Computer Communication Review, vol . 29, no. 3, Jul. 1999. [25] C. C. Chao and W. Chen, "Connection admission control for mobile multiple-class personal communications networks", I E E E Journal on Selected Areas in Communications, vol. 15, pp. 1618-1626, Oct. 1997. [26] M . Naghshineh and M . 
Schwartz, "Distributed call admission control in mobile/wireless networks", I E E E Journal on Selected Areas in Communications, vol . 14, pp. 711-717, M a y 1996. [27] A . K . Parekh and R. G. Gallager, " A generalized processor sharing approach to flow control in integrated services networks: the single-node case", I E E E / A C M Transactions on Networking, vol. 1, pp. 344-357, Jun. 1993. 75 Bibliography [28] D . A . Eckhardt and P. Steenkiste, "Effort-limited fair (ELF) scheduling for wireless networks", in Proc. INFOCOM'OO, vol. 3, pp. 1097-1106, Israel, Mar. 2000. [29] S. L u , V. Bharghavan and R. Srikant, "Fair scheduling in wireless packet networks", I E E E / A C M Transactions on Networking, vol. 7, pp. 473-489, Aug . 1999. [30] S. L . Tsao, "Extending earliest-due-date scheduling algorithms for wireless networks with location-dependent errors", in Vehicular Technology Conference, vol . 1, pp. 223-228, Boston, Sep. 2000. [31] M . Zorzi , "Packet dropping statistics of a data-link protocol for wireless local communications", in I E E E 6th International Conference on Universal Personal Communications Record, vol. 2, pp. 536 - 540, San Diego, Oct. 1997. [32] Y. Uooyeol, P. Seongsoo and P.S. M i n , "Performance analysis of data transmission in W- C D M A system", in I E E E 54th Vehicular Technology Conference, vol . 3, pp. 1584 - 1588, Atlantic City, Oct. 2001. [33] R. Fantacci, "Queuing Analysis of the Selective Repeat Automatic Repeat Request Protocol Wireless Packet Networks", I E E E Transactions on Vehicular Technology, vol . 45, pp. 258 - 264. May 1996. [34] E . Chan and X . Hong, "Analytical Model for an Assured Forwarding Differentiated Service over Wireless Links" , I E E Proc. on Communications, vol. 148, pp. 19 -23 Feb. 2001. [35] E . N . Gilbert, "Capacity of a Burst-noise Channel", Be l l System Technical Journal, vol . 39, pp. 1253-1265, Sep. 1960. 76 Bibliography [36] E . O. Elliott, "Estimates of Error Rates for Codes on Burst-noise Channels", Be l l System Journal, vol . 42, pp: 1977-1997, Sep. 1963. [37] S. Bhagwat, D . Tipper, K . Balakrishnan and A . Mahapatra. "Comparative Evaluation of Output Buffer Management Schemes in A T M Networks", in Proc. I E E E ICC'94, New Orleans, May, 1994. [38] J. Postel, "Internet Control Message Protocol", R F C 777, http://www.ietf.org. 77 Appendix A. List of Abbreviations and Acronyms Appendix A. List of Abbreviations and Acronyms 3GPP: 3rd Generation Partnership Project ACK: ACKnowledge AF: Assured Forwarding AM: Acknowledge Mode AMD: Acknowledged Mode Data AQM: Active Queue Management ARQ: Automatic Repeat Request BA: Behavior Aggregates BER. Bit Error Rate BSS: Base Station System CAC: Call Admission Control CN: Core Network CP: Complete Partitioning CS: Complete Sharing DiffServ: Differentiated Services 78 Appendix A. List of Abbreviations and Acronyms DSCP: DiffServ CodePoint EDD: Earlest-Due-Date E L F : Effort-Limited Fair EPC: Estimated PDU Counter E R U F : Early Regulation of Unresponsive Flows F E C : Forward Error Correction GGSN: Gateway GPRS Support Node GPRS: General Packet Radio Service G S M : Global System for Mobile ICMP: Internet Control Message Protocol ISP: Internet Service Provider IP : Internet Protocol IETF: Internet Engineering Task Force IWFQ: Idealized Wireless Fair-Queuing IntServ: Integrated Services LB: Leaky Bucket 79 Appendix A. 
MAC: Media Access Control
MF: Multi-Field
MS: Mobile Station
MT: Mobile Terminal
NAK: Negative Acknowledgement
PDN: Packet Data Network
PDP: Packet Data Protocol
PDU: Protocol Data Unit
PHB: Per-Hop Behavior
PLMN: Public Land Mobile Network
QoS: Quality of Service
RED: Random Early Detection
RIO: Random Early Detection with IN-and-OUT
RLC: Radio Link Control
RNC: Radio Network Controller
SGSN: Serving GPRS Support Node
SLA: Service Level Agreement
SUFI: Super-Fields
TCP: Transmission Control Protocol
TE: Terminal Equipment
UMTS: Universal Mobile Telecommunications System
UTRAN: UMTS Terrestrial Radio Access Network
WERUF: Wireless Early Regulation of Unresponsive Flows
WFQ: Weighted Fair Queuing

Appendix B. Simulation models in OPNET 8.1

OPNET provides a comprehensive development environment supporting the modeling of communication networks and distributed systems. It is a discrete-event driven simulator. There are three modeling domains in OPNET (Table B.1): the Network, Node and Process modeling environments. These three modeling domains span all the hierarchical levels of a model.

Table B.1 OPNET modeling domains.
    Domain    Editor    Modeling Focus
    Network   Project   Network topology described in terms of subnetworks, nodes, links, and geographical context.
    Node      Node      Node internal architecture described in terms of functional elements and data flow between them.
    Process   Process   Behavior of processes (protocols, algorithms, applications), specified using finite state machines and an extended high-level language.

The topology of a system consists of both an inventory of its devices and the communications links between them. The network domain refers to these devices as nodes, and has several types of links that can be defined to connect them together for the purpose of sending information between them. Groups of nodes and links can be used to form subnetworks, and subnetworks can in turn contain lower-level subnetworks to form unlimited hierarchies. Some system topologies can be entirely represented in the node domain by interconnecting the node-level building blocks, called modules. In most cases, however, the network domain is used to represent high-level system topology, because of its direct support for unlimited-depth hierarchies, sophisticated communication links, and measured physical layout.