UBC Theses and Dissertations

Quality of service support in the backbone network of the Universal Mobile Telecommunications System (2001)
Quality of Service Support in the Backbone Network of the Universal Mobile Telecommunications System Using DiffServ Model

by

Farshid Agharebparast
B.Sc. in Electrical Engineering, Isfahan University of Technology, Iran, 1995

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF APPLIED SCIENCE in THE FACULTY OF GRADUATE STUDIES (Department of Electrical and Computer Engineering)

We accept this thesis as conforming to the required standard

The University of British Columbia
October 2001
© Farshid Agharebparast, 2001

In presenting this thesis in partial fulfilment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the head of my department or by his or her representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

Department of Electrical and Computer Engineering
The University of British Columbia
Vancouver, Canada

Abstract

The performance of the backbone bearer service of the Universal Mobile Telecommunications System (UMTS), or of its 2.5G precedent, the General Packet Radio Service (GPRS), has a considerable impact on the end-to-end Quality of Service (QoS) observed by a traffic flow.
The purpose of this thesis is to provide a basic methodology for provisioning QoS in this backbone network by using Differentiated Services (DiffServ), an IP-based QoS technology. It focuses on two important issues. First, a basic plan for QoS mapping from one system to the other is presented. Second, the structure of a DiffServ-based UMTS backbone router is studied, which can provide the appropriate functionality of forwarding packets with PHBs equivalent to the original UMTS QoS. In particular, three major components of a DiffServ router, namely the scheduler, the dropper, and the meter, are addressed, and novel algorithms are proposed to make them suitable for a UMTS backbone router. This thesis proposes an efficient, fair scheduling and link-sharing system called DDB-FFQ, which has the capability of assigning decoupled delay bounds and bandwidth allocations. It also studies the implementation of Random Early Detection (RED) on a shared-memory buffer and proposes new schemes with improved packet loss ratios. The performance of a multiple-node backbone network is then evaluated using the OPNET simulation package. The results show that, in terms of delay and allocated bandwidth, the routers are able to provide the PHBs considered for each traffic aggregate. They also illustrate the link-sharing capability of the router in times of congestion.
Contents

Abstract
Contents
List of Tables
List of Figures
Acknowledgements
Dedication

1 Introduction
  1.1 Motivation
  1.2 Research Goals
  1.3 Thesis Outline

2 Background Information
  2.1 An Overview of UMTS/GPRS
    2.1.1 GPRS System Structure
    2.1.2 UMTS/GPRS QoS Architecture
  2.2 DiffServ Architecture
    2.2.1 DiffServ Router Model

3 Random Early Detection (RED) Algorithms for Shared Buffers
  3.1 Introduction
  3.2 RED on a Single Queue
  3.3 Multi-Class RED
    3.3.1 Introduction
    3.3.2 Memory Sharing Policies
  3.4 RED on a Shared-Memory Multi-Class Buffer
    3.4.1 RED on Complete Partitioning (R-CP)
    3.4.2 RED on Complete Sharing (R-CS)
    3.4.3 RED on Sharing with Minimum Allocation (R-SMA)
  3.5 Analysis of RED Mechanism
    3.5.1 Tail Dropping Queue
    3.5.2 RED on a Single Queue
    3.5.3 A Multi-class RED on Complete Sharing (R-CS)
    3.5.4 A Multi-class RED on Complete Partitioning (R-CP)
  3.6 Comparison Between Tail Drop, R-CS, and R-CP
  3.7 RED Deployment for DiffServ-Based UMTS Backbone Router

4 DDB-FFQ Scheduling Scheme
  4.1 Introduction
  4.2 Frame-based Fair Queuing (FFQ)
  4.3 Decoupled Delay-Bandwidth FFQ (DDB-FFQ)
  4.4 The Rate Estimator Module/Meter

5 Mapping and Classification
  5.1 Introduction
  5.2 UMTS/GPRS QoS Classes and Attributes
  5.3 DiffServ PHB Groups
  5.4 Mapping
  5.5 Classification

6 Simulation Model and Results
  6.1 Dropper Element
    6.1.1 Simulation Model
    6.1.2 Performance of RED on Shared-Memory Buffer Without End-to-End Congestion Control
    6.1.3 Performance of RED on a Shared-Memory Buffer with an End-to-End Congestion Manager (TCP Traffic)
  6.2 Scheduler Element
    6.2.1 Simulation Model
    6.2.2 Simulation Results
  6.3 Multiple Node Simulation Model and Results

7 Conclusions
  7.1 Summary
  7.2 Future Work

Glossary

Bibliography

List of Tables

5.1 Expected range for UMTS bearer service QoS attributes
5.2 Recommended DSCP values for AF PHB
6.1 The values of RED parameters in simulation case A, non-TCP traffic
6.2 The values of RED parameters in simulation case B, non-TCP traffic
6.3 Traffic pattern of TCP flows
6.4 Default values of TCP parameters used in simulations
6.5 The values of RED parameters used in simulation, TCP traffic
6.6 Simulation values of FFQ and assigned rates, single node
6.7 Simulation values of FFQ and assigned rates, multiple node
6.8 Basic mapping used from UMTS to DiffServ

List of Figures

1.1 The GPRS system infrastructure
2.1 The structure of the GPRS backbone network
2.2 UMTS QoS architecture
2.3 DiffServ domain structure
2.4 A DiffServ router's major functional blocks
2.5 The structure of a DiffServ router
2.6 An example of traffic conditioning blocks in a DiffServ router
3.1 RED dropping mechanism
3.2 Comparison of RED thresholds in RED-CP and RED-CS schemes
3.3 M/M/1/K queuing system
3.4 Poisson traffic property that can be split or combined
3.5 Drop probability of M/M/1/K tail drop, R-CP and R-CS; analytical results
4.1 Behavior of the base-potential function and potential functions in a fluid FFQ server
4.2 Variables used for interarrival time estimation
5.1 The structure of the GPRS backbone network
6.1 The schematic of the simulation model used to evaluate the algorithmic droppers
6.2 Packet drop ratio, simulation case A
6.3 Average queue size, simulation case A
6.4 Packet drop ratio, simulation case B
6.5 Packet drop ratio class 0, simulation case B
6.6 Packet drop ratio class 1, simulation case B
6.7 Queuing delay, simulation case B
6.8 Packet drop pattern of R-CP, simulation case B
6.9 Packet drop pattern of R-CS, simulation case B
6.10 Packet drop pattern of R-SMA, simulation case B
6.11 Traffic pattern, TCP traffic
6.12 Packet drop pattern for R-CP, TCP traffic
6.13 Packet drop pattern for R-CS, TCP traffic
6.14 Packet drop pattern for R-SMA, TCP traffic
6.15 Queuing delay pattern for R-CP, TCP traffic
6.16 Queuing delay pattern for R-CS, TCP traffic
6.17 Queuing delay pattern for R-SMA, TCP traffic
6.18 Average of queuing delay patterns, TCP traffic
6.19 FTP traffic received
6.20 The schematic of the DDB-FFQ simulation model
6.21 90% delay, DDB-FFQ scheduling system
6.22 Bandwidth allocated to each class, DDB-FFQ scheduling system
6.23 OPNET network model
6.24 The schematic of the simulation network, multiple nodes
6.25 The configuration of a router in a multiple node simulation model
6.26 Edge-to-edge delay, multiple node model
6.27 Allocated bandwidth, multiple node model
6.28 Delay of traffic class 1, multiple node model
6.29 Delay of traffic class 3, multiple node model
6.30 Delay of traffic class 4, multiple node model
6.31 Drop ratio of aggregate class 2, multiple node model

Acknowledgments

"There is no such thing as a 'self-made' man. We are made up of thousands of others. Everyone who has ever done a kind deed for us, or spoken one word of encouragement to us, has entered into the make-up of our character and of our thoughts, as well as our success." - George Matthew Adams

I am appreciative of my advisor, Professor Victor C. M. Leung, for his support and valuable guidance during this project. He has been patiently available for any question or discussion anytime I needed him. It was a rewarding experience working with him.

During my course of study here at UBC, I enjoyed the friendship of a number of talented and amicable individuals. Especially, I would like to thank my colleagues in the Communications group: Cyril Iskander, Vaibhav Dinesh, Hansen Wang and Xinrong Wang. It was fun participating in technical and nontechnical discussions with them. I am also grateful to the faculty and staff of the Electrical and Computer Engineering Department for their valuable lectures and assistance in making my study more enjoyable here. I also would like to express my appreciation to Ms. Kirsty Barclay for proofreading my thesis.

Finally, I would like to thank my parents for their deep love and unswerving devotion. This would never have happened without their dedication and encouragement.

This work was supported by Motorola Canada Ltd., Richmond, B.C., Canada and the Natural Sciences and Engineering Research Council under Grant CRDPJ 223095.

To my beloved parents.

Chapter 1

Introduction

1.1 Motivation

Third generation (3G) mobile telecommunications systems are promising technologies that will bring a variety of new services to mobile users.
They are aimed at providing full access to multimedia, real-time and data applications, such as mobile Internet access, email service, video streaming and voice over IP, with a reasonable grade of service. To achieve these goals, these systems should be able to offer differentiated services to different traffic flows, depending on the demanded Quality of Service (QoS). The ideal of this QoS structure is to be "future proof", able to accommodate any potential new application. To provide a smooth evolution from a second generation (2G) wireless system to a 3G system, 2.5G architectures have been proposed. A 2.5G system is usually an enhancement of a 2G system, making a higher rate of data services available.

The Universal Mobile Telecommunications System (UMTS) is a promising 3G wireless system based on the Global System for Mobile communications (GSM). GSM is a 2G wireless system used predominantly throughout the world. It was originally standardized by a European organization (ETSI), and is now the leading digital cellular technology, especially in Europe and Asia. General Packet Radio Service (GPRS) [1] is standardized by ETSI as a GSM enhancement providing true packet data services. It has the advantage of being built on the existing GSM network: by only adding two new nodes to the GSM system and upgrading the software of other nodes, it can provide a true packet data system. GPRS also has more efficient radio link utilization and supports QoS. It is anticipated that GPRS's core network will evolve into the core network of UMTS.
Due to the similarity between the QoS architecture and the backbone network structure of UMTS and GPRS release 99, the two terms are used interchangeably in most of the topics covered in this thesis; therefore, any discussion of one can be applied to the other, unless otherwise stipulated.

Provisioning reasonable service to real-time applications is viable only when the system can guarantee a well-defined end-to-end service. The end-to-end QoS, that is, from one Terminal Equipment (TE) to another TE as observed by a user, is actually the overall performance of the different bearer services constructing an end-to-end connection path. The UMTS QoS specification [2] divides the end-to-end QoS into three main levels: the MT/TE bearer service, the UMTS bearer service, and the external bearer service. The MT/TE bearer service is the user's local interface, for example, between the user's laptop and his cellphone. QoS support of this bearer service is beyond the scope of the UMTS QoS specification. The external bearer service is the section of the datapath located outside the UMTS infrastructure, usually within the Internet. The UMTS bearer service itself consists of a radio interface and the core network. Figure 1.1 shows the GPRS/UMTS system infrastructure [3]. It also illustrates the path a packet might take when it is transmitted from a wireless TE to an Internet TE. Therefore, the overall QoS observed by this packet depends on the behavior of each network section it passes through.

The focus of this project is the study of the QoS support in the UMTS/GPRS core network. This backbone network is an IP network consisting of SGSN and GGSN nodes. Therefore, an IP-based QoS mechanism needs to be implemented to add service differentiation capability to this backbone network. A promising methodology for this purpose is Differentiated Services (DiffServ). DiffServ has a number of favorable features, such as scalability and a relatively simple implementation requirement. These characteristics make the DiffServ model a likely candidate for adding QoS support to the UMTS backbone.

The IETF has standardized the basic configuration of a DiffServ router. A DiffServ router consists of a number of components connected together to perform a wide range of traffic management, shaping, and queuing functions. These elements can be configured in a number of different ways, and there are numerous algorithms that can be implemented on each of them, depending on the functionality requested.

[Figure 1.1: The GPRS system architecture]

1.2 Research Goals

Providing QoS support to the backbone network by implementing the DiffServ model necessitates resolving a number of issues. First, a proper set of DiffServ Per Hop Behaviors (PHBs) needs to be selected that can bear QoS characteristics equivalent to all of the UMTS traffic aggregates. Second, an interworking system, especially a method for mapping the QoS parameters of one bearer service to another, should be devised. Finally, implementation of DiffServ in the UMTS backbone network requires using DiffServ-aware nodes in the backbone. Therefore, selecting a set of proper DiffServ functional components, with an appropriate scheme embedded in each element, is vital.

The above objective is a massive project and will require extensive time and effort to achieve. This research presents an overview of the issues related to QoS support in the UMTS backbone network using the DiffServ model. While it provides some information about QoS in the UMTS and DiffServ systems associated with the above objective, it concentrates on two main studies: the configuration of a DiffServ-based UMTS router, and the QoS mapping function. Specifically, four functional and traffic conditioning elements of a DiffServ router are examined thoroughly: the scheduler, algorithmic dropper, classifier and meter, as follows.

• The scheduling element: There are a large number of scheduling algorithms presented in the literature. Among those, Frame-based Fair Queuing (FFQ) is an efficient, fair queuing system which has O(1) timestamp computation complexity and seems suitable for implementation in the scheduler element. However, it lacks the ability to assign decoupled delay and bandwidth guarantees, which is an important factor in defining some PHBs needed in a UMTS environment. This thesis presents a novel service discipline, called Decoupled Delay-Bandwidth FFQ (DDB-FFQ). This scheme is based on the FFQ scheme, and therefore inherits its characteristics. In addition, it allows assigning the delay bound and bandwidth allocation of a PHB independently.

• The algorithmic dropping element: This thesis also studies the implementation of Random Early Detection (RED) on shared-memory buffers. It proposes and compares three RED-on-shared-buffer schemes. Implementing a proper RED-on-shared-buffer algorithm on the PHBs corresponding to UMTS interactive or background traffic classes will add congestion avoidance capability, while showing better loss rate and memory utilization performance. The proposed method is especially applicable to the buffer used for PHBs associated with the background traffic class, such as the best effort, lower than best effort, and bulk handling PHBs.
This is the first time RED on a shared buffer has been studied.

• The mapping function: A study of the current UMTS QoS structure and the DiffServ standard PHBs is also undertaken, and a basic scheme for mapping UMTS traffic classes to DiffServ PHBs is presented in this thesis.

• The meter: The DDB-FFQ service discipline proposed in this thesis utilizes a rate estimator module for its normal functionality. This module is actually a metering element, and therefore can also serve as a DiffServ meter. Thus, the DiffServ meter component is also covered here.

1.3 Thesis Outline

This thesis consists of the following chapters. Chapter 2 presents some background information related to the UMTS and DiffServ systems. It first covers the UMTS infrastructure and its QoS structure, then studies the DiffServ conceptual model, including the components and structure of a DiffServ router.

Chapters 3 to 5 consider four DiffServ functional blocks in more detail. Chapter 3 addresses the algorithmic dropping element by studying the performance of RED algorithms on shared-memory buffers. After a brief study of RED and memory allocation policies, three RED-on-shared-buffer algorithms are proposed and studied. At the end of Chapter 3, a brief analysis of these schemes is presented.

Chapter 4 studies the scheduling block. First, it reviews the requirements of a favorable scheduler. Then, an efficient scheduling algorithm called DDB-FFQ is proposed. Finally, the rate-estimator module used in DDB-FFQ is studied. This module has a twofold responsibility, performing as a metering element as well.

Chapter 5 studies the DiffServ classifier element and the QoS mapping function.
It gives an overview of the UMTS QoS structure and the DiffServ standard PHBs, and addresses a basic mapping system between these two systems.

Chapter 6 focuses on presenting the OPNET simulation models and discussing their results. First, a performance comparison of the proposed RED-on-shared-memory-buffer algorithms is presented. Then, the simulation model and results corresponding to the DDB-FFQ scheduling scheme are provided. Finally, simulation results indicating the performance of a simple DiffServ-based UMTS backbone model are discussed.

Chapter 7 presents the thesis conclusions and possible future work.

Chapter 2

Background Information

Since the premise of this thesis is QoS support in the backbone network of a GPRS or UMTS system, a brief understanding of the structure and standard specifications related to QoS in UMTS and DiffServ is essential. This chapter presents an overview of, and background information on, the technologies used as the basis of this project, in two main sections: the UMTS structure and the DiffServ conceptual model. The first section provides a short description of the GPRS system structure and the UMTS/GPRS QoS architecture. The second section presents an overview of the DiffServ system and router configuration.

2.1 An Overview of UMTS/GPRS

GPRS [1][3][4][5] is an enhancement of GSM that reuses its infrastructure to provide a true packet-switched service. This enhancement provides more efficient radio usage, faster set-up time, and therefore higher data bandwidth. These benefits can be achieved only by introducing two new nodes to the system and by upgrading the software of some other GSM components. The most prominent advantage of GPRS is that it provides a smooth transition from a 2G cellular system to a 3G system realization. UMTS is a 3G mobile system designed to be implemented on the GPRS structure. It will introduce a completely different
U M T S is a 3 G mobile system designed to be implemented on the G P R S structure. It w i l l introduce a completely different 7 CHAPTER 2. BACKGROUND INFORMATION radio interface and a m u c h higher data rate. However , their Q o S architecture and network skeleton are identical so, fortunately, all discussion presented here applies to both [2] [6]. In this section, first a brief overview of the system structure and background is given and then the Q o S structure as defined in the standard is described. 2.1.1 GPRS System Structure G P R S uses the G S M infrastructure but introduces two new logical nodes, called G P R S S u p - port Nodes ( G S N ) , for packet transfer within the Public L a n d M o b i l e N e t w o r k ( P L M N ) [7]. Figure 1.1 illustrates a G P R S system w h i c h includes G S M components: M o b i l e Station ( M S ) , Base Transceiver Station ( B T S ) , Base Station Controller ( B S C ) ; and G P R S added nodes: Gateway G S N ( G G S N ) , and Serving G S N ( S G S N ) . F igure 2.1 depicts only the struc- ture of the backbone network [8]. B T S handles the radio interface connecting to the mobile station ( M S ) . B S C controls a number of B T S s and manages radio resources and handovers. S G S N in G P R S is the equivalent of the M o b i l e Switching Center ( M S C ) in G S M and is responsible for packet transmission to and f r o m the M S within its service area. G G S N is the system gateway to external packet data networks and is connected to S G S N through an IP-based backbone network. Figure 2.1 shows that the backbone network itself consists of a number of i n t e r - P L M N and i n t r a - P L M N networks. A n i n t r a - P L M N network connects the S G S N s and G G S N s of a local networks and an i n t e r - P L M N network connects the intra- P L M N networks together through border gateways. 
2.1.2 UMTS/GPRS QoS Architecture

The QoS structure of UMTS has been specified in an ETSI specification [2] that presents a framework for QoS within UMTS. Since the QoS of GPRS and its network structure have been defined identically to UMTS, the following discussion is valid for both of these 2.5G and 3G systems.

[Figure 2.1: A schematic of the GPRS backbone network structure]

What really matters in QoS support is what end users observe. This is the QoS as seen from one Terminal Equipment (TE) to another. It is requested by users from a number of finite QoS levels, and it is the user who determines if she is satisfied with the service. The data path from one TE to another can be divided into a number of bearer services. These bearer services and levels are usually different in nature and may be governed by distinct providers. Therefore, the study of QoS is often easier if examined within each separately.

Figure 2.2 shows the UMTS bearer services layered structure. Each bearer service includes all the features needed for provisioning a contracted QoS.

[Figure 2.2: UMTS QoS architecture]

The traffic going from one TE to another TE passes a number of these bearer services, which can be categorized into three distinct types: the TE/MT local bearer service, the UMTS bearer service, and the external bearer service.
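Because the end-to-end QoS is the composition of the bearer services along the path, additive attributes such as delay combine by summation, while loss ratios combine multiplicatively. The sketch below only illustrates this bookkeeping; the per-bearer budget numbers are invented for illustration and do not come from the UMTS specification.

```python
# Illustrative composition of per-bearer QoS budgets (numbers invented).
BEARER_DELAY_BUDGET_MS = {
    "TE/MT local bearer": 5,     # user's local interface, outside UMTS scope
    "radio access bearer": 80,   # MT to the core network Iu edge node
    "CN (backbone) bearer": 15,  # SGSN/GGSN IP backbone, the focus of this thesis
    "external bearer": 50,       # e.g. the public Internet
}

def end_to_end_delay_ms(budgets):
    """Delay is additive: the end-to-end budget is the sum over segments."""
    return sum(budgets.values())

def end_to_end_loss(loss_ratios):
    """Loss composes multiplicatively: a packet must survive every segment."""
    survive = 1.0
    for p in loss_ratios:
        survive *= 1.0 - p
    return 1.0 - survive
```

With the budgets above, the end-to-end delay budget is 150 ms, of which the backbone bearer service studied here contributes 15 ms.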
A TE is connected to UMTS through a Mobile Termination (MT), and it is set up by the user; hence, the TE/MT local bearer service is outside the scope of the UMTS system. The external bearer service is any network or system in the data path that is not included in the UMTS system. Therefore, it is also not elaborated on by the UMTS specification. The UMTS bearer service, which provides the UMTS QoS, is itself layered into two parts: the radio access and the core network. The radio access bearer service is responsible for provisioning an optimum and confidential transport of user data from the MT to the core network Iu edge node, with the QoS negotiated with the user; it is realized by a radio bearer service and an Iu bearer service. The role of the radio bearer service is the radio interface transport, and it uses UTRA FDD/TDD [2]. The core network is the main subject of this research and is based on a generic IP backbone network.

UMTS QoS Classes

QoS is conveyed in the form of a single attribute with multiple parameters. It is negotiated with the user as a part of the Packet Data Protocol (PDP) context during the connection establishment phase. Due to air interface restrictions, the QoS classes are not meant to be complex, while covering the range of all potential applications. The most noteworthy of the UMTS QoS parameters is the traffic class. This is the attribute that defines the type of the stream, and the other parameters describe the traffic more specifically. Chapter 5 elaborates on the UMTS QoS in more detail.

2.2 DiffServ Architecture

Differentiated Services (DiffServ, DS) is a scalable service differentiation model proposed by the IETF for IP networks [9]. It achieves scalability by performing traffic flow aggregation.
Each aggregate is identified by a code, and each node along the packet's data path treats the packet according to a rule defining the Per Hop Behavior (PHB) corresponding to that code. Therefore, no per-application-flow or per-user forwarding state is maintained in this structure. In addition, sophisticated traffic management and conditioning tasks are performed only at the edge nodes of a DiffServ network. A DS domain is a DiffServ network consisting of a number of adjacent nodes with the same set of service policies and PHB definitions. A node in a DS domain is classified as either an edge or a core node.

The DiffServ architecture works according to a simple model. When a packet enters a DS domain, it is classified into a finite set of behavior aggregates. These behavior aggregates are identified by a DS codepoint [10], which is conveyed by marking the header of the IP packet [11]. Any node in that DS domain along the path of that packet treats the packet according to a PHB rule corresponding to its codepoint.

[Figure 2.3: DiffServ domain structure]

DiffServ is implemented by indicating a small set of per hop behaviors, a classification function, and functional and queuing blocks. The IETF DiffServ working group has proposed a number of standard PHBs. They are expedited forwarding (EF), assured forwarding (AF), best effort (BE), lower than best effort (LBE), alternative best effort (ABE), dynamic real-time/non-real-time (RT/NRT), and assured rate (AR). Chapter 5 presents a thorough discussion of these schemes.

There are a number of other packet differentiation schemes used in the literature. The most common of these schemes are relative priority marking, service marking, label switching, and integrated services/Resource Reservation Protocol (RSVP). Hereafter, DiffServ will
be concisely contrasted with each of the above methods [9].

In a relative priority marking model, packets are assigned a precedence for one of their characteristics, such as delay, bandwidth, or packet loss, and are prioritized accordingly. An example of this model is IEEE 802.5 Token Ring priority. DiffServ can be considered a generalization and refinement of this scheme, since it utilizes general PHB concepts and proposes a whole architecture.

An example of service marking is the IPv4 Type of Service (ToS) [12], used for routing purposes. In this scheme, packets are marked for minimizing delay or maximizing bandwidth to direct the routers along the path in selecting the route. This scheme is totally different from DiffServ, since DiffServ is not concerned with routing.

Examples of label switching are ATM and MPLS. In this model, a path is established in each node along the packet path from the source to the destination. Therefore, each node needs to maintain a QoS state associated with this path. Although this model provides finer QoS support, it has extra management and configuration requirements, and also a scalability problem; that is, the number of states in a node grows with the number of established paths.

In an RSVP model, the source, the destination and the nodes along the path all exchange signaling messages to establish a packet classification and forwarding state in each of these nodes. This model also has a scalability problem if no state aggregation is performed.

2.2.1 DiffServ Router Model

This section describes the elements used in a DiffServ-aware node. Figure 2.4 illustrates the component interaction in a high-level view of a DS node [13]. The routing core is an abstraction of a router's normal routing functionality and is not discussed in the DiffServ model. The configuration and management interface monitors service provisioning
The configuration and management interface monitors service provisioning and gathers statistics regarding the traffic of each service level. These are used as the network management basis from which DiffServ policies are assigned or altered. The network administrator interacts with this module through a management protocol. The optional QoS agent adds per-flow and per-flow-aggregate signaling capability to the DiffServ model; for example, it allows the node to snoop RSVP messages.

Figure 2.4: A DiffServ router's major functional blocks

Figure 2.5 shows a node with two egress/ingress interfaces. An actual router may have an arbitrary number of these interfaces. Also, it is not mandatory to implement all of these blocks and their components on both ingress and egress points; the configuration of components depends on the service requirements of a router.

Figure 2.6 shows the datapath that a data packet might take in a router. On arrival, a packet is first classified according to a set of rules. Then it is checked by a metering block for being within its allocated rate. Afterward, the packet is passed along a set of traffic

Figure 2.5: The structure of a DiffServ router

conditioning blocks, such as a marker, dropper, and counter, and, if accepted, is enqueued in the queuing block and then transmitted according to a scheduler policy. There are four kinds of components in an ingress/egress interface: a traffic classifier, meters, actions, and queuing elements [14]. Action elements are markers, droppers, counters, and multiplexers.
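The ingress datapath just described (classify, meter, apply actions, enqueue) can be sketched as follows. This is only an illustrative sketch: the class and function names are hypothetical and not part of the DiffServ specification, and the meter is a simple single-rate token bucket.

```python
# Illustrative sketch of a DiffServ ingress datapath:
# classify -> meter -> action (drop non-conforming) -> enqueue.
from collections import deque

class TokenBucketMeter:
    """Single-rate token bucket: a packet conforms while tokens are available."""
    def __init__(self, rate, burst):
        self.rate, self.burst = rate, burst
        self.tokens = burst

    def conforms(self, size, elapsed):
        # Refill tokens for the elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + self.rate * elapsed)
        if size <= self.tokens:
            self.tokens -= size
            return True
        return False

def ingress(packet, classifier, meters, queues, elapsed=0.1):
    phb = classifier(packet)                  # classification by filter rules
    if not meters[phb].conforms(packet["size"], elapsed):
        return "dropped"                      # absolute dropper for excess traffic
    queues[phb].append(packet)                # queuing block; scheduler serves later
    return phb
```

A classifier here is just a callable mapping a packet to a PHB name; the scheduler that eventually serves `queues` is omitted.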
Mapping Function: Mapping is the function of translating the QoS parameters of one system into QoS parameters understandable by another QoS system. It is a necessary function for a UMTS packet entering the DiffServ-aware network, or when a packet coming from an external network enters the UMTS network through a GGSN. Chapter 5 gives a thorough discussion of this topic. This function is usually integrated with the classifier into one functional block.

Classifier: A classifier is a functional element that selects packets based on policy. It can classify packets based on the content of the packet header, other packet data, and/or some implicit information related to the packet. In the DiffServ model, packets are classified according to the content of a field in the IP header, originally called Type of Service (ToS) in the IPv4 protocol. ToS is an 8-bit field, and DiffServ uses 6 bits of it, called the DS field, to convey the DiffServ CodePoint (DSCP). The remaining 2 bits, called CU (currently unused), are not used in DiffServ and are proposed for use in a congestion notification function by the IETF Explicit Congestion Notification (ECN) group. The classifier has a single input and splits the incoming stream onto a number of outgoing ports. The most common way of implementing classifiers is to use a number of filters. A filter is a set of match conditions based on a classification key. The content of the DSCP is passed through the filters and, according to the matching results, the packet is grouped into a PHB aggregate.

Meter: A meter is a functional block responsible for measuring the rate characteristics of a packet stream. The gathered statistics are used by other blocks for their decision making. For example, a marker uses this data to re-mark the packet DSCP.
The IETF has a set of RFCs proposing two main meter/marker implementations: the time sliding window meter/marker and the token bucket meter/markers (single-rate and two-rate three-color).

Marker: This module is responsible for setting the DS field of a DiffServ packet according to various rules. It may also use the statistics gathered by a meter to decide on the level of a packet's conformance to its allocated rate characteristics. For a non-conformant stream, it takes action by re-marking the packet into a class with lower priority, or into a class with a higher drop precedence. It is also responsible for assigning a valid DSCP to a packet with an unknown DSCP.

Counter: A counter is responsible for counting the number of packets passing through it and updating the packet counters accordingly.

Dropper: There are two major kinds of dropping elements in a DiffServ router: the absolute dropper and the algorithmic dropper. While absolute droppers are quite straightforward, research is still being undertaken on the viability, usability, and schemes of algorithmic droppers. Random Early Detection (RED) is a well-known algorithmic dropper, suggested for avoiding congestion in the network. There are a number of varieties of RED. In this project, the implementation of RED on shared-memory buffers is addressed and proposed for deployment on a DiffServ router for TCP-compatible aggregates.

Queuing Block: This is the block that stores packets while they wait to receive service from the scheduler and depart to the next hop or DiffServ module. There are two different types of buffers used for enqueuing packets: shared-memory buffers and dedicated buffers.
In the dedicated buffering method, each queue has its own separate memory space, while a shared buffering scheme uses one large memory that is shared among the different traffic classes according to a sharing rule. Chapter 3 presents a thorough study of shared-memory buffers.

Scheduler: The scheduling block has the most impact on the level of service a packet receives. It decides which queue, among a set of queues, to service next. A large number of scheduling schemes are addressed in the literature, each with its own advantages and disadvantages. In a DiffServ-based UMTS backbone network, the most desirable scenario is a fair, efficient, and simple scheme that supports link sharing and delay bounds. It should also be able to assign delay bound guarantees independently of bandwidth allocation, and vice versa.

Figure 2.6 illustrates an example of a DiffServ router using all of the above components. This router supports four PHB aggregates. The first group is an EF PHB. This traffic is metered first and non-conforming traffic is dropped, but the conforming traffic has a guaranteed delay bound. The second group is an AF11 group. It has a dedicated buffer that accepts packets based on a tail-dropping scheme; this traffic could be, for example, video streaming traffic. The third group is an AF21 group whose traffic flow is metered first; non-conforming traffic is re-marked as AF22 (with a higher drop precedence). The last group is best effort traffic. Groups three and four use a shared-memory buffer for their queue, and a RED-on-shared-buffering scheme is applied as the dropping scheme.
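The classification step in this example keys on the DSCP carried in the IP header, as described above. A minimal sketch of extracting the 6-bit DSCP from the IPv4 ToS byte and mapping it to a PHB name follows; the codepoint values are the standard ones (EF = 101110b, AFxy per the assured forwarding RFC), but the lookup table is deliberately partial and illustrative.

```python
# Extract the 6-bit DSCP from the IPv4 ToS byte and map it to a PHB,
# as the classifier in Figure 2.6 would. The table is partial/illustrative.
PHB_TABLE = {
    0b101110: "EF",     # expedited forwarding
    0b001010: "AF11",   # assured forwarding, class 1, drop precedence 1
    0b010010: "AF21",
    0b010100: "AF22",
    0b000000: "BE",     # default PHB (best effort)
}

def classify(tos_byte):
    dscp = (tos_byte >> 2) & 0x3F     # upper 6 bits: DSCP; lower 2 bits: CU/ECN
    return PHB_TABLE.get(dscp, "BE")  # unknown codepoints default to best effort
```

For example, `classify(0xB8)` yields `"EF"`, since 0xB8 = 10111000b carries DSCP 101110b in its upper six bits, and the CU/ECN bits are ignored.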
Figure 2.6: An example of traffic conditioning blocks in a DiffServ router

Chapter 3

Random Early Detection (RED) Algorithms for Shared Buffers

3.1 Introduction

Two kinds of dropping elements are used in the DiffServ model: the absolute dropper and the algorithmic dropper. An absolute dropper simply destroys the packets it receives. It is used to drop the non-conforming traffic of applications that do not have an end-to-end congestion manager and are overloading the network. An absolute dropper is also used to model tail-dropping queues, which drop a packet only when full. Algorithmic dropping, in contrast, is an efficient way of controlling and avoiding congestion.

Congestion happens when the rate of the incoming traffic surpasses the service rate capability of a router or a link. The Internet experienced its first set of crashes, known as congestion collapse, in October 1986 [15]. This event led to the implementation of a congestion control mechanism in TCP, the main transport protocol in use at that time, to maintain Internet functionality.

Even in a normal data network, the traffic is "bursty" and changes continuously. To absorb this bursty traffic, buffers are used in each intermediate node along the traffic path. A bigger buffer can capture a larger amount of data, but it imposes a longer average delay. However, no matter how large a buffer is, at the time of congestion it becomes full and some packets must be dropped. Moreover, Internet traffic is increasing continuously. If a router behaves proactively and informs sources that it is at the threshold of becoming congested, congestion can be avoided. This idea led to the introduction of congestion avoidance algorithms.
There are two kinds of congestion control methods [16][17]: end-to-end congestion control and network (router) based congestion control. In the former, only the two endpoints of a connection are responsible for using the network up to capacity without overloading it. An example of this type is the method deployed in TCP, which has prevented congestion collapse in the Internet for a long time. TCP is the only transport protocol equipped with a congestion control mechanism. However, many recent applications use UDP, and if their traffic comes to occupy a substantial percentage of Internet traffic, history might repeat itself. Fortunately, the IETF has a working group on a universal congestion manager and has proposed a general congestion manager interface to add this capability to any kind of application [18]. In a system with only end-to-end congestion control, intermediate nodes accept every packet they receive, up to their memory capacity, and drop packets when their buffers become full. This kind of packet dropping is called "tail dropping" throughout this thesis. The term is also used in the literature as a contrast to "head dropping", indicating whether the dropped packet is the one at the head of a queue or the one at its tail.

The Internet has been relying on end-to-end congestion control methods for a long time, but this is inadequate for a number of reasons. One, TCP functions in a way that keeps queue occupancy high. This discriminates against bursty traffic in drop probability, since an arriving burst of data will not find enough space in the buffer. Two, TCP traffic itself is bursty. Three, there are many existing or emerging applications that do not obey any kind of end-to-end congestion control mechanism [19].
With network-based congestion control, every node in the path also acts proactively to keep the usage of the network within its capacity. The most prominent of these schemes is Random Early Detection (RED). RED was first introduced in [16] for traditional single-queue gateways to add congestion avoidance capability to routers. It detects incipient congestion and drops/marks packets before the queue becomes full. By dropping/marking packets, it notifies the source to decrease its traffic rate to prevent congestion [16][19]. Marking packets is a more logical way of informing the corresponding source, and is an ongoing focus of research; however, its implementation needs a corresponding transport protocol that understands this method. TCP has been using packet drops as a way of evaluating network conditions. Therefore, currently, packet dropping is the main mechanism, while packet marking is under study and proposed in an RFC [20]. These packet drops interact properly with TCP-compatible transport protocols and prevent severe source throughput degradation. RED is standardized by the Internet Engineering Task Force (IETF) in an RFC [21] as a better alternative to the traditional tail drop mechanism.

Deployment of RED has a number of advantages. One, it tries to keep the buffer occupancy low, therefore causing a lower average delay. Two, it prevents concurrent dropping of large numbers of packets in bursty traffic. Three, it does not have a bias against bursty traffic. Four, TCP reacts more severely to burst packet losses than to single packet losses, and RED avoids burst losses.

3.2 RED on a Single Queue

In a gateway with a RED [16] buffer management scheme, packets are dropped/marked before the buffer gets full. The objective of RED is to detect incipient congestion and take measures to avoid it.
It is intended to be used in networks in which the transport protocol reacts to congestion notification feedback from the network. Thus, a RED gateway has to perform two tasks: it first applies an algorithm for detecting congestion just before it happens, and it then takes action by applying another algorithm for congestion avoidance.

RED uses the average queue size of the router as an indicator of the congestion intensity level. Although there are a number of ways of calculating the average buffer size, the Exponentially Weighted Moving Average (EWMA) is the most common method. The EWMA is calculated by adding a weighted value of the current queue size to the weighted value of the previous average queue size. In an EWMA filter, the average queue occupancy, avg, is estimated by

avg ← avg · (1 − α) + k · α.   (3.1)

In this equation, k is the current size of the buffer and α is the weight factor. α is the key parameter for setting the time constant of this lowpass filter, and its value is always less than one. When it is set to a value greater than 0.5, it puts more weight on current data; when it is set to a value less than 0.5, it puts more weight on past data.

RED recalculates the average queue size for every incoming packet. Having computed the average queue size, it uses the following algorithm to decide whether to drop/mark that packet. It uses two threshold values, the maximum threshold max-th and the minimum threshold min-th. If avg is less than min-th, the packet is accepted and enqueued. If avg is greater than max-th, it is dropped/marked. However, if the value of avg is between min-th and max-th, the dropping probability p_k is first calculated as

p_k = maxprob · (avg − min-th) / (max-th − min-th).   (3.2)

Then the packet is dropped with probability p_k. maxprob is a parameter that determines the maximum value of the dropping probability.

In other words, in a RED gateway, packets are dropped with a probability that is a function of the average occupancy of the buffer, possibly well before it gets full. Figure 3.1 shows the dropping probability versus average queue size of a RED gateway with physical buffer size B, maximum and minimum thresholds max-th and min-th, and maximum drop probability maxprob.

Figure 3.1: RED dropping mechanism

The pseudo code of the RED algorithm is as follows [16]:

for each packet arrival
    calculate the average queue size (avg)
    if min-th < avg < max-th
        calculate drop probability p_k
        with probability p_k
            mark the arriving packet
    else if max-th < avg
        mark/drop the arriving packet

3.3 Multi-Class RED

3.3.1 Introduction

The current Internet treats every packet in a best effort manner; therefore, each node has only one queue for all incoming packets. In a class-based buffer, distinct queues are used to keep packets of different classes separate. There are two buffer allocation policies for implementing a class-based buffer: the shared-memory and the dedicated-memory policies. In a dedicated-memory buffer, a completely separate queue is used to store the packets of each traffic class. This scheme has the advantage of providing guaranteed space for each class and requires less queuing administration. In a shared-memory buffer, however, all the packets share the same buffer. A sharing policy allocates enough memory space for an incoming packet and keeps track of the packets belonging to each class.
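Before moving to the multi-class variants, the single-queue RED algorithm above (the EWMA of Eq. 3.1 plus the drop rule of Eq. 3.2) can be sketched as a minimal executable example. The class name and parameter values are illustrative only.

```python
import random

class RedQueue:
    """Single-queue RED sketch per Eqs. (3.1)-(3.2): EWMA average, linear ramp."""
    def __init__(self, min_th, max_th, max_prob, alpha):
        self.min_th, self.max_th = min_th, max_th
        self.max_prob, self.alpha = max_prob, alpha
        self.avg = 0.0
        self.queue = []

    def on_arrival(self, packet):
        # Eq. (3.1): avg <- avg*(1 - alpha) + k*alpha, k = current queue size.
        self.avg = self.avg * (1 - self.alpha) + len(self.queue) * self.alpha
        if self.avg < self.min_th:
            self.queue.append(packet)           # accept unconditionally
            return True
        if self.avg >= self.max_th:
            return False                        # drop/mark unconditionally
        # Eq. (3.2): linear ramp between the two thresholds.
        p_k = self.max_prob * (self.avg - self.min_th) / (self.max_th - self.min_th)
        if random.random() < p_k:
            return False                        # early (probabilistic) drop
        self.queue.append(packet)
        return True
```

With alpha = 1 the average collapses to the instantaneous queue size, which mirrors the simplification used in the analysis of Section 3.5.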
A shared scheme provides more efficient memory utilization, since the unused memory space of one class can be used by another; this gives the buffer a higher capability of handling bursty traffic.

In a DiffServ router, congestion control is currently only applicable to non-real-time traffic, until such time as congestion manager modules are used for all kinds of traffic [18]. Therefore, mainly the interactive and background traffic classes need proactive congestion control. Each of these traffic classes is mapped to a number of PHB groups; therefore, a shared buffer can be used to store the packets of these groups. This section proposes and studies the congestion control schemes applicable to a shared-memory buffer. Using a shared-memory buffer has a number of advantages for these PHB groups:

• The memory is used more efficiently. In a dedicated-memory system, the router may become congested for one or a few traffic classes while the queues of other classes are not fully used. In a shared buffer, the unused memory of one class may be used by the other classes to handle short-term congestion.

• A shared buffer is managed by software and provides a soft division for each queue. For a DiffServ router in which the number of supported PHBs or their parameters are to be modified or set on demand, sharing schemes are very convenient. An example of software-based routers is given in [22].

• Although the algorithms presume symmetric traffic flow, by applying a weighted version of these algorithms and using push-out, the system also works fairly for non-symmetric traffic. It is also easy to configure the system to work in favor of a particular traffic class.
Push-out is the act of freeing enough space to store an incoming packet when the queue is full, by dropping already-accepted packets from a lower-priority queue, or from a queue that has used the memory unfairly.

• Finally, the simulation results presented in Chapter 6 show better performance, in terms of packet loss, for the deployment of RED on shared buffers.

However, shared-buffer schemes are more complex than dedicated systems. They require a more elaborate and sophisticated buffer manager, and they must deal with the allocation and release of packet memory. For non-static threshold systems, performing push-out and/or applying dynamic policies adds to the complexity of the system.

The following sections focus on the congestion control schemes applicable to shared-memory buffers. First, a short overview of current buffer sharing schemes is presented. Then, three different deployments of RED on shared-memory buffers, derived from existing buffer sharing schemes, are proposed. These schemes work both as sharing policies and as algorithmic droppers for the purpose of congestion avoidance. Finally, an analytical representation is provided.

3.3.2 Memory Sharing Policies

A shared-memory buffer is a single pool of memory shared by the packets of a number of traffic streams while they wait to be serviced and transmitted. A buffer allocation policy is applied to enable proper queuing functions. In a sharing system, the sum of the individual partitions is equal to the total memory. The sharing schemes addressed in the literature are summarized below [23][24][25].

Complete Partitioning (CP): In a complete partitioning scheme, the entire buffer is permanently partitioned among the classes. Therefore, CP does not actually provide any sharing.
However, since the queues are not physically isolated, it is still classified among the sharing schemes. This scheme can be considered equivalent to a dedicated-memory system, in which separate queues are used for the different classes, and it shares most of that system's properties, such as packet loss rate. Thus, another advantage of studying this scheme is that it enables a performance comparison between dedicated and sharing systems. Still, CP has an important advantage of shared-buffer systems in that it is easily reconfigurable, but it also has their disadvantage of complex buffer management.

Complete Sharing (CS): Complete sharing lies at the other extreme from CP. In this scheme, the buffer is shared among all queues without limitation. Every incoming packet is accepted as long as the buffer is not full. In a CP policy, the buffer allocated to a queue may be wasted if its corresponding traffic class is inactive, while in the CS policy, the buffer may be monopolized by one traffic class. However, the CS policy uses the memory efficiently and can be fair if the traffic is symmetrically divided among all classes.

Sharing with Minimum Allocation (SMA): This policy is a combination of the CS and CP policies. In the SMA scheme, a minimum memory space is set aside for each class and the rest is shared among all arriving packets. Therefore, each traffic class has a guaranteed minimum memory space and the remaining memory is efficiently used by all classes.

Sharing with MaXimum Queue lengths (SMXQ): This scheme is similar to the CS policy, but with the added feature of a maximum limit on the memory space occupied by each traffic class. SMXQ prevents a traffic class from monopolizing the whole buffer.
The values of these maxima are chosen so that their sum is greater than the whole memory size.

Sharing with MaXimum Queue and Minimum Allocation (SMQMA): SMQMA is the most complex buffer sharing policy and combines the SMA and SMXQ features. In this scheme, the buffer is shared among all traffic classes, but with a reserved minimum space and a maximum space limit for each. Therefore, while providing a guaranteed minimum space, it also restricts each class from monopolizing the whole shared memory. On the down side, it requires the largest number of state variables and has the highest complexity.

3.4 RED on a Shared-Memory Multi-Class Buffer

In this section, the deployment of RED on shared-memory buffers is addressed. Based on three of the sharing policies discussed previously, three corresponding RED algorithms are proposed. These algorithms perform two tasks. First, they provide a buffer allocation policy similar to the sharing scheme they are derived from. Second, they provide RED-like congestion avoidance functionality. In these schemes, the RED maximum and minimum thresholds, which also determine the maximum and minimum sharing limits, are defined both for the whole buffer and for each queue.

Based on complete sharing, complete partitioning, and sharing with minimum allocation, three algorithms are proposed: RED on complete sharing (R-CS), RED on complete partitioning (R-CP), and RED on sharing with minimum allocation (R-SMA).

Figure 3.2: Comparison of RED thresholds in the R-CP and R-CS schemes

During this discussion the following terminology is used.
"Buffer", "whole memory", or "queue" indicates the single large memory shared by all the subqueues of the different classes. The memory space used for storing packets of traffic class i is called "subqueue i". The RED average queue size calculated for the whole memory is denoted avg, and the RED average queue size of subqueue i is denoted avg(i). Similarly, the minimum and maximum thresholds for the whole memory are called min-th and max-th, and those of subqueue i are denoted min-th(i) and max-th(i), respectively.

3.4.1 RED on Complete Partitioning (R-CP)

In RED on complete partitioning, the memory is permanently divided among the subqueues; that is, each traffic class has its own fixed memory space, and a traditional RED algorithm is applied on each subqueue independently of the others. From a performance viewpoint, this scheme is identical to a multi-class dedicated-memory queuing system. Therefore, it has also been used as the basis for comparing the performance of RED on shared and on dedicated memory buffers. This scheme is suitable for situations where it is preferable to have isolated subqueues while retaining some of the advantages of shared-memory systems, such as reconfigurability.

3.4.2 RED on Complete Sharing (R-CS)

Based on the complete sharing policy, RED on complete sharing is defined with the following pseudo code:

for each packet arrival with class i
    calculate the average queue size avg
    calculate the average subqueue i size avg(i)
    if min-th < avg < max-th
        if min-th(i) < avg(i)
            calculate drop probability p_a
            with probability p_a
                mark the arriving packet
    else if max-th < avg
        mark/drop the packet

In this algorithm, the whole buffer is shared among all the subqueues. For an incoming packet of class i, the average queue size and the average subqueue i size are calculated.
If the average queue size is less than the minimum threshold, the packet is accepted. Otherwise, the average subqueue i size is compared to min-th(i) and max-th(i), and a decision to accept or discard the packet is made, similar to the traditional RED algorithm.

3.4.3 RED on Sharing with Minimum Allocation (R-SMA)

Here, another RED scheme for shared-memory buffers, based on sharing with minimum allocation, is proposed: RED on sharing with minimum allocation. This algorithm defines a minimum memory allocation space for subqueue i by using the RED threshold min-th(i). The pseudo code of this scheme is as follows:

for each packet arrival with class i
    calculate the average queue size avg
    calculate the average subqueue i size avg(i)
    if min-th(i) < avg(i) and avg < max-th
        calculate drop probability p_a
        with probability p_a
            mark the arriving packet
    else if max-th < avg
        mark/drop the packet

The deployment of RED on the SMXQ sharing scheme is not defined here, since there is no way of implementing RED on it other than independently applying RED on each subqueue.

3.5 Analysis of the RED Mechanism

In this section, an analytic study of the RED-on-shared-buffer algorithms is presented. First, the packet drop probability of a tail-drop queue is examined; then the analytical representations of RED on a single queue, RED on a multi-class buffer with complete sharing (R-CS), and RED on a multi-class buffer with complete partitioning (R-CP) are addressed. In the analysis, the following assumptions are made for the queuing system and the traffic model:

• Packet arrivals are Poisson; that is, packets have exponentially distributed inter-arrival times.

• Packet departures are Poisson; that is, packet lengths are exponentially distributed.
• The physical buffer size is K.

• The stationary probability of being in state k is π(k); being in state k means there are k packets in the system.

The exponential distribution is used so that the PASTA (Poisson Arrivals See Time Averages) property [26] applies. The PASTA property states that for Poisson arrivals, the state probability distribution seen by a new arrival is the same as the time-averaged state probability distribution. In other words, the state probability distribution is equal to the fraction of time that the system is in that state.

Figure 3.3: M/M/1/K queuing system

3.5.1 Tail-Dropping Queue

Figure 3.3 shows an M/M/1/K queuing system. If λ is the packet arrival rate and μ is the packet departure rate, the utilization is defined as ρ = λ/μ. For a buffer with tail dropping and buffer size K, the probability of being in state k is [27]:

π(k) = π(0) ρ^k  if k ≤ K,  and  π(k) = 0  otherwise.

By using the normalizing condition, that is, Σ_{k=0}^{K} π(k) = 1, the probability π(0) of being in state zero is simply calculated. Thus:

π(0) = (1 − ρ) / (1 − ρ^{K+1}).   (3.3)

The dropping probability can also be calculated by using the PASTA property. According to PASTA, the probability of dropping a packet in a tail-drop system is equal to the probability of being in state K. Therefore:

P_TD = π(K) = (1 − ρ) ρ^K / (1 − ρ^{K+1}).   (3.4)

This equation enables us to have a consistent representation of the drop probability for all the schemes.

3.5.2 RED on a Single Queue

For a queue with a RED buffer management scheme, a packet is dropped depending on the value of the drop probability d(k) and the queue state probability π(k), where k indicates the state. To make the analysis tractable, the instantaneous queue size is used instead of the average queue size.
To calculate the packet drop probability, the PASTA property is used. In addition, under the RED algorithm, a packet arriving in state l, l = 1, ..., K, is dropped with probability d(l). Therefore, the packet drop probability is calculated as:

P_RED = π(K)d(K) + π(K−1)d(K−1) + ... + π(1)d(1) = Σ_{l=1}^{K} π(l)d(l).   (3.5)

In this equation, d(k) is the dropping function of the RED algorithm and can be presented as:

d(k) = 0                                               if k < min-th
d(k) = ((k − min-th)/(max-th − min-th)) · maxprob       if min-th ≤ k < max-th
d(k) = 1                                               if k ≥ max-th

where k is the queue size used by the RED algorithm.

The state probability used in the above equations is different from the one used for tail dropping. To derive the state probability, we use a detailed balance equation, which says π(i)p(i,j) = π(j)p(j,i), where p(a,b) is the state transition probability from state a to state b [28]. Under equilibrium conditions, flows may be balanced across any boundary, so for the kth boundary, let i = k and j = k+1. For a queue with RED implemented, the probability of a transition from state k to k+1 also depends on the probability of packet acceptance. Therefore, the transition probability from state k to state k+1 is p(k, k+1) = λ_k (1 − d(k)), and applying it in the balance equation gives:

(1 − d(k)) λ_k π(k) = μ_{k+1} π(k+1),  k = 0, 1, ..., K−1   (3.6)

which leads to the state probability

π(k+1) = (λ_k / μ_{k+1}) (1 − d(k)) π(k).   (3.7)

By substituting k for k+1 and considering λ_k = λ and μ_k = μ for all k, the state probability π(k) is:

π(k) = ρ (1 − d(k−1)) π(k−1) = π(0) Π_{i=0}^{k−1} ρ (1 − d(i)) = ρ^k π(0) Π_{i=0}^{k−1} (1 − d(i)).   (3.8)

π(0) can easily be found by applying the normalizing equation.
Therefore,

    Σ_{k=0}^{∞} π(k) = Σ_{k=0}^{K} π(k) = Σ_{k=0}^{K} ρ^k π(0) ∏_{i=0}^{k-1} (1 - d(i)) = 1    (3.9)

and the probability of being in state 0 is

    π(0) = 1 / Σ_{k=0}^{K} ρ^k ∏_{i=0}^{k-1} (1 - d(i)).    (3.10)

Finally, by applying (3.10) in (3.8), the state probability of RED is calculated as follows:

    π_RED(k) = ρ^k ∏_{i=0}^{k-1} (1 - d(i)) / Σ_{k=0}^{K} ρ^k ∏_{i=0}^{k-1} (1 - d(i)).    (3.11)

3.5.3 A Multi-class RED on Complete Sharing (R-CS)

All the equations derived for RED on a single queue, other than the dropping function d(k), are directly applicable to the R-CS scheme. The dropping function is based on the R-CS algorithm and is expressed as follows:

    d(k) = 0                                                if k̄ < minTh or k̄_i < minTh_i,
    d(k) = ((k̄ - minTh)/(maxTh - minTh)) · max_p            if minTh ≤ k̄ < maxTh and k̄_i ≥ minTh_i,
    d(k) = 1                                                if k̄ ≥ maxTh,

where k̄ is the average buffer size and k̄_i is the average size of queue i, both calculated by the RED algorithm. With the new dropping function d(k), the state probability and the drop probability of an R-CS scheme are written as follows:

    π_R-CS(k) = ρ^k ∏_{i=0}^{k-1} (1 - d(i)) / Σ_{k=0}^{K} ρ^k ∏_{i=0}^{k-1} (1 - d(i)),    (3.12)

    P_R-CS = π(K)d(K) + π(K-1)d(K-1) + ... + π(1)d(1) = Σ_{k=1}^{K} π(k)d(k).    (3.13)

3.5.4 A Multi-class RED on Complete Partitioning (R-CP)

This case is similar to having a set of independent single queues among which the traffic is divided. Therefore, the equations derived for a single RED queue apply to each of the R-CP subqueues. Figure 3.4 shows the model of the R-CP scheme for traffic consisting of four Poisson flows, λ_0 to λ_3. By the Poisson stream property, the combined traffic also has a Poisson distribution with an average arrival rate of λ = λ_0 + λ_1 + λ_2 + λ_3. The same holds for the service rates μ.
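The single-queue RED analysis of equations (3.5) and (3.8)-(3.11) can be sketched directly in code. As in the analysis, the instantaneous queue size k is used in d(k); the threshold and max_p values passed in are illustrative, not prescribed by the derivation:

```python
def red_drop_probability(rho, K, min_th, max_th, max_p):
    """Drop probability of a single RED queue, equations (3.5) and (3.8)-(3.11),
    with the instantaneous queue size used in the dropping function."""
    def d(k):
        if k < min_th:
            return 0.0
        if k < max_th:
            return (k - min_th) / (max_th - min_th) * max_p
        return 1.0
    # Unnormalized state probabilities: rho^k * prod_{i=0}^{k-1} (1 - d(i)).
    weights = [1.0]
    for k in range(1, K + 1):
        weights.append(weights[-1] * rho * (1.0 - d(k - 1)))
    total = sum(weights)                       # normalizer of (3.10)
    pi = [w / total for w in weights]          # equation (3.11)
    return sum(pi[k] * d(k) for k in range(1, K + 1))   # equation (3.5)
```

With the R-CS parameters used in Section 3.6 (K = 160, min-th = 40, max-th = 160) and the total (shared-buffer) occupancy as the state variable, this is also the drop probability of the R-CS scheme under symmetric load.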
Figure 3.4: Poisson traffic property: streams can be split or combined

3.6 Comparison Between Tail Drop, R-CS, and R-CP

For the R-CP scenario, we consider a symmetric system with λ_i = λ/4 and μ_i = μ/4 for i = 0, ..., 3, so that the utilization factor is the same both for the whole queue and for each subqueue. This allows an analytic comparison among normal tail dropping, R-CP, and R-CS schemes. Since the instantaneous queue size is used in the analysis, the results are not directly comparable with those acquired by simulation. However, they provide an intuition into the subject from an analytical viewpoint.

Figure 3.5 is drawn based on equations (3.4), (3.5) and (3.13). It depicts the drop probabilities of an M/M/1/K tail-drop queue, an R-CP, and an R-CS queuing management scheme. In all the schemes a buffer of size K = 160 packets is used. In the tail-dropping scenario the whole buffer is used for the incoming traffic, with a mean arrival rate of λ and a mean service rate of μ. In the R-CS scenario, the instantaneous queue size is used with min-th = 40, max-th = K = 160, min-th(i) = 10 and max-th(i) = 40, and incoming traffic with a mean arrival rate of λ and a mean service rate of μ. For the R-CP scheme, each subqueue has min-th(i) = 10 and max-th(i) = 40, and incoming traffic with a mean arrival rate of λ/4 and a mean service rate of μ/4.

As expected, the drop probability of a RED queue is higher than that of the tail-dropping queue. The figure also shows that the drop probability of the R-CP scheme is higher than that of the R-CS scheme. This figure, together with the simulation results presented in Chapter 6, shows the better performance of R-CS.
[Figure 3.5 appears here: drop probability versus offered load (0.6 to 1.4) for tail drop (M/M/1/K), R-CS, and R-CP.]

Figure 3.5: Drop probability of M/M/1/K tail drop, R-CP and R-CS with K = 160

3.7 RED Deployment for a DiffServ-Based UMTS Backbone Router

In the context of DiffServ, RED can be used as the algorithmic dropper of a DiffServ-based GPRS/UMTS backbone router. It is applicable to those traffic aggregates that use TCP as their transport protocol, that is, to the interactive and background traffic classes. For the conversational and streaming traffic classes only absolute droppers are used. Thus, an incoming packet of these traffic classes is accepted when it is determined to be conformant and is dropped if it is not. Algorithmic droppers are used for the other two classes, and therefore RED deployment is appropriate. Each of these traffic classes is mapped to a PHB consisting of a set of subgroups.

Based on the simulation results presented in Chapter 6, deployment of RED on shared buffers is proposed for TCP-compatible aggregates. Each traffic aggregate has its own dedicated buffer, but this buffer is shared among the subgroups of that aggregate class, and one of the RED on shared-memory buffer schemes is used for the algorithmic dropper. This configuration has the advantages of a lower packet drop ratio and reconfigurability, and the disadvantage of slightly higher buffer management complexity.

Chapter 4

DDB-FFQ Scheduling Scheme

This chapter is dedicated to the study of the DiffServ scheduling block.
The motivation behind this study is to implement a fair, efficient and simple scheduler that also has decoupled delay-bandwidth guarantees and supports link-sharing objectives. A scheduling block with these properties enables the operator of a DiffServ-based UMTS backbone to easily define PHBs with different delay and bandwidth guarantees. Decoupled Delay-Bandwidth Frame-based Fair Queuing (DDB-FFQ) is a proposed scheduling algorithm that can achieve the above-mentioned goals. It is a modification of the Frame-based Fair Queuing (FFQ) scheme and, therefore, inherits all of its properties, while enabling the assignment of decoupled delay-bandwidth guarantees.

This chapter consists of the following sections. First, an overview of scheduling schemes, the link-sharing model, and related previous work is provided. Then, the concepts of rate-proportional schedulers, especially the structure of FFQ, are addressed. Finally, a novel scheduling system based on FFQ, called DDB-FFQ, is presented, which has the desirable properties needed for implementation in a DiffServ-based UMTS backbone router. In this chapter, the term "traffic session" is used as an equivalent to DiffServ traffic aggregate.

4.1 Introduction

The quality of service in integrated-services packet networks is expressed in terms of delay, bandwidth, jitter and loss ratio. Various applications have different requirements and expect different values of these QoS characteristics. In other words, these parameters might be defined or requested independently of one another, according to the application being used. For example, an application might be very sensitive to delay but require only a small bandwidth, while another application might work well with a loose delay bound but need a large bandwidth and be sensitive to packet losses.
Consequently, it is essential to be able to set these parameters independently for each traffic class.

The module chiefly responsible for maintaining QoS is the scheduling element. As such, it has been studied extensively, and many scheduling algorithms have been presented in the literature [29] [30] [31]. Among these schemes, Generalized Processor Sharing (GPS) [32] is considered an ideal service discipline in terms of delay and fairness. Although it is based on the fluid model and is not viable for a packet network, its packet-by-packet versions, Weighted Fair Queuing (WFQ) and Worst-case Fair Weighted Fair Queuing (WF2Q) [29], approximate GPS closely. The main drawbacks of these schemes are their complexity and scalability problems.

A series of scheduling policies, called rate-proportional servers [30], endeavor to approximate GPS with a time complexity of O(1). In these schemes, the level of service received by session i is set by the value of a parameter called the session i rate, r_i. This parameter defines the allocated bandwidth of that session or traffic class. It is also the basis for calculating the delay bound associated with class i; that is, the delay bound of class i depends on its allocated rate and cannot be defined separately. These schedulers use a number of nondecreasing functions of time called potentials. A system potential function is used to keep track of the state of the whole system, and a session i potential is used to keep track of the service session i has received during its busy period. These potential functions should be updated continuously to avoid fairness discrepancy.

Frame-based Fair Queuing (FFQ) is a rate-proportional scheduler that recalibrates its system potential periodically, with a maximum period called the frame size. This service discipline is fair, simple to implement, and efficient.
However, like other rate-proportional scheduling algorithms, it does not provide decoupled delay bound and bandwidth allocations.

Another important issue related to service discipline algorithms is the concept of link sharing. A link-sharing policy is the method of assigning the bandwidth of a link to a set of packet classes. A class can be an aggregate of traffic streams having the same bandwidth requirements, or the traffic of an organization. The concept of link sharing [33] [34] was first introduced mainly for congested periods, to restrict the service offered to some classes to their link-sharing bandwidth. This bandwidth can even be zero, for example for an email traffic stream. In deploying a link-sharing policy, there are a number of critical objectives and issues that should be considered, as mentioned in [33] [35]. First, each class should receive its assigned bandwidth over a suitable interval of time. This requirement should be met both in normal and in congestion situations. Second, the excess bandwidth should be distributed to all active sessions according to a rule, not arbitrarily. Excess bandwidth is that portion of the link bandwidth that is unused at a particular time, for example due to an inactive session. Third, link sharing is essentially a bandwidth-oriented method, but delay is as important as bandwidth for many applications. A link-sharing implementation should not hinder the general scheduler in providing a delay guarantee to a well-behaved, real-time application. Fourth, link sharing has its highest importance when congestion happens or a traffic aggregate misbehaves. Therefore, it should allocate at least the minimum assigned bandwidth to all classes, including the misbehaving traffic.

Hierarchical Fair Service Curve (H-FSC) [35] is a recently introduced algorithm,
which presents a scheduling system with the decoupled delay-bandwidth feature. H-FSC is based on link sharing and the Service Curve Earliest Deadline first (SCED) [36] [37] scheduling scheme. It is argued in [35] that, since there are conflicts between link-sharing and real-time requisites, it is not possible to realize both of them simultaneously at all times, although it is possible to closely approximate the desired model. Since H-FSC is based on service curves, its implementation is complex in many cases.

In this chapter, we present a link-sharing, real-time scheduler based on link-sharing and FFQ concepts. The complete scheduling system consists of a number of rate estimators and a modified FFQ. The novelty of the proposed scheme is that two sets of rates are considered: FFQ-rates and assigned rates. The FFQ-rates are used to determine the delay bound of each class and are a part of the internal state of FFQ, while the assigned rates are used to set the bandwidth received by each class. The rate estimators measure the rate of the incoming aggregates. As long as the rates of all classes are well-behaved and within their assigned rates, the scheduler acts like a normal FFQ. However, if the actual estimated rate is above the assigned rate, the scheduler modifies the timestamp of the head-of-line packet of the misbehaving class to restrict it to its assigned rate.

4.2 Frame-based Fair Queuing (FFQ)

Frame-based Fair Queuing [38] belongs to the Rate-Proportional Scheduler (RPS) class [30] and has an O(1) timestamp computation complexity and performance characteristics close to Generalized Processor Sharing (GPS). There are two types of system complexity related to each service discipline: timestamp computation complexity and priority sorting complexity.
The former is the complexity of the algorithm used for calculating the timestamp of each incoming packet, and the latter is the complexity of the algorithm used to select the queue with the smallest timestamp. For traffic consisting of N sessions, GPS has O(N) and FFQ has O(1) timestamp computation complexity, while the sorted-priority complexity of all scheduling systems depends on the sorting algorithm and is usually on the order of O(log2(N)).

In a rate-proportional scheduler, session i receives service depending on its allocated rate r_i. Let W_i(τ, t) denote the service offered to session i during the interval (τ, t]; then the normalized service received by session i is W_i(τ, t)/r_i. The objective of a rate-proportional scheduler is to equalize the normalized service of all the busy sessions. An RPS scheduler uses a number of nondecreasing functions of time, called potentials, to keep track of the actual normalized service received by each session during its busy period. A potential function P(t) is defined for the whole system, and a session potential function P_i(t) is considered for each session, so that for any interval (τ, t]:

    P_i(t) - P_i(τ) = W_i(τ, t) / r_i.    (4.1)

A system potential should satisfy two properties. First, the system potential must increase with a rate of at least one; that is, for any interval (τ, t] during which the system is busy:

    P(t) - P(τ) ≥ (t - τ).    (4.2)

Second, the system potential never exceeds the potential of any backlogged session. Let B(t) denote the set of backlogged sessions at time t; then the following holds:

    P(t) ≤ P_i(t)    ∀i ∈ B(t).    (4.3)

In an RPS system, these potential functions should be continuously updated to enable a close approximation to an actual fluid model in the equivalent GPS system. This recalibration interval is the key feature that associates fairness with the algorithm, and it is also its fundamental difficulty.
Different recalibration methods result in different rate-proportional schemes. The session i potential should be modified in such a way that the service missed by that session while it was not backlogged is also taken into account.

FFQ uses a frame-based recalibration method; that is, it updates the potential functions periodically, with a maximum period of frame size F. A frame period (T) is the interval of time needed to send F bits. The maximum amount of class i traffic that can be transmitted during a frame period is defined as φ_i, and f_i = r_i/C is the fraction of the link rate allocated to session i. The relations among these parameters are as follows:

    φ_i = f_i · F = r_i · T.    (4.4)

φ_i is defined such that it is at least as large as the maximum packet size of class i, L_i:

    L_i ≤ φ_i.    (4.5)

In FFQ, the delay bound of class i is determined by its rate r_i, which also defines the portion of bandwidth allocated to that class. For a link with capacity C and a scheduler with N classes, the total assigned rate should not exceed C:

    Σ_{i=1}^{N} r_i ≤ C.    (4.6)

Reference [38] proposes a relaxed method of recalibrating the potential functions. Let τ_{k-1} be the last time a frame was updated; then the next update needs to be done when the following two conditions are met:

1) The potential of each active session in the equivalent GPS system belongs to the next frame:

    P_i(t) ≥ kT    ∀i ∈ B(t)    (4.7)

where B(t) is the set of active classes at time t.

2) The potential of each class is less than the beginning of the next frame:

    P_i(t) < (k+1)T,    i = 1, 2, ...    (4.8)

The implementation of FFQ can be simplified by introducing another function called the base-potential function. A base-potential function S_P(t) is a nondecreasing function
that has the following properties:

1) S_P(t) is less than or equal to the potential of every backlogged session:

    S_P(t) ≤ P_i(t)    ∀i ∈ B(t).    (4.9)

2) A finite constant ΔP > 0 exists such that the following is true:

    S_P(t) ≥ P_i(t) - ΔP    ∀i ∈ B(t).    (4.10)

[Figure 4.1 appears here: potential versus time, with frame boundaries at T, 2T, 3T and 4T.]

Figure 4.1: Behavior of the base-potential function and potential functions in a fluid FFQ server. P(t) is the system potential function, S_P(t) is the base potential and P_i(t) is the session i potential function. The potential functions are recalibrated at points τ_1, τ_2, τ_3 and τ_4. The frame update points can fall anywhere within a window where each session potential is between kT and (k+1)T [38].

Finally, it is shown that by using a step function as the system base-potential function and applying the above conditions, FFQ can be easily implemented. Figure 4.1 depicts the relation and behavior of the base-potential and system-potential functions in a fluid FFQ server.

To implement FFQ, [38] shows that it is sufficient to execute the following two algorithms, one at the time of a packet arrival and the other at the time of a packet departure. The algorithm executed on the arrival of a packet in an FFQ server has the following pseudocode:

    Calculate the current value of the system potential. Let t be the current time
    and ts the time when the packet currently in service started its transmission.
    1. temp <- P + (t - ts)/F
    Calculate the starting potential of the new packet.
    2. SP(i, k) <- max(TS(i, k-1), temp)
    Calculate the timestamp of the packet.
    3. TS(i, k) <- SP(i, k) + length(i, k)/φ_i
    Check if the packet crosses a frame boundary.
    4. n1 <- int(SP(i, k))
    5. n2 <- int(TS(i, k))
    6. if (n1 < n2) then    (if the finishing potential is in the next frame)
    7. B[n1] <- B[n1] + 1    (increment counter)
    8. mark packet
    9. end if

The algorithm executed on the departure of a packet in an FFQ server is this:

    Increase the system potential by the transmission time of the packet just
    completed, say j.
    1. P <- P + length(j)/F
    Find the timestamp of the next packet for transmission.
    2. TSmin <- min over i in B of TS(i)
    Determine the corresponding frame number.
    3. Fmin <- int(TSmin)
    Perform the frame update operation if required.
    4. if (packet j was marked) then
    5.     B[current-frame] <- B[current-frame] - 1
    6. end if
    7. if (B[current-frame] = 0 and Fmin > current-frame) then
    8.     current-frame <- current-frame + 1
    9.     P <- max(current-frame, P)
    10. end if
    Store the starting time of transmission of the next packet in ts.
    11. ts <- current time
    12. Retrieve the packet from the head of the queue and transmit.

In these algorithms, P is the potential state variable, SP is the starting potential of a packet, TS is the timestamp calculated for each packet, and B is an array used to keep track of frames and frame boundary crossings. F is the frame size, TSmin is the smallest TS among the head-of-line packets of the backlogged classes, and current-frame is a state variable holding the value of the current frame. All of these values are reset when the router becomes idle.

4.3 Decoupled Delay-Bandwidth FFQ (DDB-FFQ)

An FFQ scheduler possesses many of the properties expected from a service discipline. It is fair, efficient, and simply implementable. However, it has a crucial shortcoming in that the delay bound of each aggregate class depends on its assigned rate. Therefore, it is not possible to set the delay bound and the bandwidth allocation of a session independently. In this section, a novel link-sharing service discipline is proposed that overcomes this deficiency. For a system with N traffic classes, the complete scheduling system consists of N rate estimators and a scheduler.
The responsibility of each rate estimator is to assess the rate of its corresponding traffic class and to send this estimated rate to the scheduler. The scheduler is a modified FFQ server that uses the statistics gathered by the rate estimators, in addition to its own FFQ state variables, to decide which queue it should serve next.

The novelty of this scheme is that it utilizes two sets of rates for each traffic class: the FFQ-rates and the assigned rates. These two sets of rates are used to realize the decoupled delay and bandwidth property of the system. The FFQ-rates are used to set the delay bound of each traffic class and are equivalent to the r rates defined in a normal FFQ server, while the assigned rates are used to set the bandwidth allocation of each aggregate. Each of these sets of rates should obey equation (4.6).

The DDB-FFQ scheduling system's functionality is described as follows. As long as each traffic source sends packets at or below its assigned rate, the scheduler works like a normal FFQ. Since each traffic class is offered service according to its FFQ-rate, its guaranteed delay bound is attained, and because all classes are conformant, they get their assigned portions of bandwidth. A non-conformant traffic class j, however, is not served at its FFQ-rate. This is accomplished by adjusting the timestamp of the Head-of-Line (HoL) packet of queue j when the sorted-priority function is performed. That is, the adjusted value is used when the scheduler decides which queue to serve next, by choosing the queue with the minimum HoL timestamp value, but the packet's original timestamp is preserved. This is done when the second line of the FFQ algorithm for a packet departure (see previous section) is run.
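The selection step just described can be sketched as follows. This is a minimal illustration, not the thesis's implementation: the per-class record layout (hol_ts, est_rate, assigned_rate) is hypothetical, and the scaling rule follows the adjustment of equation (4.11), i.e. the non-conformant class competes with its timestamp multiplied by its estimated rate over the smallest assigned rate:

```python
def select_next_queue(classes, min_assigned_rate):
    """Pick the backlogged class to serve next. Non-conformant classes compete
    with a scaled copy of their head-of-line timestamp; the stored timestamp
    itself is left untouched, as in the scheme described above."""
    best, best_ts = None, None
    for c in classes:
        if c["hol_ts"] is None:          # class has no backlogged packet
            continue
        ts = c["hol_ts"]
        if c["est_rate"] > c["assigned_rate"]:
            # Non-conformant: scale the competing timestamp up, per (4.11).
            ts = ts * c["est_rate"] / min_assigned_rate
        if best_ts is None or ts < best_ts:
            best, best_ts = c, ts
    return best
```

Scaling the timestamp upward pushes the misbehaving class later in the service order without modifying its stored FFQ state, which is why the delay guarantees of the conformant classes are unaffected.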
This action prevents the other classes from service degradation and allows them to get their portions of bandwidth within their delay bounds. While the non-conformant traffic is served at its assigned rate, its delay bound is not met, which is logical in the context of the DiffServ model. There are a number of ways of performing this adjustment. Since this algorithm is meant to be used in a DiffServ-based UMTS router with best-effort traffic as its lowest-priority class, the following adjustment is proposed:

    TS'(j) = TS(j) · avg(j) / min(r).    (4.11)

In this equation, TS(j) is the timestamp of the packet at the head of line of the non-conformant class j, avg(j) is its estimated rate, and min(r) is the smallest assigned rate in the system. This value is used instead of the actual TS(j) in the second line of the algorithm for a packet departure.

The value of the weight ω used in the exponential weighted moving average of the rate estimator was chosen as 0.002. This is the recommended value for an EWMA system for estimating the rate, and it was chosen so that the estimator computes the average rate while following changes of the stream rate with a reasonable time constant. However, this parameter can be used to set the time constant of the system as desired.

DDB-FFQ can also be described as a link-sharing system. In this model, the general scheduler uses a normal FFQ scheme, and the link-sharing scheduler is the DDB-FFQ service discipline. In the context of link sharing, a general scheduler is used during normal operating conditions of the network, and the link-sharing scheduler is used during congestion. When congestion occurs or a traffic class is non-conformant, the offered service rate of that session is limited to its allocated rate, and therefore the delay bound guarantee is not met. In this system, the excess bandwidth is distributed among all the active classes according to
their FFQ-rates, and when congestion happens or a misbehaving class exists, it is distributed according to the method used for the timestamp adjustment. In our case, it is distributed according to the adjustment shown in equation (4.11); that is, among well-behaving classes according to their FFQ-rates, and among misbehaving classes in proportion to the smallest FFQ-rate in the system.

In a DDB-FFQ server, the delay bound observed by an aggregate is determined by its assigned FFQ-rate. Therefore, the proper setting of the FFQ-rate value is the key factor that leads to a favorable delay bound. It is shown that for an FFQ scheduling scheme the delay of session i is bounded according to the following equation [30]:

    D_i(t) ≤ L_i / r_i + L_max / R    (4.12)

where L_i is the maximum packet size of session i, r_i is the FFQ-rate assigned to session i, L_max is the maximum packet size over all sessions, and R is the output service rate of the scheduler. Considering its inheritance from the FFQ scheme, this equation can be directly applied to a DDB-FFQ system as well.

This equation illustrates some important facts. First, the delay bound is inversely proportional to the FFQ-rate. Second, the delay bound of session i depends on the maximum packet size of session i traffic. Third, it also depends on the maximum packet size of all sessions, even that of the session with the lowest priority. This is a logical consequence of the fact that DDB-FFQ is a work-conserving, non-preemptive service discipline.

In the context of a DiffServ-based UMTS backbone network, the delay bound's dependency on packet size can be easily overcome by the IP tunneling capability of the UMTS backbone network. At the time of packet encapsulation, the ingress edge router performs packet segmentation to limit the maximum packet sizes. The segmented packet can be reassembled later at the egress edge router. Knowing the maximum packet sizes and the FFQ-rate values of the DDB-FFQ server, the router operator can set the delay bound of every PHB according to its requirements.

4.4 The Rate Estimator Module/Meter

One of the building blocks of the proposed link-sharing, real-time scheduling system is the rate estimator module. This element determines the rate of each class, which is the basic statistic fed to the DDB-FFQ scheduler. DDB-FFQ uses this data to manage the bandwidth allocation of each session and to provide its link-sharing capability. In the context of a DiffServ-based UMTS backbone router, this module can serve additional purposes; that is, it also acts as the metering element for the purposes of traffic shaping, remarking and/or billing.

There are several ways of implementing a rate estimator. The usual algorithm for estimating the traffic rate is the exponential weighted moving average (EWMA) lowpass system [33], which is also the method used in the proposed DDB-FFQ scheme. The key parameters of this estimator are its time constant and the frequency at which the rate estimation is performed. The rate estimator used in DDB-FFQ updates the value of the estimated rate at the time of each packet arrival. The algorithm actually first estimates the packet inter-arrival time. It calculates the difference between the actual and allocated inter-arrival times of each packet and then computes the EWMA of these values. From the estimated packet inter-arrival time, its reciprocal, the packet rate, is determined.

Assume t is the interval between the arrival time of the previous packet and the arrival time of the most recent packet. Also assume S is the size of the arrived packet, b is the bandwidth allocated to this packet class, and S/b is the allocated inter-arrival time of this class. Then the discrepancy between the allocated and actual inter-arrival times is this:

    diff = t - S/b.    (4.13)
A l s o assume S is the size of the arrived packet, b is the bandwidth allocated to this packet class, and r is the allocated interarrival time of this class. T h e n the discrepancy between allocated and actual inter-arrival times is this: diff = t-^. (4.13) 50 CHAPTER 4. DDB-FFQ SCHEDULING SCHEME Allocated Interarrival Time =S/b Actual Interarrival Time = t F igure 4.2: Variables U s e d for interarrival time estimation. T h e n the exponential weighted m o v i n g average ( E W M A ) model is used to calculate the average of cliff as in the f o l l o w i n g : avgdiff <- avgdiff * (1 - u) + diff * to (4.14) where u is the weight factor and is the key factor for setting the E W M A ' s time constant. B y applying avgdiff in place of diff in equation (4.13) and considering t = S/r, we have the f o l l o w i n g : S S S avgdiff = t- T = T o r b (4.15) where r is the estimated rate. F i n a l l y the estimated rate is s imply calculated this way 1 rate (4.16) T h e weight u denotes the time constant of the estimator. T h e time constant is the time that it takes for the system to change f r o m its original value to 63% of it. Therefore, to calculate the time constant of this estimator, we first calculate the number of packets it takes for the value of avgdif / to move 63% away f r o m its initial value. Suppose e x c h a n g e s f r o m 0 to / a n d l e t u ; * / = c and avgdiff at the time of packet ^ a r r i v a l represented by Yk. T h e n equation (4.14) can be rewritten in the f o r m of a difference equation: Yk = (1 - w)Yk-! + c, y 0 = o. (4.17) 51 CHAPTER 4. DDB-FFQ SCHEDULING SCHEME T h e solution to this equation is s imply given as : Yk = (l—w)k(Y0—c/w)+c/w. Therefore, k, the number of packets required for Y to become Yk = 0.63c/u> is this: 7-77——r- (4-18) ln(l — u) T h i s means that i f the rate changes, and therefore causes the diff variable to change, it takes - l/\n(l-u) packets before the value of avg moves 63% f r o m its initial value. 
Since the packet rate can be assumed to be s/b, the approximate time constant is calculated thus: 0 * m ( l — u) 52 Chapter 5 Mapping and Classification 5.1 Introduction M a p p i n g and classification are two related functions that a D i f f S e r v node must perform to enable proper service differentiation. T h e mapping function is the act o f translating the Q o S parameters of one system to another Q o S system, in order to maintain a consistent level of service received by a packet along its transmission path, f r o m its source to its destination. In U M T S , the Q o S required is identified by a number of attributes associated with each session. D i f f S e r v uses its o w n method of differentiating between distinct aggregates, marking the IP header of their packets and handling them with different P H B s . W h e n D i f f S e r v is deployed in the U M T S backbone network, the Q o S parameters of a packet passing the boundary of the core network should be mapped to the Q o S constraints of the new system in the edge router. If an i n c o m i n g packet originates f r o m an M S into the core network, its U M T S Q o S attributes should be mapped to an equivalent D S C P , w h i c h is pre-defined in the D i f f S e r v - aware core network. If it is outgoing f r o m a core network, originating f r o m an external source, the P H B should then be translated to equivalent U M T S Q o S parameters. M a p p i n g might also be needed between the external bearer service Q o S system and D i f f S e r v . T h i s 53 CHAPTER 5. MAPPING AND CLASSIFICATION chapter is focused mainly on the mapping function f r o m the U M T S Q o S system to D i f f S e r v . Classification, on the other hand, is the act of splitting an i n c o m i n g traffic into a number of output streams, by using filters. In Dif fServ , the traffic is s imply classified by applying match conditions on the D S C P field of the packet header. 
T h i s chapter first studies the Q o S mechanism of U M T S / G P R S and that of Dif fServ . T h e n the function of mapping these Q o S systems to each other is addressed, and finally, the classification methods are presented. 5.2 U M T S / G P R S QoS Classes and Attributes In U M T S / G P R S , Quali ty of Service ( Q o S ) is conveyed by a Q o S profile. A Q o S profile is a part of the P D P context w h i c h is negotiated with the user to acquire his demanded grade of service. It is a single parameter with multiple attributes. E a c h of its attributes defines a specific characteristic of the traffic. These attributes as defined in the specification are traf- fic class, maximum bitrate, guaranteed bitrate, delivery order, maximum SDU size, SDU format information, SDU error ratio, residual bit error ratio, delivery of erroneous SDUs, transfer delay, traffic handling priority, and allocation/retention priority [2]. Hereafter, some of these attributes are explained. T r a f f i c C l a s s : T h e most noteworthy attribute is the traffic class. In defining the traffic classes, restrictions and limitations of the air interface have been taken into consideration to de- velop reasonable service provisioning, while avoiding complex mechanisms. There are four U M T S traffic classes: conversational class, streaming class, interactive class and back- ground class, listed according to their relative priority, f r o m highest to lowest. Conversa- tional and streaming classes are meant to carry real-time traffic streams. T h e main distinc- tions between conversational and streaming classes are their delay and delay variation sen- sitivities. Interactive and background classes, however, are intended to cover all traditional 54 CHAPTER S. MAPPING AND CLASSIFICATION Internet applications, such as W W W , email and F T P . 
Conversational Class: T h i s class is used for conversational and mult imedia applica- tions, such as G S M telephony speech, voice over IP and video conferencing. These appli - cations are mainly used by two (or more) peers of human end-users. T h e fundamental Q o S characteristics of this class are these: preservation of the time variation between consecutive entities of the traffic, and stringent low delay. Streaming Class: T h i s class carries the traffic of a video or audio stream f r o m a host machine to a human end-user, and is basically a one way transport. Its Q o S characteristic is that it needs that the time relation between the traffic consecutive entities be preserved. H o w e v e r it has a m u c h lower delay sensitivity compared to the conversational class. A n example of its application is M P 3 video streaming or Internet radio. Interactive Class: T h i s class is for applications where an end-user (human or m a - chine) is on-line, asking some data f r o m a host machine. T h e m a i n characteristic of this class is its request-response pattern, with a lower delay sensitivity compared to the above traffic classes. It may only need payload content preservation. Background Class: W h e n an end-user (mainly a machine) sends or receives data in the background without any delay sensitivity, this class is applied. Examples of this k i n d of application are email and database downloading. M a x i m u m B i t r a t e (kbps): T h i s attribute defines the m a x i m u m rate that can be delivered to or by U M T S . Traffic is conformant with a defined m a x i m u m bitrate when shaped by a token bucket with a token rate of the m a x i m u m bitrate, and a bucket size o f the m a x i m u m S D U size. G u a r a n t e e d B i t r a t e (kbps): It defines the guaranteed rate, that is, the guaranteed number of bits within a period of time, divided by the duration of that period, delivered by U M T S . 
Q o S requirements, such as delay and reliability are only guaranteed for traffic rates up to 55 CHAPTERS. MAPPING AND CLASSIFICATION this guaranteed bitrate. Delivery Order: T h i s attribute specifies an application requirement, whether its packets need to be delivered in order or not. Maximum SDU Size: It defines the m a x i m u m allowable S D U size. SDU Format Information: T h i s attribute defines the list of possible exact sizes of S D U . If an application is able to state its S D U sizes, the bearer services are less expensive, because this data can be used by U T R A N at the R L C layer. SDU Error Ratio: It indicates the ratio of dropped or erroneous packets of a conforming traffic. Transfer Delay (ms): T h i s attribute indicates the m a x i m u m delay in the 95th percentile of the distribution of delay observed when a S D U is carried f r o m one Service Access Point ( S A P ) to another S A P . Delivery of Erroneous SDUs: It indicates whether an erroneous S D U shall be delivered or discarded. S G S N G G S N MT/TE Bearer service U M T S bearer Service External Bearer Service Radio Bearer Service Backbone bearer service Figure 5.1: A schematic of the structure of the G P R S backbone network Table 5.1 shows the range of the values expected of a U M T S bearer service Q o S at- tributes. 56 CHAPTER 5. 
MAPPING AND CLASSIFICATION Traffic Class Conversational Streaming Interactive B a c k g r o u n d Class Class Class Class M a x bitrate (kbps) 2048 2048 2048 2048 D e l i v e r y order Y e s / N o Y e s / N o Y e s / N o Y e s / N o M a x S D U size (octets) 1500 or 1502 1500 or 1502 1500 or 1502 1500 or 1502 Errorneos S D U delivery yes/no yes/no yes/no yes/no Residual B E R 5e-2 to l e - 6 5e-2 to l e - 6 4e-3 to 6e-8 4e-3 to 6e-8 S D U error rate le -2 to le -5 le-1 to le -5 le -3 to l e - 5 le -3 to le -5 Transfer delay(ms) max 100 max 250 Guaranteed bit rate 2048 2048 Table 5.1: Expected range for U M T S bearer service Q o S attributes 5.3 DiffServ P H B Groups T h e D i f f S e r v working G r o u p of I E T F has defined a number of different P H B groups for different applications. In this section an overview of all the up-to-date P H B s are presented. Best Effort (BE) PHB Group: B E is the traditional Internet traffic class which has been used f r o m the beginning. T h i s P H B implies that there are some network resources available and there is a good faith commitment that the nodes in the path w i l l do their best to forward the packet in a fair manner, but there is no guarantee of its delivery or of its received level of service. T h e recommended D S C P for this P H B is 000000. Expedited Forwarding (EF) PHB Group [39] [40]: E F P H B is a imed at providing a low loss, low latency, low jitter, assured bandwidth edge-to-edge service. It has also been called P r e m i u m Service. A codepoint of 101110 is recommended as its D S C P , and therefore, there can be only one instance of E F in a D S domain i f one fol lows the recommended D S C P . Assured Forwarding (AF) PHB Group [41]: T h i s group provides four indepen- dently forwarded A F classes. A packet in each of these A F groups can be assigned to three dropping precedences. Table 5.2 summarizes the recommended D S C P for these twelve i n - stances of A F . 57 CHAPTER 5. 
MAPPING AND CLASSIFICATION T h e value of each class codepoint can be classified as a six bits binary, with the f o r m of D r o p Prec. Class 1 Class 2 Class 3 Class 4 L o w 001010 010010 011010 100010 M e d i u m 001100 010100 011100 100100 H i g h 001110 010110 011110 100110 Table 5.2: R e c o m m e n d e d D S C P values for A F P H B ' c c c d d d ' , in w h i c h ' ccc ' defines the A F ' s class and ' d d d ' indicates its drop precedence. Network Control (NC) PHB Group: A n NC packet is used for carrying any possi- ble signaling data needed in the D i f f S e r v control plan to exchange management information between D i f f S e r v routers within a D i f f S e r v domain. Its recommended codepoint is 11x000. Lower than Best Effort (LBE) PHB Group [42]: T h e primary goal of defining this P H B group is to provide an alternative to best effort traffic at the time of congestion. T h i s P H B carried packet of an application that has a lower drop precedence, compared to best effort traffic. Dynamic Real-time/non-real-time (RTVNRT) PHB Group [43]: T h i s group pro- vides delivery of real-time and non-real-time traffic. Bulk Handling (BH) Per Domain Behavior (PDB) Group [44]: B H P H B is used for traffic that may be starved even in a properly functional network. T h i s starvation is not a requirement of the P H B , but is used for comparison with the best effort P H B . It can be used for carrying packets of an application that has sufficiently low value. Alternative Best Effort (ABE) PHB Group [45]: T h i s P H B group gives the best effort applications an option: to choose between receiving lower delay or receiving higher throughput by defining two types of green and blue packets. Assured Rate (AR) PDB Group [46]: T h i s P D B defines the overall treatment a packet receives throughout a D i f f S e r v domain. T h i s P D B is suitable for carrying the traffic 58 CHAPTER 5. 
MAPPING AND CLASSIFICATION of applications that need rate assurance but do not require delay bounds. 5.4 Mapping T h e mapping function is the first and the most important step in supporting the Q o S of a U M T S stream. T h i s is the place where a P H B suitable for a session is considered and a D S C P is selected for it accordingly. T h e selected P H B defines the level of service that stream w i l l observe throughout that D S domain . T h e mapping function is an implementation dependent task, however. It depends on the number of different Q o S provided and the level of Q o S fineness. In this section a basic mapping system is presented. A simpler but completely analogous mapping method has been used in our simulation model . A close observation of the standard P H B suggested by I E T F shows that there are not enough P H B s to cover a fine set of P H B equivalent to the U M T S Q o S system. Nevertheless, a basic methodology can be formed. Motivated by the U M T S traffic classes, we classify the traffic types into five P H B groups, GO to G4: GO: T h i s group is used for special, infrequent network management control dedicated to the D i f f S e r v control plane for the purpose of signaling between network elements. T h i s G r o u p has a very h i g h priority over other groups. A D S C P with the value of 11x000 has been as- signed to the N C P H B , thus there can be two subclasses by using the recommended D S C P . Gl: T h i s group is used as an aggregate of traffic streams that have stringent delay sensitiv- ity, for example, the conversational traffic class in U M T S . T h e proper P H B for this group is E F P H B . Nevertheless, only one D S C P value, 101110, was advised b y I E T F . N o other subgroups can be defined i f one wants to be consistent with the standard. 
However , this problem can be easily overcome by using some other D S C P s , as al lowed in I E T F specifi- cations, or by mapping subgroup one to E F P H B and mapping the rest of the subgroups to 59 CHAPTER 5. MAPPING AND CLASSIFICATION the higher priority A F P H B s . These two choices only affect the value of D S C P chosen, but do not affect the overall performance of the system, as far as the proper P H B policies are selected. G2, G3: G r o u p two is aimed at aggregating streams with the streaming traffic class and G r o u p three is for the interactive traffic class. T h e most rational P H B s that can be used for these groups is a set of Assured Forwarding P H B groups. T h i s provides a total of up to twelve subgroups. G4: T h i s group is used for aggregating non-delay sensitive traffic. It is equivalent to the U M T S background traffic class. T h i s group can have a number of subgroups and can be mapped to these proposed P H B s : best effort P H B , lower that best effort P H B , alternative best effort P H B and bulk handling P H B . 5.5 Classification In the D i f f S e r v context, a Classifier is a functional b l o c k that looks at the D S C P value marked on each IP packet's header to split an i n c o m i n g traffic into /V output streams. A n important characteristic of a classifier is that it should work unambiguously, and classify uniquely the i n c o m i n g traffic. A classifier is usually implemented by using filters, and is identified by its set of fil- ters. A filter is a D i f f S e r v block that applies a set of conditions to the value o f a classification key, such as the packet header, content, or attributes to categorize the packet into a finite set of groups. In a behavior aggregate ( B A ) D i f f S e r v classifier, the classification key is the packet's D S C P field. In a B A classifier, a filter is defined by a value that is matched with the D S C P of the i n c o m i n g packet to determine its logical output stream. 
T h i s value can be a six bits binary number, l ike 111000, and can be applied with different match conditions, 60 CHAPTER 5. MAPPING AND CLASSIFICATION such as wildcard, prefix, masked, range and/or exact match. A M u l t i - F i l e d ( M F ) classifier classifies the packets based on two or more fields in a packet header. T h e most usual header fields used are destination address, source address and D S C P in the IP header, and source port and destination port in the T C P header. F o r a Dif fServ-based U M T S backbone router, a packet's P H B can be determined just by the value in its D S C P field in the IP header. Therefore, a simple B A classifier is sufficient, w h i c h uses exact match conditions for conversational, streaming and interactive traffic groups. A n exact condition match with a filter value of 000000 is used for the best effort subgroup of the background traffic group and a wildcard (******) match is used to put any packet with an unknown or ambiguous D S C P value into an output stream with lower priority, such as lower than best effort or b u l k handling P H B s . 61 Chapter 6 Simulation Model and Results T h i s chapter addresses the simulation model and results, presenting the discussion in three parts. T h e first part presents simulation results comparing the performance of the R E D on shared buffer schemes, as discussed in Chapter 3, under both T C P and n o n - T C P traffic m o d - els. T h e second part presents simulation results related to the scheduling system, as dis- cussed in Chapter 4, by providing proof of the decoupled delay bandwidth capability of D D B - F F Q and its l ink sharing ability. W h i l e parts one and two evaluate the scheduling and dropping elements of a complex D i f f S e r v system individually in a single node model , part three presents the simulation model of a simple Dif fServ-based U M T S backbone network with three nodes and a more realistic traffic model . 
T h i s model utilizes a more complete configuration of a D i f f S e r v router to add Q o S support. T h e scheduling b l o c k of each router is based on the D D B - F F Q service discipline. T h e O P t i m u m N E T w o r k performance ( O P N E T ) modeler is the simulation environ- ment used for bui lding and simulating these models. O P N E T is a renowned, widely ac- cepted, industry oriented software that enables researchers to model and simulate complex network scenarios, communicat ion protocols and traffic models. It is based on a hierarchi- cal structure that allows unlimited subnetwork nesting. W h i l e O P N E T has a fair number of 62 CHAPTER 6. SIMULATION MODEL AND RESULTS pre-defined models, it allows bui lding new models by using finite state machine model ing at it lowest level , w h i c h can be programmed by P r o t o - C language. P r o t o - C is a combina- tion of general C/C++ facilities and an extensive library of bui l t - in , high- level commands, k n o w n as K e r n e l Procedures. O P N E T has an easy-to-use graphical user interface and a variety of tools for speci- fication, data collection, simulation and analysis, and presentation of results. T h e package consists of several editors, each focusing on a particular aspect of model ing . It is based on an essentially hierarchical structure for model ing actual networks or distributed systems. There are three major model ing domains: the network domain , the node domain , and the process domain . F o r example, for model ing a simple office L A N , first the actual network is modeled in the network editor using a few nodes and links. E a c h of these nodes is defined in a node m o d e l consisting of a number of modules. Final ly , the functionality of each of these m o d - ules is described in a process model , using state machines and P r o t o - C programming. Data collection is performed in the f o r m of three types of output. 
These are output vectors, output scalars and animation. A n output vector is a collection o f pairs of real values and usually presents the system's performance versus simulation time. In contrast, output scalars are used to collect individual data points across several instances of simulation. A n - imation is rarely used, but it provides a powerful visual tool for debugging. 6.1 Dropper Element T h i s section compares the characteristic of different R E D on shared-memory buffer algo- rithms that were presented in Chapter 3. A l t h o u g h R E D is basically used to interact with an end-to-end congestion manager, such as the T C P congestion control and avoidance mech- anism, both T C P and n o n - T C P traffic sources should be studied for its deployment. T h e 63 CHAPTER 6. SIMULATION MODEL AND RESULTS Endpoint Congestion Management W o r k i n g G r o u p of the I E T F intends to integrate conges- tion management across all applications and transport protocols by introducing a universal congestion manager ( C M ) [18]. T h i s R F C proposes an integrated congestion management universal among all applications and transport protocols that enables application adaptation to congestion, in addition to providing efficient multiplexing. However , before the required implementation of the C M module, T C P w i l l be the only transport protocol that utilizes an end-to-end congestion manager. St i l l , many other applications use U D P and their conges- tion avoidance algorithms are inadequate or non-existent. A l m o s t all real-time and stream- ing applications are U D P - b a s e d and non-responsive to congestion notification. Therefore, it is convenient to classify the traffic streams into two groups: T C P - c o m p a t i b l e flows and non-responsive flows. T h i s section consists of two discussions, regarding the above traffic categorizations, one for n o n - T C P and one for T C P traffic flows. 
In the former, the traffic handling capabil- ity o f the router in terms of packet loss and queuing delay is evaluated for instances when early packet dropping has no meaning to the source. In the latter, the performance of the algorithms in avoiding congestion with m a x i m u m usage of network capacity is tested for instances when the traffic source can sense traffic conditions. T h e simulation results are used to evaluate and compare the f o l l o w i n g properties of the schemes: • the loss ratio seen when traffic sources do not use an end-to-end congestion manger; • the loss ratio and utilization when T C P is the transport protocol of the traffic used, in addition to the throughput of the system; • the packet delay. A l t h o u g h the main focus here is to compare the performance characteristics of R E D algo- rithms on a shared-memory buffer, R E D on dedicated and shared m e m o r y buffers are also 64 CHAPTER 6. SIMULATION MODEL AND RESULTS compared, because of the identical structures of an R - C P scheme and R E D on a dedicated memory queues. 6.1.1 Simulation Model A l t h o u g h two different traffic models, T C P - b a s e d and non-responsive, have been used, the simulation model itself has a single structure. Figure 6.1 shows the schematic of the s i m - ulation model . It consists of four traffic sources, a gateway with a shared-memory buffer and R E D deployment, and the traffic destination. F o u r traffic generators provide four traf- fic streams w h i c h are multiplexed into a single flow by the M u x module . T h e gateway itself is composed of a classifier element that separates the i n c o m i n g traffic into four classes, a shared memory buffer with R E D deployed on it and a scheduler. T h e traffic patterns of the traffic generators are described later in each section. T h e scheduler is a simple work-conserving, non-preemptive, round-robin scheduler. 
A simple scheduler has been deployed to allow the evaluation to reflect only the characteristics o f the dropping algorithms. T o compare the three congestion management schemes, three scenarios have been evaluated. These scenarios differ only in their R E D algorithms, but traffic patterns, simulation parameters, and other parts are similar, making the comparison possible. 6.1.2 Performance of R E D on Shared Memory Buffer Without End- to-End Congestion Control F o r a non-responsive traffic source, the packet dropping at the router has no meaning in terms of congestion notification. Sti l l the R E D dropping has some of its advantages, such as having a lower average queuing delay and lower bias against bursty traffic. In this m o d e l each of the traffic sources (Figure 6.1) generates packets with expo- nentially distributed interarrival time and service duration. T h e R E D parameters for each 65 CHAPTER 6. SIMULATION MODEL AND RESULTS a o N A i Mux a TRAFFIC SOURCES CLASSIFIER QUEUES SCHEDULER SHARED MEMORY o HOST GATEWAY Figure 6.1: T h e schematic of the simulation m o d e l used to evaluate the algorithmic droppers subqueue are chosen as fol lows [47]: min-th(i) = 5 * average packet size max-th(i) = 3 * min-th weight factor = 0.002 M a x drop probability = 0.1 T h e values of the m a x i m u m and m i n i m u m thresholds for the whole queue (used in R - C S and R - S M A ) are calculated thus: max-th = Yll=o max-th(i) min-th = Y%=o niin-th(i) T w o types of traffic patterns have been used to allow performance evaluation of these schemes under different conditions. These patterns are called cases A and B . Case A: i n this case, all sources are generating symmetric traffic, that is, each has an average base traffic rate of 9600 bps and the gateway has a service rate capability of 38400*1.03 bps at all times where the 1.03 factor is used to ensure stability. B o t h packet 66 CHAPTER 6. 
SIMULATION MODEL AND RESULTS interarrival times and packet lengths are generated using exponential distributions. In this case the performance of the system is evaluated when all classes are generating packets with overloaded rates of 100% to 400%. Therefore, every class suffers the same amount of over- load and the router is congested with the same share of each class. T h i s traffic pattern was chosen to demonstrate the performance comparison of R E D schemes under the same con- gestion conditions. T h e physical queue size of the buffer in each scenario has been set to infinity to allow evaluation of the packet dropping due to R E D algorithm only. St i l l , the queues do not grow continuously because of the m a x i m u m threshold used in R E D algo- rithms. Table 6.1 shows the value of R E D parameters used in the m o d e l . min- th (bits) max-th (bits) Buffer size Weight factor M a x D r o p Probability 48000 114000 infinite 0.002 0.1 Table 6.1: R E D Parameters F igure 6.2 shows the packet drop ratio of the three schemes when their load is i n - creased f r o m 100% to 400% of their base rates. T h e packet ratio is the number of packets dropped over the total number of packets generated during the simulation. D u e to R E D algorithm, the schemes initiate packet dropping at approximately 75% of the normal rate, uniformly. However , beyond a 150% load, they show distinct packet losses. T h i s figure shows that when traffic is symmetric, the R - C S has the least, and the R - C P has the worst packet dropping rates. Figure 6.3 shows the E W M A queue size of the buffer of each of the schemes. T h i s is not indicative of the average delay of each queue, but shows the overall status of the buffers, as it is the average value that R E D algorithms calculate to decide on whether a gateway is congested or not. 
T h i s figure also shows that R - C S has a lower value and indicates that it handles traffic more smoothly, without issuing a premature congestion notification. 67 CHAPTER 6. SIMULATION MODEL AND RESULTS 0.9 r Offered load% Figure 6.2: Packet drop ratio 0 1 ' 1 1 ' 1 1 1 50 100 150 200 250 300 350 400 Offered l o a d % Figure 6.3: Average queue size 68 CHAPTER 6. SIMULATION MODEL AND RESULTS Case B: In this case, the characteristics of the schemes under a different traffic pat- r tern are evaluated. Here, only two out of the four traffic generators, class 1 and class 3, cause congestion at the router. These sources overload the router in four intervals during the s i m - ulation. In a simulation with a duration of one hour, they constantly generate packets with their base rates, with an added traffic load of 150% to 300% during 10 to 14, and 35 to 39 minutes for class 1 and during 20 to 24, and 50 to 54 minutes for class 3. T h e simulation R E D parameters are set according to table 6.2: min-th(i) max-th(i) Rate of class 0 & 2 Base rate of class 1 & 3 Buffer size 48000 bits 600000 bits 8000 (bps) 9600 (bps) 1.25 * max-th Table 6.2: R E D Parameters 0.09 0.08 0.07 .9 0.06 cd g-0.05 - * - R -SMA -e- R -CS - * - R -CP 200 250 Overload% Figure 6.4: Packet drop ratio 300 Figure 6.4 shows the packet drop ratio of R - C P , R - C S and R - S M A under this new traffic pattern. T h e horizontal axis, overload%, is the percentage of the added load for the 69 CHAPTER 6. SIMULATION MODEL AND RESULTS 0.045 r O V E R L O A D % Figure 6.5: Packet drop ratio class 0 0.25 r O V E R L O A D % Figure 6.6: Packet drop ratio class 1 70 CHAPTER 6. SIMULATION MODEL AND RESULTS traffic class 1 (or 3). T h i s figure shows the same results as F igure 6.2 with a more distinct performance difference. It shows that R - C S has the smallest and R - C P the largest overall packet dropping rate. 
T h i s is due to the fact that in R - C S , the unused space of other traf- fic aggregates can be shared. However , the smaller total packet dropping o f R - C S has the price tag of larger packet dropping rates for classes 0 and 2, as seen in figure 6.5 for class 0. Nevertheless, this higher packet dropping happens in the high values of overload and has a relatively small drop ratio. Figure 6.6 demonstrates the packet dropping ratio of traffic class 1. Because of their analogous traffic pattern, the performance of class 2 is similar to that of class 0 and class 3 is similar to class 1. Figures 6.4, 6.5 and 6.6 together depict that using R - S M A under the mentioned traf- fic pattern improves the packet dropping rates in the overall buffer and o f classes 1 and 3 in the worst case load scenario, with only a small increase in packet dropping of well-behaved traffic. 30 Time(min) Figure 6.7: Q u e u i n g delay 71 CHAPTER 6. SIMULATION MODEL AND RESULTS T o demonstrate the time based behavior of the schemes, the simulation results for a model with the same traffic pattern as in the above, and with a class 0 and a class 3 added overload of 175%, were sampled. Figure 6.7 shows the queuing delay of R - C S , R - C P and R - S M A schemes for a s im- ulation duration of one hour within the traffic pattern of case B . A s expected, less packet dropping of R - C S w i l l y ie ld to a higher delay, while a higher drop rate of R - C P leads to a lower delay at a time of congestion. However , these schemes are proposed to be used for interactive and background traffic classes, w h i c h have a higher priority for content preser- vation than packet delay. In addition, the m a x i m u m queuing delay can be controlled by the queue size. T h i s figure also clearly shows the traffic pattern. A s expected, the congestion has occurred in the intervals of [10, 14], [20, 24], [35, 39] and [50, 54]. 
Since the R - C P scheme has lower packet drops, it has more backlogged packets and it takes longer to re- turn to its normal queuing delay. Figures 6.8, 6.9 and 6.10 demonstrate the dropping pattern of the schemes during a sampled simulation. These figures demonstrate two facts. First, packet dropping ratios in R - C P and R - S M A are more spread out than in R - C S , while packet dropping in R - C S mainly happens during the peak queue congestion periods. Second, the m a x i m u m value of the drop ratio is low for R - C S , and high for R - C P . R - S M A has a moderate m a x i m u m drop ratio. T h i s m a x i m u m value is an indicator of burst packet dropping. Therefore, one can conclude that R - C S handles bursty traffic more efficiently. 72 C H A P T E R 6. SIMULATION MODEL AND RESULTS P a c k e t d r o p ra t io , H - C P 3 0 t i m e ( m i n ) Figure 6.8: Packet drop pattern of R-CP, Load%=175% P a c k e t d r o p ra t io , R - C S 2 0 3 0 4 0 T i m e ( m i n ) Figure 6.9: Packet drop pattern of R-CS, Load%=175% P a c k e t d r o p ra t io , R - S M A 0 . 0 3 0 1 0 2 0 3 0 4 0 5 0 6 0 T i m e ( m i n ) Figure 6.10: Packet drop pattern of R - S M A , Load%=175% 73 CHAPTER 6. SIMULATION MODEL AND RESULTS 6.1.3 Performance of R E D on a Shared-Memory Buffer with an End- to-End Congestion Manager (TCP Traffic) T h e dominant transport protocol that uses an end-to-end congestion manager is T C P . T h e implementation o f a congestion avoidance mechanism in T C P has been the main reason of proper functionality and effectiveness of the Internet, because it prevents congestion c o l - lapse phenomena. A s the main purpose of R E D deployment is its interaction with T C P to avoid congestion in the network, this section compares the performance of R E D on a shared- memory buffer with traffic originating f r o m an application using T C P . T h e simulation model is based on Figure 6.1 with four F T P traffic sources. 
F T P is the application running on each source, w h i c h sends and receives data through T C P and IP layers, based on the Internet's layered structure. S imilar to the traffic pattern of case B in the previous section, only two of the traffic sources attempt to send data higher than their base rates, with a base traffic rate of 1.8 k B p s and added traffic of 2 k B p s during 4 intervals. T h i s traffic pattern is illustrated in table 6.3 and figure 6.11. ex pa A 380Q Traffic rate of Class 0 Traffic rate of Class 2 1800 •"• : = • 10 20 30 40 50 60 Simulation time (min) Figure 6.11: Traffic pattern of classes 0 and 2, in a simulation of one hour duration 74 CHAPTER 6. SIMULATION MODEL AND RESULTS 10 to 14 20 to 24 35 to 39 50 to 54 Class 0 (Bps) 2000 - 2000 - Class 2 (Bps) - 2000 - 2000 Table 6.3: Traffic pattern of T C P flows A generic T C P model is used that emulates the performance o f an actual T C P with parameter values according to table 6.4. These settings are O P N E T default values. T h e values of the R E D parameters are chosen according to table 6.5 (similar to case B in the previous section). Initial R T O M i n i m u m R T O M a x i m u m R T O R T T G a i n 1 (sec) 0.5 (sec) 64 (sec) 0.125 Deviat ion G a i n R T T Deviat ion Coefficient Persistence Timeout Fast Recovery 0.25 4.0 1.0 (sec) E n a b l e d Fast Retransmit K a r n ' s A l g o r i t h m E n a b l e d Enabled Table 6.4: Default values of T C P parameters used in simulation min-th(i) max-th(i) M a x D r o p Probability Weight Factor Buffer size 15000 (bits) 80000 (bits) 0.1 0.002 1.25 * max-th Table 6.5: R E D parameters T h e m a i n difference between the m o d e l in this section and the one in the previous section is that when a stream suffers f r o m a packet loss it lowers its rate, since packet loss is considered to be feedback of the network's condition, and is an indication o f occurring congestion. 
Therefore, a combination of packet loss and throughput (traffic intended to be received versus actual traffic received) is used for the performance evaluations. Figures 6.12, 6.13 and 6.14 show samples of the packet drop patterns during simulation.

Packet dropping in the R-CP scheme happens quite often, and the value of the drop ratio is much higher than in the other schemes. R-SMA has more frequent drops, with higher instantaneous values, in comparison to R-CS, but fewer drops than R-CP. These figures show that in R-CS and R-SMA, packets are dropped only when the buffer is actually near or in congestion periods. Figures 6.15, 6.16 and 6.17 show the queuing delay patterns of these sharing schemes, and Figure 6.18 shows the averages of these figures overlaid. As a result of its lower packet drops, the queuing delay of R-CS is higher than the other two, while R-CP has the lowest. At times of congestion, R-CS and R-SMA have relatively large peaks, but because of the congestion notification of RED, they return to their normal values afterward.

Figure 6.19 shows the FTP traffic received in each scenario. Since the figure shows almost similar values for the traffic received by the application layer in R-CP, R-CS and R-SMA, and by considering the packet drop ratios, one can conclude that R-CS has the highest and R-CP the lowest throughput.

In observing the simulation results for both TCP and non-TCP traffic, it becomes apparent that R-CS has the lowest packet dropping rates, because it uses memory more efficiently, while performing its congestion avoidance responsibility and handling bursty traffic. However, it has a bias towards non-symmetric streams, so some traffic might monopolize the buffer usage.
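The three policies compared above differ only in the space-admission test applied before RED's probabilistic drop. A schematic sketch is given below; the function names, the equal partitions of R-CP, and the simplified overflow accounting of R-SMA are illustrative assumptions rather than the exact simulation logic.

```python
def accept_cs(occ, cls, pkt_bits, buf_total):
    """Complete Sharing: the whole buffer is pooled across all classes."""
    return sum(occ) + pkt_bits <= buf_total

def accept_cp(occ, cls, pkt_bits, buf_total, n_classes):
    """Complete Partitioning: each class owns a fixed 1/n partition."""
    return occ[cls] + pkt_bits <= buf_total / n_classes

def accept_sma(occ, cls, pkt_bits, buf_total, min_alloc):
    """Sharing with Minimum Allocation: each class has a guaranteed floor;
    occupancy beyond the floors is drawn from a common pool."""
    if occ[cls] + pkt_bits <= min_alloc:
        return True                                  # within the floor
    reserved = sum(min(o, min_alloc) for o in occ)   # space held by floors
    pool_used = sum(occ) - reserved                  # overflow already pooled
    pool_size = buf_total - min_alloc * len(occ)
    return pool_used + pkt_bits <= pool_size
```

The trade-offs observed in the simulations follow directly from these tests: CS lets a heavy class consume the whole pool, CP isolates classes at the cost of idle partitions, and SMA sits between the two.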
Nevertheless, since RED drops packets in proportion to the average queue size, it is less likely that a TCP-compatible class will monopolize the buffer, in comparison to the tail-dropping CS or SMA schemes. R-SMA performs almost identically to R-CS, but with more fairness, by providing a minimum guaranteed space for each class. R-CP is the least efficient at memory usage, but is the best option for asymmetrical traffic. These algorithms can easily be extended to support weighted thresholds and, therefore, provide more fairness in addition to efficiency.

Figure 6.12: Packet drop pattern, R-CP

Figure 6.13: Packet drop pattern, R-CS

Figure 6.14: Packet drop pattern, R-SMA

Figure 6.15: Queuing delay pattern, R-CP

Figure 6.16: Queuing delay pattern, R-CS

Figure 6.18: Average of queuing delay pattern

6.2 Scheduler Element

This section addresses the performance evaluation of the DDB-FFQ scheduling algorithm presented in Chapter 4. Since this algorithm is basically a modified version of FFQ, the following simulation models are aimed at evaluating the new features of this scheme; that is, the simulation results have two objectives. First, the capability of this scheme to assign delay bounds and bandwidth allocations independently is shown.
Second, the performance of the system when one of the streams misbehaves is assessed, which is an indicator of its link-sharing ability.

6.2.1 Simulation Model

The schematic of the simulation model used is shown in Figure 6.20. The model consists of the traffic sources, the router and the destination node. The traffic sources generate four classes of flows, which are multiplexed together in the Mux block. Each class is assigned 24% of the link bandwidth, which limits the utilization to 0.97, to meet the necessary stability condition. However, the classes are assigned different delay bounds. Class 1 is the most delay sensitive; its delay bound is set to 1 sec and its bandwidth is 9600 bps. Class 2 is also delay sensitive; its delay bound is set to 2 sec, with a 9600 bps bandwidth. Class 3 has a loose delay sensitivity of 6 seconds, and Class 4 is not delay sensitive but is allocated a 9600 bps bandwidth.

An incoming packet is first classified into the 4 classes, and packets belonging to each of these classes are then stored in a dedicated buffer. Each stream goes through a rate-estimator module that estimates the rate of each traffic class. These collected rate statistics are fed into the scheduler. The rate estimator is an EWMA with a weight factor of u = 0.02. The scheduler uses the DDB-FFQ scheduling algorithm.

Figure 6.20: The schematic of the DDB-FFQ simulation model

To achieve the above-mentioned delay constraints, the scheduler parameters defined in Chapter 5 need to be set appropriately, using equation (4.12). Table 6.6 shows the values of the FFQ rates and allocated rates used for the simulation.
                        Class 1   Class 2   Class 3   Class 4
Allocated rates (bps)     9600      9600      9600      9600
FFQ rates (bps)          14000     11000      7000      3270

Table 6.6: The values of the FFQ and assigned rates

6.2.2 Simulation Results

To examine the decoupled delay-bandwidth and link-sharing abilities of the DDB-FFQ system, the rate of traffic Class 2 is set as the changing variable. It varies from 100% to 500% of its allocated rate, and the observed delay and the actual assigned bandwidth of each class are measured. Traffic sources 1, 3 and 4 generate packets according to their allocated rates.

Figure 6.21 illustrates the 90% delay observed by each traffic class. Since the links are ideal, the measured delay accounts only for queuing delay. The horizontal axis is the percentage of Class 2's actual rate over its assigned rate. The vertical axis is the 90% point of the delay distribution of each traffic class over the simulation duration. This figure demonstrates several facts. Misbehaving traffic has little effect on the delay observed by other sessions. The Class 1 delay is constant regardless of the rate of Class 2. The delay sensed by packets of traffic Class 3 varies somewhat, but its value in the worst-case scenario of 500% is approximately 133% of its delay bound. This case can easily be avoided by limiting the maximum peak rate of each traffic class to an appropriate value. Since traffic Class 2 is misbehaving, there is no guarantee that the router will keep its promise of bounding its delay. Hence, it is punished according to its rate, and its observed delay increases as its rate increases. The delay of traffic Class 4 initially increases by about 30% with the increase of the Class 2 rate, but at higher rates it can take advantage of the punishment applied to Class 2. This behavior is acceptable, since the background traffic class has a loose delay requirement.
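The decoupling exercised by Table 6.6 can be illustrated schematically. In FFQ-style disciplines a packet's finish tag advances by packet-length over rate, and in DDB-FFQ it is the FFQ rate, not the allocated rate, that enters this computation, so the delay bound can be tightened independently of the bandwidth share. The sketch below is only an assumption-level abstraction of this idea; the actual timestamp computation is equation (4.12) of Chapter 4.

```python
def finish_tag(v_time, prev_finish, pkt_bits, ffq_rate):
    """Sketch of an FFQ-style timestamp: the start tag is the later of the
    system virtual time and the class's previous finish tag, and the
    finish tag advances by packet-length / FFQ-rate.  A class with a
    larger FFQ rate (e.g. Class 1's 14000 bps versus Class 4's 3270 bps
    in Table 6.6) therefore gets earlier tags, i.e. a tighter delay
    bound, even though all four classes are allocated 9600 bps."""
    start = max(v_time, prev_finish)
    return start + pkt_bits / ffq_rate
```

A separate bandwidth-control mechanism, driven by the rate estimator, keeps each class's long-term share at its allocated rate, which is exactly the independence tested in this section.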
Figure 6.22 depicts the service rate received by each traffic class while Class 2 misbehaves. The vertical axis is the percentage of the actual allocated rate over the assigned rate for each class, and the horizontal axis is the percentage of the Class 2 rate over its assigned rate. It can be discerned from the figure that the allocated rate of each class varies by at most 5% of its assigned rate. This holds even for the misbehaving stream. This means that the scheduler has been able to successfully limit or allocate each class's guaranteed rate. Since the bandwidth control mechanism is set to take effect when the rate measured by the estimator exceeds 125% of the assigned rate, Class 2 traffic initially gets higher bandwidth as its rate increases, and the Class 4 traffic, which has lower priority, gets lower bandwidth. Later, however, when the DDB-FFQ bandwidth control mechanism comes into effect, the trend is reversed. These figures elucidate that the only consequence for misbehaving traffic is that it will not be treated within its requested delay bound, which is a reasonable punishment.

Figure 6.21: 90% delay observed by all traffic classes when class 2 is misbehaving.

Figure 6.22: Bandwidth allocated to each class when class 2 is misbehaving.

6.3 Multiple Node Simulation Model and Results

In the previous sections, the performance of the proposed algorithms for the dropping element and the scheduler was discussed individually. In this section, the structure of a DiffServ-based UMTS backbone network is presented that has three characteristics.
First, each node of the network utilizes the basic router architecture needed in a DiffServ router (the classifier, dropper, meter, queuing element and scheduler), with the algorithms discussed in this thesis deployed on them. The scheduler uses the DDB-FFQ algorithm, interacting with the rate-estimator module. R-CS is proposed for use within the subgroups of a traffic aggregate, as discussed in Chapter 5. Second, the model utilizes a multiple-node network consisting of an ingress node, a core node and an egress node; the ingress and egress nodes are also the GGSN and SGSN, respectively. Third, the model considers all of the UMTS traffic classes with a more realistic traffic model. Figure 6.23 shows the OPNET network model and Figure 6.24 shows its schematic in more detail.

The simulation model considers only the backbone network portion of the datapath and consists of a set of traffic sources, a simple core network, and destination nodes (Figures 6.23, 6.24). The model exploits the built-in traffic generators of OPNET to generate a traffic flow consisting of both TCP-compatible and non-responsive streams. The sources are also chosen to include a combination of all the UMTS traffic classes; thus, the flow covers the conversational, streaming, interactive and background classes. Five aggregates build the traffic stream going through the network.

• Network Control traffic: NC is the traffic used for the signaling or network management messages that are used internally in a DiffServ domain. It occupies a very small portion of the bandwidth, but has the highest priority and very stringent delay sensitivity. Its required bandwidth should be provisioned so that its flow within the network has a very small effect on the guaranteed services of the actual DiffServ user data. Here,
it occupies 0.4% of the bandwidth and has a fixed packet size that might increase the delay bounds of the DDB-FFQ system by a small additive constant. This group is also identified as Class 0 in the simulation results.

Figure 6.23: OPNET network model

• Conversational traffic: This traffic is an aggregate of the voice application, which holds stringent delay and jitter requirements. This aggregate is identified as Class 1 in the simulation results. Three traffic sources build this aggregate, and each source generates packets with exponentially distributed interarrival times and packet sizes. Although the exponential distribution is not an accurate model for conversational traffic, this aggregate represents it in terms of delay sensitivity.

• Streaming traffic: This traffic aggregate, identified as Class 2, represents the aggregate of the streaming flows (video and audio streaming) in terms of delay requirements. Its rate varies from 100% to 400% of its assigned rate, to evaluate the independence of the QoS observed by the aggregates from each other. The traffic source outputs packets with exponentially distributed interarrival times and packet sizes.

Figure 6.24: The schematic of the simulation network

• Interactive traffic: An aggregate of three HTTP browsing applications represents this traffic, identified as Class 3. The characteristics of this aggregate class are moderate delay sensitivity and contents preservation.

• Background traffic: This aggregate consists of two instances of FTP applications and three instances of e-mail traffic; it has a low delay requisite and is identified as Class 4 in the results.
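The non-responsive sources above can be abstracted as a small generator sketch. The exponential interarrival-time and packet-size draws follow the description; the function name, its parameters and the returned list are illustrative assumptions standing in for the OPNET generators.

```python
import random

def exp_traffic(mean_gap_s, mean_size_bits, n_packets, seed=None):
    """Sketch of one non-responsive traffic source: both the packet
    interarrival time and the packet size are drawn from exponential
    distributions.  Returns a list of (arrival_time, size_bits) pairs."""
    rng = random.Random(seed)
    t = 0.0
    packets = []
    for _ in range(n_packets):
        t += rng.expovariate(1.0 / mean_gap_s)   # next arrival instant
        packets.append((t, rng.expovariate(1.0 / mean_size_bits)))
    return packets
```

Scaling `mean_gap_s` down by a factor of up to 4 reproduces the 100% to 400% overload sweep applied to the streaming aggregate.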
The source generators of HTTP, FTP and E-mail use the TCP/IP layered structure and represent the TCP-based traffic, while the other classes are non-responsive real-time applications.

These traffic classes have distinct delay bounds and bandwidth assignments. The bandwidth allocated to Class 0 is 0.4% of the service rate, which is a very small portion of the bandwidth. Its packet size and peak burst rate are also well defined, so that its effect on the delay bound as defined in DDB-FFQ is negligible. Each of the other traffic aggregates is allocated almost the same bandwidth, about 25% of the link bandwidth. However, they have distinct delay bounds and priorities. Class 1 has the highest, and Class 4 the lowest, delay requirement.

Figure 6.25: The schematic of the simulation router's configuration

Figure 6.25 depicts the structure of every router in the model, which performs packet forwarding according to a packet's DSCP. The router has the essential components needed for proper DiffServ functionality: the classifier, the meter, the droppers, the queuing block and the scheduler. The incoming packet is first classified. If the node is an edge node, the packet's QoS is mapped to a proper PHB and the packet is then encapsulated, carrying the associated DSCP in its header. The packet is then passed through a rate estimator to a dropper block. An accepted packet is then enqueued, and the scheduler forwards it to the next hop using the DDB-FFQ service discipline.

Dropping Element: The proposed structure for the buffer management function in our model is as follows. For each aggregate group, a dedicated buffer is allocated; that is, each of the group 1 to 5 aggregate classes has its own dedicated buffer.
However, each of these buffers uses a memory-sharing scheme to store the packets of the subgroups defined within each group. Group 2 uses a meter that sends the incoming packets to an absolute dropper whenever the average rate is greater than 120% of its assigned rate. The remaining traffic is scheduled for service under the DDB-FFQ or FFQ schemes, according to whether it is below or above its assigned bandwidth, respectively. Group 3 uses a tail-dropping buffer and accepts packets up to its full capacity. The DDB-FFQ scheme is responsible for assigning bandwidth to it, up to its assigned rate. Groups 4 and 5 use a RED algorithmic dropper with a RED on Complete Sharing (R-CS) scheme [48] implemented on it, since these two groups build the non-real-time aggregates.

Scheduler Element: The scheduler is a combination of the DDB-FFQ scheduling system discussed in Chapter 4 and absolute priority scheduling. The NC traffic aggregate is given absolute priority over the other classes and is not scheduled under DDB-FFQ. The other classes are scheduled according to the DDB-FFQ scheme. Class 0 is treated differently for two reasons. First, it is internal network management traffic, and there is no need to control its bandwidth, assuming the traffic originating from the network operator is well behaved; therefore, using absolute priority simplifies the system. Second, it has very stringent delay requirements and needs to be serviced as soon as possible. However, this must not degrade the QoS of the other traffic. In contrast, Classes 1 to 4 are user generated and need to be scheduled according to their delay and bandwidth requirements.

A bandwidth and a delay bound are assigned to each traffic group by using the corresponding assigned rate and FFQ rate of the DDB-FFQ scheduler, respectively.
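The per-packet path through each router can be sketched end to end. The EWMA meter, the 120% absolute-drop rule for Group 2, and the absolute priority of NC at dequeue time follow the description above; the data structures, the per-call rate update, and the class-order stand-in for DDB-FFQ tag ordering are simplifying assumptions.

```python
class EwmaMeter:
    """Time-sliding-window style rate estimator: an EWMA of per-interval
    rate samples (the thesis uses weight u = 0.02)."""
    def __init__(self, u=0.02):
        self.u, self.rate = u, 0.0

    def update(self, bits, interval_s):
        sample = bits / interval_s
        self.rate = (1 - self.u) * self.rate + self.u * sample
        return self.rate

def forward(pkt, meters, assigned_bps, queues, interval_s=1.0):
    """Classify on the DSCP class, meter the class's rate, apply the
    class's drop rule, then enqueue.  Only Group 2's absolute dropper
    (drop while the average rate exceeds 120% of the assigned rate) is
    modelled here."""
    cls = pkt["cls"]
    rate = meters[cls].update(pkt["bits"], interval_s)
    if cls == 2 and rate > 1.2 * assigned_bps[cls]:
        return False                      # absolute drop for Group 2
    queues[cls].append(pkt)
    return True

def dequeue(queues):
    """NC (class 0) has absolute priority; the remaining classes would be
    served in DDB-FFQ tag order, stood in for here by class order."""
    for cls in sorted(queues):
        if queues[cls]:
            return queues[cls].pop(0)
    return None
```

In the full model the Group 3 tail-drop and the Group 4/5 R-CS droppers would replace the single rule shown, and the dequeue order among Classes 1 to 4 would come from the DDB-FFQ timestamps.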
Since the scheduler is non-preemptive, the delay bounds are affected by the maximum packet size of even lower-priority traffic [49]. Therefore, it is necessary that the edge routers perform segmentation and re-assembly on large packets, depending on the system delay bounds. Here, the delay bounds are set by considering the 90% packet size as the maximum packet size, giving a 90% packet delay bound. Table 6.7 shows the values of the internal state parameters of the DDB-FFQ scheduler, the FFQ rates and the assigned rates.

                        Class 1   Class 2   Class 3   Class 4
Allocated rates (bps)    20000     20000     21200     18000
FFQ rates (bps)          42000     30000      5000      2200

Table 6.7: Values of FFQ and assigned rates

Meter: The meter implementation used here is of the time-sliding-window type; it is an exponentially weighted moving average (EWMA) rate estimator [49]. This module is also a building block of the DDB-FFQ service discipline.

The mapping system used for this traffic model is summarized in Table 6.8.

Group   G0   G1   G2    G3    G4
PHB     NC   EF   AF1   AF2   BE

Table 6.8: Basic mapping from UMTS to DiffServ

Each of the groups can be divided into as many subgroups as needed. For example, a background traffic class can be mapped to the BE, LBE and BH PHBs. What really matters for the proper performance of the QoS system is the level of forwarding treatment each group receives; therefore, the names and codepoints can simply be implementation dependent, especially for an isolated DS network.

The core network consists of an SGSN, a core node and a GGSN. The SGSN and GGSN nodes also work as DiffServ edge routers, and they perform the IP encapsulation [50] and mapping functions. The core node only applies the PHB associated with the codepoint marked in each packet header, which is a simple classification task.
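The edge-node marking implied by Table 6.8 reduces to a lookup, sketched below. The DSCP values shown are the standard IETF codepoints (CS6 = 48, EF = 46, AF11 = 10, AF21 = 18, BE = 0); as noted above, an isolated DS network may just as well choose its own, and the dictionary-based packet header is a simplification.

```python
# One possible concretisation of the Table 6.8 mapping, using the
# standard IETF codepoints for each PHB group.
UMTS_TO_DIFFSERV = {
    "network_control": ("NC",  48),   # G0: internal signalling (CS6)
    "conversational":  ("EF",  46),   # G1: voice
    "streaming":       ("AF1", 10),   # G2: audio/video streaming (AF11)
    "interactive":     ("AF2", 18),   # G3: e.g. HTTP browsing (AF21)
    "background":      ("BE",   0),   # G4: FTP, e-mail
}

def mark(pkt, umts_class):
    """Edge-node marking: map the UMTS class to its PHB group and write
    the corresponding DSCP into the (simplified) packet header."""
    phb, dscp = UMTS_TO_DIFFSERV[umts_class]
    pkt["phb"], pkt["dscp"] = phb, dscp
    return pkt
```

Subgroup PHBs (e.g. splitting the background class into BE, LBE and BH) would simply add entries to the table without changing the marking step.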
The objective of the computer simulation is to evaluate the performance of the model in terms of the QoS constraints: latency, bandwidth and packet loss. Therefore, OPNET calculates and records the delay observed by each aggregate, the bandwidth assigned to each group relative to its assigned rate, and the packet loss ratio.

The independence of the service received by each traffic class is demonstrated by observing the delay and allocated bandwidth of each of them while the rate of the Class 2 traffic varies from 100% to 400% of its preset rate. In a DiffServ router, the overload of misbehaving traffic can be confronted by accepting packets only when the rate is below a percentage of the assigned rate, as is done for the EF PHB. However, for other kinds of traffic, for example the streaming aggregate, it might be useful to accept packets until the corresponding queue is full, while ensuring this traffic will not degrade the service of the other streams. The size of this queue should be set to provide an acceptable maximum delay.

Figure 6.26: Edge-to-edge delay versus class 2 load%

Figure 6.27: Allocated bandwidth versus class 2 load%

Figure 6.28: Delay of traffic class 1

Figure 6.30: Delay of traffic class 4

Figure 6.26 shows the delay observed by each application, and Figure 6.27 illustrates the actual bandwidth assignments when the aggregate rate of the Class 2 traffic varies from 100% to 400% of its base rate. The figures show that the PHB treatment of each aggregate is mostly independent of the behavior of the other aggregates, while the aggregates are offered distinct ranges of QoS.
Although the rate of traffic aggregate 2 changes widely, the delay bound and bandwidth allocated to the other traffic streams do not degrade, and they vary within an acceptable range of their assigned QoS values. Class 2 itself has two bandwidth assignments, depending on whether or not it is behaving. Figure 6.26 shows that the delay of each traffic aggregate does not degrade, other than that of the misbehaving streaming traffic.

Figure 6.27 shows an initial decrease of the bandwidth assignment for the streaming traffic and, correspondingly, an increase for the interactive and background aggregates. This is a direct result of the method of adjustment proposed [49] for the DDB-FFQ scheduling scheme, which modifies the bandwidth according to the minimum FFQ rate of the system. This means that well-behaving traffic will get at least its assigned bandwidth, while misbehaving traffic is allocated a constant bandwidth, regardless of its rate, once the bandwidth control mechanism is in effect.

Figures 6.28, 6.29 and 6.30 depict the delay observed by aggregate Classes 1, 3 and 4, respectively. These figures show that, initially, the delay drops until the DDB-FFQ link-sharing mechanism comes into complete effect, and it then stays almost constant.

Figure 6.31 shows the packet drop ratio of aggregate Class 2, which increases as its rate increases. The conversational traffic observes no packet dropping, and aggregate Classes 3 and 4 observe very small packet drop rates due to their RED mechanism.

Figure 6.31: Drop ratio of aggregate class 2

Chapter 7

Conclusions

7.1 Summary

This thesis presented an insight into QoS support in the backbone network of the UMTS or GPRS system using the DiffServ model.
It covered the structural model of a DiffServ router appropriate for deployment in a UMTS backbone network, in addition to providing a basic methodology for QoS mapping between the UMTS and DiffServ technologies. In particular, three notable components of the router have been studied: the scheduler, the dropper and the meter.

A novel scheduling algorithm was proposed that has the ability to provide decoupled delay bounds and bandwidth allocation. This property is an essential factor in allowing a DiffServ network operator to define and set diverse PHBs. A set of novel RED on shared-memory schemes has also been proposed and compared, showing that their deployment will improve the performance of the system in response to congestion, in the form of lower packet loss. In addition, the up-to-date QoS characteristics of both UMTS and DiffServ have been addressed, and a basic mapping between these two systems has been provided. The key contributions of this thesis are as follows:

• DDB-FFQ scheduling system. DDB-FFQ is an efficient and simple service discipline that supports decoupled delay bound and bandwidth allocation for each aggregate class. It also obeys link-sharing principles, by providing a predictable amount of bandwidth to each traffic class during periods of congestion. This link-sharing capability suits the DDB-FFQ system to a variety of other scenarios.

• Proposal and comparison of RED on shared-memory schemes: R-CS, R-CP, R-SMA. This thesis studies the deployment of RED on shared buffers for the first time, by deriving three RED schemes from the existing sharing policies. A brief analytical expression of RED, R-CS and R-CP is also provided.

• QoS mapping between the two QoS systems: UMTS and DiffServ.
This thesis examines the QoS-related aspects of DiffServ and UMTS, and proposes a basic QoS mapping between them. This method can be extended to include an arbitrary number of subgroup PHBs.

• Structural study of a DiffServ-based UMTS backbone router. Two separate simulation models have been built to evaluate the performance of the DDB-FFQ scheduling system and the dropping algorithms. The performance of the RED on shared-memory buffer algorithms has been compared in terms of packet loss and queuing delay. Deployment of R-CS was proposed within the subgroups of each TCP-compatible PHB group. Finally, a multiple-node network simulation model was presented that has the following characteristics. First, it uses a traffic model consisting of both TCP-compatible and non-responsive applications, covering all UMTS traffic classes. Second, it provides a basic but sufficient configuration for a DiffServ router applicable in the UMTS backbone. Finally, it illustrates the sound performance of the router in a multiple-node model.

Although there have been some experimental deployments of GPRS and DiffServ, the actual deployments of UMTS, GPRS and DiffServ are still in the draft stage. The next section provides some potential directions for future work.

7.2 Future Work

The prospect of implementing DiffServ on the UMTS backbone network is a massive project that will require resolving a wide range of issues. Both the DiffServ and UMTS technologies are evolving rapidly and, accordingly, demand ongoing research and adjustments. They are in the early stages of research and have not actually been implemented yet.
This thesis itself illuminates a number of possible future pathways. Only three DiffServ components have been taken into detailed consideration, while the other elements have not been touched. The following list presents suggestions for future work within the scope of the topic presented here:

• The method DDB-FFQ applies to control the allocated bandwidth of each class is only one of numerous feasible methods. There might be a better way of achieving this objective while preserving the concept of using two sets of rates, for delay and bandwidth, as proposed for DDB-FFQ.

• Other methods of rate estimation have been presented in the literature. Further studies may lead to other schemes that improve the performance of the proposed scheduling system.

• An important method of traffic management is traffic shaping, which was not touched on in this thesis. Applying a traffic-shaping block to the system might provide a system improvement.

• The proposed RED schemes can be extended to consider push-out and/or to utilize weighted threshold parameters for RED.
Glossary

3G: 3rd Generation
AF: Assured Forwarding
BE: Best Effort
BSC: Base Station Controller
CN: Core Network
DDB-FFQ: Decoupled Delay-Bandwidth FFQ
DiffServ, DS: Differentiated Services
DSCP: DiffServ CodePoint
ECN: Explicit Congestion Notification
EF: Expedited Forwarding
ETSI: European Telecommunications Standards Institute
EWMA: Exponential Weighted Moving Average
FDD: Frequency Division Duplex
FFQ: Frame-based Fair Queueing
FTP: File Transfer Protocol
GGSN: Gateway GPRS Support Node
GPRS: General Packet Radio Service
GPS: Generalized Processor Sharing
GSM: Global System for Mobile Communications
IETF: Internet Engineering Task Force
IP: Internet Protocol
Iu: UTRAN interface between the radio network controller (RNC) and the CN
MT: Mobile Termination
OPNET: OPtimum NETwork performance modeler
PDB: Per Domain Behavior
PDP: Packet Data Protocol
PHB: Per Hop Behavior
PLMN: Public Land Mobile Network
PSTN: Public Switched Telephone Network
QoS: Quality of Service
R-CP: RED on Complete Partitioning
R-CS: RED on Complete Sharing
R-SMA: RED on Sharing with Minimum Allocation
RED: Random Early Detection
SAP: Service Access Point
SDU: Service Data Unit
SGSN: Serving GPRS Support Node
TCP: Transmission Control Protocol
TDD: Time Division Duplex
TE: Terminal Equipment
ToS: Type of Service
UTRAN: UMTS Terrestrial Radio Access Network
UMTS: Universal Mobile Telecommunications System

Bibliography

[1] G. Brasche and B. Walke, "Concepts, services and protocols of the new GSM phase 2+ general packet radio service," IEEE Communications Magazine, vol. 35, pp. 94-104, August 1997.

[2] ETSI TS 123 107, "UMTS QoS concept and architecture," December 2000.

[3] J. Cai and D. J. Goodman, "General packet radio service in GSM," IEEE Communications Magazine, vol. 35, pp. 122-131, October 1997.

[4] C. Bettstetter, H. Vogel, and J.