Quality of Service Support in the Backbone Network of the Universal Mobile Telecommunications System Using DiffServ Model

by

Farshid Agharebparast
B.Sc. in Electrical Engineering, Isfahan University of Technology, Iran, 1995

A THESIS SUBMITTED IN PARTIAL FULFILMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF APPLIED SCIENCE in THE FACULTY OF GRADUATE STUDIES (Department of Electrical and Computer Engineering)

We accept this thesis as conforming to the required standard

The University of British Columbia
October 2001

© Farshid Agharebparast, 2001

In presenting this thesis in partial fulfilment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the head of my department or by his or her representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

Department of __________
The University of British Columbia
Vancouver, Canada

Abstract

The performance of the backbone bearer service of the Universal Mobile Telecommunications System (UMTS), or of its 2.5G predecessor, the General Packet Radio Service (GPRS), has a considerable impact on the end-to-end Quality of Service (QoS) observed by a traffic flow. The purpose of this thesis is to provide a basic methodology for provisioning QoS in this backbone network by using Differentiated Services (DiffServ), an IP-based QoS technology. It focuses on two important issues. First, a basic plan for QoS mapping from one system to the other is presented. Second, the structure of a DiffServ-based UMTS backbone router is studied, which can provide the appropriate functionality of forwarding packets with PHBs equivalent to the original UMTS QoS. In particular, three major components of a DiffServ router, namely the scheduler, the dropper, and the meter, have been addressed, and novel algorithms have been proposed to make them suitable for a UMTS backbone router. This thesis proposes an efficient, fair scheduling and link sharing system called DDB-FFQ, which has the capability of assigning decoupled delay bounds and bandwidth allocations. This thesis also studies the implementation of Random Early Detection (RED) on a shared-memory buffer and proposes new schemes that have improved packet loss ratios. The performance of a multiple node backbone network is then evaluated by using the OPNET simulation package.
The results show that in terms of delay and allocated bandwidth, the routers are able to provide the PHBs considered for each traffic aggregate. They also illustrate the link sharing capability of the router in times of congestion.

Contents

Abstract
Contents
List of Tables
List of Figures
Acknowledgements
Dedication

1 Introduction
1.1 Motivation
1.2 Research Goals
1.3 Thesis Outline

2 Background Information
2.1 An Overview of UMTS/GPRS
2.1.1 GPRS System Structure
2.1.2 UMTS/GPRS QoS Architecture
2.2 DiffServ Architecture
2.2.1 DiffServ Router Model

3 Random Early Detection (RED) Algorithms for Shared Buffers
3.1 Introduction
3.2 RED on a Single Queue
3.3 Multi-Class RED
3.3.1 Introduction
3.3.2 Memory Sharing Policies
3.4 RED on a Shared Memory Multi-Class Buffer
3.4.1 RED on Complete Partitioning (R-CP)
3.4.2 RED on Complete Sharing (R-CS)
3.4.3 RED on Sharing with Minimum Allocation (R-SMA)
3.5 Analysis of RED Mechanism
3.5.1 Tail Dropping Queue
3.5.2 RED on a Single Queue
3.5.3 A Multi-class RED on Complete Sharing (R-CS)
3.5.4 A Multi-class RED on Complete Partitioning (R-CP)
3.6 Comparison Between Tail Drop, R-CS, and R-CP
3.7 RED Deployment for DiffServ-Based UMTS Backbone Router

4 DDB-FFQ Scheduling Scheme
4.1 Introduction
4.2 Frame-based Fair Queuing (FFQ)
4.3 Decoupled Delay-Bandwidth FFQ (DDB-FFQ)
4.4 The Rate Estimator Module/Meter

5 Mapping and Classification
5.1 Introduction
5.2 UMTS/GPRS QoS Classes and Attributes
5.3 DiffServ PHB Groups
5.4 Mapping
5.5 Classification

6 Simulation Model and Results
6.1 Dropper Element
6.1.1 Simulation Model
6.1.2 Performance of RED on Shared Memory Buffer Without End-to-End Congestion Control
6.1.3 Performance of RED on a Shared-Memory Buffer with an End-to-End Congestion Manager (TCP Traffic)
6.2 Scheduler Element
6.2.1 Simulation Model
6.2.2 Simulation Results
6.3 Multiple Node Simulation Model and Results

7 Conclusions
7.1 Summary
7.2 Future Work

Glossary
Bibliography

List of Tables

5.1 Expected range for UMTS bearer service QoS attributes
5.2 Recommended DSCP values for AF PHB
6.1 The values of RED parameters in simulation case A, non-TCP traffic
6.2 The values of RED parameters in simulation case B, non-TCP traffic
6.3 Traffic pattern of TCP flows
6.4 Default values of TCP parameters used in simulations
6.5 The values of RED parameters used in simulation, TCP traffic
6.6 Simulation values of FFQ and assigned rates, single node
6.7 Simulation values of FFQ and assigned rates, multiple node
6.8 Basic mapping used from UMTS to DiffServ

List of Figures

1.1 The GPRS system infrastructure
2.1 The structure of the GPRS backbone network
2.2 UMTS QoS architecture
2.3 DiffServ domain structure
2.4 A DiffServ router's major functional blocks
2.5 The structure of a DiffServ router
2.6 An example of traffic conditioning blocks in a DiffServ router
3.1 RED dropping mechanism
3.2 Comparison of RED thresholds in RED-CP and RED-CS schemes
3.3 M/M/1/K queuing system
3.4 Poisson traffic property that can be split or combined
3.5 Drop probability of M/M/1/K tail drop, R-CP and R-CS; analytical results
4.1 Behavior of the base-potential function and potential functions in a fluid FFQ server
4.2 Variables used for interarrival time estimation
5.1 The structure of the GPRS backbone network
6.1 The schematic of the simulation model used to evaluate the algorithmic droppers
6.2 Packet drop ratio, simulation case A
6.3 Average queue size, simulation case A
6.4 Packet drop ratio, simulation case B
6.5 Packet drop ratio class 0, simulation case B
6.6 Packet drop ratio class 1, simulation case B
6.7 Queuing delay, simulation case B
6.8 Packet drop pattern of R-CP, simulation case B
6.9 Packet drop pattern of R-CS, simulation case B
6.10 Packet drop pattern of R-SMA, simulation case B
6.11 Traffic pattern, TCP traffic
6.12 Packet drop pattern for R-CP, TCP traffic
6.13 Packet drop pattern for R-CS, TCP traffic
6.14 Packet drop pattern for R-SMA, TCP traffic
6.15 Queuing delay pattern for R-CP, TCP traffic
6.16 Queuing delay pattern for R-CS, TCP traffic
6.17 Queuing delay pattern for R-SMA, TCP traffic
6.18 Average of queuing delay patterns, TCP traffic
6.19 FTP traffic received
6.20 The schematic of the DDB-FFQ simulation model
6.21 90% delay, DDB-FFQ scheduling system
6.22 Bandwidth allocated to each class, DDB-FFQ scheduling system
6.23 OPNET network model
6.24 The schematic of the simulation network, multiple nodes
6.25 The configuration of a router in a multiple node simulation model
6.26 Edge-to-edge delay, Multiple node model
6.27 Allocated bandwidth, Multiple node model
6.28 Delay of traffic class 1, Multiple node model
6.29 Delay of traffic class 3, Multiple node model
6.30 Delay of traffic class 4, Multiple node model
6.31 Drop ratio of aggregate class 2, Multiple node model

Acknowledgments

"There is no such thing as a 'self-made' man. We are made up of thousands of others. Everyone who has ever done a kind deed for us, or spoken one word of encouragement to us, has entered into the make-up of our character and of our thoughts, as well as our success." - George Matthew Adams

I am appreciative of my advisor, Professor Victor C.M. Leung, for his support and valuable guidance during this project. He has been patiently available for any question or discussion anytime I needed him. It was a rewarding experience working with him.

During my course of study here at UBC, I enjoyed the friendship of a number of talented and amicable individuals. Especially, I would like to thank my colleagues in the Communications group: Cyril Iskander, Vaibhav Dinesh, Hansen Wang and Xinrong Wang. It was fun participating in technical and nontechnical discussions with them. I am also grateful to the faculty and staff of the Electrical and Computer Engineering Department for their valuable lectures and assistance in making my study more enjoyable here. I also would like to express my appreciation to Ms. Kirsty Barclay for proofreading my thesis.

Finally, I would like to thank my parents for their deep love and unswerving devotion. This would never have happened without their dedication and encouragement.

This work was supported by Motorola Canada Ltd., Richmond, B.C., Canada and the Natural Sciences and Engineering Research Council under Grant CRDPJ 223095.

To my beloved parents.

Chapter 1

Introduction

1.1 Motivation

Third generation (3G) mobile telecommunications systems are promising technologies that will bring a variety of new services to mobile users. They are aimed at providing full access to multimedia, real-time and data applications, such as mobile Internet access, email service, video streaming and voice over IP, with a reasonable grade of service. To achieve these goals, these systems should be able to offer differentiated services to different traffic flows, depending on the demanded Quality of Service (QoS). The ideal for this QoS structure is to be "future proof", able to accommodate any potential new application.

To provide a smooth evolution from a second generation (2G) wireless system to a 3G system, 2.5G architectures have been proposed. A 2.5G system is usually an enhancement of a 2G system, making a higher rate of data services available.

The Universal Mobile Telecommunications System (UMTS) is a promising 3G wireless system based on the Global System for Mobile communications (GSM).
GSM is a 2G wireless system used predominantly throughout the world. It was originally standardized by a European organization (ETSI), and is now the leading digital cellular technology, especially in Europe and Asia. General Packet Radio Service (GPRS) [1] is standardized by ETSI as a GSM enhancement providing true packet data services. It has the advantage of being built on the existing GSM network: by adding only two new nodes to the GSM system and upgrading the software of other nodes, it can provide a true packet data system. GPRS also has more efficient radio link utilization and supports QoS. It is anticipated that GPRS's core network will evolve into the core network of UMTS. Due to the similarity between the QoS architecture and the backbone network structure of UMTS and GPRS release 99, the two terms are used interchangeably in most of the topics covered in this thesis, and therefore, any discussion of one can be applied to the other, unless otherwise stipulated.

Provisioning reasonable service to real-time applications is viable only when the system can guarantee a well-defined end-to-end service. The end-to-end QoS, that is, the QoS from one Terminal Equipment (TE) to another TE as observed by a user, is actually the overall performance of the different bearer services constructing an end-to-end connection path. The UMTS QoS specification [2] divides the end-to-end QoS into three main levels: the MT/TE bearer service, the UMTS bearer service, and the external bearer service. The MT/TE bearer service is the user's local interface, for example, between the user's laptop and his cell phone. QoS support of this bearer service is beyond the scope of the UMTS QoS specification. The external bearer service is the section of the datapath located outside the UMTS infrastructure, usually within the Internet. The UMTS bearer service itself consists of a radio interface and the core network.

Figure 1.1 shows the GPRS/UMTS system infrastructure [3]. It also illustrates the path a packet might take when it is transmitted from a wireless TE to an Internet TE. The overall QoS observed by this packet therefore depends on the behavior of each network section it passes through.

The focus of this project is the study of QoS support in the UMTS/GPRS core network. This backbone network is an IP network consisting of SGSN and GGSN nodes. Therefore, an IP-based QoS mechanism needs to be implemented to add service differentiation capability to this backbone network. A promising methodology for this purpose is Differentiated Services (DiffServ). DiffServ has a number of favorable features, such as scalability and a relatively simple implementation requirement.
These characteristics make the DiffServ model a likely candidate for adding QoS support to the UMTS backbone. IETF has standardized the basic configuration of a DiffServ router. A DiffServ router consists of a number of components connected together to perform a wide range of traffic management, shaping, and queuing functions. These elements can be configured in a number of different ways, and there are numerous algorithms that can be implemented on each of them, depending on the functionality requested.

Figure 1.1: The GPRS system architecture

1.2 Research Goals

Providing QoS support to the backbone network by implementing the DiffServ model necessitates resolving a number of issues. First, a proper set of DiffServ Per Hop Behaviors (PHBs) needs to be selected that can bear QoS characteristics equivalent to all of the UMTS traffic aggregates. Second, an interworking system, especially a method for mapping the QoS parameters of one bearer service to another, should be devised. Finally, implementation of DiffServ in the UMTS backbone network requires using DiffServ-aware nodes in the backbone. Therefore, selecting a set of proper DiffServ functional components, with an appropriate scheme embedded in each element, is vital.

The above objective is a massive project and will require extensive time and effort to be achieved. This research presents an overview of the issues related to QoS support in the UMTS backbone network using the DiffServ model. While it provides some information about QoS in the UMTS and DiffServ systems associated with the above objective, it concentrates on two main studies: the configuration of a DiffServ-based UMTS router, and the QoS mapping function. Specifically, four functional and traffic conditioning elements of a DiffServ router are examined thoroughly: the scheduler, the algorithmic dropper, the classifier and the meter, as follows.

• The scheduling element: There are a large number of scheduling algorithms presented in the literature. Among those, Frame-based Fair Queuing (FFQ) is an efficient, fair queuing system which has O(1) timestamp computation complexity and seems suitable for implementation in the scheduler element. However, it lacks the ability to assign decoupled delay and bandwidth guarantees, which is an important factor in defining some PHBs needed in a UMTS environment. This thesis presents a novel service discipline, called Decoupled Delay-Bandwidth FFQ (DDB-FFQ). This scheme is based on the FFQ scheme, and therefore inherits its characteristics.
In addition, it allows assigning the delay bound and bandwidth allocation of a PHB independently.

• The algorithmic dropping element: This thesis also studies the implementation of Random Early Detection (RED) on shared-memory buffers. It proposes and compares three RED on shared buffer schemes. Implementing a proper RED on a shared buffer algorithm for the PHBs corresponding to the UMTS interactive or background traffic classes will add congestion avoidance capability, while showing better loss rate and memory utilization performance. The proposed method is especially applicable to the buffer used for PHBs associated with the background traffic class, such as the best effort, lower than best effort, and bulk handling PHBs. This is the first time RED on a shared buffer has been studied.

• The mapping function: A study of the current UMTS QoS structure and the DiffServ standard PHBs is also undertaken, and a basic scheme for mapping UMTS traffic classes to DiffServ PHBs is presented in this thesis.

• Meter: The DDB-FFQ service discipline proposed in this thesis utilizes a rate estimator module for its normal functionality. This module is actually a metering element, and therefore can also serve as a DiffServ meter. Thus, the DiffServ meter component is also covered here.

1.3 Thesis Outline

This thesis consists of the following chapters. Chapter 2 presents some background information related to the UMTS and DiffServ systems. It first covers the UMTS infrastructure and its QoS structure, then studies the DiffServ conceptual model, including the components and structure of a DiffServ router.

Chapters 3 to 5 consider four DiffServ functional blocks in more detail. Chapter 3 addresses the algorithmic dropping element by studying the performance of RED algorithms on shared-memory buffers. After a brief study of RED and memory allocation policies, three RED on shared buffer algorithms are proposed and studied. At the end of Chapter 3, a brief analysis of these schemes is presented.

Chapter 4 studies the scheduling block. First, it reviews the requirements of a favorable scheduler. Then, an efficient scheduling algorithm called DDB-FFQ is proposed. Finally, the rate-estimator module used in DDB-FFQ is studied. This module has a twofold responsibility, performing as a metering element as well.

Chapter 5 studies the DiffServ classifier element and the QoS mapping function. It gives an overview of the UMTS QoS structure and the DiffServ standard PHBs, and addresses a basic mapping system between these two systems.

Chapter 6 focuses on presenting the OPNET simulation models and discussing their results.
First, a performance comparison of the proposed RED on shared-memory buffer algorithms is presented. Then, the simulation model and results corresponding to the DDB-FFQ scheduling scheme are provided. Finally, simulation results indicating the performance of a simple DiffServ-based UMTS backbone model are discussed.

Chapter 7 presents the thesis conclusions and possible future work.

Chapter 2

Background Information

Since the premise of this thesis is QoS support in the backbone network of a GPRS or UMTS system, a brief understanding of the structure and standard specifications related to QoS in UMTS and DiffServ is essential. This chapter presents an overview of, and background information on, the technologies used as the basis of this project, in two main sections: the UMTS structure and the DiffServ conceptual model. The first section provides a short description of the GPRS system structure and the UMTS/GPRS QoS architecture. The second section presents an overview of the DiffServ system and router configuration.

2.1 An Overview of UMTS/GPRS

GPRS [1][3][4][5] is an enhancement of GSM that reuses its infrastructure to provide a true packet-switched service. This enhancement provides more efficient radio usage, faster setup time, and therefore higher data bandwidth. These benefits can be achieved by introducing only two new nodes to the system and by upgrading the software of some other GSM components. The most prominent advantage of GPRS is that it provides a smooth transition from a 2G cellular system to a 3G system realization. UMTS is a 3G mobile system designed to be implemented on the GPRS structure. It will introduce a completely different radio interface and a much higher data rate. However, their QoS architecture and network skeleton are identical, so, fortunately, all discussion presented here applies to both [2][6]. In this section, first a brief overview of the system structure and background is given, and then the QoS structure as defined in the standard is described.

2.1.1 GPRS System Structure

GPRS uses the GSM infrastructure but introduces two new logical nodes, called GPRS Support Nodes (GSNs), for packet transfer within the Public Land Mobile Network (PLMN) [7]. Figure 1.1 illustrates a GPRS system, which includes the GSM components: Mobile Station (MS), Base Transceiver Station (BTS), and Base Station Controller (BSC); and the GPRS added nodes: Gateway GSN (GGSN) and Serving GSN (SGSN). Figure 2.1 depicts only the structure of the backbone network [8]. The BTS handles the radio interface connecting to the mobile station (MS).
The BSC controls a number of BTSs and manages radio resources and handovers. The SGSN in GPRS is the equivalent of the Mobile Switching Center (MSC) in GSM and is responsible for packet transmission to and from the MSs within its service area. The GGSN is the system gateway to external packet data networks and is connected to the SGSN through an IP-based backbone network. Figure 2.1 shows that the backbone network itself consists of a number of inter-PLMN and intra-PLMN networks. An intra-PLMN network connects the SGSNs and GGSNs of a local network, and an inter-PLMN network connects the intra-PLMN networks together through border gateways.

Figure 2.1: A schematic of the GPRS backbone network structure

2.1.2 UMTS/GPRS QoS Architecture

The QoS structure of UMTS has been specified in an ETSI specification [2] that presents a framework for QoS within UMTS. Since the QoS of GPRS and its network structure have been defined identically to UMTS, the following discussion is valid for both of these 2.5G and 3G systems.

What really matters in QoS support is what end users observe. This is the QoS as seen from one Terminal Equipment (TE) to another. It is requested by users from a number of finite QoS levels, and it is the user who determines if she is satisfied with the service. The data path from one TE to another can be divided into a number of bearer services. These bearer services and levels are usually different in nature and may be governed by distinct providers. Therefore, the study of QoS is often easier if examined within each separately.

Figure 2.2 shows the layered structure of the UMTS bearer services. Each bearer service includes all the features needed for provisioning a contracted QoS.

Figure 2.2: UMTS QoS architecture

The traffic going from one TE to another TE passes through a number of these bearer services, which can be categorized into three distinct types: the TE/MT local bearer service, the UMTS bearer service, and the external bearer service. A TE is connected to UMTS through a Mobile Termination (MT), and it is set by the user; hence, the TE/MT local bearer service is outside the scope of the UMTS system. The external bearer service is any network or system in the data path that is not included in the UMTS system. Therefore, it is also not elaborated on by the UMTS specification. The UMTS bearer service, which provides the UMTS QoS, is itself layered into two parts: the radio access network and the core network.
The radio access bearer service is responsible for provisioning an optimal and confidential transport of user data from the MT to the core network Iu edge node, with the QoS negotiated with the user; it is realized by a radio bearer service and an Iu bearer service. The role of the radio bearer service is the radio interface transport, and it uses UTRA FDD/TDD [2]. The core network is the main subject of this research and is based on a generic IP backbone network.

UMTS QoS Classes

QoS is conveyed in the form of a single attribute with multiple parameters. It is negotiated with the user as a part of the Packet Data Protocol (PDP) context during the connection establishment phase. Due to air interface restrictions, the QoS classes are not meant to be complex, while still covering the range of all potential applications. The most noteworthy of the UMTS QoS parameters is the traffic class. This is the attribute that defines the type of the stream; the other parameters describe the traffic more specifically. Chapter 5 elaborates on the UMTS QoS in more detail.

2.2 DiffServ Architecture

Differentiated Services (DiffServ, DS) is a scalable service differentiation architecture proposed by the IETF for IP networks [9]. It achieves scalability by performing traffic flow aggregation. Each aggregate is identified by a code, and each node along the packet's data path treats that packet according to a rule defining the Per Hop Behavior (PHB) corresponding to that code. Therefore, no per-application-flow or per-user forwarding state is maintained in this structure. In addition, sophisticated traffic management and conditioning tasks are performed only at the edge nodes of a DiffServ network. A DS domain is a DiffServ network consisting of a number of adjacent nodes with the same set of service policies and PHB definitions. A node in a DS domain is classified as either an edge or a core node.

Figure 2.3: DiffServ domain structure

The DiffServ architecture works according to a simple model. When a packet enters a DS domain, it is classified into a finite set of behavior aggregates. These behavior aggregates are identified by a DS codepoint [10], which is conveyed by marking the header of the IP packet [11]. Any node in that DS domain along the path of that packet treats the packet according to a PHB rule corresponding to its codepoint.

DiffServ is implemented by indicating a small set of per hop behaviors, a classification function, and functional and queuing blocks. The IETF DiffServ working group has proposed a number of standard PHBs.
They are expedited forwarding (EF), assured forwarding (AF), best effort (BE), lower than best effort (LBE), alternative best effort (ABE), dynamic real-time/non-real-time (RT/NRT), and assured rate (AR). Chapter 5 presents a thorough discussion of these schemes.

There are a number of other packet differentiation schemes used in the literature. The most common of these are relative priority marking, service marking, label switching, and Integrated Services with the Resource Reservation Protocol (RSVP). Hereafter, DiffServ will be concisely contrasted with each of the above methods [9].

In a relative priority marking model, packets are assigned a precedence for one of their characteristics, such as delay, bandwidth, or packet loss, and are prioritized accordingly. An example of this model is IEEE 802.5 Token Ring priority. DiffServ can be considered a generalization and refinement of this scheme, since it utilizes the more general PHB concept and proposes a whole architecture.

An example of service marking is the IPv4 Type of Service (ToS) [12], used for routing purposes. In this scheme, packets are marked for minimizing delay or maximizing bandwidth in order to direct the routers along the path in selecting the route. This scheme is quite different from DiffServ, since DiffServ is not concerned with routing.

Examples of label switching are ATM and MPLS. In this model, a path is established in each node along the packet path from the source to the destination. Therefore, each node needs to maintain a QoS state associated with this path. Although this model provides finer-grained QoS support, it has extra management and configuration requirements, and also a scalability problem; that is, the number of states in a node grows with the number of established paths.

In an RSVP model, the source, the destination and the nodes along the path all exchange signaling messages to establish a packet classification and forwarding state in each of these nodes. This model also has a scalability problem if no state aggregation is performed.

2.2.1 DiffServ Router Model

This section describes the elements used in a DiffServ-aware node. Figure 2.4 illustrates the component interactions in a high-level view of a DS node [13]. The routing core is an abstraction of a router's normal routing functionality and is not discussed in the DiffServ model. The configuration and management interface monitors service provisioning and gathers statistics regarding the traffic of each service level. These are used as the network management basis from which DiffServ policies are assigned or altered. The network administrator interacts with this module through a management protocol.
The optional QoS agent is included to add per-flow and per-flow-aggregate signaling capability to the DiffServ model; for example, it allows the node to snoop RSVP messages.

Figure 2.4: A DiffServ router's major functional blocks

Figure 2.5 shows a node with two egress/ingress interfaces. It is possible to have an arbitrary number of these interfaces in an actual router. Also, it is not mandatory to implement all of these blocks and their components on both the ingress and egress points. The configuration of components depends on the service requirement of a router.

Figure 2.6 shows the datapath that a data packet might take in a router. On arrival, a packet is first classified according to a set of rules. Then it is checked by a metering block for being within its allocated rate. Afterward, the packet is passed along a set of traffic conditioning blocks, such as a marker, dropper, or counter, and, if accepted, is enqueued in the queuing block and then transmitted according to the scheduler policy.

Figure 2.5: The structure of a DiffServ router

There are four kinds of components in an ingress/egress interface: a traffic classifier, meters, action elements and queuing elements [14]. Action elements are markers, droppers, counters, and multiplexers.

Mapping Function: Mapping is the function of translating the QoS of one system into QoS parameters understandable by another QoS system. It is a necessary function for a UMTS packet entering the DiffServ-aware network, or when a packet coming from an external network enters the UMTS network through a GGSN. Chapter 5 gives a thorough discussion of this topic. This function is usually integrated with the classifier into one functioning block.

Classifier: A classifier is a functional element that selects packets based on policy. It can classify packets based on the content of the packet header, other packet data and/or some implicit information related to that packet. In the DiffServ model, packets are classified according to the content of a field in the IP header, originally called Type of Service (ToS) in the IPv4 protocol. ToS is an 8-bit field; DiffServ uses 6 bits of it, also called the DS field, to convey the DiffServ Code Point (DSCP). The remaining 2 bits, called CU (currently unused), are not used in DiffServ and are proposed for use in a congestion notification function by the IETF Explicit Congestion Notification (ECN) group.
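As a concrete note on the field layout just described, the DSCP occupies the six most significant bits of the old ToS octet and the CU/ECN bits occupy the remaining two. A minimal helper for extracting them might look as follows; the example value is simply the standard EF codepoint.

    def ds_field(tos_byte):
        """Split the IPv4 ToS octet into the 6-bit DSCP and the 2 currently-unused (CU/ECN) bits."""
        dscp = (tos_byte >> 2) & 0x3F   # upper six bits carry the DiffServ codepoint
        cu = tos_byte & 0x03            # lower two bits are the CU/ECN field
        return dscp, cu

    # Example: a ToS octet of 0xB8 carries DSCP 46 (the EF codepoint) and CU bits 0.
    assert ds_field(0xB8) == (46, 0)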
The classifier has a single input and splits the incoming stream onto a number of outgoing ports. The most common way of implementing classifiers is to use a number of filters. A filter is a set of match conditions based on a classification key. The content of the DSCP is passed through the filter and, according to the matching results, the packet is grouped into a PHB aggregate.

Meter: A meter is a functional block responsible for measuring the rate characteristics of a packet stream. The gathered statistics are used by other blocks for their decision making. For example, a marker uses this data to re-mark the packet DSCP. The IETF has a set of RFCs proposing two main meter implementations: the time sliding window meter/marker and the two (or three) token bucket meter/marker.

Marker: This module is responsible for setting the DS field of a DiffServ packet according to various rules. It may also use the statistics gathered by a meter to decide on the level of a packet's conformance to its allocated rate characteristics. For a nonconformant stream, it takes action by re-marking the packet into a class with lower priority, or into a class with a higher drop precedence. It is also responsible for assigning a valid DSCP to a packet with an unknown DSCP.

Counter: A counter is responsible for updating the packet counters; it counts the number of packets passing through it.

Dropper: There are two major kinds of dropping elements in a DiffServ router: the absolute dropper and the algorithmic dropper. While absolute droppers are quite straightforward, research is still being undertaken on the viability, usability and schemes of algorithmic droppers. Random Early Detection (RED) is a well-known algorithmic dropper, suggested for avoiding congestion in the network. There are a number of varieties of RED. In this project, the implementation of RED on shared-memory buffers is addressed and proposed for deployment in a DiffServ router for TCP-compatible aggregates.

Queuing Block: This is the block that stores the packets while they wait to receive service from the scheduler and depart to the next hop or DiffServ module. There are two different types of buffers used for enqueuing packets: shared-memory buffers and dedicated buffers. In a dedicated buffering method, each queue has its own separate memory space, while a shared buffering scheme uses a large memory which is shared among different traffic classes according to a sharing rule. Chapter 3 presents a thorough study of shared-memory buffers.

Scheduler: The scheduling block has the most impact on the level of service a packet receives. It decides which queue, among a set of queues, to service next.
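Before turning to the scheduling disciplines themselves, the following sketch summarizes how the conditioning elements above might cooperate on the ingress datapath. It is only an illustration: the DSCP-to-PHB table uses standard recommended codepoints, but the rate profile, the re-marking choice and the queue names are assumptions for this example, not the configuration developed later in the thesis.

    import time

    # Standard recommended codepoints (6-bit DS field values) for a few PHBs.
    DSCP_TO_PHB = {46: "EF", 10: "AF11", 18: "AF21", 20: "AF22", 0: "BE"}

    class TokenBucketMeter:
        """Single token bucket: 'conform' while tokens remain, 'exceed' otherwise."""
        def __init__(self, rate_bps, burst_bytes):
            self.rate = rate_bps / 8.0           # token fill rate in bytes per second
            self.burst = float(burst_bytes)      # bucket depth in bytes
            self.tokens = float(burst_bytes)
            self.last = time.monotonic()

        def measure(self, pkt_len):
            now = time.monotonic()
            self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if pkt_len <= self.tokens:
                self.tokens -= pkt_len
                return "conform"
            return "exceed"

    def ingress(dscp, pkt_len, meters, queues):
        """Classify on the DS field, meter the aggregate, then mark/drop and enqueue."""
        phb = DSCP_TO_PHB.get(dscp, "BE")        # unknown codepoints fall back to best effort
        if phb == "EF":
            if meters["EF"].measure(pkt_len) == "exceed":
                return "dropped"                 # absolute dropper for out-of-profile EF traffic
        elif phb == "AF21" and meters["AF21"].measure(pkt_len) == "exceed":
            phb = "AF22"                         # re-mark out-of-profile AF21 traffic
        queues[phb].append((dscp, pkt_len))      # hand the packet to the queuing block
        return phb

This mirrors the example router of Figure 2.6, where EF traffic is policed, nonconforming AF21 traffic is re-marked, and accepted packets are handed to the queuing block for the scheduler to serve.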
There are a large number of scheduling schemes addressed in the literature, each of which has some advantages and some disadvantages. In a DiffServ-based UMTS backbone network, the most desirable scenario is to have a fair, efficient and simple scheme which supports link sharing and delay bounds. It should also be able to assign different delay bound guarantees independently of the bandwidth allocation, and vice versa.

Figure 2.6 illustrates an example of a DiffServ router using all of the above components. This router supports four PHB aggregates. The first group is an EF PHB. This traffic is metered first and non-conforming traffic is dropped, but the conforming traffic has a guaranteed delay bound. The second group is an AF11 group. It has a dedicated buffer that accepts packets based on a tail dropping scheme. This traffic can be, for example, video streaming traffic. The third group is an AF21 group whose traffic flow is first metered. If the traffic is nonconforming, it is re-marked as AF22 (with a higher drop precedence). The last group is best effort traffic. Groups three and four use a shared-memory buffer for their queues, and a RED on shared buffer scheme is applied as the dropping scheme.

Figure 2.6: An example of traffic conditioning blocks in a DiffServ router

Chapter 3

Random Early Detection (RED) Algorithms for Shared Buffers

3.1 Introduction

Two kinds of dropping elements are used in the DiffServ model: the absolute dropper and the algorithmic dropper. An absolute dropper simply destroys the packets it receives. It is used to drop the nonconforming traffic of those applications that do not have an end-to-end congestion manager and are overloading the network. An absolute dropper is also used to model tail dropping queues, which drop a packet only when full. Algorithmic dropping, however, is an efficient way of controlling and avoiding congestion.

Congestion happens when the rate of the incoming traffic surpasses the service rate capability of a router or a link. The Internet experienced its first set of crashes, known as congestion collapse, in October 1986 [15]. This event led to the implementation of a congestion control mechanism in TCP, the main transport protocol in use at that time, to maintain Internet functionality.

Even in a normal data network, the traffic is "bursty" and changes continuously. To absorb this bursty traffic, buffers are used in each intermediate node along the traffic path. A bigger buffer can capture a larger amount of data, but it will impose a longer average delay.
However, no matter how large a buffer is, at the time of congestion it becomes full and some packets must be dropped. Besides, Internet traffic is increasing continuously. If a router behaves proactively and informs sources that it is at the threshold of becoming congested, congestion can be avoided. This idea led to the introduction of congestion avoidance algorithms.

There are two kinds of congestion control methods [16][17]: end-to-end congestion control and network (router) based congestion control. In the former method, only the two end-sides of the connection are responsible for using the network up to capacity without overloading it. An example of this type is the method deployed in TCP, which has been preventing congestion collapse in the Internet for a long time. TCP is the only transport protocol equipped with a congestion control mechanism. However, there are many recent applications that use UDP, and if their traffic occupies a substantial percentage of Internet traffic, history might repeat itself. Luckily, the IETF has a working group on a universal congestion manager and has proposed a general congestion manager interface to add this capability to any kind of application [18].

In a system with only end-to-end congestion control, intermediate nodes will accept every packet they receive, up to their memory capacity, and drop packets when their buffers become full. This kind of packet dropping is called "tail dropping" throughout this thesis. This term is also used in the literature as a contrast to "head dropping", indicating whether the packet at the head of a queue is dropped, or one at its tail.

The Internet has been relying on end-to-end congestion control methods for a long time, but this is inadequate for a number of reasons. One, TCP functions in a way that keeps queue occupancy high. This imposes discrimination in drop probability against bursty traffic, since an arriving burst of data will not find enough space in the buffer. Two, TCP itself is assumed to be bursty. Three, there are many existent or emerging applications that do not obey any kind of end-to-end congestion control mechanism [19].

With network based congestion control, every node in the path also acts proactively to keep the usage of the network within its capacity. The most prominent of these schemes is Random Early Detection (RED). RED was first introduced in [16] for traditional single queue gateways to add congestion avoidance capability to the routers. It detects incipient congestion and drops/marks packets before the queue becomes full.
By dropping/marking packets, it notifies the source to decrease its traffic rate to prevent congestion [16][19]. Marking packets is a more logical way of informing the corresponding source, and is an ongoing focus of research. However, its implementation needs a corresponding transport protocol that understands this method. TCP has been using packet drops as a way of evaluating network conditions. Therefore, currently, packet dropping is the main consideration, while packet marking is under study and proposed in an RFC [20]. These packet drops match properly with TCP-compatible transport protocols and prevent severe source throughput degradation.

RED is standardized by the Internet Engineering Task Force (IETF) in an RFC [21] as a better alternative to the traditional tail drop mechanism. Deployment of RED has a number of advantages. One, it tries to keep the buffer occupancy low, therefore causing a lower average delay. Two, it prevents concurrent dropping of large numbers of packets in bursty traffic. Three, it does not have a bias against bursty traffic. Four, TCP reacts more severely to burst packet losses than to single packet losses.

3.2 RED on a Single Queue

In a gateway with a RED [16] buffer management scheme, packets are dropped/marked before its buffer gets full. The objective of RED is to detect incipient congestion and take measures to avoid it. It is intended to be used for networks in which the transport protocol reacts to congestion notification feedback from the network. Thus, the RED gateway has to perform two tasks. It should first apply an algorithm for detecting congestion, just before it happens, and then it should take action by applying another algorithm for its congestion avoidance task.

RED uses the average queue size of the router as an indicator of the congestion intensity level. Although there are a number of ways of calculating the average buffer size, the Exponentially Weighted Moving Average (EWMA) is the most common method. EWMA is calculated by adding a weighted value of the current queue size to a weighted value of the previous average queue size, so that recent samples count geometrically more than older ones. In an EWMA filter, the average queue occupancy, avg, is estimated by

    avg <- avg * (1 - a) + k * a.    (3.1)

In this equation, k is the current size of the buffer and a is the weight factor. a is the key parameter for setting the time constant of this lowpass filter, and its value is always less than one. When it is set to a value greater than 0.5, it puts more weight on current data, and when it is set to a value less than 0.5, it puts more weight on past data. RED recalculates the average queue size for every incoming packet.
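A minimal sketch of the estimator in Eq. (3.1) is given below; the default weight shown is only an illustrative choice, not a recommendation from this thesis.

    class EwmaQueueEstimator:
        """Exponentially weighted moving average of the queue occupancy (Eq. 3.1)."""

        def __init__(self, weight=0.002):
            self.a = weight      # weight factor a; small values give a slow, low-pass estimate
            self.avg = 0.0       # running average queue size

        def update(self, current_queue_size):
            # avg <- avg * (1 - a) + k * a, evaluated on every packet arrival
            self.avg = self.avg * (1.0 - self.a) + current_queue_size * self.a
            return self.avg

On each arrival, the dropper would call update() with the instantaneous queue length and then apply the threshold test described next.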
Having computed the average queue size, RED uses the following algorithm to decide on dropping/marking that packet. It uses two threshold values, called the maximum threshold, max-th, and the minimum threshold, min-th. If avg is less than min-th, the packet is accepted and enqueued. If avg is greater than max-th, it will be dropped/marked. However, if avg is between min-th and max-th, the value of the dropping probability p_k is first calculated thusly:

    p_k = max_prob * (avg - min_th) / (max_th - min_th).    (3.2)

Then the packet is dropped with the probability p_k. Here max_prob is a parameter that determines the maximum value of the dropping probability.

In other words, in a RED gateway, packets are dropped with a probability which is a function of the average occupancy of that buffer, possibly well before it gets full. Figure 3.1 shows the dropping probability versus average queue size of a RED gateway with a physical buffer size of B, maximum and minimum thresholds of max-th and min-th, and maximum drop probability of max_prob.

Figure 3.1: RED dropping mechanism

The pseudocode of the RED algorithm is as follows [16]:

    for each packet arrival
        calculate the average queue size (avg)
        if min-th < avg < max-th
            calculate drop probability p_k
            with probability p_k
                mark the arriving packet
        else if max-th < avg
            mark/drop the arriving packet

3.3 Multi-Class RED

3.3.1 Introduction

The current Internet treats every packet in a best effort manner. Therefore, each node has only one queue for all incoming packets. In a class-based buffer, distinct queues are used to keep packets of different classes separate. There are two buffer allocation policies for implementing a class-based buffer: the shared-memory and the dedicated-memory policies. In a dedicated-memory buffer, a completely separate queue is used to store the packets of one traffic class. This scheme has the advantage of providing guaranteed space for each class and requires less queuing administration. In a shared-memory buffer, however, all the packets share the same buffer. A sharing policy allocates enough memory space for an incoming packet and keeps track of the packets belonging to each class. This scheme provides more efficient memory utilization, since the unused memory space of one class can be used by another. This allows the buffer to have a higher bursty traffic handling capability.

In a DiffServ router, congestion control is currently only applicable to non-real-time traffic, until such time as congestion manager modules are used for all kinds of traffic [18].
Therefore, it is mainly the interactive and background traffic classes that need proactive congestion control. Each of these traffic classes is mapped to a number of PHB groups, so a shared buffer can be used to store the packets of these groups. This section proposes and studies congestion control schemes applicable to a shared-memory buffer. Using a shared-memory buffer has a number of advantages for these PHB groups:

• The memory is used more efficiently. In a dedicated-memory system, the router may become congested for one or a few traffic classes while the queues of the other classes are not completely used. In a shared buffer, however, the unused memory of one class may be used by the other classes to handle short-term congestion.

• A shared buffer is managed by software and provides a soft division for each queue. For a DiffServ router in which the number of supported PHBs or their parameters are to be modified or set on demand, a sharing scheme is very convenient. An example of software-based routers is given in [22].

• Although the algorithms presume symmetric traffic flow, by applying a weighted version of these algorithms and using push-out, the system also works in a fair manner for non-symmetric traffic. It is also easy to configure the system to work in favor of a particular traffic class. Push-out is the act of freeing enough space to store an incoming packet when the queue is full, by dropping packets that have already been accepted from a lower priority class, or from a queue that has used the memory unfairly.

• Finally, the simulation results presented in Chapter 6 show better performance, in terms of packet loss, when RED is deployed on shared buffers.

However, shared buffer schemes are more complex than dedicated systems. They require a more elaborate and sophisticated buffer manager, and they also need to deal with packet allocation and resumption. For non-static threshold systems, performing push-out and/or applying dynamic policies adds to the complexity of the system.

The following sections focus on the congestion control schemes applicable to shared-memory buffers. First, a short overview of current buffer sharing schemes is presented. Then, three different deployments of RED on shared-memory buffers, derived from existing buffer sharing schemes, are proposed. These schemes work both as sharing policies and as algorithmic droppers for the purpose of congestion avoidance. Finally, an analytical representation is provided.
3.3.2 Memory Sharing Policies

A shared-memory buffer is a single pool of memory shared by the packets of a number of traffic streams while they wait to be serviced and transmitted. A buffer allocation policy is applied to enable proper queuing functions. In a sharing system, the sum of the individual partitions is equal to the total memory. The sharing schemes addressed in the literature are summarized below [23][24][25].

Complete Partitioning (CP): In a complete partitioning scheme, the entire buffer is permanently partitioned among the classes. Therefore, CP does not actually provide any sharing. However, since the queues are not physically isolated, it is still classified among the sharing schemes. This scheme can be considered equivalent to a dedicated-memory system, in which separate queues are used for different classes, and it has most of that system's properties, such as its packet loss rate. A further advantage of studying this scheme is that it enables a performance comparison between dedicated and sharing systems. CP retains an important advantage of shared-buffer systems in that it is easily reconfigurable, but it also has the disadvantage of complex buffer management.

Complete Sharing (CS): Complete sharing lies at the other extreme compared to CP. In this scheme, the buffer is shared among all queues without limitation; every incoming packet is accepted as long as the buffer is not full. In a CP policy, the buffer allocated to a queue may be wasted if its corresponding traffic class is inactive, while in the CS policy a buffer may be monopolized by one traffic class. However, the CS policy uses the memory efficiently and can be fair if the traffic is symmetrically divided among all classes.

Sharing with Minimum Allocation (SMA): This policy is a combination of the CS and CP policies. In the SMA scheme, a minimum memory space is set aside for each class and the rest is shared among all arriving packets. Therefore, each traffic class has a minimum guaranteed memory space and the remaining memory is efficiently used by all classes.

Sharing with MaXimum Queue lengths (SMXQ): This scheme is similar to the CS policy, but with the added feature of a maximum limit on the memory space occupied by each traffic class. SMXQ prevents a traffic class from monopolizing the whole buffer. The values of these maxima are chosen so that their sum is greater than the whole memory size.

Sharing with MaXimum Queue and Minimum Allocation (SMQMA): SMQMA is the most complex buffer sharing policy and combines the SMA and SMXQ features. In this scheme, the buffer is shared among all traffic classes, but with a reserved minimum space and a maximum space limit for each.
Therefore, while providing a guaranteed minimum space, it also restricts a class from monopolizing the whole shared memory space. On the down side, it requires the largest number of state variables and has the highest complexity.

3.4 RED on a Shared-Memory Multi-Class Buffer

In this section, the deployment of RED on shared-memory buffers is addressed. Based on three of the sharing policies discussed previously, three corresponding RED algorithms are proposed. These algorithms perform two tasks. First, they provide a buffer allocation policy similar to the sharing scheme they are derived from. Second, they provide RED-like congestion avoidance functionality. In these schemes, the RED maximum and minimum thresholds, which also determine the maximum and minimum sharing limits, are defined both for the whole buffer and for each queue.

Based on complete sharing, complete partitioning, and sharing with minimum allocation, three algorithms are proposed: RED on complete sharing (R-CS), RED on complete partitioning (R-CP), and RED on sharing with minimum allocation (R-SMA).

[Figure 3.2: Comparison of RED thresholds in the RED-CP and RED-CS schemes — per-subqueue thresholds MinTh and MaxTh over a partitioned buffer (k = 0 to K) versus global thresholds 4·MinTh and 4·MaxTh over the shared buffer.]

Throughout this discussion the following terminology is used. "Buffer", "whole memory" or "queue" indicates a single large memory shared by all the subqueues of the different classes. The memory space used for storing packets of traffic class i is called "subqueue i". The RED average queue size calculated for the whole memory is indicated by avg, and the RED average queue size of subqueue i is indicated by avg(i). Similarly, the minimum and maximum thresholds for the whole memory are called min-th and max-th, and those of subqueue i are indicated as min-th(i) and max-th(i), respectively.

3.4.1 RED on Complete Partitioning (R-CP)

In RED on complete partitioning, the memory is permanently divided among the subqueues; that is, each traffic class has its own fixed memory space and a traditional RED algorithm is applied to each subqueue independently of the others. From a performance viewpoint, this scheme is identical to a multi-class dedicated-memory queuing system. It has therefore also been used as the basis for comparing the performance of RED on shared and dedicated memory buffers. This scheme is suitable for situations where it is preferable to have isolated subqueues while retaining some of the advantages of shared memory systems, such as reconfigurability.
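As a rough sketch only, not an implementation from the thesis, R-CP amounts to running one independent RED instance per fixed partition. The class and threshold values below are illustrative (the thresholds happen to match those later used in the analytical comparison of Section 3.6):

    import random

    class REDQueue:
        """Single-queue RED dropper (sketch)."""

        def __init__(self, min_th, max_th, max_prob, w=0.002):
            self.min_th, self.max_th = min_th, max_th
            self.max_prob, self.w = max_prob, w
            self.avg = 0.0

        def accept(self, queue_len):
            # EWMA update of the average occupancy, equation (3.1)
            self.avg = self.avg * (1 - self.w) + queue_len * self.w
            if self.avg < self.min_th:
                return True
            if self.avg >= self.max_th:
                return False
            # RED region: drop with the probability of equation (3.2)
            p = self.max_prob * (self.avg - self.min_th) / (self.max_th - self.min_th)
            return random.random() >= p

    # R-CP: one independent RED instance per class, each over a fixed
    # quarter of a 160-packet buffer.
    rcp = [REDQueue(min_th=10, max_th=40, max_prob=0.1) for _ in range(4)]
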
3.4.2 RED on Complete Sharing (R-CS)

Based on the complete sharing policy, RED on complete sharing is defined with the following pseudocode:

    for each packet arrival with class i
        calculate the average queue size avg
        calculate the average subqueue i size avg(i)
        if min-th < avg < max-th
            if min-th(i) < avg(i)
                calculate drop probability p_a
                with probability p_a
                    mark the arriving packet
        else if max-th < avg
            mark/drop the packet

In this algorithm, the whole buffer is shared among all the subqueues. For an incoming packet of class i, the average queue size and the average subqueue i size are calculated. If the average queue size is less than the minimum threshold, the packet is accepted. Otherwise, the average subqueue i size is compared to max-th(i) and min-th(i), and a decision to accept or discard the packet is made, similarly to a traditional RED algorithm.

3.4.3 RED on Sharing with Minimum Allocation (R-SMA)

Here, another RED scheme for shared-memory buffers, based on sharing with minimum allocation, is proposed, called RED on sharing with minimum allocation. This algorithm defines a minimum memory allocation space for subqueue i by using the RED threshold min-th(i). The pseudocode of this scheme is as follows:

    for each packet arrival with class i
        calculate the average queue size avg
        calculate the average subqueue i size avg(i)
        if min-th(i) < avg(i) and avg < max-th
            calculate drop probability p_a
            with probability p_a
                mark the arriving packet
        else if max-th < avg
            mark/drop the packet

The deployment of RED on the SMXQ sharing scheme is not defined here, since there is no way of implementing RED on it other than independently applying RED to each subqueue.

3.5 Analysis of RED Mechanism

In this section, an analytic study of the RED-on-shared-buffer algorithms is presented. First, the packet drop probability of a tail-drop queue is examined, and then the analytical representations of RED on a single queue, RED on a multi-class buffer with complete sharing (R-CS), and RED on a multi-class buffer with complete partitioning (R-CP) are addressed. In the analysis, the following assumptions are made for the queuing system and the traffic model:

• Packet arrivals are Poisson; that is, packets have exponentially distributed inter-arrival times.

• Packet departures are Poisson; that is, packet lengths are exponentially distributed.

• The physical buffer size is K.

• The stationary probability of being in state k is π(k). Being in state k means that there are k packets in the system.

The exponential distribution is used so that the PASTA (Poisson Arrivals See Time Averages) property [26] can be applied.
The PASTA property states that, for Poisson arrivals, the state probability distribution seen by a new arrival is the same as the time-averaged state probability distribution. In other words, the state probability distribution is equal to the fraction of time that the system spends in that state.

[Figure 3.3: An M/M/1/K queuing system with arrival rate λ and service rate μ.]

3.5.1 Tail Dropping Queue

Figure 3.3 shows an M/M/1/K queuing system. If λ is the packet arrival rate and μ is the packet departure rate, the utilization is defined as ρ = λ/μ. For a buffer with tail dropping and buffer size K, the probability of being in state k is [27]:

π(k) = π(0) ρ^k   if k ≤ K,   and   π(k) = 0   otherwise.

By using the normalizing condition, that is, Σ_{k=0}^{K} π(k) = 1, the probability of being at state zero is simply calculated. Thus:

π(0) = (1 − ρ) / (1 − ρ^{K+1}).    (3.3)

The dropping probability can also be calculated by using the PASTA property. According to PASTA, the probability of dropping a packet in a tail-drop system is equal to the probability of being in state K. Therefore:

P_TD = π(K) = (1 − ρ) ρ^K / (1 − ρ^{K+1}).    (3.4)

This equation enables us to have a consistent representation of the drop probability for all the schemes.

3.5.2 RED on a Single Queue

For a queue with a RED buffer management scheme, a packet is dropped depending on the value of the drop probability d(k) and the queue state probability π(k), where k indicates the state. To make the analysis tractable, the instantaneous queue size is used instead of the average queue size.

To calculate the packet drop probability, the PASTA property is used. In addition, owing to the RED algorithm, a packet arriving in state l, l = 1, ..., K, is dropped with probability d(l). Therefore, the packet drop probability is calculated as:

P_RED = π(K)d(K) + π(K−1)d(K−1) + ... + π(1)d(1) = Σ_{k=1}^{K} π(k)d(k).    (3.5)

In this equation d(k) is the dropping function of the RED algorithm and can be written as:

d(k) = 0                                                  if k < min-th
d(k) = max_prob · (k − min-th) / (max-th − min-th)        if min-th ≤ k ≤ max-th
d(k) = 1                                                  if k > max-th

where k is the average queue size calculated by the RED algorithm.

The state probability used in the above equations is different from the one used for tail dropping. To derive it, we use the detailed balance equation π(i) p(i, j) = π(j) p(j, i), where p(a, b) is the state transition probability from state a to state b [28]. Under equilibrium conditions, flows may be balanced across any boundary, so for the k-th boundary let i = k and j = k+1. For a queue with RED implemented, the probability of a transition from state k to k+1 also depends on the probability of packet acceptance.
Therefore, the transition probability from state k to state k+1 is p(k, k+1) = λ_k (1 − d(k)), and by applying it in the balance equation we have:

(1 − d(k)) λ_k π(k) = μ_{k+1} π(k+1),   k = 1, 2, ..., K    (3.6)

which leads to the state probability

π(k+1) = (λ_k / μ_{k+1}) (1 − d(k)) π(k).    (3.7)

By substituting k for k+1 and considering λ_k = λ and μ_k = μ for all k, the state probability π(k) is:

π(k) = ρ (1 − d(k−1)) π(k−1) = π(0) Π_{i=0}^{k−1} ρ (1 − d(i)) = ρ^k π(0) Π_{i=0}^{k−1} (1 − d(i)).    (3.8)

π(0) can easily be found by applying the normalizing condition. Therefore,

Σ_{k=0}^{K} π(k) = Σ_{k=0}^{K} ρ^k π(0) Π_{i=0}^{k−1} (1 − d(i)) = 1    (3.9)

and the probability of being in state 0 is

π(0) = 1 / [ Σ_{k=0}^{K} ρ^k Π_{i=0}^{k−1} (1 − d(i)) ].    (3.10)

Finally, by applying (3.10) in (3.8), the state probability of RED is calculated as follows:

π_RED(k) = ρ^k Π_{i=0}^{k−1} (1 − d(i)) / [ Σ_{j=0}^{K} ρ^j Π_{i=0}^{j−1} (1 − d(i)) ].    (3.11)

3.5.3 A Multi-class RED on Complete Sharing (R-CS)

All the equations derived for RED on a single queue, other than the dropping function d(k), are directly applicable to the R-CS scheme. The dropping function is based on the R-CS algorithm and is expressed as follows:

d(k) = 0                                                  if k < min-th
d(k) = 0                                                  if min-th ≤ k ≤ max-th and k_i < min-th(i)
d(k) = max_prob · (k − min-th) / (max-th − min-th)        if min-th ≤ k ≤ max-th and k_i ≥ min-th(i)
d(k) = 1                                                  if k > max-th

where k is the average buffer size and k_i is the average size of subqueue i calculated by the RED algorithm.

With the new dropping function d(k), the state probability and the drop probability of an R-CS scheme are written as follows:

π_R-CS(k) = ρ^k Π_{i=0}^{k−1} (1 − d(i)) / [ Σ_{j=0}^{K} ρ^j Π_{i=0}^{j−1} (1 − d(i)) ]    (3.12)

P_R-CS = π(K)d(K) + π(K−1)d(K−1) + ... + π(1)d(1) = Σ_{k=1}^{K} π(k)d(k).    (3.13)

3.5.4 A Multi-class RED on Complete Partitioning (R-CP)

This case is similar to having a set of independent single queues among which the traffic is divided. Therefore, the equations derived for a single RED queue apply to each of the R-CP subqueues. Figure 3.4 shows the model of the R-CP scheme for traffic consisting of four Poisson flows, λ_0 to λ_3. By the properties of Poisson streams, the combined traffic also has a Poisson distribution, with an average arrival rate of λ = λ_0 + λ_1 + λ_2 + λ_3. The same holds for the service rates μ.

[Figure 3.4: The R-CP model — four subqueues Q0-Q3 with Poisson arrivals λ_0-λ_3 and service rates μ_0-μ_3; Poisson traffic can be split or combined.]

3.6 Comparison Between Tail Drop, R-CS, and R-CP

For the R-CP scenario, we consider a symmetric system with λ_i = λ/4 and μ_i = μ/4 for i = 0, ..., 3, so that the utilization factor is the same for the whole buffer and for each subqueue. This allows an analytic comparison among normal tail dropping, R-CP, and R-CS. Since the instantaneous queue size is used in the analysis, the results are not directly comparable with those acquired by simulation.
However, it provides intuition into the subject from an analytical viewpoint. Figure 3.5 depicts the curves based on equations (3.4), (3.5) and (3.13): the drop probabilities of an M/M/1/K tail-drop queue, an R-CP, and an R-CS queuing management scheme.

In all the schemes a buffer of K = 160 packets is used. In the tail dropping scenario, the whole buffer is used for the incoming traffic, which has a mean arrival rate of λ and a mean service rate of μ. In the R-CS scenario, the instantaneous queue size is used with min-th = 40, max-th = K = 160, min-th(i) = 10 and max-th(i) = 40, and incoming traffic with mean arrival rate λ and mean service rate μ. For the R-CP scheme, each subqueue has min-th(i) = 10 and max-th(i) = 40, and incoming traffic with mean arrival rate λ/4 and mean service rate μ/4.

As expected, the drop probability of a RED queue is higher than that of the tail dropping queue. The figure also shows that the drop probability of the R-CP sharing scheme is higher than that of the R-CS scheme. This figure, together with the simulation results presented in Chapter 6, shows the better performance of R-CS.

[Figure 3.5: Drop probability versus offered load (0.6-1.4) of M/M/1/K tail drop, R-CP and R-CS with K = 160, analytical results.]

3.7 RED Deployment for a DiffServ-Based UMTS Backbone Router

In the context of DiffServ, RED can be used as the algorithmic dropper of a DiffServ-based GPRS/UMTS backbone router. It is applicable to those traffic aggregates that use TCP as their transport protocol, that is, to the interactive and background traffic classes. For the conversational and streaming traffic classes only absolute droppers are used: an incoming packet of these traffic classes is accepted when it is determined to be conformant and is dropped if it is not. Algorithmic droppers are used for the other two classes, and therefore RED deployment is appropriate. Each of these traffic classes is mapped to a PHB consisting of a set of subgroups.

Based on the simulation results presented in Chapter 6, the deployment of RED on shared buffers is proposed for TCP-compatible aggregates. Each traffic aggregate has its own dedicated buffer, but this buffer is shared among the subgroups of that aggregate class, and one of the RED-on-shared-memory-buffer schemes is used as the algorithmic dropper. This configuration has the advantages of a lower packet drop ratio and reconfigurability, and the disadvantage of slightly higher buffer management complexity.
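To make this deployment concrete, the sketch below (our illustration with assumed parameter values, not code from the thesis) outlines an R-CS style algorithmic dropper over one shared buffer, with the drop probability driven by the whole-buffer average as in the dropping function of Section 3.5.3:

    import random

    class RCSDropper:
        """RED on Complete Sharing over one shared buffer (sketch).

        avg   -- EWMA occupancy of the whole shared buffer
        avg_i -- EWMA occupancy of each sub-queue (one per PHB subgroup)
        """

        def __init__(self, n_classes, min_th=40, max_th=160,
                     sub_min_th=10, max_prob=0.1, w=0.002):
            self.min_th, self.max_th = min_th, max_th
            self.sub_min_th, self.max_prob, self.w = sub_min_th, max_prob, w
            self.avg = 0.0
            self.avg_i = [0.0] * n_classes

        def accept(self, cls, total_occupancy, sub_occupancy):
            """Return True if the arriving packet of class `cls` is enqueued."""
            # EWMA updates, equation (3.1)
            self.avg = self.avg * (1 - self.w) + total_occupancy * self.w
            self.avg_i[cls] = self.avg_i[cls] * (1 - self.w) + sub_occupancy * self.w

            if self.avg >= self.max_th:               # whole buffer congested: drop
                return False
            if self.avg < self.min_th:                # plenty of room overall: accept
                return True
            if self.avg_i[cls] <= self.sub_min_th:    # class still below its minimum share
                return True
            # RED region: drop with the probability of equation (3.2),
            # computed on the whole-buffer average.
            p = self.max_prob * (self.avg - self.min_th) / (self.max_th - self.min_th)
            return random.random() >= p
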
Chapter 4

DDB-FFQ Scheduling Scheme

This chapter is dedicated to the study of the DiffServ scheduling block. The motivation behind this study is to implement a fair, efficient and simple scheduler that also offers decoupled delay-bandwidth guarantees and supports link sharing objectives. A scheduling block with these properties enables the operator of a DiffServ-based UMTS backbone to easily define PHBs with different delay and bandwidth guarantees. Decoupled Delay-Bandwidth Frame-based Fair Queuing (DDB-FFQ) is the proposed scheduling algorithm that achieves these goals. It is a modification of the Frame-based Fair Queuing (FFQ) scheme, and therefore inherits all of its properties, while it enables the assignment of decoupled delay-bandwidth guarantees.

This chapter consists of the following sections. First, an overview of scheduling schemes, the link sharing model, and related previous work is provided. Then, the concepts of rate-proportional schedulers, and especially the structure of FFQ, are addressed. Finally, a novel scheduling system based on FFQ, called DDB-FFQ, is presented, which has the properties needed for implementation in a DiffServ-based UMTS backbone router. In this chapter, the term "traffic session" is used as an equivalent to a DiffServ traffic aggregate.

4.1 Introduction

The quality of service in integrated-services packet networks is expressed in terms of delay, bandwidth, jitter and loss ratio. Various applications have different requirements and expect different values of these QoS characteristics. In other words, these parameters might be defined or requested independently of one another, according to the application being used. For example, an application might be very sensitive to delay but require only a small bandwidth, while another application might work well with a loose delay bound but need a large bandwidth and be sensitive to packet losses. Consequently, it is essential to be able to set these parameters independently for each traffic class.

The module chiefly responsible for maintaining QoS is the scheduling element. As such, it has been studied extensively and many scheduling algorithms have been presented in the literature [29][30][31]. Among these schemes, Generalized Processor Sharing (GPS) [32] is regarded as an ideal service discipline in terms of delay and fairness. Although it is based on the fluid model and is not viable for a packet network, its packet-by-packet versions, Weighted Fair Queuing (WFQ) and Worst-case Fair Weighted Fair Queuing (WF²Q) [29], approximate GPS closely. The main drawbacks of these schemes are their complexity and scalability problems.
A family of scheduling policies called rate-proportional servers [30] approximates GPS with a time complexity of O(1). In these schemes, the level of service received by session i is set by the value of a parameter called the session i rate, r_i. This parameter defines the allocated bandwidth of that session or traffic class. It is also the basis for calculating the delay bound associated with class i; that is, the delay bound of class i depends on its allocated rate and cannot be defined separately. These schedulers use a number of nondecreasing functions of time called potentials. A system potential function is used to keep track of the state of the whole system, and a session i potential is used to keep track of the service session i has received during its busy period. These potential functions should be updated continuously to avoid fairness discrepancies.

Frame-based Fair Queuing (FFQ) is a rate-proportional scheduler that recalibrates its system potential periodically, with a maximum period called the frame size. This service discipline is fair, simple to implement, and efficient. However, like other rate-proportional scheduling algorithms, it does not provide decoupled delay bound and bandwidth allocations.

Another important issue related to service discipline algorithms is the concept of link sharing. A link sharing policy is the method of assigning the bandwidth of a link to a set of packet classes. A class can be an aggregate of traffic streams having the same bandwidth requirements, or the traffic of an organization. The concept of link sharing [33][34] was first introduced mainly for congested periods, to restrict the service offered to some classes to their link-sharing bandwidth. This bandwidth can even be zero, for example for an email traffic stream. In deploying a link sharing policy, there are a number of critical objectives and issues that should be considered, as mentioned in [33][35]. First, each class should receive its assigned bandwidth over a suitable interval of time; this requirement should be met both in normal and in congestion situations. Second, the excess bandwidth should be distributed to all active sessions according to a rule, not arbitrarily. Excess bandwidth is the portion of the link bandwidth that is unused at a particular time, for example due to an inactive session. Third, link sharing is essentially a bandwidth-oriented method, but delay is as important as bandwidth for many applications; a link sharing implementation should not hinder the general scheduler in providing a delay guarantee to a well-behaved, real-time application. Fourth, link sharing is most important when congestion happens or a traffic aggregate misbehaves.
Therefore, it should allocate at least the minimum assigned bandwidth to all classes, including the misbehaving traffic.

Hierarchical Fair Service Curve (H-FSC) [35] is a recently introduced algorithm that presents a scheduling system with the decoupled delay-bandwidth feature. H-FSC is based on link sharing and the Service Curve Earliest Deadline first (SCED) [36][37] scheduling scheme. It is argued in [35] that, since there are conflicts between link-sharing and real-time requisites, it is not possible to realize both of them simultaneously at all times, although it is possible to closely approximate the desired model. Since H-FSC is based on service curves, it is complex to implement in many situations.

In this chapter, we present a link-sharing, real-time scheduler based on link-sharing and FFQ concepts. The complete scheduling system consists of a number of rate estimators and a modified FFQ. The novelty of the proposed scheme is that two sets of rates are considered: FFQ-rates and assigned rates. The FFQ-rates are used to determine the delay bound of each class and are part of the internal state of FFQ, while the assigned rates are used to set the bandwidth received by each class. The rate estimators measure the rates of the incoming aggregates. As long as the rates of all classes are well-behaved and within their assigned rates, the scheduler acts like a normal FFQ. However, if the actual estimated rate is above the assigned rate, the scheduler modifies the timestamp of the head-of-line packet of the misbehaving class to restrict it to its assigned rate.

4.2 Frame-based Fair Queuing (FFQ)

Frame-based Fair Queuing [38] belongs to the Rate-Proportional Scheduler (RPS) class [30] and has O(1) timestamp computation complexity and performance characteristics close to Generalized Processor Sharing (GPS). There are two types of system complexity related to each service discipline: timestamp computation complexity and priority sorting complexity. The former is the complexity of the algorithm used for calculating the timestamp of each incoming packet, and the latter is the complexity of the algorithm used to select the queue with the smallest timestamp. For traffic consisting of N sessions, GPS has O(N) and FFQ has O(1) timestamp computation complexity, while the sorted-priority complexity of all these scheduling systems depends on the sorting algorithm and is usually of the order of O(log₂(N)).

In a rate-proportional scheduler, session i receives service according to its allocated rate r_i.
Let W_i(τ, t) denote the service offered to session i during the interval (τ, t]; then the normalized service received by session i is W_i(τ, t)/r_i. The objective of a rate-proportional scheduler is to equalize the normalized service of all the busy sessions. An RPS scheduler uses a number of nondecreasing functions of time, called potentials, to keep track of the actual normalized service received by each session during its busy period. A potential function P(t) is defined for the whole system, and a session potential function P_i(t) is considered for each session, so that for any interval (τ, t]:

P_i(t) − P_i(τ) = W_i(τ, t) / r_i.    (4.1)

A system potential should satisfy two properties. First, the system potential must increase at a rate of at least one, that is, for any interval (τ, t] during which the system is busy:

P(t) − P(τ) ≥ (t − τ).    (4.2)

Second, the system potential is always less than the potential of any backlogged session. Let B(t) denote the set of backlogged sessions at time t; then the following holds:

P(t) ≤ P_i(t)   ∀ i ∈ B(t).    (4.3)

In an RPS system, these potential functions should be continuously updated to enable a close approximation to the actual fluid model of the equivalent GPS system. This recalibration is the key feature that gives the algorithm its fairness and is also its fundamental difficulty; different recalibration methods result in different rate-proportional schemes. The session i potential should be modified in such a way that the service missed by that session during the time it was not backlogged is also taken into account.

FFQ uses a frame-based recalibration method; that is, it updates the potential functions periodically with a maximum period of the frame size F. A frame period T is the interval of time needed to send F bits. The maximum amount of class i traffic that can be transmitted during a frame period is defined as φ_i, and f_i = r_i/C is the fraction of the link rate allocated to session i. The relations among these parameters are as follows:

φ_i = f_i · F = r_i · T.    (4.4)

φ_i is defined such that it is not smaller than the maximum packet size of class i, L_i:

L_i ≤ φ_i.    (4.5)

In FFQ, the delay bound of class i is determined by its rate r_i, which also defines the portion of bandwidth allocated to that class. For a link with capacity C and a scheduler with N classes, the total assigned rate should not exceed C:

Σ_{i=1}^{N} r_i ≤ C.    (4.6)

Reference [38] proposes a relaxed method of recalibrating the potential functions. Let τ_{k−1} be the last time a frame was updated; then the next update needs to be done when the following two conditions are met:

1) The potential of each active session in the equivalent GPS system belongs to the next frame:

P_i(t) ≥ kT   ∀ i ∈ B(t)    (4.7)

where B(t) is the set of active classes at time t.

2) The potential of each class is less than the beginning of the next frame:

P_i(t) < (k + 1)T,   i = 1, 2, ...    (4.8)
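As a small numeric illustration of equations (4.4)-(4.6) (all numbers below are assumed for illustration, not taken from the thesis):

    # Link of capacity C = 10 Mbit/s, frame size F = 100,000 bits.
    C = 10_000_000            # bits per second
    F = 100_000               # bits per frame
    T = F / C                 # frame period in seconds, here 10 ms (equation 4.4)

    rates = {"EF": 2_000_000, "AF": 5_000_000, "BE": 2_000_000}   # r_i, equation (4.6)
    assert sum(rates.values()) <= C

    # phi_i = f_i * F = r_i * T is the class-i traffic allowed per frame (4.4);
    # it must be at least the class's maximum packet size L_i (4.5).
    max_packet_bits = {"EF": 1_600, "AF": 12_000, "BE": 12_000}
    for cls, r in rates.items():
        phi = r * T
        assert max_packet_bits[cls] <= phi, cls
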
The implementation of FFQ can be simplified by introducing another function called the base-potential function. A base-potential function S_P(t) is a nondecreasing function with the following properties:

1) S_P(t) is less than the potential of every backlogged session:

S_P(t) ≤ P_i(t)   ∀ i ∈ B(t).    (4.9)

2) A finite constant ΔP > 0 exists such that:

S_P(t) ≥ P_i(t) − ΔP   ∀ i ∈ B(t).    (4.10)

[Figure 4.1: Behavior of the base-potential function and potential functions in a fluid FFQ server. P(t) is the system potential, S_P(t) is the base potential and P_i(t) is the session i potential function. The potential functions are recalibrated at points τ_1, τ_2, τ_3 and τ_4; a frame update point can fall anywhere within a window in which each session potential lies between kT and (k+1)T [38].]

Finally, it is shown that by using a step function as the system base-potential function and applying the above conditions, FFQ can be easily implemented. Figure 4.1 depicts the relation and behavior of the base-potential and system-potential functions in a fluid FFQ server. In practice, [38] shows that it is sufficient to execute the following two algorithms, one at the time of a packet arrival and the other at the time of a packet departure.

The algorithm executed on the arrival of a packet at an FFQ server has the following pseudocode:

    Calculate current value of system potential. Let t be the current time and t_s the
    time when the packet currently in service started its transmission.
    1. temp <- P + (t - t_s)/F
    Calculate the starting potential of the new packet
    2. SP(i, k) <- max(TS(i, k-1), temp)
    Calculate timestamp of packet
    3. TS(i, k) <- SP(i, k) + length(i, k)/φ_i
    Check if packet crosses a frame boundary
    4. n1 <- int(SP(i, k))
    5. n2 <- int(TS(i, k))
    6. if (n1 < n2) then   (if finishing potential is in next frame)
    7.     B[n1] <- B[n1] + 1   (increment counter)
    8.     mark packet
    9. end if

The algorithm executed on the departure of a packet from an FFQ server is this:

    Increase system potential by the transmission time of the packet just completed, say j.
    1. P <- P + length(j)/F
    Find timestamp of next packet for transmission
    2. TS_min <- min over i in B of TS(i)
    Determine the corresponding frame number
    3. F_min <- int(TS_min)
    Perform frame update operation if required
    4. if (packet j was marked) then
    5.     B[current-frame] <- B[current-frame] - 1
    6. end if
    7. if (B[current-frame] = 0 and F_min > current-frame) then
    8.     current-frame <- current-frame + 1
    9.     P <- max(current-frame, P)
    10. end if
    Store starting time of transmission of next packet in t_s
    11. t_s <- current time
    12. Retrieve packet from head of queue and transmit
In these algorithms, P is the potential state variable, SP is the starting potential of a packet, TS is the timestamp calculated for each packet, and B is an array used to keep track of frames and frame boundary crossings. F is the frame size, TS_min is the smallest TS among the head-of-line packets of the backlogged classes, and current-frame is a state variable holding the number of the current frame. All of these values are reset when the router becomes idle.

4.3 Decoupled Delay-Bandwidth FFQ (DDB-FFQ)

An FFQ scheduler possesses many of the properties expected from a service discipline: it is fair, efficient, and simple to implement. However, it has a crucial shortcoming in that the delay bound of each aggregate class depends on its assigned rate. Therefore, it is not possible to set the delay bound and bandwidth allocation of a session independently. In this section, a novel link sharing service discipline is proposed that overcomes this deficiency.

For a system with N traffic classes, the complete scheduling system consists of N rate estimators and a scheduler. The responsibility of each rate estimator is to assess the rate of its corresponding traffic class and to pass this estimated rate to the scheduler. The scheduler is a modified FFQ server that uses the statistics gathered by the rate estimators, in addition to its own FFQ state variables, to decide which queue it should serve next. The novelty of this scheme is that it uses two sets of rates for each traffic class: the FFQ-rates and the assigned rates. These two sets of rates realize the decoupled delay and bandwidth property of the system. The FFQ-rates set the delay bound of each traffic class and are equivalent to the r_i rates defined in a normal FFQ server, while the assigned rates set the bandwidth allocation of each aggregate. Each of these sets of rates should obey equation (4.6).

The functionality of the DDB-FFQ scheduling system is as follows. As long as each traffic source sends packets at or below its assigned rate, the scheduler works like a normal FFQ. Since each traffic class is offered service according to its FFQ-rate, its guaranteed delay bound is attained, and because all classes are conformant, they receive their assigned portions of bandwidth. A non-conformant traffic class j, however, is not served at its FFQ-rate. This is achieved by adjusting the timestamp of the Head-of-Line (HOL) packet of queue j when the sorted-priority function is performed: the adjusted value is used when the scheduler decides which queue to serve next, by choosing the queue with the minimum HOL timestamp value, but the packet's original timestamp is preserved.
This is done when the second line of the FFQ algorithm for a packet departure (see the previous section) is run. This action prevents the other classes from service degradation and allows them to receive their portions of bandwidth within their delay bounds. While the non-conformant traffic is served at its assigned rate, its delay bound is not met, which is logical in the context of the DiffServ model. There are a number of ways of performing this adjustment. Since this algorithm is meant to be used in a DiffServ-based UMTS router with best-effort traffic as its lowest-priority class, the following adjustment is proposed:

TS(j) ← TS(j) · avg(j) / min(r).    (4.11)

In this equation, TS(j) is the timestamp of the packet at the head of line of non-conformant class j, avg(j) is its estimated rate, and min(r) is the smallest assigned rate in the system. This value is used instead of the actual TS(j) in the second line of the algorithm for packet departure.

The value of the weight ω used in the exponentially weighted moving average of the rate estimator was chosen as 0.002. This is the recommended value for an EWMA system for estimating the rate, and it was chosen so that the estimator tracks the average rate while following changes of the stream rate with a reasonable time constant. However, this parameter can be used to set the time constant of the system as desired.

DDB-FFQ can also be described as a link sharing system. In this model, the general scheduler uses a normal FFQ scheme and the link-sharing scheduler is the DDB-FFQ service discipline. In the context of link sharing, the general scheduler is used during normal operating conditions of the network, and the link-sharing scheduler is used during congestion. When congested or faced with non-conformant traffic, the offered service rate of that session is limited to its allocated rate, and therefore the delay bound guarantee is not met. In this system, the excess bandwidth is distributed among all the active classes according to their FFQ-rates; when congestion happens or a misbehaving class exists, it is distributed according to the method used for the timestamp adjustment. In our case, it is distributed according to the adjustment shown in equation (4.11): among well-behaving classes according to their FFQ-rates, and among misbehaving classes in proportion to the smallest FFQ-rate in the system. A small sketch of this modified selection step is given below.
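This is our illustration of equation (4.11) only; the surrounding FFQ state machine of Section 4.2 is assumed to exist elsewhere, and the names are hypothetical:

    def next_queue(backlogged, ts, est_rate, assigned_rate, min_assigned_rate):
        """Select the class to serve next, DDB-FFQ style (sketch).

        ts[i]            -- FFQ timestamp of the head-of-line packet of class i
        est_rate[i]      -- rate reported by the class-i rate estimator
        assigned_rate[i] -- bandwidth allocated to class i
        Only the comparison key of a non-conformant class is inflated, as in
        equation (4.11); the stored timestamps themselves are left untouched.
        """
        def key(i):
            if est_rate[i] > assigned_rate[i]:                   # misbehaving class
                return ts[i] * est_rate[i] / min_assigned_rate   # equation (4.11)
            return ts[i]                                         # conformant: plain FFQ order
        return min(backlogged, key=key)

    # Example: class 2 exceeds its assigned rate, so class 0 is served first
    # even though class 2 has the smaller raw timestamp.
    print(next_queue([0, 2], ts={0: 1.2, 2: 1.0},
                     est_rate={0: 1e6, 2: 3e6},
                     assigned_rate={0: 1e6, 2: 2e6},
                     min_assigned_rate=1e6))      # -> 0
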
In a DDB-FFQ server, the delay bound observed by an aggregate is determined by its assigned FFQ-rate; therefore, the proper setting of the FFQ-rate value is the key factor that leads to a favorable delay bound. It is shown that for an FFQ scheduling scheme the delay of session i is bounded according to the following equation [30]:

Delay_i(FFQ) ≤ L_i/r_i + L_max/R    (4.12)

where L_i is the maximum packet size of session i, r_i is the FFQ rate assigned to session i, L_max is the maximum packet size over all sessions, and R is the output service rate of the scheduler. Given its inheritance from the FFQ scheme, this equation can be directly applied to a DDB-FFQ system as well.

This equation illustrates some important facts. First, the delay bound is inversely proportional to the FFQ-rate. Second, the delay bound of session i depends on the maximum packet size of session i traffic. Third, it also depends on the maximum packet size of all sessions, even of the session with the lowest priority; this is a logical consequence of the fact that DDB-FFQ is a work-conserving, non-preemptive service discipline.

In the context of a DiffServ-based UMTS backbone network, the dependency of the delay bound on packet size can easily be overcome by the IP tunneling capability of the UMTS backbone network. At the time of packet encapsulation, the ingress edge router performs packet segmentation to limit the maximum packet sizes; the segmented packets can be reassembled later at the egress edge router. Knowing the maximum packet sizes and the FFQ-rate values of the DDB-FFQ server, the router operator can set the delay bound of every PHB according to its requirements.

4.4 The Rate Estimator Module/Meter

One of the building blocks of the proposed link-sharing, real-time scheduling system is the rate-estimator module. This element determines the rate of each class, which is the basic statistic fed to the DDB-FFQ scheduler. DDB-FFQ uses this data to manage the bandwidth allocation of each session and to provide its link sharing capability. In the context of a DiffServ-based UMTS backbone router, this module can serve additional purposes; that is, it can also act as the metering element for traffic shaping, remarking and/or billing.

There are several ways of implementing a rate estimator. The usual algorithm used for estimating a traffic rate is the exponentially weighted moving average (EWMA) lowpass system [33], which is also the method used in the proposed DDB-FFQ scheme. The key parameters of this estimator are its time constant and the frequency at which the rate estimation is performed. The rate estimator used in DDB-FFQ updates the estimated rate at the time of each packet arrival. The algorithm actually first estimates the packet inter-arrival time: it calculates the difference between the actual and allocated inter-arrival times of each packet and then computes the EWMA of these differences.
From the estimated packet inter-arrival time, its reciprocal, the packet rate, is determined. Assume t is the interval between the arrival of the previous packet and the arrival of the most recent packet. Also assume S is the size of the arrived packet, b is the bandwidth allocated to this packet's class, and S/b is the allocated inter-arrival time for this class. Then the discrepancy between the allocated and actual inter-arrival times is:

diff = t − S/b.    (4.13)

[Figure 4.2: Variables used for inter-arrival time estimation — allocated inter-arrival time S/b versus actual inter-arrival time t.]

The exponentially weighted moving average (EWMA) model is then used to calculate the average of diff:

avg_diff ← avg_diff · (1 − ω) + diff · ω    (4.14)

where ω is the weight factor and is the key parameter for setting the EWMA's time constant. By applying avg_diff in place of diff in equation (4.13) and considering t = S/r̂, we have:

avg_diff = t − S/b = S/r̂ − S/b    (4.15)

where r̂ is the estimated rate. Finally, the estimated rate is simply calculated as:

r̂ = S / (avg_diff + S/b).    (4.16)

The weight ω determines the time constant of the estimator. The time constant is the time it takes for the system to move 63% of the way from its original value to a new value. Therefore, to calculate the time constant of this estimator, we first calculate the number of packets it takes for the value of avg_diff to move 63% away from its initial value. Suppose diff changes from 0 to a value I, let c = ω·I, and let Y_k represent avg_diff at the arrival of packet k. Then equation (4.14) can be rewritten in the form of a difference equation:

Y_k = (1 − ω) Y_{k−1} + c,   Y_0 = 0.    (4.17)

The solution to this equation is simply Y_k = (1 − ω)^k (Y_0 − c/ω) + c/ω. Therefore k, the number of packets required for Y_k to reach 0.63·c/ω, is:

k = 1 / |ln(1 − ω)|.    (4.18)

This means that if the rate changes, and therefore the diff variable changes, it takes about 1/|ln(1 − ω)| packets before the value of avg_diff moves 63% away from its initial value. Since the packet inter-arrival time can be taken as S/b, the approximate time constant is:

T_c ≈ (S/b) / |ln(1 − ω)|.    (4.19)

Chapter 5

Mapping and Classification

5.1 Introduction

Mapping and classification are two related functions that a DiffServ node must perform to enable proper service differentiation. The mapping function is the act of translating the QoS parameters of one system to those of another QoS system, in order to maintain a consistent level of service received by a packet along its transmission path, from its source to its destination. In UMTS, the required QoS is identified by a number of attributes associated with each session. DiffServ uses its own method of differentiating between distinct aggregates, marking the IP header of their packets and handling them with different PHBs.
When DiffServ is deployed in the UMTS backbone network, the QoS parameters of a packet crossing the boundary of the core network should be mapped to the QoS constraints of the new system at the edge router. If an incoming packet originates from an MS and enters the core network, its UMTS QoS attributes should be mapped to an equivalent DSCP, which is pre-defined in the DiffServ-aware core network. If the packet leaves the core network, originating from an external source, the PHB should be translated to equivalent UMTS QoS parameters. Mapping might also be needed between the external bearer service QoS system and DiffServ. This chapter focuses mainly on the mapping function from the UMTS QoS system to DiffServ.

Classification, on the other hand, is the act of splitting incoming traffic into a number of output streams by using filters. In DiffServ, the traffic is simply classified by applying match conditions to the DSCP field of the packet header.

This chapter first studies the QoS mechanism of UMTS/GPRS and that of DiffServ. Then the function of mapping these QoS systems to each other is addressed, and finally the classification methods are presented.

5.2 UMTS/GPRS QoS Classes and Attributes

In UMTS/GPRS, Quality of Service (QoS) is conveyed by a QoS profile. A QoS profile is a part of the PDP context that is negotiated with the user to acquire the demanded grade of service. It is a single parameter with multiple attributes, each of which defines a specific characteristic of the traffic. These attributes, as defined in the specification, are traffic class, maximum bitrate, guaranteed bitrate, delivery order, maximum SDU size, SDU format information, SDU error ratio, residual bit error ratio, delivery of erroneous SDUs, transfer delay, traffic handling priority, and allocation/retention priority [2]. Some of these attributes are explained below.

Traffic Class: The most noteworthy attribute is the traffic class. In defining the traffic classes, the restrictions and limitations of the air interface have been taken into consideration, so as to allow reasonable service provisioning while avoiding complex mechanisms. There are four UMTS traffic classes: conversational class, streaming class, interactive class and background class, listed according to their relative priority, from highest to lowest. The conversational and streaming classes are meant to carry real-time traffic streams; the main distinctions between them are their delay and delay-variation sensitivities. The interactive and background classes, however, are intended to cover all traditional Internet applications, such as WWW, email and FTP.
Conversational Class: This class is used for conversational and multimedia applications, such as GSM telephony speech, voice over IP and video conferencing. These applications are mainly used between two (or more) human end-users. The fundamental QoS characteristics of this class are preservation of the time variation between consecutive entities of the traffic, and stringent low delay.

Streaming Class: This class carries the traffic of a video or audio stream from a host machine to a human end-user, and is basically a one-way transport. Its QoS characteristic is that the time relation between consecutive entities of the traffic must be preserved; however, it has a much lower delay sensitivity compared to the conversational class. Examples of its applications are MP3 audio streaming and Internet radio.

Interactive Class: This class is for applications where an end-user (human or machine) is on-line, requesting data from a host machine. The main characteristic of this class is its request-response pattern, with a lower delay sensitivity compared to the above traffic classes. It may only need payload content preservation.

Background Class: When an end-user (mainly a machine) sends or receives data in the background without any delay sensitivity, this class is applied. Examples of this kind of application are email and database downloading.

Maximum Bitrate (kbps): This attribute defines the maximum rate that can be delivered to or by UMTS. Traffic is conformant with a defined maximum bitrate when shaped by a token bucket with a token rate equal to the maximum bitrate and a bucket size equal to the maximum SDU size.

Guaranteed Bitrate (kbps): This attribute defines the guaranteed rate, that is, the guaranteed number of bits delivered by UMTS within a period of time, divided by the duration of that period. QoS requirements, such as delay and reliability, are only guaranteed for traffic rates up to this guaranteed bitrate.

Delivery Order: This attribute specifies an application requirement, namely whether its packets need to be delivered in order or not.

Maximum SDU Size: This attribute defines the maximum allowable SDU size.

SDU Format Information: This attribute defines the list of possible exact sizes of the SDU. If an application is able to state its SDU sizes, the bearer services are less expensive, because this data can be used by UTRAN at the RLC layer.

SDU Error Ratio: This attribute indicates the ratio of dropped or erroneous packets of a conforming traffic.

Transfer Delay (ms): This attribute indicates the maximum delay in the 95th percentile of the distribution of delay observed when an SDU is carried from one Service Access Point (SAP) to another SAP.

Delivery of Erroneous SDUs: This attribute indicates whether an erroneous SDU shall be delivered or discarded.
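The conformance rule quoted above for the maximum bitrate attribute (a token bucket whose token rate is the maximum bitrate and whose depth is the maximum SDU size) can be expressed directly in code. The following is a minimal sketch under exactly those assumptions; the structure and function names are hypothetical and time is simply taken as seconds.

```c
/*
 * Token-bucket conformance test for the maximum-bitrate attribute:
 * token rate  = maximum bitrate, bucket depth = maximum SDU size.
 * Illustrative only.
 */
struct token_bucket {
    double rate_bytes_per_s;  /* token rate: maximum bitrate           */
    double depth_bytes;       /* bucket size: maximum SDU size         */
    double tokens;            /* current token level                   */
    double last_update;       /* time of the last update, in seconds   */
};

static void tb_init(struct token_bucket *tb, double rate, double depth, double now)
{
    tb->rate_bytes_per_s = rate;
    tb->depth_bytes = depth;
    tb->tokens = depth;        /* start with a full bucket */
    tb->last_update = now;
}

/* Returns 1 if an SDU of sdu_bytes arriving at time `now` is conformant. */
static int tb_conformant(struct token_bucket *tb, double sdu_bytes, double now)
{
    tb->tokens += (now - tb->last_update) * tb->rate_bytes_per_s;
    if (tb->tokens > tb->depth_bytes)
        tb->tokens = tb->depth_bytes;
    tb->last_update = now;

    if (sdu_bytes <= tb->tokens) {
        tb->tokens -= sdu_bytes;  /* consume tokens for a conformant SDU */
        return 1;
    }
    return 0;                     /* non-conformant: no guarantees apply */
}
```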
Figure 5.1: A schematic of the structure of the GPRS backbone network (the MT/TE bearer service, radio bearer service, backbone bearer service between SGSN and GGSN, and external bearer service, which together make up the UMTS bearer service).

Table 5.1 shows the range of values expected for the UMTS bearer service QoS attributes.

                          Conversational   Streaming       Interactive     Background
                          Class            Class           Class           Class
Max bitrate (kbps)        2048             2048            2048            2048
Delivery order            Yes/No           Yes/No          Yes/No          Yes/No
Max SDU size (octets)     1500 or 1502     1500 or 1502    1500 or 1502    1500 or 1502
Erroneous SDU delivery    yes/no           yes/no          yes/no          yes/no
Residual BER              5e-2 to 1e-6     5e-2 to 1e-6    4e-3 to 6e-8    4e-3 to 6e-8
SDU error ratio           1e-2 to 1e-5     1e-1 to 1e-5    1e-3 to 1e-5    1e-3 to 1e-5
Transfer delay (ms)       max 100          max 250         -               -
Guaranteed bit rate       2048             2048            -               -

Table 5.1: Expected range for UMTS bearer service QoS attributes

5.3 DiffServ PHB Groups

The DiffServ Working Group of the IETF has defined a number of different PHB groups for different applications. This section presents an overview of the up-to-date PHBs.

Best Effort (BE) PHB Group: BE is the traditional Internet traffic class, in use from the beginning. This PHB implies that some network resources are available and that there is a good-faith commitment that the nodes in the path will do their best to forward the packet in a fair manner, but there is no guarantee of its delivery or of its received level of service. The recommended DSCP for this PHB is 000000.

Expedited Forwarding (EF) PHB Group [39][40]: The EF PHB is aimed at providing a low-loss, low-latency, low-jitter, assured-bandwidth, edge-to-edge service. It has also been called Premium Service. A codepoint of 101110 is recommended as its DSCP; therefore, there can be only one instance of EF in a DS domain if one follows the recommended DSCP.

Assured Forwarding (AF) PHB Group [41]: This group provides four independently forwarded AF classes. A packet in each of these AF classes can be assigned one of three drop precedences. Table 5.2 summarizes the recommended DSCPs for these twelve instances of AF.

Drop Prec.     Class 1     Class 2     Class 3     Class 4
Low            001010      010010      011010      100010
Medium         001100      010100      011100      100100
High           001110      010110      011110      100110

Table 5.2: Recommended DSCP values for the AF PHB

The value of each class codepoint can be read as a six-bit binary number of the form 'cccddd', in which 'ccc' identifies the AF class and 'ddd' indicates its drop precedence.

Network Control (NC) PHB Group: An NC packet is used for carrying any signaling data needed in the DiffServ control plane to exchange management information between DiffServ routers within a DiffServ domain. Its recommended codepoint is 11x000.
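The 'cccddd' structure of the AF codepoints makes them easy to compose and decompose programmatically. A small sketch (hypothetical helpers) that reproduces the twelve recommended values of Table 5.2 follows.

```c
/*
 * AF codepoints have the form 'cccddd': 'ccc' is the AF class (1..4)
 * and 'ddd' encodes the drop precedence.  The recommended values use
 * ddd = 010, 100, 110 for low, medium and high drop precedence.
 */
#include <stdint.h>
#include <stdio.h>

/* drop precedence: 1 = low, 2 = medium, 3 = high */
static uint8_t af_dscp(unsigned af_class, unsigned drop_prec)
{
    return (uint8_t)((af_class << 3) | (drop_prec << 1));
}

static unsigned af_class_of(uint8_t dscp)     { return (dscp >> 3) & 0x7; }
static unsigned af_drop_prec_of(uint8_t dscp) { return (dscp >> 1) & 0x3; }

int main(void)
{
    /* Prints the twelve recommended AF codepoints of Table 5.2. */
    for (unsigned c = 1; c <= 4; c++) {
        for (unsigned p = 1; p <= 3; p++) {
            uint8_t d = af_dscp(c, p);
            printf("AF class %u, drop prec %u -> %u%u%u%u%u%u (class %u, prec %u)\n",
                   c, p,
                   (d >> 5) & 1, (d >> 4) & 1, (d >> 3) & 1,
                   (d >> 2) & 1, (d >> 1) & 1, d & 1,
                   af_class_of(d), af_drop_prec_of(d));
        }
    }
    return 0;
}
```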
Lower than Best Effort (LBE) PHB Group [42]: The primary goal of defining this PHB group is to provide an alternative to best-effort traffic at times of congestion. This PHB carries the packets of applications that are given even less forwarding assurance than best-effort traffic, so they are discarded first when congestion occurs.

Dynamic Real-time/Non-real-time (RT/NRT) PHB Group [43]: This group provides delivery of real-time and non-real-time traffic.

Bulk Handling (BH) Per-Domain Behavior (PDB) Group [44]: The BH PDB is used for traffic that may be starved even in a properly functioning network. This starvation is not a requirement of the PHB, but serves as a point of comparison with the best-effort PHB. It can be used for carrying the packets of applications that have sufficiently low value.

Alternative Best Effort (ABE) PHB Group [45]: This PHB group gives best-effort applications an option: to choose between receiving lower delay or receiving higher throughput, by defining two types of packets, green and blue.

Assured Rate (AR) PDB Group [46]: This PDB defines the overall treatment a packet receives throughout a DiffServ domain. It is suitable for carrying the traffic of applications that need rate assurance but do not require delay bounds.

5.4 Mapping

The mapping function is the first and most important step in supporting the QoS of a UMTS stream. This is where a PHB suitable for a session is considered and a DSCP is selected for it accordingly. The selected PHB defines the level of service that the stream will observe throughout the DS domain.

The mapping function is, however, an implementation-dependent task. It depends on the number of different QoS levels provided and on the QoS granularity. In this section a basic mapping system is presented. A simpler but completely analogous mapping method has been used in our simulation model.

A close look at the standard PHBs suggested by the IETF shows that there are not enough PHBs to cover a set of PHBs as fine-grained as the UMTS QoS system. Nevertheless, a basic methodology can be formed. Motivated by the UMTS traffic classes, we classify the traffic types into five PHB groups, G0 to G4:

G0: This group is used for the special, infrequent network management control traffic dedicated to the DiffServ control plane for the purpose of signaling between network elements. This group has a very high priority over the other groups. A DSCP with the value 11x000 has been assigned to the NC PHB, so two subclasses are available when the recommended DSCP is used.

G1: This group is used as an aggregate of traffic streams that have stringent delay sensitivity, for example, the conversational traffic class in UMTS. The proper PHB for this group is the EF PHB. Nevertheless, only one DSCP value, 101110, was advised by the IETF.
No other subgroups can be defined if one wants to be consistent with the standard. However, this problem can easily be overcome by using some other DSCPs, as allowed in the IETF specifications, or by mapping subgroup one to the EF PHB and mapping the rest of the subgroups to the higher-priority AF PHBs. These two choices only affect the DSCP values chosen; they do not affect the overall performance of the system, as long as the proper PHB policies are selected.

G2, G3: Group two is aimed at aggregating streams of the streaming traffic class, and group three is for the interactive traffic class. The most rational PHBs for these groups are a set of Assured Forwarding PHB groups, which provides a total of up to twelve subgroups.

G4: This group is used for aggregating non-delay-sensitive traffic. It is equivalent to the UMTS background traffic class. This group can have a number of subgroups and can be mapped to these proposed PHBs: the best effort PHB, lower than best effort PHB, alternative best effort PHB and bulk handling PHB.

5.5 Classification

In the DiffServ context, a classifier is a functional block that looks at the DSCP value marked in each IP packet's header to split an incoming traffic stream into N output streams. An important characteristic of a classifier is that it should work unambiguously and classify the incoming traffic uniquely.

A classifier is usually implemented by using filters, and is identified by its set of filters. A filter is a DiffServ block that applies a set of conditions to the value of a classification key, such as the packet header, content, or attributes, to categorize the packet into a finite set of groups. In a behavior aggregate (BA) DiffServ classifier, the classification key is the packet's DSCP field. In a BA classifier, a filter is defined by a value that is matched against the DSCP of the incoming packet to determine its logical output stream. This value can be a six-bit binary number, like 111000, and can be applied with different match conditions, such as wildcard, prefix, masked, range and/or exact match. A Multi-Field (MF) classifier classifies packets based on two or more fields in a packet header. The most common header fields used are the destination address, source address and DSCP in the IP header, and the source port and destination port in the TCP header.

For a DiffServ-based UMTS backbone router, a packet's PHB can be determined just by the value in the DSCP field of its IP header. Therefore, a simple BA classifier is sufficient, which uses exact match conditions for the conversational, streaming and interactive traffic groups. An exact match with a filter value of 000000 is used for the best-effort subgroup of the background traffic group, and a wildcard (******) match is used to put any packet with an unknown or ambiguous DSCP value into an output stream with lower priority, such as the lower than best effort or bulk handling PHBs. A sketch of such a classifier is given below.
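The following is one possible, purely illustrative arrangement of such a BA classifier: exact-match filters on the DSCP with a wildcard fallback. The DSCP constants follow the recommended values quoted earlier; the output-stream identifiers are hypothetical names invented for this sketch.

```c
/*
 * Behaviour-aggregate (BA) classifier sketch: exact/masked matches on
 * the DSCP, with a wildcard filter catching unknown codepoints.
 * Output stream identifiers are arbitrary names for this example.
 */
#include <stdint.h>

enum out_stream {
    STREAM_NC,           /* network control                    */
    STREAM_CONV,         /* conversational  (EF)               */
    STREAM_STREAMING,    /* streaming       (AF1x)             */
    STREAM_INTERACTIVE,  /* interactive     (AF2x)             */
    STREAM_BE,           /* background, best effort            */
    STREAM_LOW_PRIO      /* unknown DSCP -> lower-than-BE / BH */
};

static enum out_stream classify(uint8_t dscp)
{
    if ((dscp & 0x37) == 0x30)   /* 11x000: network control */
        return STREAM_NC;
    if (dscp == 0x2E)            /* 101110: EF               */
        return STREAM_CONV;
    if ((dscp >> 3) == 0x1)      /* 001ddd: AF1x             */
        return STREAM_STREAMING;
    if ((dscp >> 3) == 0x2)      /* 010ddd: AF2x             */
        return STREAM_INTERACTIVE;
    if (dscp == 0x00)            /* 000000: best effort      */
        return STREAM_BE;
    return STREAM_LOW_PRIO;      /* wildcard: anything else  */
}
```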
Chapter 6

Simulation Model and Results

This chapter addresses the simulation model and results, presenting the discussion in three parts. The first part presents simulation results comparing the performance of the RED on shared buffer schemes, as discussed in Chapter 3, under both TCP and non-TCP traffic models. The second part presents simulation results related to the scheduling system, as discussed in Chapter 4, demonstrating the decoupled delay-bandwidth capability of DDB-FFQ and its link sharing ability. While parts one and two evaluate the scheduling and dropping elements of a complex DiffServ system individually in a single-node model, part three presents the simulation model of a simple DiffServ-based UMTS backbone network with three nodes and a more realistic traffic model. This model utilizes a more complete configuration of a DiffServ router to add QoS support. The scheduling block of each router is based on the DDB-FFQ service discipline.

The OPtimum NETwork performance (OPNET) modeler is the simulation environment used for building and simulating these models. OPNET is a renowned, widely accepted, industry-oriented software package that enables researchers to model and simulate complex network scenarios, communication protocols and traffic models. It is based on a hierarchical structure that allows unlimited subnetwork nesting. While OPNET has a fair number of pre-defined models, it also allows new models to be built by using finite state machine modeling at its lowest level, which can be programmed in the Proto-C language. Proto-C is a combination of general C/C++ facilities and an extensive library of built-in, high-level commands, known as Kernel Procedures.

OPNET has an easy-to-use graphical user interface and a variety of tools for specification, data collection, simulation, analysis, and presentation of results. The package consists of several editors, each focusing on a particular aspect of modeling. It is based on an essentially hierarchical structure for modeling actual networks or distributed systems. There are three major modeling domains: the network domain, the node domain, and the process domain.
For example, to model a simple office LAN, the actual network is first modeled in the network editor using a few nodes and links. Each of these nodes is defined by a node model consisting of a number of modules. Finally, the functionality of each of these modules is described in a process model, using state machines and Proto-C programming.

Data collection is performed in the form of three types of output: output vectors, output scalars and animation. An output vector is a collection of pairs of real values and usually presents the system's performance versus simulation time. In contrast, output scalars are used to collect individual data points across several instances of a simulation. Animation is rarely used, but it provides a powerful visual tool for debugging.

6.1 Dropper Element

This section compares the characteristics of the different RED on shared-memory buffer algorithms that were presented in Chapter 3. Although RED is basically used to interact with an end-to-end congestion manager, such as the TCP congestion control and avoidance mechanism, both TCP and non-TCP traffic sources should be studied for its deployment. The Endpoint Congestion Management Working Group of the IETF intends to integrate congestion management across all applications and transport protocols by introducing a universal congestion manager (CM) [18]. This RFC proposes an integrated congestion management scheme, universal among all applications and transport protocols, that enables application adaptation to congestion, in addition to providing efficient multiplexing. However, until the required CM module is implemented, TCP will remain the only transport protocol that utilizes an end-to-end congestion manager. Many other applications use UDP, and their congestion avoidance algorithms are inadequate or non-existent. Almost all real-time and streaming applications are UDP-based and non-responsive to congestion notification. Therefore, it is convenient to classify the traffic streams into two groups: TCP-compatible flows and non-responsive flows.

This section therefore consists of two discussions, corresponding to the above traffic categorization: one for non-TCP and one for TCP traffic flows. In the former, the traffic handling capability of the router in terms of packet loss and queuing delay is evaluated for instances when early packet dropping has no meaning to the source.
In the latter, the performance of the algorithms in avoiding congestion while maximizing the use of network capacity is tested for instances when the traffic source can sense traffic conditions. The simulation results are used to evaluate and compare the following properties of the schemes:

• the loss ratio seen when traffic sources do not use an end-to-end congestion manager;

• the loss ratio and utilization when TCP is the transport protocol of the traffic used, in addition to the throughput of the system;

• the packet delay.

Although the main focus here is to compare the performance characteristics of RED algorithms on a shared-memory buffer, RED on dedicated and shared-memory buffers are also compared, because of the identical structures of an R-CP scheme and RED on dedicated-memory queues.

6.1.1 Simulation Model

Although two different traffic models, TCP-based and non-responsive, have been used, the simulation model itself has a single structure. Figure 6.1 shows the schematic of the simulation model. It consists of four traffic sources, a gateway with a shared-memory buffer and RED deployment, and the traffic destination. Four traffic generators provide four traffic streams which are multiplexed into a single flow by the Mux module. The gateway itself is composed of a classifier element that separates the incoming traffic into four classes, a shared-memory buffer with RED deployed on it, and a scheduler. The traffic patterns of the traffic generators are described later in each section. The scheduler is a simple work-conserving, non-preemptive, round-robin scheduler; a simple scheduler has been deployed so that the evaluation reflects only the characteristics of the dropping algorithms.

To compare the three congestion management schemes, three scenarios have been evaluated. These scenarios differ only in their RED algorithms; the traffic patterns, simulation parameters and other parts are identical, making the comparison possible.

6.1.2 Performance of RED on Shared-Memory Buffer Without End-to-End Congestion Control

For a non-responsive traffic source, packet dropping at the router has no meaning in terms of congestion notification. Still, RED dropping retains some of its advantages, such as a lower average queuing delay and a lower bias against bursty traffic.

In this model each of the traffic sources (Figure 6.1) generates packets with exponentially distributed inter-arrival time and service duration.
Figure 6.1: The schematic of the simulation model used to evaluate the algorithmic droppers (traffic sources, Mux, classifier, shared-memory queues, scheduler, and host).

The RED parameters for each subqueue are chosen as follows [47]:

    min-th(i) = 5 * average packet size
    max-th(i) = 3 * min-th(i)
    weight factor = 0.002
    max drop probability = 0.1

The values of the maximum and minimum thresholds for the whole queue (used in R-CS and R-SMA) are calculated as:

    max-th = sum over i of max-th(i)
    min-th = sum over i of min-th(i)

Two types of traffic patterns have been used to allow performance evaluation of these schemes under different conditions. These patterns are called cases A and B.

Case A: In this case, all sources generate symmetric traffic, that is, each has an average base traffic rate of 9600 bps and the gateway has a service rate capability of 38400 * 1.03 bps at all times, where the 1.03 factor is used to ensure stability. Both packet inter-arrival times and packet lengths are generated using exponential distributions. In this case the performance of the system is evaluated when all classes are generating packets with overloaded rates of 100% to 400%. Therefore, every class suffers the same amount of overload and the router is congested with the same share of each class. This traffic pattern was chosen to demonstrate the performance comparison of the RED schemes under the same congestion conditions. The physical queue size of the buffer in each scenario has been set to infinity to allow evaluation of the packet dropping due to the RED algorithm only. Still, the queues do not grow continuously, because of the maximum threshold used in the RED algorithms. Table 6.1 shows the values of the RED parameters used in the model.

min-th (bits)   max-th (bits)   Buffer size   Weight factor   Max drop probability
48000           114000          infinite      0.002           0.1

Table 6.1: RED parameters

Figure 6.2 shows the packet drop ratio of the three schemes when their load is increased from 100% to 400% of their base rates. The packet drop ratio is the number of packets dropped over the total number of packets generated during the simulation. Due to the RED algorithm, the schemes initiate packet dropping at approximately 75% of the normal rate, uniformly. However, beyond a 150% load, they show distinct packet losses. This figure shows that when traffic is symmetric, R-CS has the lowest, and R-CP the highest, packet dropping rate.

Figure 6.3 shows the EWMA queue size of the buffer for each of the schemes. This is not indicative of the average delay of each queue, but shows the overall status of the buffers, as it is the average value that the RED algorithms calculate to decide whether a gateway is congested or not. This figure also shows that R-CS has a lower value, indicating that it handles traffic more smoothly, without issuing premature congestion notifications.
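To make the comparison more concrete, the sketch below shows one way the drop decision could be organised in an R-CS-style shared buffer: the standard RED average-queue test applied to the total shared occupancy, with the aggregate thresholds of Table 6.1. This is only an illustration under that assumption; the scheme definitions themselves are those of Chapter 3, and refinements such as the inter-drop packet count are omitted.

```c
/*
 * Illustrative RED drop test applied to the total occupancy of a
 * shared-memory buffer (R-CS style).  The thresholds are the sums of
 * the per-class thresholds, as described above.
 */
#include <stdlib.h>

struct red_state {
    double avg;      /* EWMA of the total queue size (bits)     */
    double w;        /* weight factor, e.g. 0.002               */
    double min_th;   /* sum of per-class minimum thresholds     */
    double max_th;   /* sum of per-class maximum thresholds     */
    double max_p;    /* maximum drop probability, e.g. 0.1      */
};

/* Returns 1 if the arriving packet should be dropped. */
static int red_should_drop(struct red_state *s, double total_queue_bits)
{
    s->avg = (1.0 - s->w) * s->avg + s->w * total_queue_bits;

    if (s->avg < s->min_th)
        return 0;                       /* accept unconditionally */
    if (s->avg >= s->max_th)
        return 1;                       /* forced drop            */

    /* Drop probability ramps linearly between the two thresholds. */
    double p = s->max_p * (s->avg - s->min_th) / (s->max_th - s->min_th);
    return ((double)rand() / RAND_MAX) < p;
}
```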
Figure 6.2: Packet drop ratio versus offered load (%).

Figure 6.3: Average queue size versus offered load (%).

Case B: In this case, the characteristics of the schemes under a different traffic pattern are evaluated. Here, only two of the four traffic generators, class 1 and class 3, cause congestion at the router. These sources overload the router in four intervals during the simulation. In a simulation of one hour, they constantly generate packets at their base rates, with an added traffic load of 150% to 300% during minutes 10 to 14 and 35 to 39 for class 1, and during minutes 20 to 24 and 50 to 54 for class 3. The simulation RED parameters are set according to Table 6.2:

min-th(i)     max-th(i)      Rate of class 0 & 2   Base rate of class 1 & 3   Buffer size
48000 bits    600000 bits    8000 bps              9600 bps                   1.25 * max-th

Table 6.2: RED parameters

Figure 6.4: Packet drop ratio versus overload (%) for R-SMA, R-CS and R-CP.

Figure 6.4 shows the packet drop ratio of R-CP, R-CS and R-SMA under this new traffic pattern. The horizontal axis, overload %, is the percentage of the added load for traffic class 1 (or 3). This figure shows the same results as Figure 6.2, with a more distinct performance difference. It shows that R-CS has the smallest, and R-CP the largest, overall packet dropping rate. This is due to the fact that in R-CS the unused space of other traffic aggregates can be shared. However, the smaller total packet dropping of R-CS comes at the price of larger packet dropping rates for classes 0 and 2, as seen in Figure 6.5 for class 0. Nevertheless, this higher packet dropping happens only at high values of overload and has a relatively small drop ratio. Figure 6.6 shows the packet dropping ratio of traffic class 1. Because of their analogous traffic patterns, the performance of class 2 is similar to that of class 0, and class 3 is similar to class 1.

Figure 6.5: Packet drop ratio of class 0.

Figure 6.6: Packet drop ratio of class 1.

Figures 6.4, 6.5 and 6.6 together show that using R-SMA under the mentioned traffic pattern improves the packet dropping rates of the overall buffer and of classes 1 and 3 in the worst-case load scenario, with only a small increase in the packet dropping of well-behaved traffic.

Figure 6.7: Queuing delay versus time (min).
To demonstrate the time-based behavior of the schemes, the simulation results for a model with the same traffic pattern as above, and with a class 1 and a class 3 added overload of 175%, were sampled.

Figure 6.7 shows the queuing delay of the R-CS, R-CP and R-SMA schemes for a simulation duration of one hour with the traffic pattern of case B. As expected, the lower packet dropping of R-CS yields a higher delay, while the higher drop rate of R-CP leads to a lower delay at times of congestion. However, these schemes are proposed for the interactive and background traffic classes, which give content preservation a higher priority than packet delay. In addition, the maximum queuing delay can be controlled by the queue size. This figure also clearly shows the traffic pattern: as expected, congestion occurs in the intervals [10, 14], [20, 24], [35, 39] and [50, 54]. Since the R-CS scheme has lower packet drops, it has more backlogged packets and it takes longer to return to its normal queuing delay.

Figures 6.8, 6.9 and 6.10 show the dropping patterns of the schemes during a sampled simulation. These figures demonstrate two facts. First, packet dropping in R-CP and R-SMA is more spread out than in R-CS, while packet dropping in R-CS mainly happens during the peak queue congestion periods. Second, the maximum value of the drop ratio is low for R-CS and high for R-CP; R-SMA has a moderate maximum drop ratio. This maximum value is an indicator of burst packet dropping. Therefore, one can conclude that R-CS handles bursty traffic more efficiently.

Figure 6.8: Packet drop pattern of R-CP, load % = 175%.

Figure 6.9: Packet drop pattern of R-CS, load % = 175%.

Figure 6.10: Packet drop pattern of R-SMA, load % = 175%.

6.1.3 Performance of RED on a Shared-Memory Buffer with an End-to-End Congestion Manager (TCP Traffic)

The dominant transport protocol that uses an end-to-end congestion manager is TCP. The implementation of a congestion avoidance mechanism in TCP has been the main reason for the proper functioning and effectiveness of the Internet, because it prevents the congestion collapse phenomenon. As the main purpose of RED deployment is its interaction with TCP to avoid congestion in the network, this section compares the performance of RED on a shared-memory buffer with traffic originating from applications using TCP.

The simulation model is based on Figure 6.1, with four FTP traffic sources.
FTP is the application running on each source, which sends and receives data through the TCP and IP layers, following the Internet's layered structure. Similar to the traffic pattern of case B in the previous section, only two of the traffic sources attempt to send data above their base rates, with a base traffic rate of 1.8 kBps and added traffic of 2 kBps during four intervals. This traffic pattern is illustrated in Table 6.3 and Figure 6.11.

Figure 6.11: Traffic pattern of classes 0 and 2 in a simulation of one hour duration (rate alternating between the 1800 Bps base rate and 3800 Bps during the overload intervals).

                 10 to 14   20 to 24   35 to 39   50 to 54
Class 0 (Bps)    2000       -          2000       -
Class 2 (Bps)    -          2000       -          2000

Table 6.3: Traffic pattern of the TCP flows

A generic TCP model is used that emulates the behavior of an actual TCP implementation, with parameter values according to Table 6.4. These settings are the OPNET default values. The values of the RED parameters are chosen according to Table 6.5 (similar to case B in the previous section).

Initial RTO       1 (sec)     Deviation Gain              0.25
Minimum RTO       0.5 (sec)   RTT Deviation Coefficient   4.0
Maximum RTO       64 (sec)    Persistence Timeout         1.0 (sec)
RTT Gain          0.125       Fast Recovery               Enabled
Fast Retransmit   Enabled     Karn's Algorithm            Enabled

Table 6.4: Default values of the TCP parameters used in the simulation

min-th(i)      max-th(i)      Max drop probability   Weight factor   Buffer size
15000 (bits)   80000 (bits)   0.1                    0.002           1.25 * max-th

Table 6.5: RED parameters

The main difference between the model in this section and the one in the previous section is that when a stream suffers a packet loss it lowers its rate, since packet loss is considered feedback about the network's condition and an indication of developing congestion. Therefore, a combination of packet loss and throughput (traffic intended to be received versus traffic actually received) is used for the performance evaluation.

Figures 6.12, 6.13 and 6.14 show samples of the packet drop patterns during the simulation. Packet dropping in the R-CP scheme happens quite often, and the value of the drop ratio is much higher than in the other schemes. R-SMA has more frequent drops, with higher instantaneous values, in comparison to R-CS, but fewer drops compared to R-CP. These figures show that in R-CS and R-SMA, packets are dropped only when the buffer is actually near or in congestion periods.

Figures 6.15, 6.16 and 6.17 show the queuing delay patterns of these sharing schemes, and Figure 6.18 shows their averages overlaid. As a result of its lower packet drops, the queuing delay of R-CS is higher than that of the other two, while R-CP has the lowest.
At times of congestion, R-CS and R-SMA have relatively large peaks, but because of the congestion notification of RED, they return to their normal values afterward.

Figure 6.19 shows the FTP traffic received in each scenario. Since the figure shows almost identical values for the traffic received by the application layer in R-CP, R-CS and R-SMA, and considering the packet drop ratios, one can conclude that R-CS has the highest and R-CP the lowest throughput.

Observing the simulation results for both TCP and non-TCP traffic, it becomes apparent that R-CS has the lowest packet dropping rates, because it uses memory more efficiently while performing its congestion avoidance responsibility and handling bursty traffic. However, it has a bias towards non-symmetric streams, so some traffic might monopolize the buffer usage. Nevertheless, since RED drops packets in proportion to the average queue size, it is less likely that a TCP-compatible class will monopolize the buffer, in comparison to tail-dropping CS or SMA schemes. R-SMA performs almost identically to R-CS, but with more fairness, by providing a minimum guaranteed space for each class. R-CP is the least efficient at memory usage, but is the best option for asymmetrical traffic. These algorithms can easily be extended to support weighted thresholds, and therefore provide more fairness in addition to efficiency.

Figure 6.12: Packet drop pattern, R-CP.

Figure 6.13: Packet drop pattern, R-CS.

Figure 6.14: Packet drop pattern, R-SMA.

Figure 6.15: Queuing delay pattern, R-CP.

Figure 6.16: Queuing delay pattern, R-CS.

Figure 6.18: Average of the queuing delay patterns.

6.2 Scheduler Element

This section addresses the performance evaluation of the DDB-FFQ scheduling algorithm presented in Chapter 4. Since this algorithm is basically a modified version of FFQ, the following simulation models are aimed at evaluating the new features of the scheme; that is, the simulation results have two objectives. First, the capability of this scheme to assign delay bounds and bandwidth allocations independently is shown. Second, the performance of the system when one of the streams misbehaves is assessed, which is an indicator of its link sharing ability.
6.2.1 Simulation Model

The schematic of the simulation model used is shown in Figure 6.20. The model consists of the traffic sources, the router and the destination node. The traffic sources generate four classes of flows that are multiplexed together in the Mux block. Each class is assigned 24% of the link bandwidth, which limits the utilization to 0.97 to meet the necessary stability condition. However, the classes are assigned different delay bounds. Class 1 is the most delay sensitive; its delay bound is set to 1 s and its bandwidth is 9600 bps. Class 2 is also delay sensitive; its delay bound is set to 2 s with a 9600 bps bandwidth. Class 3 has a loose delay bound of 6 seconds, and Class 4 is not delay sensitive but is allocated a 9600 bps bandwidth.

An incoming packet is first classified into the four classes, and packets belonging to each class are then stored in a dedicated buffer. Each stream goes through a rate-estimator module that estimates the rate of the traffic class. These collected rate statistics are fed into the scheduler. The rate estimator is an EWMA with a weight factor of u = 0.02.

The scheduler uses the DDB-FFQ scheduling algorithm. To achieve the above-mentioned delay constraints, the scheduler parameters defined in Chapter 4 need to be set appropriately, using equation (4.12). Table 6.6 shows the values of the FFQ rates and allocated rates used for the simulation.

Figure 6.20: The schematic of the DDB-FFQ simulation model (traffic sources, gateway and destination).

                        Class 1   Class 2   Class 3   Class 4
Allocated rates (bps)   9600      9600      9600      9600
FFQ rates (bps)         14000     11000     7000      3270

Table 6.6: The values of the FFQ and assigned rates

6.2.2 Simulation Results

To examine the decoupled delay-bandwidth and link sharing abilities of the DDB-FFQ system, the rate of traffic Class 2 is used as the changing variable. It varies from 100% to 500% of its allocated rate, and the observed delay and actual assigned bandwidth of each class are measured. Traffic sources 1, 3 and 4 generate packets according to their allocated rates.

Figure 6.21 illustrates the 90% delay observed by each traffic class. Since the links are ideal, the measured delay accounts only for queuing delay. The horizontal axis is the percentage of Class 2's actual rate over its assigned rate. The vertical axis is the 90th percentile of the delay distribution of each traffic class over the simulation duration. This figure demonstrates several facts. Misbehaving traffic has little effect on the delay observed by other sessions. The Class 1 delay is constant regardless of the rate of Class 2. The delay sensed by packets of traffic Class 3 varies somewhat, but its value in the worst-case scenario of 500% is approximately 133% of its delay bound. This case can easily be avoided by limiting the maximum peak rate of each traffic class to an appropriate value.
Since traffic Class 2 is misbehaving, there is no guarantee that the router will keep its promise of bounding its delay. Hence, it is punished according to its rate, and its observed delay increases as its rate increases. The delay of traffic Class 4 initially increases by about 30% with the increase of the Class 2 rate, but for higher rates it can take advantage of the punishment applied to Class 2. This behavior is acceptable, since the background traffic class has a loose delay requirement.

Figure 6.22 depicts the service rate received by each traffic class while Class 2 misbehaves. The vertical axis is the percentage of the actual allocated rate over the assigned rate for each class, and the horizontal axis is the percentage of the Class 2 rate over its assigned rate. It can be seen from the figure that the allocated rate of each class varies by at most 5% of its assigned rate. This holds even for the misbehaving stream. This means that the scheduler has been able to successfully limit or allocate each class's guaranteed rate. Since the bandwidth control mechanism is set to take effect when the rate measured by the estimator exceeds 125% of the assigned rate, Class 2 traffic initially gets higher bandwidth as its rate increases, and the Class 4 traffic, which has lower priority, gets lower bandwidth. Later, however, when the DDB-FFQ bandwidth control mechanism comes into effect, the trend is reversed. These figures show that the only consequence for misbehaving traffic is that it will not be treated within its requested delay bound, which is a reasonable punishment.

Figure 6.21: 90% delay observed by all traffic classes when Class 2 is misbehaving (delay versus Class 2 rate/assigned rate, %).

Figure 6.22: Bandwidth allocated to each class when Class 2 is misbehaving (allocated/assigned rate versus Class 2 rate/assigned rate, %).

6.3 Multiple Node Simulation Model and Results

In the previous sections, the performance of the proposed algorithms for the dropping element and the scheduler was discussed individually. In this section, the structure of a DiffServ-based UMTS backbone network is presented that has three characteristics. First, each node of the network utilizes the basic router architecture needed in a DiffServ router: the classifier, dropper, meter, queuing element and scheduler, with the algorithms discussed in this thesis deployed on them. The scheduler uses the DDB-FFQ algorithm by interacting with the rate-estimator module. R-CS is proposed for the subgroups of a traffic aggregate, as discussed in Chapter 5.
Second, the model utilizes a multiple-node network consisting of an ingress node, a core node and an egress node; the ingress and egress nodes are also the GGSN and SGSN, respectively. Third, the model considers all of the UMTS traffic classes with a more realistic traffic model. Figure 6.23 shows the OPNET network model and Figure 6.24 shows its schematic in more detail.

Figure 6.23: OPNET network model.

Figure 6.24: The schematic of the simulation network (DiffServ network control group, UMTS traffic classes, and servers such as e-mail).

The simulation model considers only the backbone network portion of the datapath and consists of a set of traffic sources, a simple core network, and destination nodes (Figures 6.23, 6.24). The model exploits the built-in traffic generators of OPNET to generate a traffic flow consisting of both TCP-compatible and non-responsive streams. They are also chosen to include a combination of all the UMTS traffic classes; thus, the model covers the conversational, streaming, interactive and background classes. Five aggregates build the traffic stream going through the network.

• Network control traffic: NC is the traffic used for signaling or network management messages that are internally used in a DiffServ domain. It occupies a very small portion of the bandwidth but has the highest priority and very stringent delay sensitivity. Its required bandwidth should be chosen so that its flow within the network has a very small effect on the guaranteed services of the actual DiffServ user data. Here, it occupies 0.4% of the bandwidth and has a fixed packet size that might increase the delay bounds of the DDB-FFQ system by a small additive constant. This group is also identified as Class 0 in the simulation results.

• Conversational traffic: This traffic is an aggregate of voice applications, which hold stringent delay and jitter requirements. This aggregate is identified as Class 1 in the simulation results. Three traffic sources build this aggregate, and each source generates packets with exponentially distributed inter-arrival time and packet size. Although the exponential distribution is not an accurate model for conversational traffic, this aggregate represents it in terms of delay sensitivity.

• Streaming traffic: This traffic aggregate, also identified as Class 2, represents the aggregate of the streaming flows (video and audio streaming) in terms of delay requirements. Its rate varies from 100% to 400% of its assigned rate, to evaluate the independence of the QoS observed by the different aggregates. The traffic source outputs packets with exponentially distributed inter-arrival time and packet size.

• Interactive traffic: An aggregate of three HTTP browsing applications represents this traffic, identified as Class 3 in the results.
The characteristics of this aggregate class are moderate delay sensitivity and content preservation.

• Background traffic: This aggregate consists of two instances of FTP applications and three instances of e-mail traffic, which have a low delay requirement; it is identified as Class 4 in the results.

The source generators of HTTP, FTP and e-mail use the TCP/IP layered structure and represent the TCP-based traffic, while the other classes are non-responsive, real-time applications. These traffic classes have distinct delay bounds and bandwidth assignments. The bandwidth allocated to Class 0 is 0.4% of the service rate, which is a very small portion of the bandwidth. Its packet size and peak burst rate are also well defined, so that its effect on the delay bound as defined in DDB-FFQ is negligible. Each of the other traffic aggregates is allocated almost the same bandwidth of about 25% of the link bandwidth. However, they have distinct delay bounds and priorities: Class 1 has the highest, and Class 4 the lowest, delay requirement.

Figure 6.25: The schematic of the simulation router's configuration (classifier, shared-memory buffers and scheduler within the gateway).

Figure 6.25 depicts the structure of every router in the model, which forwards packets according to their DSCPs. The router has the essential components needed for proper DiffServ functionality: the classifier, the meter, the droppers, the queuing block and the scheduler. An incoming packet is first classified. If the node is an edge node, the packet's QoS is mapped to a proper PHB and the packet is then encapsulated, carrying the associated DSCP in its header. It is then passed through a rate estimator to a dropper block. If accepted, the packet is enqueued, and the scheduler forwards it to the next hop using the DDB-FFQ service discipline.

Dropping element: The proposed structure for the buffer management function in our model is as follows. For each aggregate group a dedicated buffer is allocated, that is, each of the aggregate classes of groups 1 to 5 has its own dedicated buffer. However, each of these buffers uses a memory sharing scheme to store the packets of the subgroups defined within each group. Group 2 uses a meter that sends the incoming packets to an absolute dropper if the average rate is greater than 120% of its assigned rate; the rest is scheduled for service under the DDB-FFQ or FFQ scheme, depending on whether it is below or above its assigned bandwidth, respectively. Group 3 uses a tail-dropping buffer and accepts packets up to its full capacity; the DDB-FFQ scheme is responsible for assigning bandwidth to it, up to its assigned rate.
Groups 4 and 5 use a RED algorithmic dropper with a RED on Complete Sharing scheme (R-CS) [48] implemented on it, since these two groups carry the non-real-time aggregates.

Scheduler element: The scheduler is a combination of the DDB-FFQ scheduling system, discussed in Chapter 4, and absolute priority scheduling. The NC traffic aggregate is given absolute priority over the other classes and is not scheduled under DDB-FFQ; the other classes are scheduled according to the DDB-FFQ scheme. Class 0 is treated differently for two reasons. First, it is internal network management traffic, and there is no need to control its bandwidth, assuming the traffic originating from the network operator is well behaved; using absolute priority therefore simplifies the system. Second, it has very stringent delay requirements and needs to be serviced as soon as possible. However, this should not degrade the QoS of the other traffic. Classes 1 to 4 are user generated and need to be scheduled according to their delay and bandwidth requirements.

A bandwidth and a delay bound are assigned to each traffic group by using the corresponding assigned rate and FFQ rate of the DDB-FFQ scheduler, respectively. Since the scheduler is non-preemptive, the delay bounds are affected by the maximum packet size of even lower-priority traffic [49]. Therefore, the edge routers must perform segmentation and re-assembly on large packets, depending on the system delay bounds. Here, the delay bounds are set by considering the 90% packet size as the maximum packet size, giving a 90% packet delay bound. Table 6.7 shows the values of the internal state parameters of the DDB-FFQ scheduler, the FFQ rates and the assigned rates.

Meter: The meter implementation used here is of the time sliding window type. It is an exponentially weighted moving average (EWMA) rate estimator [49], and it is also a building block of the DDB-FFQ service discipline; a sketch of such an estimator is given after Table 6.8 below.

                        Class 1   Class 2   Class 3   Class 4
Allocated rates (bps)   20000     20000     21200     18000
FFQ rates (bps)         42000     30000     5000      2200

Table 6.7: Values of the FFQ and assigned rates

The mapping system used for this traffic model is summarized in Table 6.8. Each of the groups can be divided into as many subgroups as needed. For example, a background traffic class can be mapped to the BE, LBE, and BH PHBs. What really matters for the proper performance of the QoS system is the level of forwarding treatment each group receives; the names and codepoints can therefore be implementation dependent, especially for an isolated DS network.

Group   G0   G1   G2   G3   G4
PHB     NC   EF   AF1  AF2  BE

Table 6.8: Basic mapping from UMTS to DiffServ
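The meter mentioned above follows the estimator of equations (4.13)-(4.16). A compact sketch of that update is given below; it is illustrative only (the structure and function names are hypothetical), and equation (4.16) is applied as derived in Chapter 4.

```c
/*
 * EWMA rate estimator in the spirit of equations (4.13)-(4.16):
 *   diff     = t - S/b
 *   avg_diff = (1 - u) * avg_diff + u * diff
 *   rate     = S / (avg_diff + S/b)
 * Names are illustrative only.
 */
struct rate_estimator {
    double avg_diff;     /* EWMA of the inter-arrival discrepancy (s) */
    double u;            /* weight factor, e.g. 0.02                  */
    double b;            /* allocated bandwidth of the class (bit/s)  */
    double last_arrival; /* time of the previous packet arrival (s)   */
};

/* Update on the arrival of a packet of size_bits at time `now` (seconds);
 * returns the current rate estimate in bit/s. */
static double rate_update(struct rate_estimator *e, double size_bits, double now)
{
    double t    = now - e->last_arrival;     /* actual inter-arrival time */
    double diff = t - size_bits / e->b;      /* eq. (4.13)                */

    e->avg_diff = (1.0 - e->u) * e->avg_diff + e->u * diff;  /* eq. (4.14) */
    e->last_arrival = now;

    return size_bits / (e->avg_diff + size_bits / e->b);     /* eq. (4.16) */
}
```

With u = 0.02, as used in the simulations, the time constant of equation (4.19) is roughly fifty packet times, so the estimate tracks sustained rate changes while smoothing over short bursts.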
The core network consists of an SGSN, a core node and a GGSN. The SGSN and GGSN nodes also work as DiffServ edge routers, and they perform IP encapsulation [50] and mapping functions. The core node only applies the PHB associated with the codepoint marked in each packet header, which is a simple classification task.

The objective of the computer simulation is to evaluate the performance of the model in terms of the QoS constraints: latency, bandwidth and packet loss. Therefore, OPNET calculates and records the delay observed by each aggregate, the bandwidth assigned to each group relative to its assigned rate, and the packet loss ratio.

The independence of the service received by each traffic class is demonstrated by observing the delay and allocated bandwidth of each class when the rate of Class 2 traffic varies from 100% to 400% of its preset rate. In a DiffServ router, the overload of misbehaving traffic can be confronted by accepting packets only when the rate is below a percentage of the assigned rate, as is done for the EF PHB. However, for other kinds of traffic, for example the streaming aggregate, it might be useful to accept packets until the corresponding queue is full, while ensuring that this traffic will not degrade the service of the other streams. The size of this queue should be set to provide an acceptable maximum delay.

[Figure 6.26: Edge-to-edge delay versus Class 2 load %]
[Figure 6.27: Allocated bandwidth versus Class 2 load %]
[Figure 6.28: Delay of traffic class 1]
[Figure 6.29: Delay of traffic class 3]
[Figure 6.30: Delay of traffic class 4]

Figure 6.26 shows the delay observed by each application and Figure 6.27 illustrates their actual bandwidth assignments when the aggregate rate of Class 2 traffic varies from 100% to 400% of its base rate. The figures show that the PHB treatment of each aggregate is mostly independent of the behavior of the other aggregates, while each is offered a distinct range of QoS. Although the rate of traffic aggregate 2 changes widely, the delay bound and bandwidth allocated to the other traffic streams do not degrade and remain within an acceptable range of their assigned QoS values. Class 2 itself has two bandwidth assignments, depending on whether or not it is behaving. Figure 6.26 shows that the delay of each traffic aggregate does not degrade, other than that of the misbehaving streaming traffic. Figure 6.27 shows an initial decrease of the bandwidth assigned to the streaming traffic, and correspondingly, an increase for the interactive and background aggregates.
This is a direct result of the adjustment method proposed [49] for the DDB-FFQ scheduling scheme, which modifies the bandwidth according to the minimum FFQ-rate of the system. This means that well-behaving traffic will get at least its assigned bandwidth, while misbehaving traffic is allocated a constant bandwidth, regardless of its rate, when the bandwidth control mechanism is in effect.

Figures 6.28, 6.29, and 6.30 depict the delay observed by aggregate Classes 1, 3, and 4, respectively. These figures show that, initially, the delay drops until the DDB-FFQ link-sharing mechanism comes into complete effect, and then it stays almost constant. Figure 6.31 shows the packet drop ratio of aggregate Class 2, which increases as its rate increases. The conversational traffic observes no packet dropping, and aggregate Classes 3 and 4 observe very small packet drop rates due to their RED mechanism.

[Figure 6.31: Drop ratio of aggregate class 2]

Chapter 7

Conclusions

7.1 Summary

This thesis presented an insight into QoS support in the backbone network of the UMTS or GPRS system using the DiffServ model. It covered the structural model of a DiffServ router appropriate for deployment in a UMTS backbone network, in addition to providing a basic methodology for QoS mapping between the UMTS and DiffServ technologies. In particular, three notable components of the router have been studied: the scheduler, the dropper, and the meter. A novel scheduling algorithm was proposed that has the ability to provide decoupled delay bounds and bandwidth allocation. This property is an essential factor in allowing a DiffServ network operator to define and set diverse PHBs. A set of novel RED on shared-memory schemes has also been proposed and compared, showing that their deployment improves the performance of the system in response to congestion, as reflected in lower packet loss. In addition, the up-to-date QoS characteristics of both UMTS and DiffServ have been addressed and a basic mapping between these two systems has been provided. The key contributions of this thesis are as follows:

• DDB-FFQ scheduling system. DDB-FFQ is an efficient and simple service discipline that supports a decoupled delay bound and bandwidth allocation for each aggregate class. It also obeys link-sharing principles by providing a predictable amount of bandwidth to each traffic class during periods of congestion. This link-sharing capability makes the DDB-FFQ system suitable for a variety of other scenarios.

• Proposal and comparison of RED on shared-memory schemes: R-CS, R-CP, R-SMA.
This thesis studies the deployment of RED on shared buffers for the first time, by deriving three RED schemes from the existing sharing policies. A brief analytical expression of RED, R-CS and R-CP is also provided.

• QoS mapping between the two QoS systems: UMTS and DiffServ. This thesis examines the QoS-related aspects of DiffServ and UMTS, and proposes a basic QoS mapping between them. This method can be extended to include an arbitrary number of subgroup PHBs.

• Structural study of a DiffServ-based UMTS backbone router.

Two separate simulation models have been built to evaluate the performance of the DDB-FFQ scheduling system and the dropping algorithms. The performance of the RED on shared-memory buffer algorithms has been compared in terms of packet loss and queuing delay. Deployment of R-CS was proposed within the subgroups of each TCP-compatible PHB group. Finally, a multiple-node network simulation model was presented that has the following characteristics. First, it uses a traffic model consisting of both TCP-compatible and non-responsive applications, covering all UMTS traffic classes. Second, it provides a basic but sufficient configuration for a DiffServ router applicable in the UMTS backbone. Finally, it illustrates the fair performance of the router in a multiple-node model.

Although there have been some experimental deployments of GPRS and DiffServ, the actual deployments of UMTS, GPRS and DiffServ are still in the draft stage. The next section provides some potential directions for future work.

7.2 Future Work

The prospect of implementing DiffServ on the UMTS backbone network is a massive project that will require resolving a wide range of issues. Both DiffServ and UMTS technologies are evolving rapidly and, accordingly, demand ongoing research and adjustments. They are in the early stages of research and have not actually been implemented yet. This thesis itself illuminates a number of possible future pathways. Only three DiffServ components have been taken into detailed consideration, while other elements have not been touched. The following list presents suggestions for future work within the scope of the topic presented here:

• The method DDB-FFQ applies to control the allocated bandwidth of each class is only one of numerous feasible methods. There might be a better way of achieving this objective while preserving the concept of using two sets of rates, for delay and bandwidth, as proposed for DDB-FFQ.

• There are other methods of rate estimation presented in the literature. Further studies may lead to other schemes that improve the performance of the proposed scheduling system.
• An important method of traffic management is traffic shaping, which was not touched on in this thesis. Applying a traffic shaping block to the system might provide further improvement.

• The proposed RED schemes can be extended to consider push-out and/or to utilize weighted threshold parameters for RED.

Glossary

3G : 3rd Generation
AF : Assured Forwarding
BE : Best Effort
BSC : Base Station Controller
CN : Core Network
DDB-FFQ : Decoupled Delay-Bandwidth FFQ
DiffServ, DS : Differentiated Services
DSCP : DiffServ CodePoint
ECN : Explicit Congestion Notification
EF : Expedited Forwarding
ETSI : European Telecommunications Standards Institute
EWMA : Exponential Weighted Moving Average
FDD : Frequency Division Duplex
FFQ : Frame-based Fair Queueing
FTP : File Transfer Protocol
GGSN : Gateway GPRS Support Node
GPRS : General Packet Radio Service
GPS : Generalized Processor Sharing
GSM : Global System for Mobile Communications
IETF : Internet Engineering Task Force
IP : Internet Protocol
Iu : UTRAN interface between the radio network controller (RNC) and the CN
MT : Mobile Termination
OPNET : OPtimum NETwork performance modeler
PDB : Per Domain Behavior
PDP : Packet Data Protocol
PHB : Per Hop Behavior
PLMN : Public Land Mobile Network
PSTN : Public Switched Telephone Network
QoS : Quality of Service
R-CP : RED on Complete Partitioning
R-CS : RED on Complete Sharing
R-SMA : RED on Sharing with Minimum Allocation
RED : Random Early Detection
SAP : Service Access Point
SDU : Service Data Unit
SGSN : Serving GPRS Support Node
TCP : Transmission Control Protocol
TDD : Time Division Duplex
TE : Terminal Equipment
ToS : Type of Service
UTRAN : UMTS Terrestrial Radio Access Network
UMTS : Universal Mobile Telecommunications System

Bibliography

[1] G. Brasche and B. Walke, "Concepts, services and protocols of the new GSM phase 2+ general packet radio service," IEEE Communications Magazine, vol. 35, pp. 94-104, August 1997.

[2] ETSI TS 123 107, "UMTS QoS concept and architecture," December 2000.

[3] J. Cai and D. J. Goodman, "General packet radio service in GSM," IEEE Communications Magazine, vol. 35, pp. 122-131, October 1997.

[4] C. Bettstetter, H. Vogel, and J. Eberspacher, "GSM phase 2+ general packet radio service: architecture, protocols, and air interface," IEEE Communications Surveys, October 1999.

[5] "GPRS white paper," http://www.Cisco.com/warp/public/cc/so/neso/gprs/gprs.wp.htm, 2000.

[6] Y. Lin, H. C. H. Rao, and I. Chlamtac, "General packet radio service: architecture, interface and development," Wireless Communications and Mobile Computing (John Wiley & Sons), vol. 1, pp. 77-92, January-March 2001.

[7] ETSI GSM 03.60 V6.0.0, "General packet radio service; service description; stage 2," March 1998.

[8] G. Priggouris, S. Hadjiefthymiades, and L. Merakos, "Supporting IP QoS in the general packet radio service," IEEE Network, vol. 14, pp. 8-17, October 2000.
[9] S. Blake, D. Black, M. Carlson, E. Davies, W. Weiss, and Z. Wang, "An architecture for differentiated services," RFC 2475, December 1998.

[10] D. Black, S. Brim, B. Carpenter, and F. Le Faucheur, "Per hop behavior identification codes," IETF Internet Draft (revision of RFC 2836), January 2001.

[11] K. Nichols, S. Blake, F. Baker, and D. Black, "Definition of the differentiated services field in the IPv4 and IPv6 headers," RFC 2474, December 1998.

[12] ISI, University of Southern California, "Internet protocol," RFC 791, September 1981.

[13] Y. Bernet, A. Smith, S. Blake, and D. Grossman, "A conceptual model for diffserv routers," IETF Internet Draft, May 2000.

[14] Y. Bernet, S. Blake, D. Grossman, and A. Smith, "An informal management model for diffserv routers," IETF Internet Draft, February 2001.

[15] V. Jacobson, "Congestion avoidance and control," Proceedings of the ACM Conference of the Special Interest Group on Data Communications, pp. 314-329, Stanford, September 1988.

[16] S. Floyd and V. Jacobson, "Random early detection gateways for congestion avoidance," IEEE/ACM Transactions on Networking, vol. 1, pp. 397-413, August 1993.

[17] R. Jain and K. K. Ramakrishnan, "Congestion avoidance in computer networks with a connectionless network layer: concepts, goals and methodology," Proceedings of the IEEE Computer Networking Symposium, pp. 303-313, Maryland, April 1988.

[18] H. Balakrishnan and S. Seshan, "The congestion manager," RFC 3124, June 2001.

[19] T. Bonald, M. May, and J. C. Bolot, "Analytic evaluation of RED performance," Proceedings of the Joint Conference of the IEEE Computer and Communications Societies, vol. 3, pp. 1415-1424, Tel Aviv, Israel, March 2000.

[20] K. Ramakrishnan and S. Floyd, "A proposal to add explicit congestion notification to IP," RFC 2481, January 1999.

[21] B. Braden et al., "Recommendations on queue management and congestion avoidance in the Internet," RFC 2309, April 1998.

[22] R. Morris, E. Kohler, J. Jannotti, and M. F. Kaashoek, "The Click modular router," ACM Transactions on Computer Systems, vol. 18, pp. 263-297, August 2000.

[23] I. Cidon and R. Guerin, "Optimal buffer sharing," IEEE Journal on Selected Areas in Communications, vol. 13, pp. 1229-1240, September 1995.

[24] M. Arpaci and J. A. Copeland, "Buffer management for shared-memory ATM switches," IEEE Communications Surveys, First Quarter 2000.

[25] A. K. Choudhury and E. L. Hahne, "Dynamic queue length thresholds for shared-memory packet switches," IEEE/ACM Transactions on Networking, vol. 6, pp. 130-140, April 1998.

[26] R. W. Wolff, Stochastic modeling and the theory of queues, Prentice Hall, 1989.
[27] D. Bertsekas and R. Gallager, Data networks, Prentice Hall, 1992.

[28] A. Papoulis, Probability, random variables and stochastic processes, McGraw-Hill, 3rd edition, 1991.

[29] H. Zhang, "Service disciplines for guaranteed performance service in packet switching networks," Proceedings of the IEEE, vol. 83, pp. 1374-1396, October 1995.

[30] D. Stiliadis and A. Varma, "Rate-proportional servers: a design methodology for fair queueing algorithms," IEEE/ACM Transactions on Networking, vol. 6, pp. 164-174, April 1998.

[31] S. S. Lam and G. G. Xie, "Group priority scheduling," IEEE/ACM Transactions on Networking, vol. 5, pp. 205-218, April 1997.

[32] A. K. Parekh and R. G. Gallager, "A generalized processor sharing approach to flow control in integrated services networks: the single-node case," IEEE/ACM Transactions on Networking, vol. 1, pp. 344-357, June 1993.

[33] S. Floyd and V. Jacobson, "Link-sharing and resource management models for packet networks," IEEE/ACM Transactions on Networking, vol. 3, pp. 365-386, August 1995.

[34] S. Floyd, "Notes on CBQ and guaranteed services," www.aciri.org/floyd/cbq.html, July 1995.

[35] I. Stoica, H. Zhang, and T. S. E. Ng, "A hierarchical fair service curve algorithm for link-sharing, real-time, and priority services," IEEE/ACM Transactions on Networking, vol. 8, pp. 185-199, April 2000.

[36] H. Sariowan, R. L. Cruz, and G. C. Polyzos, "SCED: a generalized scheduling policy for guaranteeing quality of service," IEEE/ACM Transactions on Networking, vol. 7, pp. 669-684, October 1999.

[37] H. Sariowan, R. Cruz, and G. C. Polyzos, "Scheduling for quality of service guarantees via service curves," Proceedings of the IEEE Fourth International Conference on Computer Communications and Networks, pp. 512-520, Seattle, WA, September 1995.

[38] D. Stiliadis and A. Varma, "Efficient fair queueing algorithms for packet switched networks," IEEE/ACM Transactions on Networking, vol. 6, pp. 175-185, April 1998.

[39] V. Jacobson, K. Nichols, and K. Poduri, "An expedited forwarding PHB," RFC 2598, June 1999.

[40] A. Charny, F. Baker, J. Bennett, K. Benson, J. Le Boudec, A. Chiu, W. Courtney, B. Davie, and S. Davari, "EF PHB redefined," IETF Internet Draft, November 2000.

[41] J. Heinanen, F. Baker, W. Weiss, and J. Wroclawski, "Assured forwarding PHB group," RFC 2597, June 1999.

[42] R. Bless and K. Wehrle, "A lower than best-effort PHB," IETF Internet Draft, September 1999.

[43] M. Loukola, J. Ruutu, and K. Kilkki, "Dynamic RT/NRT PHB group," IETF Internet Draft, November 1998.
[44] B. Carpenter and K. Nichols, "A bulk handling PDB for differentiated services," IETF Internet Draft, January 2001.

[45] P. Hurley, "The alternative best-effort service," IETF Internet Draft, June 2000.

[46] N. Seddigh and J. Heinanen, "An assured rate per-domain behavior for differentiated services," IETF Internet Draft, December 2000.

[47] S. Floyd, "Discussion of setting parameters," www-nrg.ee.lbl.gov/floyd/red.html.

[48] F. Agharebparast and V. C. M. Leung, "Improving the performance of RED deployment on a class based queue with shared buffer," Proc. IEEE Global Telecommunications Conference, San Antonio, Texas, November 2001.

[49] F. Agharebparast and V. C. M. Leung, "Efficient fair queueing with decoupled delay-bandwidth guarantees," Proc. IEEE Global Telecommunications Conference, San Antonio, Texas, November 2001.

[50] D. Black, "Differentiated services and tunnels," RFC 2983, October 2000.

[51] L. L. Peterson and B. S. Davie, Computer networks: a systems approach, Morgan Kaufmann Publishers, 2nd edition, 2000.

[52] A. S. Tanenbaum, Computer networks, Prentice Hall, 2000.
