ADAPTIVE REDn GATEWAY AND ADAPTIVE TOKEN BUCKET TRAFFIC CONDITIONER FOR AN ASSURED RATE SERVICE FOR TCP

by

PHILIP K. M. LIU

B.A.Sc., The University of British Columbia, 1994

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF APPLIED SCIENCE in THE FACULTY OF GRADUATE STUDIES (DEPARTMENT OF ELECTRICAL AND COMPUTER ENGINEERING)

We accept this thesis as conforming to the required standard

THE UNIVERSITY OF BRITISH COLUMBIA

February 2001

(c) Philip Liu, 2001

In presenting this thesis in partial fulfillment of the requirements for an advanced degree at The University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the head of my department or by his or her representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

Department of Electrical and Computer Engineering
The University of British Columbia
2356 Main Mall
Vancouver, BC, Canada V6T 1Z4

Date:

Abstract

The majority of traffic in the Internet today is best-effort traffic. Best-effort traffic is believed to continue predominating over the traffic of emerging enhanced network services that need various QoS guarantees. The key congestion controls developed for best-effort traffic forwarding in the Internet are TCP and RED. Although TCP is simple and time-invariant, the received throughputs of TCP flows vary with the associated RTTs, as opposed to the throughput expectations, resulting in a bias against flows with relatively long RTTs and large maximum windows. In general, the nature of the TCP flows traversing a gateway, such as the number of active flows and the sizes of the congestion windows, varies over time, and the per-flow throughput expectations span a wide range. However, RED merely makes a fairly limited linear adaptation to the varying nature, and virtually neglects the throughput expectations of the flows. This thesis introduces Adaptive REDn Gateway and Adaptive Token Bucket Traffic Conditioner for rectifying these shortcomings of TCP and RED: the architecture of Adaptive REDn Gateway and Adaptive Token Bucket Traffic Conditioner, with throughput-based deterministic traffic conditioning performed at the boundary and delayed-policing applied in the interior of the network, enables the implementation of an assured rate network service for TCP in the Internet, providing a short queuing delay and an assured bandwidth distribution at each gateway. The overall performance of the mechanisms is evaluated with numerous simulations in a wide variety of scenarios, and all the results corroborate that the architecture attains the goals.

Contents

Abstract
Contents
List of Tables
List of Figures
Acknowledgments
Dedication
1 Introduction
  1.1 Shortcomings of RED and its Variants
    1.1.1 RED with In-or-Out Bit
    1.1.2 Self-Configuring RED
  1.2 Research Contributions and Thesis Statement
  1.3 Thesis Organization
2 Adaptive Token Bucket Traffic Conditioner for TCP
  2.1 Token Bucket Depth Adjustment Algorithm for TCP
    2.1.1 Implementation
  2.2 Throughput-Based Deterministic Traffic Conditioning for TCP
  2.3 Related Work
3 Adaptive REDn Gateway
  3.1 Implications of Simulations
    3.1.1 REDn with Packet Drop in Scenario B-U-O
    3.1.2 Self-Configuring REDn with Packet Drop in Scenario B-U-O
  3.2 Minimum Actual Queue Length Threshold Configuration Parameters
  3.3 Middle Threshold Algorithm
    3.3.1 Evaluation
  3.4 Related Work
4 Configuration Guidelines and Simulation Results
  4.1 Simulation Scenarios with One Bottleneck Data Link
    4.1.1 Adaptive REDn with Packet Drop in Scenario B-U-O
  4.2 Simulation Scenarios with Two Bottleneck Data Links
    4.2.1 Adaptive REDn with Packet Drop in Scenario B-U-O
  4.3 CBR Traffic Source and UDP Transport Protocol
    4.3.1 Recent Work
5 Conclusions and Further Research
Glossary
Bibliography
Appendix A Additional Simulation Results
  A.1 Simulation Scenarios with One Bottleneck Data Link
    A.1.1 Adaptive REDn with ECN in Scenario B-U-O
    A.1.2 Adaptive REDn with Packet Drop in Scenario B-O-U
    A.1.3 Adaptive REDn with ECN in Scenario B-O-U
  A.2 Simulation Scenarios with Two Bottleneck Data Links
    A.2.1 Adaptive REDn with ECN in Scenario B-U-O
    A.2.2 Adaptive REDn with Packet Drop in Scenario B-O-U
    A.2.3 Adaptive REDn with ECN in Scenario B-O-U

List of Tables

3.1 Dynamic Maximum Packet Marking Probabilities of Self-Configuring REDn
4.1 Assured Bandwidth Distribution for REDn and its Variants

List of Figures

1.1 RED Algorithm
1.2 Dynamic Maximum Packet Marking Probability Adjustment Algorithm
2.1 Token Bucket Depth Adjustment Algorithm for TCP
2.2 Throughput-Based Deterministic Traffic Conditioning for TCP
3.1 Per-Flow Throughputs with REDn
3.2 Average Queue Lengths and Overall Queuing Delay with REDn
3.3 Per-Flow Throughputs with Self-Configuring REDn
3.4 Average Queue Lengths and Overall Queuing Delay with Self-Configuring REDn
3.5 Integration of Middle Threshold Algorithm and Dynamic Maximum Packet Marking Probability Adjustment Algorithm
4.1 Simulation Network with One Bottleneck Data Link
4.2 Traffic Conditionings with Adaptive Token Bucket Traffic Conditioner
4.3 Per-Flow Throughputs with Adaptive REDn
4.4 Average Queue Lengths and Overall Queuing Delay with Adaptive REDn
4.5 Bottleneck Link Utilization with Adaptive REDn
4.6 Early Drop Ratio with Adaptive REDn
4.7 Effective Maximum Packet Marking Probabilities of Adaptive REDn
4.8 Simulation Network with Two Bottleneck Data Links
4.9 Per-Flow Throughputs of Flows Traversing Two Adaptive REDn Gateways
4.10 Per-Flow Throughputs of Flows Traversing One Adaptive REDn Gateway
4.11 Average Queue Lengths and Overall Queuing Delay at the First Adaptive REDn Gateway
4.12 Average Queue Lengths and Overall Queuing Delay at the Second Adaptive REDn Gateway
4.13 Utilizations of Bottleneck Links with Adaptive REDn
4.14 Early Drop Ratios with Adaptive REDn
4.15 Effective Maximum Packet Marking Probabilities of the First Adaptive REDn Gateway
4.16 Effective Maximum Packet Marking Probabilities of the Second Adaptive REDn Gateway
4.17 Per-Flow Throughputs with Adaptive REDn
4.18 Average Queue Lengths with Adaptive REDn
A.1 Per-Flow Throughputs with Adaptive REDn
A.2 Average Queue Lengths and Overall Queuing Delay with Adaptive REDn
A.3 Bottleneck Link Utilization with Adaptive REDn
A.4 Effective Maximum Packet Marking Probabilities of Adaptive REDn
A.5 Per-Flow Throughputs with Adaptive REDn
A.6 Average Queue Lengths and Overall Queuing Delay with Adaptive REDn
A.7 Bottleneck Link Utilization with Adaptive REDn
A.8 Early Drop Ratio with Adaptive REDn
A.9 Effective Maximum Packet Marking Probabilities of Adaptive REDn
A.10 Per-Flow Throughputs with Adaptive REDn
A.11 Average Queue Lengths and Overall Queuing Delay with Adaptive REDn
A.12 Bottleneck Link Utilization with Adaptive REDn
A.13 Effective Maximum Packet Marking Probabilities of Adaptive REDn
A.14 Per-Flow Throughputs of Flows Traversing Two Adaptive REDn Gateways
A.15 Per-Flow Throughputs of Flows Traversing One Adaptive REDn Gateway
A.16 Average Queue Lengths and Overall Queuing Delay at the First Adaptive REDn Gateway
A.17 Average Queue Lengths and Overall Queuing Delay at the Second Adaptive REDn Gateway
A.18 Utilizations of Bottleneck Links with Adaptive REDn
A.19 Effective Maximum Packet Marking Probabilities of the First Adaptive REDn Gateway
A.20 Effective Maximum Packet Marking Probabilities of the Second Adaptive REDn Gateway
A.21 Per-Flow Throughputs of Flows Traversing Two Adaptive REDn Gateways
A.22 Per-Flow Throughputs of Flows Traversing One Adaptive REDn Gateway
A.23 Average Queue Lengths and Overall Queuing Delay at the First Adaptive REDn Gateway
A.24 Average Queue Lengths and Overall Queuing Delay at the Second Adaptive REDn Gateway
A.25 Utilizations of Bottleneck Links with Adaptive REDn
A.26 Early Drop Ratios with Adaptive REDn
A.27 Effective Maximum Packet Marking Probabilities of the First Adaptive REDn Gateway
A.28 Effective Maximum Packet Marking Probabilities of the Second Adaptive REDn Gateway
A.29 Per-Flow Throughputs of Flows Traversing Two Adaptive REDn Gateways
A.30 Per-Flow Throughputs of Flows Traversing One Adaptive REDn Gateway
A.31 Average Queue Lengths and Overall Queuing Delay at the First Adaptive REDn Gateway
A.32 Average Queue Lengths and Overall Queuing Delay at the Second Adaptive REDn Gateway
A.33 Utilizations of Bottleneck Links with Adaptive REDn
A.34 Effective Maximum Packet Marking Probabilities of the First Adaptive REDn Gateway
A.35 Effective Maximum Packet Marking Probabilities of the Second Adaptive REDn Gateway

Acknowledgments

I am indebted to my advisor Professor Hussein Alnuweiri for his kind guidance, generous financial support, and for encouraging me to pursue my inspiration in networking. I have benefited from the numerous enlightening discussions with him. I would like to thank Professor Victor Leung for a number of motivating discussions about QoS and for generously granting me access to his workstations. Thanks to my colleagues, particularly Dr. Hossein Saboksayr, for creating and maintaining such a friendly environment and for countless amusing chats and open discussions. Thanks also to my friends for their warm and infinite support. I am thankful to Elaine for her sympathetic encouragement and for everything she has done for me. Last but not least, I feel truly grateful to my parents and brother for their love for and faith in me. This research was supported in part by NSERC Research Grants, for which I am appreciative.

PHILIP LIU
The University of British Columbia
February 2001

To my parents Michael and Stella, and my brother Albert

Chapter 1

Introduction

The majority of traffic in the Internet today is best-effort traffic [47, 10], a class of network traffic that requires no explicit quality-of-service (QoS) guarantees on bandwidth, delay, delay jitter, and loss, but simply uses whatever resources are available in the network at any time. Best-effort traffic is believed to continue predominating over the traffic of emerging enhanced network services that need various QoS guarantees [21].

In general, the simplest queuing that supports best-effort traffic forwarding is first in, first out (FIFO). FIFO queuing is easy to implement, and its speed scales very well with the output link capacity and the load. The nature of FIFO queuing allows the delay to be shared among all the traversing flows [1], and reduces the tails of the delay distributions for forwarding packet bursts [12]. However, a proper network traffic congestion control has to be implemented to allow a network of FIFO queues to forward best-effort traffic in an efficient, sustainable manner [49, 21, 20].

[1] A flow refers to a single instance of a transport-protocol-to-transport-protocol logical connection that is uniquely defined and identified by parameters such as sending and receiving node addresses and protocol port numbers, protocol ID, timestamp, sequence number, and so forth.

TCP

A key congestion control developed for best-effort traffic forwarding in the Internet is Transmission Control Protocol (TCP) [54]. TCP implements end-to-end window-based congestion avoidance and control algorithms. The window, referred to as the congestion window (cwnd), denotes the amount of unacknowledged, in-flight data that a TCP flow is permitted to send at a given time. Its size is adjusted every roundtrip time (RTT) according to the window adjustment algorithms [38, 40, 39, 3].
In brief, the cwnd of a flow is adjusted in three phases, namely slow start, congestion avoidance, and loss recovery. A TCP flow begins in the slow-start phase: the cwnd is doubled every RTT, and the corresponding throughput increases exponentially. When the cwnd reaches a threshold, defined as the slow-start threshold and initially set to half of the receiver buffer size, the flow switches to the congestion-avoidance phase: the cwnd is inflated by roughly the maximum segment size (MSS) every RTT, up to the receiver buffer size, and the throughput increases linearly. Implicitly, TCP always infers packet loss, and thus congestion, with the duplicate acknowledgment algorithm and a retransmission timer. In addition, TCP can be notified of congestion explicitly with Explicit Congestion Notification (ECN) [19, 55, 28, 56]. A small number of duplicate acknowledgments or an ECN signal triggers a TCP flow to enter the fast-recovery phase: the slow-start threshold is set to half of the cwnd, the cwnd is then halved, and the throughput is reduced exponentially. The cwnd remains deflated until the end of the phase. After the phase ends, the cwnd is set equal to the slow-start threshold, and the flow enters the congestion-avoidance phase again. A retransmission timeout (RTO) or a large number of ECN signals in several successive RTTs triggers a TCP flow to restart in the slow-start phase: the slow-start threshold is set to half of the cwnd, the cwnd is then deflated back to a single MSS, and the throughput is abruptly decreased to the minimum.
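The window arithmetic just described can be summarized in a few lines of code. The Python sketch below is only an illustration of the phase transitions, not any particular TCP implementation; the class name, the per-RTT update granularity, and the 1460-byte MSS are assumptions made for this example.

```python
# Minimal sketch of the window adjustment described above (illustrative only).

MSS = 1460  # bytes, assumed segment size


class TcpWindow:
    def __init__(self, receiver_buffer):
        self.cwnd = MSS                       # start with one segment
        self.ssthresh = receiver_buffer / 2   # initial slow-start threshold
        self.receiver_buffer = receiver_buffer

    def on_rtt_without_loss(self):
        """Called once per RTT in which no congestion was signalled."""
        if self.cwnd < self.ssthresh:
            self.cwnd = min(2 * self.cwnd, self.receiver_buffer)    # slow start
        else:
            self.cwnd = min(self.cwnd + MSS, self.receiver_buffer)  # congestion avoidance

    def on_dupacks_or_ecn(self):
        """A few duplicate ACKs or an ECN signal: enter fast recovery."""
        self.ssthresh = self.cwnd / 2
        self.cwnd = self.cwnd / 2      # deflated until the phase ends

    def on_timeout(self):
        """RTO (or many ECN signals in successive RTTs): restart in slow start."""
        self.ssthresh = self.cwnd / 2
        self.cwnd = MSS
```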
RED

Another key congestion control developed for best-effort traffic forwarding in the Internet is Random Early Detection (RED) [26, 57]. RED avoids congestion, and maintains a short queuing delay and a high throughput at the gateway, by controlling an average queue length in cooperation with TCP. The average queue length is used as an indicator of the level of congestion or the load being experienced by the gateway, and is defined to be the exponential weighted moving average (EWMA) of the actual queue length, as given below:

    avg <- wq * q + (1 - wq) * avg    (1.1)

where avg is the average queue length, wq is the queue weight, a configuration parameter, and q is the actual queue length.

Figure 1.1 shows the RED algorithm graphically.

[Figure 1.1: RED Algorithm -- packet marking probability as a function of the average queue length.]

RED compares the average queue length to a minimum and a maximum threshold. No control action is taken when the average queue length is below the minimum threshold. When an incipient congestion drives the average queue length to exceed the minimum threshold, RED begins dropping or ECN-marking each arriving packet with a packet marking probability to avoid congestion and to control the average queue length. RED varies the packet marking probability linearly with the average queue length in two phases to adapt to the varying load while avoiding the global synchronization of the traversing TCP flows.

In the first phase, as the average queue length varies from the minimum threshold to the maximum threshold, the packet marking probability is linearly varied from 0 to a maximum packet marking probability as follows:

    p = 0,                                        avg < minth
    p = maxp (avg - minth) / (maxth - minth),     minth <= avg < maxth    (1.2)

where the maximum packet marking probability, the minimum threshold, and the maximum threshold are fixed configuration parameters, which are set at configuration time and remain unchanged at runtime. In the second phase, as the average queue length varies from the maximum threshold to twice the maximum threshold, the packet marking probability is linearly varied from the maximum packet marking probability to 1 as follows:

    p = (avg / maxth)(1 - maxp) + 2 maxp - 1,     maxth <= avg < 2 maxth    (1.3)
    p = 1,                                        avg >= 2 maxth

Further, the sole method of packet marking used in the second phase of the operation is packet drop.

A RED gateway operates mostly in the first phase, and would be driven into the second phase solely by an exceptionally heavy load. This thesis considers only the first phase of operation of a RED gateway, in which maintaining a short queuing delay and a high throughput at the gateway is of great importance; the second phase of the operation is out of the scope of this thesis. Moreover, this thesis considers loads that are either solely or mainly composed of TCP flows.

1.1 Shortcomings of RED and its Variants

In general, the nature of the TCP flows traversing a gateway, such as the number of active flows and the sizes of the congestion windows, varies over time, and the per-flow throughput expectations span a wide range. However, with a single linearly adaptive packet marking probability, RED merely makes a limited adaptation to the varying nature, and virtually neglects the throughput expectations of the traversing flows. Although some work for improving the performance of RED in these aspects has been done, the proposed solutions are mostly incomplete. The following subsections describe a couple of these incomplete yet promising solutions on which this thesis is based, and introduce the work done in this thesis for further improving the performance and for simplifying the configurations of RED and its variants.

The reader is assumed to have at least a nodding acquaintance with the Differentiated Services (DiffServ) architecture for Internet Protocol (IP), being standardized by the Internet Engineering Task Force (IETF) [50, 5, 11]. This thesis is directly related to Assured Forwarding Per-Hop Behavior (AF PHB) and Assured Rate Per-Domain Behavior (AR PDB) for DiffServ [33, 59]. [2]

[2] The work could also be applied to Frame Relay (FR) and Asynchronous Transfer Mode (ATM).

1.1.1 RED with In-or-Out Bit

According to the window adjustment algorithms and an approximate steady-state model of TCP [18, 21], the maximum packet loss probability that a TCP flow can endure, given an expected throughput to attain, is given below:

    p = c / w^2    (1.4)

where c is a constant that depends on the acknowledgment algorithm implemented by the receiving TCP, and w is the size of the cwnd that the flow is expected to attain. The value of c is approximately 3/2 if an acknowledgment is sent for each received data segment, or approximately 3/4 if the delayed-acknowledgment (DACK) algorithm [3] is implemented. The value of w depends on the product of the expected throughput and the RTT of the flow.
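A small helper makes this relationship concrete. The sketch below assumes Equation 1.4 in the form p = c / w^2, with w computed from the expected throughput and the RTT; the function names, the 1460-byte MSS, and the example numbers are illustrative only.

```python
# Maximum tolerable packet loss probability for a target TCP throughput,
# assuming Equation 1.4 in the form p = c / w**2 (a sketch, not thesis code).

def expected_window(expected_throughput_bps: float, rtt_s: float,
                    mss_bytes: int = 1460) -> float:
    """w: congestion window, in segments, needed to sustain the expected throughput."""
    return expected_throughput_bps * rtt_s / (mss_bytes * 8)

def max_loss_probability(expected_throughput_bps: float, rtt_s: float,
                         delayed_ack: bool = True) -> float:
    # c depends on the acknowledgment algorithm of the receiving TCP
    c = 3.0 / 4.0 if delayed_ack else 3.0 / 2.0
    w = expected_window(expected_throughput_bps, rtt_s)
    return c / (w * w)

# Example: a flow expecting 5 Mbps over a 100 ms RTT tolerates only a tiny loss rate.
print(max_loss_probability(5e6, 0.1))   # roughly 4e-4 with delayed ACKs
```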
Apparently, if the actual packet loss probability exerted by the network is less than or equal to the maximum packet loss probability of a TCP flow, the flow will be able to attain its expected throughput. The first shortcoming of RED is that the throughput expectations of the traversing flows are virtually neglected, since the packet marking for all the flows is performed according to merely a single probability, which should be mostly incompatible with the different maximum packet loss probabilities expected by the flows.

RED with In-or-Out bit (RIO) and the traffic conditioner [11] were developed to resolve the problem. In the framework with RIO and the conditioner, each packet of a TCP flow is tagged with certain information in its header that denotes whether or not the packet conforms to the profile of the expected throughput of the flow. [3] The processes of classifying, metering, tagging, shaping, and policing packets of a flow according to the throughput expectations of the flow are collectively referred to as traffic conditioning [5]. The mechanism that performs traffic conditioning is referred to as a traffic conditioner, and is deployed at either the sending end-host or the first-hop gateway of its client flow [5]. This thesis introduces Adaptive Token Bucket Traffic Conditioner, realized by integrating the token bucket depth adjustment algorithm for TCP with 2-Rate 3-Color Marker [35]. The conditioner meters and deterministically tags packets of its client TCP flow according to the baseline and the higher expected throughputs yet independent of the burstiness of the flow. The resultant traffic conditioning is defined as throughput-based deterministic traffic conditioning.

[3] The terms tagging and color marking are both common, and refer to the same process as described. In this thesis, only tagging is used to avoid confusion with other similar yet different terms such as ECN-marking and packet marking, performed by RED as well as its variants.

RIO uses two individual sets of configuration parameters and state variables to handle the in-profile and the out-of-profile packets differently. The key characteristic of RIO is that the average queue length for out-of-profile packets is defined to be the EWMA of the sum of the actual queue lengths of the in-profile and the out-of-profile packets, as opposed to merely the actual queue length of the out-of-profile packets. Accordingly, the packet marking probability for the out-of-profile packets has to be strictly more aggressive than that for the in-profile packets. This preferential packet marking with multiple relative probabilities allows the output link bandwidth to be distributed to the traversing TCP flows in an orderly manner that takes into account the level of congestion and the per-flow throughput expectations. The resultant bandwidth distribution is defined as assured bandwidth distribution. The number of packet marking precedence levels in RIO could be generalized to more than two. For example, three levels are defined for AF PHB in DiffServ. This thesis refers to the generalized version of RIO as RED with multiple packet marking precedence levels (REDn), and considers the particular version of REDn with three packet marking precedence levels. The packet marking precedence levels are commonly referred to by colors: green corresponds to the lowest packet marking precedence, yellow to the intermediate, and red to the highest.
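The preferential marking can be sketched as three per-color RED instances sharing one physical queue. The coupling shown below (green sees only the green backlog, yellow sees green plus yellow, red sees the total) is one natural generalization of the RIO rule quoted above; the thesis's exact REDn definition, the class name, and all parameter values are assumptions for illustration.

```python
# Sketch of preferential marking with three precedence levels (illustrative only).

class ColorRed:
    def __init__(self, wq, minth, maxth, maxp):
        self.wq, self.minth, self.maxth, self.maxp = wq, minth, maxth, maxp
        self.avg = 0.0

    def update(self, coupled_queue_len):
        # EWMA over the coupled actual queue length (Equation 1.1)
        self.avg = self.wq * coupled_queue_len + (1 - self.wq) * self.avg

    def marking_probability(self):
        if self.avg < self.minth:
            return 0.0
        if self.avg < self.maxth:
            return self.maxp * (self.avg - self.minth) / (self.maxth - self.minth)
        return 1.0   # second-phase behaviour omitted for brevity

# Out-of-profile colors are marked strictly more aggressively: lower thresholds,
# higher maximum probabilities (example values, not a recommended configuration).
green  = ColorRed(wq=0.002, minth=100, maxth=300, maxp=0.02)
yellow = ColorRed(wq=0.002, minth=60,  maxth=180, maxp=0.05)
red    = ColorRed(wq=0.002, minth=30,  maxth=90,  maxp=0.10)

def on_packet_arrival(q_green, q_yellow, q_red, color):
    green.update(q_green)
    yellow.update(q_green + q_yellow)
    red.update(q_green + q_yellow + q_red)
    return {"green": green, "yellow": yellow, "red": red}[color].marking_probability()
```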
Although REDn maintains an assured bandwidth distribution at the gateway, the proper configuration for the desired performance has been deemed nontrivial to attain, partially because the large number of configuration parameters are in fact intricately related to each other and are therefore difficult to set correctly [48, 17, 46, 58, 37, 29]. This thesis gives guidelines on the proper configuration of Adaptive REDn, a variant of REDn to be introduced, for supporting multiple TCP flows with wide ranges of throughput expectations and RTTs.

1.1.2 Self-Configuring RED

The second shortcoming of RED is that the packet marking probability corresponding to a given average queue length is in fact a fixed value, governed by Equation 1.2 with a fixed maximum packet marking probability, disregarding the varying nature of the traversing TCP flows, specifically the number of active flows and the sizes of the congestion windows. Upon receipt of a congestion notification, an active TCP flow reduces its cwnd at least by half, according to the window adjustment algorithms, and thus the load, or the level of congestion, experienced by the gateway should be reduced by the following fraction:

    r_i = w_i / (2 (w_1 + w_2 + ... + w_n)),    i in {1, ..., n}    (1.5)

where r_i is the fraction of the load reduced due to the flow that is marked, w_i is the cwnd of flow i, and n is the number of active flows traversing the gateway. Consequently, the aggressiveness of a particular packet marking probability actually varies with the number of active flows and the sizes of the congestion windows of the flows: if the number of active flows is small, or the congestion windows of the TCP flows that are marked are large, the particular packet marking probability could unnecessarily trigger a significant reduction in the load, compared to the bandwidth-delay products of the network paths [61], resulting in a low throughput at the gateway and possibly underutilization of the output link. Conversely, if the number of active flows is large, or the congestion windows of the TCP flows that are marked are small, the particular packet marking probability could hardly control the increasing load, resulting in a long queuing delay at the gateway.

    initialization:
        dynamic maxp <- initial dynamic maxp
        prev-state   <- below

    if avg < minth and prev-state != below, then
        dynamic maxp <- dynamic maxp * decrement
        prev-state   <- below
    else if minth < avg < maxth, then
        prev-state   <- between
    else if avg > maxth and prev-state != above, then
        dynamic maxp <- dynamic maxp * increment
        prev-state   <- above

    Figure 1.2: Dynamic Maximum Packet Marking Probability Adjustment Algorithm

Self-Configuring RED [15] was developed to resolve the problem. To adapt to the possibly varying nature of the traversing TCP flows, the maximum packet marking probability in Self-Configuring RED is modified to become a state variable, as opposed to the corresponding fixed configuration parameter in RED. The probability is adjusted dynamically at runtime when the average queue length drops below the minimum threshold, or increases beyond the maximum threshold. Figure 1.2 lists the dynamic maximum packet marking probability adjustment algorithm in Self-Configuring RED. The initial dynamic maximum packet marking probability depends on the estimated norm level of congestion specific to the network.
The dynamic maximum packet marking probability is then adjusted to the varying nature of the flows at runtime when the average queue length falls outside the minimum or the maximum threshold: it is decreased when the average queue length drops below the minimum threshold, and it is increased when the average queue length increases beyond the maximum threshold. As a result, the packet marking probability corresponding to a given average queue length in Self-Configuring RED is no longer a fixed value, but adapts to the varying nature of the flows.
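The adjustment of Figure 1.2 can be combined with the linear interpolation of Equation 1.2 as in the following sketch. The state names and multiplicative decrement/increment mirror Figure 1.2, while the class name, the parameter values, and the boundary handling are assumptions made for this illustration.

```python
# Sketch: Self-Configuring RED's dynamic maximum marking probability (Figure 1.2)
# driving the linear interpolation of Equation 1.2 (illustrative values only).

class SelfConfiguringRed:
    def __init__(self, minth, maxth, initial_maxp=0.02, decrement=1 / 3, increment=3.0):
        self.minth, self.maxth = minth, maxth
        self.decrement, self.increment = decrement, increment
        self.dynamic_maxp = initial_maxp
        self.prev_state = "below"

    def adjust(self, avg):
        """Dynamic maximum packet marking probability adjustment (Figure 1.2)."""
        if avg < self.minth and self.prev_state != "below":
            self.dynamic_maxp *= self.decrement
            self.prev_state = "below"
        elif self.minth <= avg <= self.maxth:
            self.prev_state = "between"
        elif avg > self.maxth and self.prev_state != "above":
            self.dynamic_maxp *= self.increment
            self.prev_state = "above"

    def marking_probability(self, avg):
        """Equation 1.2 evaluated with the current dynamic maximum probability."""
        self.adjust(avg)
        if avg < self.minth:
            return 0.0
        if avg < self.maxth:
            return self.dynamic_maxp * (avg - self.minth) / (self.maxth - self.minth)
        return min(1.0, self.dynamic_maxp)   # second phase omitted for brevity
```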
Since the dynamic maximum packet marking probability adjustment algorithm is triggered solely when the average queue length falls outside the minimum or the maximum threshold, its sensitivity to the changes in the nature of the flows could be unreasonably low in certain scenarios, and the belated adjustment of the dynamic maximum packet marking probability could lead to an unsatisfactory performance, either a long queuing delay or a low throughput at the gateway.

Consider a load that could appear at a backbone gateway in the network of an Internet Service Provider (ISP) [41]. The load stays roughly fixed at a certain level in its steady state. Every so often, it falls considerably from the steady-state level to a much lower level. It then stays at this low level for a certain prolonged period before rising back to the steady-state level again. Suppose that a Self-Configuring RED gateway operates with such a slowly fluctuating load, and every sustained drop in the load drives the average queue length down below the minimum threshold, triggering the dynamic maximum packet marking probability to be decreased. Although the steady-state load remains roughly fixed, the corresponding average queue length still increases significantly, owing to under-marking with the decreased dynamic maximum packet marking probability. The persistent long queuing delay induced contradicts one of the original performance goals for RED of maintaining a short queuing delay at the gateway. The problem is likely to occur at a Self-Configuring RED gateway that has the minimum and the maximum thresholds set far apart, and serves a load with a fairly heavy steady-state component.

Conversely, a dual argument exists with a slowly fluctuating load that intermittently drives the average queue length from its steady-state value up beyond the maximum threshold, and then back to the steady-state value again. Since the dynamic maximum packet marking probability is increased every time the average queue length increases beyond the maximum threshold, the steady-state throughput could finally be decreased severely, contrary to another performance goal for RED of maintaining a high throughput at the gateway. In contrast to its dual instance, though, this problem should be transient: if the average queue length is driven down below the minimum threshold, owing to over-marking with the increased dynamic maximum packet marking probability, the dynamic maximum packet marking probability will be decreased. Since no packet marking is performed when the average queue length is below the minimum threshold, and by default each of the TCP flows increases its cwnd every RTT continuously, the average queue length should overshoot the minimum threshold again. Likewise, if over-marking still persists, the dynamic maximum packet marking probability will be decreased again. Since the average queue length fluctuates around the minimum threshold with merely small amplitude, the dynamic maximum packet marking probability is decreased rapidly, and should quickly settle on a value that allows the average queue length to stay above the minimum threshold, thus ending the fluctuation. The throughput should also be recovered by then.

This thesis introduces the minimum actual queue length threshold configuration parameters and the middle threshold algorithm for improving the sensitivity of the dynamic maximum packet marking probability adjustment algorithm, to enable a short queuing delay and a high throughput at the gateway.

1.2 Research Contributions and Thesis Statement

The main research contributions of this thesis are summarized below:

1. The particular version of REDn with three packet marking precedence levels is implemented, based on the implementation of RED in the ns network simulator [13, 8]. Self-Configuring RED is implemented, and integrated with REDn to realize Self-Configuring REDn.

2. The minimum actual queue length threshold configuration parameters and the middle threshold algorithm are designed, implemented, and integrated with Self-Configuring REDn to realize Adaptive REDn, capable of maintaining a short queuing delay and an assured bandwidth distribution at the gateway.

3. The token bucket depth adjustment algorithm for TCP is designed, implemented, and integrated with 2-Rate 3-Color Marker to realize Adaptive Token Bucket Traffic Conditioner, capable of metering and deterministically tagging packets of its client TCP flow according to the baseline and the higher expected throughputs yet independent of the burstiness of the flow.

4. Guidelines on the proper configurations of Adaptive REDn Gateway and Adaptive Token Bucket Traffic Conditioner are given.

5. The overall performance of the mechanisms, configured according to the guidelines, is evaluated with simulations using the ns network simulator, and the results are presented and explained.

The thesis statement is given below:

The resultant architecture of Adaptive REDn Gateway and Adaptive Token Bucket Traffic Conditioner for TCP, with throughput-based deterministic traffic conditioning performed at the boundary and delayed-policing applied in the interior of the network, enables the implementation of an assured rate network service for TCP in the Internet, providing a short queuing delay and an assured bandwidth distribution at each gateway.

1.3 Thesis Organization

The rest of this thesis is organized as follows: the derivation and the description of the token bucket depth adjustment algorithm for TCP for Adaptive Token Bucket Traffic Conditioner, and a discussion about throughput-based deterministic traffic conditioning, are given in Chapter 2. The derivations and the descriptions of the minimum actual queue length threshold parameters and the middle threshold algorithm for Adaptive REDn Gateway are given in Chapter 3. The configuration guidelines and the simulation results are given in Chapter 4. The intricate relationships among the configuration parameters of Adaptive REDn are explained, and guidelines on the proper configurations of the parameters are given. The guidelines should also be applicable to RED and its variants.
The overall performance of the mechanisms is evaluated with numerous simulations in a wide variety of scenarios comprising network topologies with different numbers of bottleneck links, TCP flows with different throughput expectations and RTTs, and traffic aggregates with different numbers of active flows. All the simulation results corroborate that the architecture of Adaptive REDn Gateway and Adaptive Token Bucket Traffic Conditioner for TCP enables a short queuing delay and an assured bandwidth distribution at each gateway. The conclusions of this thesis, along with directions for future research, are given in Chapter 5.

Chapter 2

Adaptive Token Bucket Traffic Conditioner for TCP

To allow delayed-policing to be applied in a network, the throughput expectations of each of the traversing TCP flows have to be conveyed to the interior gateways through traffic conditioning, performed at the boundary. In general, traffic conditioning includes packet classification, metering, tagging, shaping, and policing [5]. However, for best-effort network service and its variants, shaping and policing are usually restrictively performed, or are suppressed altogether, because a flow is supposed to probe for and to take advantage of its fair share of the available network resources.

This chapter introduces Adaptive Token Bucket Traffic Conditioner, realized by integrating the token bucket depth adjustment algorithm for TCP with 2-Rate 3-Color Marker [35]. The conditioner is intended to work with gateways with REDn or its variants to implement an assured rate network service for TCP. It meters and tags packets of its client TCP flow according to the baseline and the higher expected throughputs yet independent of the burstiness of the flow. For metering, it uses two token buckets, and each bucket has its own parameters of token rate and token bucket depth. In most scenarios, the two token rates are trivially set to the baseline and the higher expected throughputs of the client flow respectively, and tokens are replenished by the byte uniformly over time.

Section 2.1 introduces the token bucket depth adjustment algorithm for TCP for the conditioner. The dependence of the token bucket depth on the window adjustment algorithms of TCP and the attributes of the client flow is identified, and the token bucket depth adjustment algorithm for dynamically adjusting the depth, taking into account the dependence, is described. Then, driven by the result of the metering, the conditioner tags the packets with appropriate colors deterministically. Section 2.2 gives a discussion about the resultant traffic conditioning, defined as throughput-based deterministic traffic conditioning. Section 2.3 lists the related work.

    initialization of state variables:
        depth  <- minimum depth
        tokens <- depth

    if the 1st sRTT of the client TCP flow, then
        depth  <- expected throughput * sRTT
        tokens <- depth
    else, for each subsequent sRTT of the flow:
        depth <- expected throughput * sRTT
        if tokens > depth, then
            tokens <- depth

    Figure 2.1: Token Bucket Depth Adjustment Algorithm for TCP

2.1 Token Bucket Depth Adjustment Algorithm for TCP

For a TCP flow to attain an expected throughput, the number of in-flight packets along the network path has to be at least equal to the product of the expected throughput and the RTT of the flow [61], owing to the self clocking algorithm [38].
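As a concrete illustration of this throughput-delay product (the numbers below are arbitrary and not taken from the thesis):

```python
# Illustration only: arbitrary example numbers.
expected_throughput = 5_000_000      # bits per second
rtt = 0.100                          # seconds
packet_size = 1500 * 8               # bits per packet

inflight_bits = expected_throughput * rtt          # 500,000 bits
inflight_packets = inflight_bits / packet_size     # about 42 packets
print(inflight_bits / 8, "bytes in flight, about", round(inflight_packets), "packets")
```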
Further, traffic of a TCP flow is intrinsically, and remains mostly, bursty [1], owing to numerous factors such as the window adjustment and the self clocking algorithms, the acknowledgment compression phenomenon [67], and so forth. Consequently, to allow a token-bucket-based traffic conditioner to meter its client TCP flow according to the expected throughput yet independent of the burstiness of the flow, the token bucket depth has to be roughly equal to the product of the expected throughput and the RTT of the flow.

[1] As defined in [26], the amount to send per RTT is small compared to the bandwidth-delay product of the path, and most of the packets arrive at a gateway in clusters in short periods.

In most situations, both the baseline and the higher expected throughputs of a TCP flow should be known at configuration time, since they should be governed by, or could be inferred from, the end application. In contrast, the RTT of a TCP flow can only be measured at runtime. The token bucket depth adjustment algorithm for TCP allows Adaptive Token Bucket Traffic Conditioner to set the depths dynamically at runtime: the depth is set to the current throughput-delay product of the expected throughput and the smoothed RTT (sRTT) [38] of the client TCP flow, where the expected throughput is either the baseline or the higher expected throughput. In general, the two depths are adjusted independently.

The algorithm is listed in Figure 2.1. The depth is initialized with a minimum value that should at least be equal to the initial window of TCP [3], and the token bucket is initially full. For the first sRTT of the client TCP flow, the depth is adjusted to the current throughput-delay product, and the token bucket is filled up again. Afterwards, for each subsequent sRTT of the flow, the depth is again adjusted to the current throughput-delay product, and any excess tokens are discarded accordingly. This algorithm allows the metering to be performed according to the conformity of the arriving packets to the profile of the expected throughputs yet independent of the burstiness of the client flow. It also eases the configuration task, since merely the minimum token bucket depths, which can be figured out readily, need to be set.

According to the result of the metering, the packets are tagged with appropriate colors deterministically: the packets that conform to the baseline expected throughput are tagged green, the additional ones that conform to the higher expected throughput yellow, and all the extra ones red. The tagging rate exactly follows the packet rate of the client flow regardless of the burstiness. The conditioner neither creates nor smoothes out any packet bursts. The resultant throughput-based deterministic traffic conditioning allows the client flow to effectively request the best possible throughput and delay from the network regardless of the level of congestion.
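To make the metering and tagging concrete, the following sketch combines the depth adjustment of Figure 2.1 with a two-token-bucket meter and deterministic coloring. It is an illustration under the assumptions stated in the comments (byte-granularity tokens, lazy replenishment at packet arrival, sRTT supplied by the caller), not the thesis implementation or the exact 2-Rate 3-Color Marker of [35].

```python
import time

# Sketch of Adaptive Token Bucket Traffic Conditioner: two token buckets whose
# depths track (expected throughput x sRTT), replenished by the byte, with
# deterministic green/yellow/red tagging. All names and defaults are illustrative.

class AdaptiveBucket:
    def __init__(self, rate_bps, min_depth_bytes):
        self.rate = rate_bps / 8.0          # token replenishment, bytes per second
        self.depth = min_depth_bytes        # minimum depth >= TCP initial window
        self.tokens = self.depth            # bucket starts full
        self.last = time.monotonic()

    def on_srtt(self, expected_throughput_bps, srtt_s):
        """Token bucket depth adjustment algorithm (Figure 2.1)."""
        self.depth = expected_throughput_bps / 8.0 * srtt_s
        self.tokens = min(self.tokens, self.depth)   # discard any excess tokens

    def consume(self, nbytes):
        now = time.monotonic()
        self.tokens = min(self.depth, self.tokens + self.rate * (now - self.last))
        self.last = now
        if self.tokens >= nbytes:
            self.tokens -= nbytes
            return True
        return False


class AdaptiveTokenBucketConditioner:
    def __init__(self, baseline_bps, higher_bps, min_depth_bytes=4 * 1460):
        self.baseline_bps, self.higher_bps = baseline_bps, higher_bps
        self.baseline = AdaptiveBucket(baseline_bps, min_depth_bytes)
        self.higher = AdaptiveBucket(higher_bps, min_depth_bytes)

    def on_srtt(self, srtt_s):
        # the two depths are adjusted independently from the same sRTT sample
        self.baseline.on_srtt(self.baseline_bps, srtt_s)
        self.higher.on_srtt(self.higher_bps, srtt_s)

    def tag(self, packet_bytes):
        """Deterministic tagging driven by the metering result."""
        if self.baseline.consume(packet_bytes):
            return "green"
        if self.higher.consume(packet_bytes):
            return "yellow"
        return "red"
```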
2.1.1 Implementation

The sRTT of the client TCP flow is presumed to be available to the conditioner. Since the current sRTT of a TCP flow is stored as a state variable in the TCP control block at the end-host [54, 62], the most logical place to deploy the conditioner is certainly the end-host, where the sRTT of the client flow can be accessed efficiently. For completeness and security, if the conditioner is deployed at an end-host, a peer policing conditioner should be deployed at either the egress gateway of the local domain or the ingress gateway of the domain downstream to ensure that the domain-level traffic specifications are enforced as well. Otherwise, if the conditioner is deployed at the first-hop gateway of its client flow, either a simple data distribution protocol or packet snooping is needed for obtaining the sRTT of the flow.

2.2 Throughput-Based Deterministic Traffic Conditioning for TCP

According to throughput-based deterministic traffic conditioning for TCP, packets of the client TCP flow that conform to the same profile of the expected throughput in a single RTT are conditioned with the same color to form a packet cluster, as depicted in Figure 2.2.

[Figure 2.2: Throughput-Based Deterministic Traffic Conditioning for TCP -- packets of the client flow over time, shown as clusters of green (G), yellow (Y), and red (R) packets.]

Further, governed by the token bucket depth adjustment algorithm, the number of packets in each green cluster is bounded by the product of the baseline expected throughput and the RTT of the client TCP flow, and that in each yellow cluster by the product of the higher expected throughput and the RTT of the flow. Apparently, the number of packets in each red packet cluster is bounded by the transmission rate of the output link of the end-host.

Green Packet Cluster

Having the green packets in each RTT conditioned into a cluster is probably desirable, since REDn and its variants have no bias against bursty traffic, and the client TCP flow should be allowed to attain the best possible goodput [2]. This conditioning particularly favors the short-lived and the sporadic flows, such as those of world-wide-web (WWW) transfer and the telnet application, so that their small number of packets should be subject to the minimum possible packet marking.

[2] As defined in [21], goodput refers to the net received throughput excluding duplicates.

The conditioning also alleviates the difficulty for a client TCP flow with a relatively long RTT to attain its expected throughput. According to the window adjustment algorithms, the cwnd of a TCP flow varies as the inverse of the RTT, and thus the corresponding throughput varies as the inverse of the square of the RTT [25, 18]. This inverse-square relationship between the throughput and the RTT results in a bias against a TCP flow with a relatively long RTT, so that after the cwnd is deflated in response to congestion, the flow takes significantly longer to recover the lost throughput than one with a relatively short RTT, and thus fails to compete. The conditioning assures that a client TCP flow of such type should experience the least possible packet marking before the baseline expected throughput is attained.

Red Packet Cluster

On the contrary, having the red packets in each RTT conditioned into a possibly long cluster could become a problem during congestion, because multiple packet markings in an RTT could be triggered, forcing the client TCP flow either to stay in the fast-recovery phase for a prolonged period or to enter the slow-start phase unnecessarily. In either case, the flow could hardly attain even its baseline expected throughput.
The problem with the red packet cluster should rarely occur when the flow is in the congestion-avoidance phase, because the cwnd is merely increased linearly, and thus packet markings should be spread out across multiple RTTs. Conversely, the problem should occur mostly when the flow is in the slow-start phase, because the cwnd could continue being increased exponentially, leading to a long red packet cluster even after the higher expected throughput is overshot.

Probabilistic tagging was proposed to resolve the problem with the red packet cluster [11]. Once a TCP flow overshoots its higher expected throughput or packet rate, each subsequent packet will be tagged red with a probability depending on the extent overshot, or will be tagged yellow otherwise. Since this probabilistic tagging realistically functions as deterministic tagging when the higher expected throughput or packet rate is overshot by a significant extent, it could still be an incomplete solution to the problem. Further, this method of tagging could confuse a gateway with REDn or its variant in performing packet marking, resulting in a sluggish control of queuing delay and bandwidth distribution at the gateway [37]. The algorithm for adjusting the tagging probability may need further research to address these glitches.

The problem with the red packet cluster could also be tackled with refinements for TCP or for Adaptive Token Bucket Traffic Conditioner. The problem of TCP losing throughput as a result of multiple packet losses in an RTT has already been dealt with in a number of ways, such as the Slow-but-Steady variant of the NewReno version [24], the Selective Acknowledgment option (SACK) [45, 27], and so forth. Since these advanced versions and options make TCP robust in spite of multiple packet losses in an RTT, Adaptive Token Bucket Traffic Conditioner should need no further refinement. For the other versions of TCP, the refinements for Adaptive Token Bucket Traffic Conditioner involve helping the flow switch to and stay in the congestion-avoidance phase, since a long red packet cluster exists mostly when the flow is in the slow-start phase, as explained. These possible refinements are summarized below:

1. All three colors, instead of two, could be used [30, 29] to help the client TCP flow smoothly switch from the slow-start phase to the congestion-avoidance phase, since marking for yellow packets is performed moderately yet sufficiently to trigger the phase switch.

2. Moderate shaping and policing could be used together with the token bucket depth adjustment algorithm to trigger the phase switch, from the slow-start phase to the congestion-avoidance phase, unconditionally if the client TCP flow overshoots its higher expected throughput, regardless of the level of congestion. This refinement requires knowledge of the current phase of the client flow.

3. The retransmissions could be tagged green to help the client TCP flow stay in the congestion-avoidance phase. This refinement requires knowledge of the current sequence number of the client flow.

This thesis examines a baseline configuration with the Impatient variant of the NewReno version of TCP [24] and Adaptive Token Bucket Traffic Conditioner with 3-color conditioning. Impatient NewReno TCP differs from Slow-but-Steady NewReno TCP in that the retransmission timer is reset for only the first partial acknowledgment.
The simulation results in Chapter 4 show that, given the wide ranges of throughput expectations and RTTs considered, all the conditioned TCP flows suffered no significant loss in throughput resulting from throughput-based deterministic traffic conditioning. Nevertheless, further research is probably needed for a versatile solution.

2.3 Related Work

Time Sliding Window (TSW) Marker for TCP performs rate-based probabilistic traffic conditioning [11, 14]. It meters the packet rate of its client flow with a time-decaying sliding window algorithm, and tags each packet as out-of-profile with a probability depending on the extent that the packet rate overshoots its expected value, or as in-profile deterministically otherwise. This method of conditioning could lead to a significant interleaving of packets of different colors, and the interleaved out-of-profile packets could trigger multiple packet markings in an RTT, even when the client flow is in its early slow-start phase, making the flow fail to attain its expected throughput. A variant of TSW Marker, referred to as Adaptive Packet Marking Engine, also performs rate-based probabilistic traffic conditioning, but uses a slightly different logic in calculating the tagging probability [16].

Chapter 3

Adaptive REDn Gateway

RED provides congestion avoidance, in cooperation with TCP, and maintains a short queuing delay and a high throughput at the gateway. REDn extends this capability further, so that the output link bandwidth is distributed according to the throughput expectations of the traversing TCP flows. However, the proper configuration of REDn for the desired performance has been deemed nontrivial to attain. One of the reasons is that, according to the best-effort network service model or its variants, prior knowledge of the nature of the traversing flows is probably very limited or completely unavailable at configuration time. The configurations of some of the parameters that depend on that knowledge, such as the maximum packet marking probabilities, are thus difficult to do properly. Further, the rather limited linear adaptation to the varying nature of the flows could in fact lead to a bad tradeoff between queuing delay and bandwidth distribution at the gateway. These constraints call for Self-Configuring REDn, which detects the changes in the nature of the traversing TCP flows, and adjusts the maximum packet marking probabilities to the changes dynamically at runtime accordingly. However, the low sensitivity of the dynamic maximum packet marking probability adjustment algorithm to the changes in the nature of the flows, and the belated adjustment of the dynamic maximum packet marking probabilities, could lead to either a long queuing delay or a low throughput at the gateway, especially in scenarios such as those mentioned in Subsection 1.1.2 in Chapter 1.

This chapter introduces Adaptive REDn, realized by integrating the minimum actual queue length threshold configuration parameters and the middle threshold algorithm with Self-Configuring REDn. Adaptive REDn Gateway is intended to work with traffic conditioners, such as Adaptive Token Bucket Traffic Conditioner, to implement an assured rate network service for TCP. Section 3.1 reveals the shortcomings of REDn and Self-Configuring REDn, which logically imply the corresponding design goals for Adaptive REDn.
Section 3.2 introduces the minimum actual queue length threshold configuration parameters, which complement the minimum average queue length threshold parameters, for verifying the existence of congestion to further assure a minimum throughput at the gateway. Section 3.3 introduces the middle threshold algorithm for improving the sensitivity of the dynamic maximum packet marking probability adjustment algorithm and for promptly adopting appropriate maximum packet marking probabilities, according to the underlying causes of increases in the average queue lengths, to reconcile the tradeoff between queuing delay and bandwidth distribution at the gateway. Section 3.4 lists the related work.

3.1 Implications of Simulations

The simulation results in the following subsections reveal the shortcomings of REDn in adapting to the varying nature of the traversing TCP flows, and those of Self-Configuring REDn in reconciling the tradeoff between queuing delay and bandwidth distribution at the gateway. These shortcomings logically imply the design goals for Adaptive REDn to attain with the minimum actual queue length threshold configuration parameters and the middle threshold algorithm. The single-bottleneck-link simulation network, the configurations of the mechanisms, and the simulation scenario involved are described in detail in Section 4.1 in Chapter 4.

3.1.1 REDn with Packet Drop in Scenario B-U-O

The main performance goal for REDn is to maintain an assured bandwidth distribution at the gateway. One of the critical conditions for the desired performance is that the packet marking probabilities have to be adjusted to the varying nature of the traversing TCP flows, so that the aggressiveness of packet marking triggers neither the global synchronization of the flows nor queue overflow. Each of the packet marking probabilities is independently evaluated by linearly interpolating the fixed maximum packet marking probability according to the corresponding average queue length, as given by Equation 1.2 in Chapter 1. Since prior knowledge of the varying nature of the flows is probably very limited or completely unavailable at configuration time, the maximum packet marking probabilities are usually set to certain justifiably large values of up to 0.1 [26], in anticipation of a heavy congestion.

The anticipatory configurations of, and the limited linear interpolations with, the fixed maximum packet marking probabilities make REDn fail to adapt to the varying nature of the flows and to maintain an assured bandwidth distribution at the gateway, as revealed by the per-flow received throughputs depicted in Figure 3.1.

[Figure 3.1: Per-Flow Throughputs with REDn. Panels (a)-(f) show flows with roundtrip transit delays of 30, 60, 70, 120, 160, and 200 msec; the curves correspond to expected throughputs of 10, 5, 3, and 1 Mbps.]
According to Table 4.1 in Chapter 4, the TCP flows having short roundtrip transit delays of 120 milliseconds (msec) and below attain the expected throughputs, but not most of the others having long delays of 160 msec and above: only the flows expecting a throughput of 1 megabit per second (Mbps), the lowest throughput expectation, made it. The flows expecting 5 and 10 Mbps attained various average throughputs that were well below the expected values, because their slow-start phases were terminated prematurely, and opening their congestion windows was intrinsically difficult, owing to the biased window adjustment algorithms. The result makes it evident that the maximum packet marking probabilities were overly aggressive, resulting in over-marking. The average queue lengths, depicted in Figure 3.2, also corroborate the occurrence of over-marking: in their steady states, the average queue lengths oscillated mostly around their minimum thresholds, as opposed to above them. Moreover, the flows expecting 3 Mbps fail to make it in the oversubscribed case. Since the average queue lengths in the oversubscribed case were the longest, the high packet marking probabilities induced intensified over-marking, causing even the flows expecting 3 Mbps, the second lowest throughput expectation, to suffer as well.

[Figure 3.2: Average Queue Lengths and Overall Queuing Delay with REDn. Panels: (a) Green Packets, (b) Yellow Packets, (c) Red Packets, (d) Overall Queuing Delay; dashed lines mark the minimum and maximum thresholds.]

    Green                  Yellow                 Red
    Time     Probability   Time     Probability   Time     Probability
    0.0      0.1           0.0      0.1           0.0      0.1
    1.0      0.033         1.8      0.033         1.8      0.033
    1.1      0.011         2.0      0.011         2.0      0.011
    1.8      0.0037        4.9      0.0037        4.9      0.0037
    2.0      0.0012        5.1      0.0012        5.1      0.0012
    2.2      0.00041       50.1     0.00041       50.1     0.00041
    4.4      0.00013       55.5     0.00013       53.6     0.00013
    4.6      4.57e-05      55.6     4.57e-05      53.8     4.57e-05
    4.9      1.52e-05      55.7     1.52e-05      54.0     1.52e-05
    5.1      5.08e-06      55.8     5.08e-06      101.2    3.04e-05
    5.42     1.69e-06      56.1     1.69e-06      101.3    6.09e-05
    5.44     5.64e-07      102.0    3.38e-06      101.8    0.00012
    50.1     1.88e-07      102.5    6.77e-06      101.9    0.00024
    59.0     6.27e-08      102.9    1.35e-05      102.4    0.00048
    59.15    2.09e-08      103.1    2.709e-05     102.5    0.00097
    59.17    6.96e-09      122.7    5.41e-05      102.9    0.0019
    59.19    2.32e-09      124.6    0.000108      103.1    0.0039
    59.2     7.74e-10      124.7    0.00021       122.7    0.0078
    59.4     2.58e-10      125.9    0.00043       124.6    0.015
    59.7     8.603e-11     127.0    0.00086       127.1    0.031
    103.2    1.72e-10      127.2    0.0017
    103.3    3.44e-10

    Table 3.1: Dynamic Maximum Packet Marking Probabilities of Self-Configuring REDn

3.1.2 Self-Configuring REDn with Packet Drop in Scenario B-U-O

The lack of prior knowledge of, and the limited linear adaptation to, the varying nature of the traversing TCP flows call for Self-Configuring REDn, which detects the changes in the nature of the flows, and adjusts the maximum packet marking probabilities to the changes dynamically at runtime accordingly.
The attained assured bandwidth distribution at the gateway indeed meets the original performance goal for REDn, as revealed by the per-flow received throughputs depicted in Figure 3.3: according to Table 4.1 in Chapter 4, all the TCP flows, including those having long roundtrip transit delays of 160 msec and above, attained the expected throughputs, or certain average throughputs that were very close to the expected values, in all the cases.

Figure 3.3: Per-Flow Throughputs with Self-Configuring REDn. Panels (a) through (f) plot the received throughputs over time of the flows expecting 10, 5, 3, and 1 Mbps, against the expected values, for roundtrip transit delays of 30, 60, 70, 120, 160, and 200 msec.

Nevertheless, because the dynamic maximum packet marking probability adjustment algorithm was triggered solely when any of the average queue lengths fell outside its minimum or maximum threshold, the resulting belated adjustment of the dynamic maximum packet marking probabilities leads to either a long queuing delay or a low throughput at the bottleneck gateway, as revealed by the average queue lengths and the overall average queuing delay depicted in Figure 3.4, and by the dynamic maximum packet marking probabilities listed in Table 3.1. Although the initial dynamic maximum packet marking probabilities of 0.1 should be overly aggressive for the load in the baseline case, as discussed in the preceding subsection, the fluctuations in the average queue lengths crossing the minimum thresholds in the first 5 seconds indeed triggered the dynamic maximum packet marking probabilities to be decreased promptly to avoid over-marking, so that all the flows got through their slow-start phases smoothly, as opposed to being terminated prematurely as with REDn. In the undersubscribed case, the dynamic maximum packet marking probabilities were further decreased, owing to the decrease in the number of active flows. Afterwards, in the oversubscribed case, the average queue lengths sharply increased beyond the maximum thresholds, owing to under-marking with the decreased dynamic maximum packet marking probabilities and the low sensitivity to the increase in the number of active flows. The dynamic maximum packet marking probabilities were then increased, bringing the average queue lengths down below the maximum thresholds merely slightly. The overall average queuing delay induced at the bottleneck gateway first increased to 100 msec, and then stayed roughly fixed at 80 msec, which should be deemed long for most applications.

In conclusion, the low sensitivity of the dynamic maximum packet marking probability adjustment algorithm to the changes in the nature of the traversing TCP flows and the belated adjustment of the dynamic maximum packet marking probabilities could lead to a poor tradeoff between queuing delay and bandwidth distribution at the gateway.

Figure 3.4: Average Queue Lengths and Overall Queuing Delay with Self-Configuring REDn. Panels (a) through (c) plot the green, yellow, and red average queue lengths against their minimum and maximum thresholds, and panel (d) plots the overall queuing delay.
3.2 Minimum Actual Queue Length Threshold Configuration Parameters

REDn, as well as its variants, infers the existence of congestion at the gateway from monitoring the average queue lengths. Each of the average queue lengths is defined to be the EWMA of the corresponding actual queue length, as given by Equation 1.1 in Chapter 1. A 63% change in an average queue length to follow a change in the corresponding actual queue length takes at least -1/ln(1 - w_q) packet arrivals, where w_q is the queue weight. Although this deliberate delay, or difference between the actual and the average queue lengths, is certainly required to disambiguate a persistent overload and a normal transient overload due to bursty traffic in inferring the existence of congestion at the gateway, it also lowers the sensitivity to both a diminishing congestion and the end of congestion, possibly leading to over-marking.

The minimum actual queue length threshold configuration parameters, which complement the minimum average queue length threshold parameters, are added for Adaptive REDn to verify the existence of congestion to further assure a minimum throughput at the gateway. Moreover, a side benefit is that the problems of over-marking and losing throughput at the gateway resulting from the low sensitivity of the dynamic maximum packet marking probability adjustment algorithm and the belated decrease in the dynamic maximum packet marking probabilities should also be alleviated. The original algorithm that governs per-color packet marking, as stated by Equation 1.2 in Chapter 1, needs to be modified as follows:

    p = 0,                                        if avg < minth or act < act_minth
    p = maxp (avg - minth) / (maxth - minth),     if minth <= avg < maxth and act >= act_minth        (3.1)

where act is the actual queue length, and act_minth is the minimum actual queue length threshold. If any of the actual queue lengths is already shorter than the corresponding average queue length, and is also below a minimum threshold configured according to the output link capacity, congestion due to the packets of the associated color should be deemed nonexistent, and therefore packet marking for this color should be disabled, even though the average queue length could be above its minimum threshold. Otherwise, starting or continuing to perform packet marking could probably lead to over-marking, a low throughput at the gateway, and possibly underutilization of the output link. In brief, per-color packet marking is enabled solely when any of the average queue lengths and the corresponding actual queue length exceed their minimum thresholds respectively.
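A minimal sketch of the modified per-color decision of Equation 3.1 follows, reusing the linear interpolation shown earlier; act_min_th and the other names are illustrative, not the identifiers of the thesis's implementation.

    # Sketch of the Adaptive REDn per-color marking decision (Equation 3.1):
    # marking for a color is enabled only when both the average and the actual
    # queue lengths exceed their minimum thresholds.  Illustrative names only.
    def adaptive_marking_probability(avg, act, min_th, max_th, act_min_th, max_p):
        if avg < min_th or act < act_min_th:
            return 0.0  # congestion for this color deemed nonexistent
        if avg < max_th:
            return max_p * (avg - min_th) / (max_th - min_th)
        return 1.0      # beyond max_th every arrival is marked (as in the original RED)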
3.3 Middle Threshold Algorithm

The main possible drawbacks to the performance of Self-Configuring REDn are under-marking and a persistent long queuing delay at the gateway resulting from the low sensitivity of the dynamic maximum packet marking probability adjustment algorithm and the belated increase in the dynamic maximum packet marking probabilities. Increase in any of the dynamic maximum packet marking probabilities is triggered solely when the corresponding average queue length increases beyond its maximum threshold. When the average queue length approaches its maximum threshold, the queuing delay induced at the gateway has already become long. Further, a number of fluctuations in the average queue length crossing the maximum threshold could occur before the dynamic maximum packet marking probability becomes sufficiently increased, because the fixed multiplicative increment for the dynamic maximum packet marking probability adjustment has to be small to avoid the global synchronization of the TCP flows. Lastly, the average queue length should take a longer time to increase beyond its maximum threshold every time, because the dynamic maximum packet marking probability is being increased gradually, and the overall increase in the throughputs of the flows remains mostly linear. The overall long period taken for sufficiently increasing the dynamic maximum packet marking probability to avoid under-marking realistically makes the long queuing delay at the gateway persistent.

When any of the average queue lengths starts increasing, the underlying cause of the increase could indeed be identified, with a suitable test, well before the average queue length approaches its maximum threshold. Depending on the cause, an appropriate maximum packet marking probability should be adopted promptly after the test to avoid under-marking and a long queuing delay at the gateway. Significant increase in, or nearly loss of control of, any of the average queue lengths stems from either prior decrease in the dynamic maximum packet marking probability or increase in the number of active flows. The dynamic maximum packet marking probability adjustment algorithm alone is unable to differentiate these causes, and could react belatedly, leading to under-marking and a persistent long queuing delay at the gateway.

The middle threshold algorithm is added for Adaptive REDn to improve the sensitivity of the dynamic maximum packet marking probability adjustment algorithm, with the early test, and to promptly adopt the appropriate maximum packet marking probabilities for reconciling the tradeoff between queuing delay and bandwidth distribution at the gateway. The middle threshold algorithm and the dynamic maximum packet marking probability adjustment algorithm have to function together, and the integrated algorithm is listed in Figure 3.5.

    initialization of state variables:
        dynamic maxp ← static maxp
        effective maxp ← dynamic maxp
        prev-state ← below
        prev-state-mid ← below

    if avg < minth and prev-state ≠ below, then
        dynamic maxp ← dynamic maxp · decrement
        effective maxp ← dynamic maxp
        prev-state ← below
    else if minth < avg < maxth then
        if avg < midth and prev-state-mid = above, then
            effective maxp ← dynamic maxp
            prev-state-mid ← below
        else if avg > midth and prev-state-mid = below, then
            if dynamic maxp < static maxp then
                dynamic maxp ← dynamic maxp · increment
            if dynamic maxp < static maxp then
                effective maxp ← static maxp
            else
                effective maxp ← dynamic maxp
            prev-state-mid ← above
        prev-state ← between
    else if avg > maxth and prev-state ≠ above, then
        if dynamic maxp < static maxp then
            dynamic maxp ← static maxp
        dynamic maxp ← dynamic maxp · increment
        effective maxp ← dynamic maxp
        prev-state ← above

Figure 3.5: Integration of Middle Threshold Algorithm and Dynamic Maximum Packet Marking Probability Adjustment Algorithm
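A runnable sketch of the listing in Figure 3.5 is given below, applied per color whenever the average queue length is updated. It follows the figure directly; the only interpretation made here is that the multiplicative decrement divides the dynamic probability, consistent with the recommended increment of 2 and decrement of 3 cited in Chapter 4, and the class and variable names are illustrative rather than those of the thesis's implementation.

    # Sketch of the integrated middle threshold / dynamic maximum packet marking
    # probability adjustment algorithm of Figure 3.5.  The multiplicative
    # decrement is applied as a division here, which is an interpretation of the
    # listing, not a quotation of the thesis's code.
    class MaxPState:
        def __init__(self, static_maxp=0.1):
            self.static_maxp = static_maxp
            self.dynamic_maxp = static_maxp     # dynamic maxp <- static maxp
            self.effective_maxp = static_maxp   # effective maxp <- dynamic maxp
            self.prev_state = "below"
            self.prev_state_mid = "below"

    def adjust_maxp(s, avg, min_th, mid_th, max_th, increment=2.0, decrement=3.0):
        if avg < min_th and s.prev_state != "below":
            s.dynamic_maxp /= decrement
            s.effective_maxp = s.dynamic_maxp
            s.prev_state = "below"
        elif min_th < avg < max_th:
            if avg < mid_th and s.prev_state_mid == "above":
                # fell back below the middle threshold: revert to the dynamic value
                s.effective_maxp = s.dynamic_maxp
                s.prev_state_mid = "below"
            elif avg > mid_th and s.prev_state_mid == "below":
                # crossed the middle threshold from below: raise the dynamic value
                # and adopt the larger of the static and dynamic probabilities
                if s.dynamic_maxp < s.static_maxp:
                    s.dynamic_maxp *= increment
                s.effective_maxp = max(s.static_maxp, s.dynamic_maxp)
                s.prev_state_mid = "above"
            s.prev_state = "between"
        elif avg > max_th and s.prev_state != "above":
            # overshot the maximum threshold: jump to at least the static value
            if s.dynamic_maxp < s.static_maxp:
                s.dynamic_maxp = s.static_maxp
            s.dynamic_maxp *= increment
            s.effective_maxp = s.dynamic_maxp
            s.prev_state = "above"
        return s.effective_maxp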
Except for the integration of the middle threshold algorithm, no modification is made to the original dynamic maximum packet marking probability adjustment algorithm. The additional attributes for the middle threshold algorithm include the per-color configuration parameters of the middle average queue length thresholds and of the static maximum packet marking probabilities, and the per-color state variables for storing the previous states of the average queue lengths. The middle threshold algorithm comes into operation when any of the average queue lengths, except the one of the lowest packet marking precedence, crosses its middle threshold: if the average queue length crosses its middle threshold from below, the dynamic maximum packet marking probability will be increased, and the larger of the static and the dynamic maximum packet marking probabilities will be adopted as the effective maximum packet marking probability. Otherwise, if the average queue length crosses its middle threshold from above, the dynamic maximum packet marking probability will always be adopted as the effective maximum packet marking probability.

The middle average queue length thresholds denote the points where the early test for identifying the underlying causes of increases in the average queue lengths should be conducted, and where the different maximum packet marking probabilities should be adopted. To avoid under-marking and a possibly persistent long queuing delay at the gateway, each of the middle thresholds should be set to a value that is considerably smaller than the maximum threshold, yet is sufficiently greater than the minimum threshold, and corresponds to a reasonably short queuing delay for the types of traversing traffic under a normal level of congestion. The static maximum packet marking probabilities denote the maximum packet marking probability values that should be used if increases in the average queue lengths stem from increase in the number of active flows. Each of them should be set to a certain justifiably large value of up to 0.1 [26] in anticipation of a large number of active flows.

3.3.1 Evaluation

The scope of the middle threshold algorithm is essentially the entire first phase of the operation. Further, if any of the dynamic maximum packet marking probabilities is increased to become greater than or equal to the corresponding static maximum packet marking probability, the middle threshold algorithm will have no effect on this dynamic maximum packet marking probability, and the integrated algorithm will be functionally identical to the dynamic maximum packet marking probability adjustment algorithm alone. The middle threshold algorithm has to be disabled for the lowest packet marking precedence level. The rationale behind the middle threshold algorithm is explained in detail below.
Without loss of generality, given the simplifying assumptions that all the traversing TCP flows have identical expected throughputs and RTTs, and that the in-flight packets being transmitted in the data links are neglected, the relationship among the number of active flows n, the size of the cwnd of each of the flows w, and the actual queue length at the gateway q is given by:

    q = n w    (3.2)

Substituting for w from Equation 1.4 in Chapter 1 and rearranging the terms then expresses the packet marking probability exerted by the gateway in terms of the number of active flows and the actual queue length. The derivation states that the packet marking probability, as well as the maximum packet marking probability, mainly depends on the number of active flows and the actual queue length at the gateway. The middle threshold algorithm is heavily based on this relationship.
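For illustration only, combining Equation 3.2 with a conventional steady-state TCP window-loss relation, assumed here in place of the exact form of Equation 1.4 in Chapter 1 (which may differ in its constant), makes the dependence explicit:

    w = \frac{q}{n}, \qquad w \approx \sqrt{\frac{8}{3p}}
    \quad\Longrightarrow\quad
    p \approx \frac{8}{3w^{2}} = \frac{8n^{2}}{3q^{2}}

Under this assumed model, holding the actual queue length near a target requires a marking probability that grows with the square of the number of active flows, whereas holding the marking probability fixed lets the queue length grow linearly with the number of flows.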
Prior Decrease in Dynamic Maximum Packet Marking Probability

If any of the average queue lengths crosses its middle threshold from below, the middle threshold algorithm will initially assume that the increase stems from increase in the number of active flows. According to the algorithm, the dynamic maximum packet marking probability will be increased, and the static maximum packet marking probability will be adopted as the effective maximum packet marking probability. However, if the increase in the average queue length actually stems from prior decrease in the dynamic maximum packet marking probability, which simply contradicts the initial assumption, the static maximum packet marking probability should be overly aggressive, and should cause the average queue length to drop. As soon as the average queue length drops below its middle threshold, the smaller yet increased dynamic maximum packet marking probability will replace the overly aggressive static one to become the effective maximum packet marking probability. Likewise, if under-marking persists, and the average queue length increases to cross its middle threshold again, the dynamic maximum packet marking probability will be increased further until finally it settles on a value that keeps the average queue length around the middle threshold, and allows the flows to attain the expected throughputs. The convergence should be reached rapidly since the dynamic maximum packet marking probability is being increased exponentially. The corresponding queuing delay should be reasonably short, provided that the middle thresholds are properly configured.

In brief, if increase in any of the average queue lengths stems from prior decrease in the dynamic maximum packet marking probability, this dynamic probability will be increased appropriately and promptly to cancel out the obsolete prior adjustment, avoiding under-marking and a long queuing delay at the gateway. This branch of the middle threshold algorithm is directly derived from Equation 3.2: given that the number of active flows remains constant, the actual queue length is increasing, and the per-flow packet marking probability, owing to prior decrease in the dynamic maximum packet marking probability, is smaller than the maximum packet loss probability of at least one of the traversing TCP flows, as given by Equation 1.4 in Chapter 1, the dynamic maximum packet marking probability should be increased exponentially and explicitly to keep the actual queue length, or the queuing delay, at a reasonably small value. Otherwise, owing to under-marking, the actual queue length would have increased quadratically, leading to a long queuing delay at the gateway unnecessarily.

Increase in Number of Active Flows

On the other hand, if the increase in the average queue length really stems from increase in the number of active flows, which exactly matches the initial assumption, the average queue length should remain above the middle threshold even after the relatively aggressive static maximum packet marking probability is adopted as the effective maximum packet marking probability. The fixed static maximum packet marking probability in fact lets extra buffer space at the gateway be consumed for admitting the additional flows and for allowing all the flows to attain the expected throughputs, while it attempts to keep the increase in the average queue length moderate. The extra queuing delay induced is inevitable yet under control. As opposed to attempting to keep the average queue length around the middle threshold for a short queuing delay at the gateway, trading a moderate amount of the buffer space, or the queuing delay, for an assured bandwidth distribution at the gateway should be justifiable since the load is actually increased.

In brief, if increase in any of the average queue lengths stems from increase in the number of active flows, the effective maximum packet marking probability will be kept fixed at an appropriate value to allow the additional flows to be admitted and to attain the expected throughputs. Adopting a fixed and justifiably aggressive maximum packet marking probability in the first phase of the operation in this scenario is identical to the original RED, except that, as opposed to adopting the aggressive probability unconditionally, the static maximum packet marking probability is adopted solely after deducing that the increase in the average queue length should stem from increase in the number of active flows. Further, given the already anticipatory configuration of the static maximum packet marking probability, the effective maximum packet marking probability should be increased further solely after the average queue length overshoots the maximum threshold, and a large portion of the buffer space is occupied persistently.

This branch of the middle threshold algorithm is also directly derived from Equation 3.2: according to the best-effort network service model or its variants, no flow should be denied attaining its expected throughput, regardless of the delay, provided that network resources permit. Apparently, the dynamic maximum packet marking probability should not be increased indefinitely in response to increase in the number of active flows merely to constrain the growth of the buffer occupancy. The packet marking probability should be bounded by a maximum value justified for the first phase of the operation, such as 0.1 [26], attempting to allow the flows to attain the expected throughputs. This bound directly implies that the dynamic maximum packet marking probability should be bounded accordingly. Once the dynamic maximum packet marking probability is increased to become greater than or equal to the static maximum packet marking probability, the effective maximum packet marking probability will be kept constant to let the buffer space be consumed to make a best-effort attempt at allowing the flows to attain the expected throughputs. Further increase in the effective maximum packet marking probability should be made solely when most of the buffer space is occupied persistently.
In conclusion, the middle threshold algorithm greatly improves the sensitivity of the dynamic maximum packet marking probability adjustment algorithm so that the appropriate maximum packet marking probabilities are inferred and adopted sufficiently early to reconcile the tradeoff between queuing delay and bandwidth distribution at the gateway, which is indeed consistent with the performance goal for an assured rate network service for TCP.

3.4 Related Work

A refined version of RED uses multiple fixed maximum packet marking probabilities to cope with increase in the average queue length [64]. The span between the minimum and the maximum thresholds is logically divided into a number of regions. The ordinal regions are associated with increasing, fixed maximum packet marking probabilities of up to 0.2. The original linear interpolation algorithm is used for determining the packet marking probability within each region. Since the maximum packet marking probabilities are still constant, this refined version of RED also suffers from the same problem as RED and REDn: the anticipatory configurations of, and the limited linear interpolations with, the fixed maximum packet marking probabilities could make it fail to adapt flexibly to the varying nature of the flows, resulting in a poor tradeoff between queuing delay and bandwidth distribution at the gateway. Yet, this refined version of RED is functionally similar to an inflexible, coarse-grained version of the integrated middle threshold and dynamic maximum packet marking probability adjustment algorithm presented in the preceding section.

A slightly different version of the random early detection algorithm, referred to as Stabilized RED (SRED), explicitly estimates the number of active flows statistically [51]. The packet marking probability is evaluated by interpolating a fixed maximum packet marking probability according to the estimated number of active flows, the estimated per-flow bandwidth, and the actual queue length. Since the maximum packet marking probability is also constant, SRED could fail to adapt flexibly to the varying nature of the flows, resulting in a poor tradeoff between queuing delay and bandwidth distribution at the gateway.

Chapter 4

Configuration Guidelines and Simulation Results

Adaptive REDn Gateway, Self-Configuring REDn Gateway, REDn Gateway, Adaptive Token Bucket Traffic Conditioner for TCP, and 2-Rate 3-Color Marker have been evaluated with numerous simulations in a wide variety of scenarios comprising network topologies with different numbers of bottleneck links, TCP flows with different throughput expectations and RTTs, and traffic aggregates with different numbers of active flows. In brief, all the simulation results certainly corroborate the claim made in this thesis that the architecture of Adaptive REDn Gateway and Adaptive Token Bucket Traffic Conditioner for TCP enables a short queuing delay and an assured bandwidth distribution at each gateway.

Section 4.1 describes in detail the single-bottleneck-link simulation network, the configurations of the mechanisms, and the simulation scenarios involved in evaluating Adaptive REDn, Self-Configuring REDn, REDn, and Adaptive Token Bucket Traffic Conditioner for TCP in handling TCP traffic. The intricate relationships among the configuration parameters of Adaptive REDn are explained, and guidelines on the proper configurations of the parameters are given.
The Adaptive REDn, the Self-Configuring REDn, and the REDn for the simulations are all configured according to these guidelines. The main simulation results for Adaptive REDn and Adaptive Token Bucket Traffic Conditioner for TCP are presented and explained in the section, whereas the additional ones are presented and explained in Section A.1 in Appendix A. Those for Self-Configuring REDn, REDn, and Adaptive Token Bucket Traffic Conditioner for TCP are presented and explained in Section 3.1 in Chapter 3. Section 4.2 describes in detail the dual-bottleneck-link simulation network, the configurations of the mechanisms, and the simulation scenarios involved in evaluating Adaptive REDn and Adaptive Token Bucket Traffic Conditioner for TCP in handling TCP traffic. The main simulation results are presented and explained in the section, whereas the additional ones are presented and explained in Section A.2 in Appendix A. Section 4.3 describes in detail the single-bottleneck-link simulation network, the configurations of the mechanisms, and the simulation scenario involved in evaluating Adaptive REDn, Adaptive Token Bucket Traffic Conditioner for TCP, and 2-Rate 3-Color Marker in handling multiplexed TCP and User Datagram Protocol (UDP) [53] traffic. The simulation results are presented and explained. An overview of the related issues and recent work is also given.

4.1 Simulation Scenarios with One Bottleneck Data Link

Simulation Network

The simulation network has a single bottleneck link, as depicted in Figure 4.1. The transmission rate of each of the access links is 100 Mbps, and that of the bottleneck link is 201.6 Mbps, which is 1.2 times the total baseline expected throughput of all the active TCP flows in the baseline case for all the simulation scenarios. For a TCP flow to attain a particular average throughput, the instantaneous throughput has to be allowed to oscillate between 0.66 times and 1.33 times this average throughput, because the decrease in the throughput in response to a single packet marking is half of the instantaneous value, according to the window adjustment algorithms. However, the bottleneck link capacity is merely 20% in excess of the total baseline expected throughput of all the active flows in the baseline case, as opposed to 33% for the worst-case phasing, since a certain constructive multiplexing gain in the utilization is anticipated. The access links have different transit delays, and the end-to-end network paths should have different roundtrip transit delays and thus RTTs, which include queuing delays. The 6 roundtrip transit delays being considered are 30, 60, 70, 120, 160, and 200 msec. The simulation network consists of 48 end-to-end network paths, and each of the roundtrip transit delays is associated with an equal number of paths. All the data links are lossless and of type duplex, and the configurations for the forward and the reverse transmissions are identical.

Figure 4.1: Simulation Network with One Bottleneck Data Link. The sending TCPs (S) with their traffic conditioners (C) are attached through access links to the gateway (G) running REDn or its variant, which connects over the bottleneck link to the egress gateway and the receiving TCPs (R).

Sending Node

Each of the 48 sending nodes is composed of a bulk data source utilizing File Transfer Protocol (FTP), a sending TCP, and an Adaptive Token Bucket Traffic Conditioner for TCP.
The FTP source is always ready, and has an infinite amount of data to send. The sending TCP implements the Impatient variant of the NewReno version, which responds to partial acknowledgments by staying in the fast-recovery phase to send 1 retransmission every RTT until an RTO is triggered, or a full acknowledgment is received [24]. The retransmission timer is reset for only the first partial acknowledgment. Further, the TCP uses the large initial window [2] and the timestamp option [42]. The sending TCP is logically connected to one and only one receiving TCP to form a unique TCP flow. The size of each TCP data segment is fixed at 1 kilobyte (kB).

The 2 token rates of the conditioner are trivially set to the baseline and the higher expected throughputs of the client TCP flow. The 4 baseline expected throughputs being considered are 1, 3, 5, and 10 Mbps, and the corresponding higher expected throughputs are twice the respective baseline expected throughputs. Both of the minimum token bucket depths for the conditioner are set to 5 packets to match the large initial window of TCP. At runtime, the token bucket depths for the conditioner are dynamically adjusted according to the token bucket depth adjustment algorithm. The conditioner is based on 2-Rate 3-Color Marker and the implementation of the single token bucket shaper in the ns network simulator.

For all the end-to-end network paths, each path is associated with a unique pair of TCP flow and traffic conditioner. The 48 conditioners are configured to embrace twice the 24 permutations of all the roundtrip transit delays of the paths and the baseline expected throughputs of the flows. For example, for the 8 conditioners associated with the paths having the same roundtrip transit delay of 70 msec, a pair of them are identically set to have a baseline expected throughput of 1 Mbps, another pair to have 3 Mbps, and so forth. The completely replicated set of TCP flows constitutes a control simulation run simply for verifying the results. In brief, only the Impatient NewReno TCP is used, and the key differences among the flows are the roundtrip transit delays of the paths and the traffic conditionings performed according to the per-flow throughput expectations.

Receiving Node

Each receiving node is composed of a receiving TCP and a bulk data sink. The receiving TCP implements DACK [3]: an acknowledgment is generated for an out-of-sequence data segment and for at least every second data segment, or within 500 msec after receiving the first unacknowledged data segment. The advertised window size of the receiving TCP is always infinity so that the cwnd of the peer sending TCP, as well as the received throughput of the flow, should freely vary with the packet marking probability exerted by the bottleneck gateway. The FTP sink records the traffic statistics, and then simply discards the received data.

Bottleneck Gateway

The bottleneck gateway has 2 individual queues: the queue for forward transmission implements FIFO queuing with Adaptive REDn, Self-Configuring REDn, or REDn depending on the simulation scenarios, and the one for reverse transmission always implements FIFO queuing with the intrinsic droptail policy on queue overflow.
Configurations of REDn and its Variants

For REDn and its variants, the aggressiveness of preferential packet marking is adjustable since an individual set of configuration parameters is associated with each of the colors; for example, 3 individual minimum average queue length thresholds are associated with green, yellow, and red respectively. However, the aggressiveness of preferential packet marking should conform to the following specifications [37]: the relative aggressiveness of preferential packet marking should mainly vary with the differences among the average queue lengths, and if the colors associated with all the arriving packets are identical, REDn and its variants should function exactly as RED, and the same aggressiveness of packet marking should be exerted regardless of what this color is. Accordingly, the REDn and its variants for the simulations are configured as follows: all the corresponding instances of each of the configuration parameters are set to the same value, except those of a few parameters that are supposed to be set to different values governed by the associated algorithms or the simulation scenarios. For example, the 3 minimum average queue length thresholds are identically set to 189 packets, the 3 maximum average queue length thresholds to 2520 packets, and so forth. In general, this approach to the configurations not only makes REDn and its variants conform to the specifications, but also greatly simplifies the tasks. Further, the implementations may be simplified as well, since merely a single instance is needed for each of most of the configuration parameters.

The Adaptive REDn for the simulations is configured as follows: The static maximum packet marking probability for the middle threshold algorithm and the initial dynamic maximum packet marking probability are both set to 0.1. Inherited from RED, multiple markings made to packets of the same color are spaced evenly over time, and the expected number of packets between a single pair of successive markings is given by 0.5/p + 0.5, where p is the packet marking probability, as given by Equation 3.1 in Chapter 3 [26]. When the packet marking probability is increased to become equal to the maximum packet marking probability of 0.1, the amount of arriving packets being marked is approximately 20%, which should be the maximum justifiable amount for the first phase of the operation [26]. The queue weight is set to 0.002. This value induces a 63% change in the average queue length following the corresponding change in the actual queue length to take at least 500 packet arrivals [26]. This number of packet arrivals is equivalent to twice the largest product of the baseline expected throughput and the roundtrip transit delay among those for all the traversing TCP flows, and should provide a sufficiently long delay to disambiguate an incipient congestion and a normal transient overload. The maximum average queue length threshold is set to 2520 packets. This number of packets is denoted by the maximum number of packet markings, which is the maximum average number of queued packets dissipated in an RTT. Assuming that the RTT of the longest end-to-end network path is 500 msec, the maximum number of packet markings is R · RTT · 2maxp = 2520 packets, where R is the transmission rate of the bottleneck link, and maxp is the maximum packet marking probability. The middle average queue length threshold is set to 378 packets. This number of packets accounts for approximately 15 msec of queuing delay at the bottleneck gateway, and the delay should be deemed short for most applications. The middle threshold algorithm is only enabled for the yellow and the red packet marking precedence levels. The disabling of the algorithm for the green level should ensure that packet marking is made mostly to yellow and red packets, and to green packets only under a heavy congestion. The minimum average queue length threshold is set to 189 packets. This value is half of the middle threshold value so that a persistent increase in the load should not be falsely inferred from a typical increase in the average queue length resulting from normal transient packet bursts [26]. The minimum actual queue length threshold is also set to 189 packets. The dynamic maximum packet marking probability adjustment algorithm is enabled for all the 3 packet marking precedence levels, and the multiplicative increment and decrement are set to the recommended values of 2 and 3 respectively [15]. No minimum or maximum bound is configured for the dynamic maximum packet marking probabilities; the probabilities should freely vary with the varying nature of the flows. Packet marking is performed with either packet drop or ECN, depending on the simulation scenarios.
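The threshold values above can be checked with a few lines of arithmetic; this is only a verification sketch using the figures quoted in the text (1-kB packets, the 201.6-Mbps bottleneck link, an assumed 500-msec worst-case RTT, and a maximum marking probability of 0.1), not part of the simulator configuration.

    # Verifying the Adaptive REDn threshold configuration for the
    # single-bottleneck-link network from the figures quoted above.
    import math

    R = 201.6e6 / 8 / 1000          # bottleneck rate in 1-kB packets per second
    rtt = 0.5                       # assumed RTT of the longest path, in seconds
    max_p = 0.1

    max_th = R * rtt * 2 * max_p    # maximum number of packet markings
    print(round(max_th))            # -> 2520 packets

    mid_th = 378
    print(1000.0 * mid_th / R)      # -> 15.0 msec of queuing delay at 201.6 Mbps

    min_th = mid_th / 2
    print(min_th)                   # -> 189.0 packets

    w_q = 0.002
    print(-1.0 / math.log(1.0 - w_q))  # -> about 500 packet arrivals for a 63% change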
This number of packets accounts for approximately 15 msec of queuing delay at the bottleneck gateway, and the delay should be deemed short for most applications. The middle threshold algorithm is only enabled for the yellow and the red packet marking precedence levels. The disabling of the algorithm for the green level should ensure that packet marking is made mostly to yellow and red packets and to green packets only under a heavy congestion. The minimum average queue length threshold is set to 189 packets. This value is half of the middle threshold value so that a persistent increase in the load should not be falsely inferred from a typical increase in the average queue length resulting from normal transient packet bursts [26]. The minimum actual queue length threshold is also set to 189 packets. The dynamic maximum packet marking probability adjustment algorithm is enabled for all the 3 packet marking precedence levels, and the multiplicative increment and decrement are CHAPTER 4. CONFIGURATION GUIDELINES AND SIMULATION RESULTS 43 set to the recommended values of 2 and 3 respectively [15]. No minimum or maximum bound is configured for the dynamic maximum packet marking probabilities. The probabilities should freely vary with the varying nature of the flows. Packet marking is performed with either packet drop or E C N depending on the simulation scenarios. The configuration of the Self-Configuring R E D n for the simulation is identical to that of the Adaptive R E D n above except that the minimum actual queue length threshold parameters and the middle threshold algorithm are not supported. The configuration of the R E D n for the simulation is identical to that of the Self-Configuring R E D n above except that the dynamic maximum packet marking probability adjustment algorithm is not supported, and the fixed maximum packet marking probability is set to 0.1. For both the Self-Configuring R E D n and the R E D n , packet marking is performed solely with packet drop. Egress Gateway The egress gateway also uses 2 individual queues for packet forwarding in different directions, and both of the queues implement FIFO queuing with the droptail policy. Simulation Scenarios The 4 simulation scenarios being considered are referred to as baseline-undersubscribed-oversubscribed (B-U-O) scenario with packet drop, B - U - 0 scenario with E C N , baseline-oversubscribed-undersubscribed (B-O-U) scenario with packet drop, and B - O - U scenario with E C N . Each simulation run consists of 3 consecutive 50-second intervals to last for a total of 150 seconds. In the first interval, only the 36 T C P flows with baseline expected throughputs of 1, 3, and 10 Mbps are enabled, and the other 12 flows with a baseline expected throughput of 5 Mbps are disabled. Since the bottleneck link capacity is provisioned according to the total baseline expected throughput of the active flows in the first interval, the simulation run in this interval is referred to as the baseline case, and each active flow is supposed to attain its baseline expected throughput. The number of active T C P flows then varies in the second and the third intervals depending on the simulation scenarios. In the B - U - 0 scenario, only the 24 T C P flows with baseline expected throughputs of 1 and 3 Mbps are enabled in the second interval, and all the 48 T C P flows are enabled in the third interval, The simulation run in the second interval is referred to as the undersubscribed case, and that in the third interval as the oversubscribed case. 
In the undersubscribed case, each active flow is supposed to attain a throughput in excess of its higher expected throughput. In the oversubscribed case, some of the flows are supposed to attain their baseline expected throughputs, whereas the others are supposed to attain certain similar throughputs that are less than their baseline expected throughputs.

Table 4.1: Assured Bandwidth Distribution for the REDn and its Variants

    Per-Flow Throughput          Assured Bandwidth Distribution (Mbps)
    Expectation (Mbps)      Baseline Case   Undersubscribed Case   Oversubscribed Case
    1                       1               5                      1
    3                       3               9                      3
    5                       /               /                      5
    10                      10              /                      5

Per-Color Max-Min Unweighted Fairness

Each TCP flow is subject to preferential packet marking according to its throughput expectations, and the fraction of marked packets of each flow should be roughly proportional to the share of the output link bandwidth being utilized by the flow [26]. Given preferential packet marking, and that the configuration and the dynamic adjustment of the maximum packet marking probabilities are correctly done, REDn or its variant should maintain an assured bandwidth distribution at the gateway during congestion that roughly follows per-color max-min unweighted fairness [43]: the output link bandwidth is distributed to the active flows in the increasing order of their per-flow expected throughputs. If the bandwidth is less than the total baseline expected throughput of all the active flows, the scarce bandwidth will be equally distributed to the flows, and no flow will get a share larger than its expected throughput. If the bandwidth is in excess of the total baseline expected throughput of all the active flows, the unused bandwidth will be equally distributed to the flows, and each flow will get a share larger than its expected throughput. Table 4.1 summarizes the assured bandwidth distribution that the REDn or its variant should maintain at the gateway in each of the cases versus the per-flow throughput expectations for all the simulation scenarios.

Conversely, in the B-O-U scenario, all the 48 TCP flows are enabled in the second interval, and only the 24 TCP flows with baseline expected throughputs of 1 and 3 Mbps are enabled in the third interval. The assured bandwidth distribution in each of the cases versus the per-flow throughput expectations in the B-O-U scenario is identical to that listed in Table 4.1.

Miscellaneous Configurations

The size of each of the queues at each node is sufficiently large so that queue overflow never occurs. Further, congestion and packet drop never occur along the reverse paths. Separate instances of the pseudo-random number generator in the ns network simulator [13], with different recommended good seeds, are used in each of the simulation scenarios for generating legitimate random values for the REDn and its variants for packet marking, and starting times for the flows. All the newly enabled TCP flows, including the replicated ones, start randomly within the first 15 msec of the interval. All the simulation statistics are dumped and averaged, if needed, every second.

The simulation results for the Adaptive REDn with packet drop in scenario B-U-O are presented in the following subsection, whereas the additional ones for the other 3 scenarios are presented in Section A.1 in Appendix A. The simulation results for the REDn and the Self-Configuring REDn with packet drop in scenario B-U-O are presented in Subsections 3.1.1 and 3.1.2 in Chapter 3 respectively.
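Before turning to the results, here is a minimal sketch of an allocator following the unweighted max-min policy described above under Per-Color Max-Min Unweighted Fairness (equal shares, capped at the per-flow expectation when the link is oversubscribed, with the unused bandwidth split equally when it is undersubscribed). It is illustrative only; it does not model the per-color profiles, so it is not claimed to reproduce the exact provisioning of Table 4.1.

    # Sketch of unweighted max-min sharing as described above; illustrative only.
    def max_min_shares(expectations, capacity):
        shares = [0.0] * len(expectations)
        if capacity >= sum(expectations):
            # undersubscribed: every flow gets its expectation plus an equal
            # share of the unused bandwidth
            bonus = (capacity - sum(expectations)) / len(expectations)
            return [e + bonus for e in expectations]
        # oversubscribed: water-fill in increasing order of expectation,
        # never exceeding a flow's expectation
        remaining = capacity
        pending = sorted(range(len(expectations)), key=lambda i: expectations[i])
        while pending:
            fair = remaining / len(pending)
            i = pending.pop(0)
            shares[i] = min(expectations[i], fair)
            remaining -= shares[i]
        return shares

    # Example (illustrative numbers only): four flows expecting 1, 1, 3, and 3 Mbps
    # on a 6-Mbps link receive 1, 1, 2, and 2 Mbps respectively.
    print(max_min_shares([1.0, 1.0, 3.0, 3.0], 6.0))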
4.1.1 Adaptive REDn with Packet Drop in Scenario B-U-O

The simulation results include the per-flow traffic conditionings, the per-flow received throughputs, the average queue lengths and the overall average queuing delay at the bottleneck gateway, the bottleneck link utilization, the packet marking ratio, and the effective maximum packet marking probabilities. These results are supposed to be compared to those of Self-Configuring REDn in Subsection 3.1.2 in Chapter 3.

Per-Flow Traffic Conditionings

Owing to the token bucket depth adjustment algorithm, traffic conditioning for each of the TCP flows is performed according to the conformity of the arriving packets to the profile of the expected throughputs, yet independent of the burstiness of the flow, as revealed by the per-flow traffic conditionings depicted in Figure 4.2. The baseline and the higher expected throughputs for all the 3 individual conditioners were 3 and 6 Mbps respectively. The statistics of the conditioned traffic of each of the colors and those of the total were dumped separately. The degrees of burstiness of the TCP flows should be arbitrarily different because of the different roundtrip transit delays, as well as the RTTs, and the combined effect of the window adjustment and the self-clocking algorithms, FIFO multiplexing, and random packet marking. The results are evident that Adaptive Token Bucket Traffic Conditioner for TCP is correct, since for each of the flows, the amounts of green and yellow traffic matched the baseline and the higher expected throughputs respectively, independent of the burstiness of the flow, and the total matched the sum of the amounts of green, yellow, and red traffic. Further, given that for each of the conditioners, tokens were replenished by the byte uniformly over time according to the expected throughputs of the client flow, the correctness of the token bucket depth adjustment algorithm is implied, since for each of the flows, the amounts of green and yellow traffic exactly matched the baseline and the higher expected throughputs respectively, independent of the burstiness of the flow, when the amount of red traffic was nil. The conditionings for the other flows in this simulation scenario, as well as those for the flows in the other scenarios, are similar to the ones shown.

Figure 4.2: Traffic Conditionings with Adaptive Token Bucket Traffic Conditioner. Panels (a) through (c) plot the conditioned traffic over time for flows with roundtrip transit delays of 30, 120, and 200 msec.
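To make the conditioning concrete, here is a minimal sketch of a two-rate, three-color token bucket marker of the kind the conditioner is based on. The fixed bucket depths stand in for the thesis's token bucket depth adjustment algorithm, which is not reproduced here, and the class and parameter names are illustrative.

    # Sketch of a 2-rate 3-color token-bucket marker: packets within the baseline
    # (committed) rate are green, packets within the higher (peak) rate are yellow,
    # and the excess is red.  The dynamic bucket depth adjustment of the thesis's
    # conditioner is not modelled; the depths are fixed here for simplicity.
    class TwoRateThreeColorMarker:
        def __init__(self, baseline_rate, higher_rate, depth_pkts=5, pkt_size=1000):
            self.baseline_rate = baseline_rate          # bytes per second
            self.higher_rate = higher_rate              # bytes per second
            self.depth = depth_pkts * pkt_size          # bucket depth in bytes
            self.green_tokens = self.depth
            self.yellow_tokens = self.depth
            self.last_time = 0.0

        def mark(self, now, pkt_size):
            # replenish tokens by the byte, uniformly over time
            elapsed = now - self.last_time
            self.last_time = now
            self.green_tokens = min(self.depth, self.green_tokens + elapsed * self.baseline_rate)
            self.yellow_tokens = min(self.depth, self.yellow_tokens + elapsed * self.higher_rate)
            if self.green_tokens >= pkt_size and self.yellow_tokens >= pkt_size:
                self.green_tokens -= pkt_size
                self.yellow_tokens -= pkt_size
                return "green"
            if self.yellow_tokens >= pkt_size:
                self.yellow_tokens -= pkt_size
                return "yellow"
            return "red"

    # Example profile with a 3-Mbps baseline and a 6-Mbps higher rate (bytes/s):
    # marker = TwoRateThreeColorMarker(3e6 / 8, 6e6 / 8)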
Per-Flow Received Throughputs

The bottleneck link bandwidth was distributed according to the per-flow throughput expectations, the level of congestion, and per-color max-min unweighted fairness in all the cases, as revealed by the per-flow received throughputs depicted in Figure 4.3. All the flows attained the respective expected throughputs. Although the bias against the slow TCP flows, which had roundtrip transit delays of 120 msec and above, was greatly alleviated, these slow flows still took a longer time than the others to attain the expected throughputs, intrinsically owing to the window adjustment algorithms. Consequently, a considerable amount of the available bandwidth that should eventually be utilized by the slow TCP flows was first opportunistically utilized by the fast TCP flows, which had roundtrip transit delays of 70 msec and below, so that the received throughputs of these fast flows were mostly greater than the expected values in all the cases. Accordingly, the congestion windows of the fast flows had to be relatively large, and therefore these flows simply received most of the packet markings. Upon receipt of a single packet marking, each of the flows reduced the amount to send by half, and entered the fast-recovery phase. As a result, the bandwidth available to the slow flows was varying both frequently and considerably.

Figure 4.3: Per-Flow Throughputs with Adaptive REDn. Panels (a) through (f) plot the received throughputs over time against the expected values for roundtrip transit delays of 30, 60, 70, 120, 160, and 200 msec.

Given that no RTO had been triggered, and the congestion windows were always being opened smoothly between successive packet markings, the zigzag received throughputs of some of the slow TCP flows should stem from the varying available bandwidth. The effect of zigzag received throughput would have been much minimized if a high degree of constructive traffic multiplexing had existed at the gateway, or the degrees of burstiness of the flows had been reduced, for example, with TCP with rate-halving, which spaces data segments evenly during the fast-recovery phase [36]. On the contrary, the 50% decreases in the received throughputs of some of the flows had to be triggered by single packet markings, such as those of the flow having a roundtrip transit delay of 70 msec and expecting a throughput of 10 Mbps at seconds 1, 15, 29, and 39, those of the one having a 120-msec delay and expecting 10 Mbps at second 45, those of the one having a 120-msec delay and expecting 3 Mbps at seconds 19, 34, and 39, and so forth.

Moreover, the per-flow received throughputs were largely goodputs, since RTOs, triggered by multiple packet markings, in fact merely occurred near the ends of the slow-start phases. On RTO, each of the flows reduced its cwnd to the minimum, and entered the slow-start phase. The abrupt decreases in the received throughputs stemmed from RTOs, such as that of the flow having a roundtrip transit delay of 30 msec and expecting a throughput of 10 Mbps at second 1, and that of the one having a 120-msec delay and expecting 10 Mbps at second 2. The received throughputs of the other replicated 24 TCP flows are similar to the ones shown.
Average Queue Lengths and Queuing Delay

The problem of the persistent long queuing delay at the gateway resulting from the low sensitivity of the dynamic maximum packet marking probability adjustment algorithm was resolved with the integration of the middle threshold algorithm, as revealed by the average queue lengths and the overall average queuing delay at the gateway depicted in Figure 4.4. The steady-state average queue lengths were moderate in all the cases, and were mostly around or below the middle threshold value of 378 packets. All the low points of the average queue lengths in the undersubscribed case were in fact nonzero values. The steady-state overall average queuing delay at the bottleneck gateway was mostly around or below the middle threshold value of 15 msec in all the cases, which should be deemed low for most applications. The few low points of the queuing delay at the beginning of the undersubscribed case were nonzero values. All the per-flow average queuing delays are similar to the overall average queuing delay shown.

The average queue lengths and the overall average queuing delay increased momentarily at the beginnings of the baseline and the oversubscribed cases because all and half of the active TCP flows were aggressively sending in their slow-start phases at these times respectively: at the beginning of the baseline case, all the active flows were sending in their slow-start phases; at the beginning of the oversubscribed case, half of the active flows, which had baseline expected throughputs of 5 and 10 Mbps, and were to utilize approximately 71% of the bottleneck link capacity, were aggressively sending in their slow-start phases. Therefore, these transient increases should be justifiable.

Figure 4.4: Average Queue Lengths and Overall Queuing Delay with Adaptive REDn. Panels (a) through (c) plot the green, yellow, and red average queue lengths against the expected values and the minimum and maximum thresholds, and panel (d) plots the overall queuing delay.

Bottleneck Link Utilization

The bottleneck link utilization remained high at roughly 100% in all the cases except at the beginning of the undersubscribed case, as depicted in Figure 4.5. By default, all the active flows with a baseline expected throughput of 10 Mbps had to be disabled in the undersubscribed case. Since these flows accounted for approximately 71% of the utilization at the time of shutdown, the transient drop in the utilization following the shutdown at the beginning of the undersubscribed case should be justifiable. Given that in all the cases, all the flows attained the expected throughputs, the average queue lengths and the overall average queuing delay at the bottleneck gateway were nonzero, and the bottleneck link utilization remained mostly high at its maximum value, all these results are evident that no global synchronization of the flows occurred at all.

Figure 4.5: Bottleneck Link Utilization with Adaptive REDn.
Packet Marking Ratio

The average early packet drop ratio, or the packet marking ratio, varied within the range of 0 and 6.5849e-4 during the entire simulation run, except at the beginnings of the baseline and the oversubscribed cases, as depicted in Figure 4.6. The transient increases in the drop ratio at these times should be justifiable because all and half of the active TCP flows were aggressively sending in their slow-start phases respectively, resulting in transient increases in the average queue lengths. The normal drop ratio should be deemed low for most applications. All the packet drops solely stemmed from packet marking, and not queue overflow.

Figure 4.6: Early Drop Ratio with Adaptive REDn.

Effective Maximum Packet Marking Probabilities

For each of the colors, either the dynamic or the static maximum packet marking probability was dynamically adopted as the effective maximum packet marking probability at runtime according to the middle threshold algorithm, as depicted in Figure 4.7. Each of the horizontal solid line segments denotes a period in which the effective maximum packet marking probability remains unchanged. Each of the vertical dotted lines denotes a change in the effective maximum packet marking probability in the preceding sampling period. The rightmost cross in each of the graphs denotes the effective maximum packet marking probability that was in effect from then until the end of the simulation run.

Given that in all the cases, changes in the effective maximum packet marking probabilities followed variations in the average queue lengths, all the flows attained the expected throughputs, the overall average queuing delay at the bottleneck gateway was moderate, and the bottleneck link utilization remained mostly high at its maximum value, all these results are evident that the integrated middle threshold and dynamic maximum packet marking probability adjustment algorithm is correct.

In conclusion, all the simulation results certainly corroborate that the architecture of Adaptive REDn Gateway and Adaptive Token Bucket Traffic Conditioner for TCP enables a short queuing delay and an assured bandwidth distribution at the gateway, given the varying nature of the flows, the wide ranges of per-flow throughput expectations and RTTs, and the varying level of congestion. The additional simulation results for the other 3 scenarios, in Subsections A.1.1, A.1.2, and A.1.3 in Appendix A, also corroborate this claim.

Figure 4.7: Effective Maximum Packet Marking Probabilities of Adaptive REDn. Panels (a) through (c) plot the dynamic and static maximum packet marking probabilities over time for the green, yellow, and red packets.

Figure 4.8: Simulation Network with Two Bottleneck Data Links.

4.2 Simulation Scenarios with Two Bottleneck Data Links

The simulation network has 2 bottleneck links, as depicted in Figure 4.8.
This network is simply a logical generalization of the one having a single bottleneck link described in the preceding section: the first pair of bottleneck gateway and bottleneck link, the egress gateway, the associated sending and receiving nodes, and the associated access links in this dual-bottleneck-link network are all identical to the corresponding components in the single-bottleneck-link network. Further, the configurations for both the networks are identical except for the following parameters. The transmission rate of the second bottleneck link is 403.2 Mbps, which is 1.2 times the total baseline expected throughput of all the active TCP flows, including those also traversing the first bottleneck gateway, in the baseline case for all the simulation scenarios.

The sending and the receiving nodes and the access links associated with the second bottleneck and the egress gateways are identical to the corresponding ones associated with the first bottleneck and the egress gateways, except that all the TCP flows have discrete and random starting times. Further, the conditioners in all the sending nodes associated with the second bottleneck gateway are also configured to embrace twice the complete permutation of all the roundtrip transit delays of the paths and the baseline expected throughputs of the flows, where the 6 roundtrip transit delays being considered are still 30, 60, 70, 120, 160, and 200 msec, and the 4 baseline expected throughputs being considered are still 1, 3, 5, and 10 Mbps. The completely replicated set of TCP flows also constitutes a control simulation run for verifying the results.

Only Adaptive REDn is used at the bottleneck gateways. The configuration of the Adaptive REDn at the second bottleneck gateway is identical to that of the one at the first bottleneck gateway except for the following parameters: The maximum average queue length threshold is set to 5544 packets. Assuming that the RTT of the longest end-to-end network path is 500 msec, the maximum number of packet markings is R · RTT · 2maxp = 5544 packets, where R is the transmission rate of the bottleneck link, and maxp is the maximum packet marking probability. The middle average queue length threshold is set to 831 packets. This number of packets accounts for approximately 15 msec of queuing delay at the bottleneck gateway, and the delay should be deemed short for most applications. The minimum average queue length threshold is set to 415 packets. This value is half of the middle threshold value so that a persistent increase in the load should not be falsely inferred from a typical increase in the average queue length resulting from normal transient packet bursts. The minimum actual queue length threshold is also set to 415 packets.

The 4 simulation scenarios being considered are still the B-U-O scenario with packet drop or ECN and the B-O-U scenario with packet drop or ECN. These scenarios are identical to those described in the preceding section except for the following specifications: Each simulation run consists of 3 consecutive 25-second intervals to last for a total of 75 seconds. In the first interval, only the 72 TCP flows with baseline expected throughputs of 1, 3, and 10 Mbps are enabled, and the other 24 flows with a baseline expected throughput of 5 Mbps are disabled. The number of active TCP flows then varies in the second and the third intervals depending on the simulation scenarios.
In the B-U-O scenario, only the 48 TCP flows with baseline expected throughputs of 1 and 3 Mbps are enabled in the second interval, and all the 96 TCP flows are enabled in the third interval. Conversely, in the B-O-U scenario, all the 96 TCP flows are enabled in the second interval, and only the 48 TCP flows with baseline expected throughputs of 1 and 3 Mbps are enabled in the third interval. The assured bandwidth distribution in each of the cases versus the per-flow throughput expectations for all the simulation scenarios is identical to that listed in Table 4.1 in the preceding section.

The simulation results for Adaptive REDn with packet drop in scenario B-U-O are presented in the following subsection, whereas the additional results for the other 3 scenarios are presented in Section A.2 in Appendix A.

4.2.1 Adaptive REDn with Packet Drop in Scenario B-U-O

The simulation results include the per-flow received throughputs, the average queue lengths and the overall average queuing delays at the bottleneck gateways, the bottleneck link utilizations, the packet marking ratios, and the effective maximum packet marking probabilities. The explanations of these results are essentially similar to those already given in Subsection 4.1.1; further explanation is given only where needed.

Per-Flow Received Throughputs

All but one of the flows traversing both congested gateways attain their respective expected throughputs in all the cases, as depicted in Figure 4.9. Owing to preferential random packet marking with the appropriately adjusted maximum packet marking probabilities, the flows were subject to adequate marking at the bottleneck gateway, but were protected from ambient marking at the other gateway. Further, the occurrences of packet marking and RTO exhibited no peculiarity or bias against any flow. Depending on the level of congestion, either of the gateways could become the bottleneck to a flow at a given time.

The flow with a roundtrip transit delay of 160 msec and an expected throughput of 10 Mbps failed to attain its expected throughput for a short period mainly because it suffered an RTO early in its slow-start phase, resulting from multiple markings of its yellow packets made mainly by the first congested gateway. According to the yellow average queue length at the first congested gateway, depicted in Figure 4.11, the multiple packet markings should stem from a normal transient overload, since almost all the active flows were aggressively sending in their slow-start phases at that time. Therefore, the RTO seems justifiable.

The other flows, which traverse only a single bottleneck gateway, namely the second congested gateway, all attain their respective expected throughputs in all the cases, as depicted in Figure 4.10. The result is similar to that for the single-bottleneck-link network in the preceding section.

[Figure 4.9: Per-Flow Throughputs of Flows Traversing Two Adaptive REDn Gateways; panels (a) to (f): 30-, 60-, 70-, 120-, 160-, and 200-msec roundtrip transit delays]
[Figure 4.10: Per-Flow Throughputs of Flows Traversing One Adaptive REDn Gateway; panels (a) to (f): 30-, 60-, 70-, 120-, 160-, and 200-msec roundtrip transit delays]

Average Queue Lengths and Queuing Delays

The average queue lengths and the overall average queuing delays at the bottleneck gateways are depicted in Figures 4.11 and 4.12. All the steady-state average queue lengths at the bottleneck gateways were moderate in all the cases, and were mostly around or below the respective middle threshold values of 378 packets for the first bottleneck gateway and 831 packets for the second one. Both steady-state overall average queuing delays at the bottleneck gateways were mostly around or below the middle threshold value of 15 msec in all the cases, which should be deemed low for most applications. The transient increases in the average queue lengths and in the overall average queuing delays at the beginnings of the baseline and the oversubscribed cases stem from the fact that all and half of the active TCP flows, respectively, were aggressively sending in their slow-start phases at these times.

Bottleneck Link Utilizations

The bottleneck link utilizations remained high at roughly 100% in all the cases except at the beginning of the undersubscribed case, when a number of high-bandwidth active flows were disabled, as depicted in Figure 4.13. Given that in all the cases all the flows attained the expected throughputs, the average queue lengths and the overall average queuing delays at the bottleneck gateways were nonzero, and the bottleneck link utilizations remained mostly at the maximum value, these results are evidence that no global synchronization of the flows occurred.

Packet Marking Ratios

The average steady-state early packet drop ratio, or the packet marking ratio, varied between 0 and 4.02447e-4 for the first bottleneck gateway and between 0 and 4.33879e-4 for the second bottleneck gateway, as depicted in Figure 4.14. The transient increases in the ratios at the beginnings of the baseline and the oversubscribed cases stem from the fact that all and half of the active TCP flows, respectively, were aggressively sending in their slow-start phases at these times, causing transient increases in the average queue lengths. All the packet drops at the gateways stemmed solely from packet marking, not queue overflow.
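For reference, the sketch below shows the classic RED marking computation [26] that underlies these per-color marking ratios: an exponentially weighted moving average of the queue length is compared against the minimum and maximum thresholds, and arriving packets are marked with a probability that grows linearly up to the effective maximum marking probability. The EWMA, the linear probability, and the count-based correction are standard RED; the per-color usage and the extra check against the minimum actual queue length threshold are only a rough rendering of this thesis's REDn variant, so the names and details should be read as assumptions.

```python
import random

def ewma(avg, q, w=0.002):
    """Standard RED low-pass filter of the instantaneous queue length [26]."""
    return (1.0 - w) * avg + w * q

def marking_probability(avg, min_th, max_th, max_p, count):
    """Classic RED marking probability between the thresholds [26]."""
    if avg < min_th:
        return 0.0
    if avg >= max_th:
        return 1.0
    p_b = max_p * (avg - min_th) / (max_th - min_th)
    # Spread markings out: the probability grows with the number of packets
    # accepted since the last marking (count).
    return p_b / max(1.0 - count * p_b, 1e-6)

def should_mark(avg, actual_q, min_th, max_th, min_actual_th, max_p, count):
    """Assumed REDn-style test for one color: besides the RED test, require the
    actual queue to exceed the minimum actual queue length threshold so that
    congestion is verified before marking."""
    if actual_q < min_actual_th:
        return False
    return random.random() < marking_probability(avg, min_th, max_th, max_p, count)
```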
[Figure 4.11: Average Queue Lengths and Overall Queuing Delay at the First Adaptive REDn Gateway; panels: (a) Green Packets, (b) Yellow Packets, (c) Red Packets, (d) Overall Queuing Delay]

[Figure 4.12: Average Queue Lengths and Overall Queuing Delay at the Second Adaptive REDn Gateway; panels: (a) Green Packets, (b) Yellow Packets, (c) Red Packets, (d) Overall Queuing Delay]

[Figure 4.13: Utilizations of Bottleneck Links with Adaptive REDn; panels: (a) First Bottleneck Link, (b) Second Bottleneck Link]

[Figure 4.14: Early Drop Ratios with Adaptive REDn; panels: (a) First Bottleneck Gateway, (b) Second Bottleneck Gateway]

Effective Maximum Packet Marking Probabilities

For each of the bottleneck gateways and for each of the colors, either the dynamic or the static maximum packet marking probability was adopted as the effective maximum packet marking probability at runtime according to the middle threshold algorithm, as depicted in Figures 4.15 and 4.16.

In conclusion, all the simulation results certainly corroborate that the architecture of Adaptive REDn Gateway and Adaptive Token Bucket Traffic Conditioner for TCP enables a short queuing delay and an assured bandwidth distribution at each of the gateways, given the varying nature of the flows, the wide ranges of per-flow throughput expectations and RTTs, the multiple congested gateways, and the varying level of congestion. The additional simulation results for the other 3 scenarios, in Subsections A.2.1, A.2.2, and A.2.3 in Appendix A, also corroborate this claim.
[Figure 4.15: Effective Maximum Packet Marking Probabilities of the First Adaptive REDn Gateway; panels: (a) Green Packets, (b) Yellow Packets, (c) Red Packets]

[Figure 4.16: Effective Maximum Packet Marking Probabilities of the Second Adaptive REDn Gateway; panels: (a) Green Packets, (b) Yellow Packets, (c) Red Packets]

4.3 CBR Traffic Source and UDP Transport Protocol

This section presents and explains the results of a simple simulation revealing the performance of Adaptive REDn in handling multiplexed TCP and UDP traffic, where UDP uses no congestion control [53]. An overview of the related issues and recent work is also given.

The single-bottleneck-link simulation network and the configurations of the mechanisms involved are identical to those stated in Section 4.1 except for the following parameters. The transmission rate of the bottleneck link is 273.6 Mbps, which is 1.2 times the total baseline expected throughput of all the active UDP and TCP flows.

The components of each of the 24 sending nodes associated with the flows with baseline expected throughputs of 3 and 5 Mbps are completely changed from an FTP data source, a sending TCP, and an Adaptive Token Bucket Traffic Conditioner for TCP to a constant-bit-rate (CBR) data source, a UDP, and a 2-Rate 3-Color Marker. Likewise, the components of the same number of receiving nodes are completely changed from a receiving TCP and a bulk data sink to a stub peer for UDP and a stub data sink. In brief, half of the sending and the receiving nodes are changed to incorporate 24 UDP flows into the simulation scenario. The stub peer for UDP merely defines a connection endpoint, and simply passes the received data to the associated data sink. The stub data sink records the traffic statistics, and then simply discards the received data. Each UDP is logically connected to one and only one stub peer to form a unique UDP flow. The size of each UDP data packet is fixed at 1 kB.

The 2-Rate 3-Color Marker is simply equivalent to the Adaptive Token Bucket Traffic Conditioner with the token bucket depth adjustment algorithm for TCP disabled. The 2 token rates of the conditioner are trivially set to the baseline and the higher expected throughputs of the client UDP flow. The 2 baseline expected throughputs being considered for the UDP flows are 3 and 5 Mbps, and the corresponding higher expected throughputs are twice the respective baseline expected throughputs. All the minimum token bucket depths and the token bucket depths for the conditioner are identically set to 3 packets.
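A minimal sketch of a two-rate three-color marker in the spirit of RFC 2698 [35], configured as described above (committed rate equal to the baseline expected throughput, peak rate equal to the higher expected throughput, bucket depths of 3 packets of 1 kB), is given below. The class and method names are illustrative rather than taken from the simulator, and the thesis's conditioner is this marker only with its TCP-specific depth adjustment disabled.

```python
import time

GREEN, YELLOW, RED = "green", "yellow", "red"

class TwoRateThreeColorMarker:
    """Color-blind two-rate three-color marker along the lines of RFC 2698 [35].
    Rates are in bytes/second, bucket depths in bytes."""

    def __init__(self, cir, pir, cbs, pbs):
        self.cir, self.pir = cir, pir      # committed / peak information rates
        self.cbs, self.pbs = cbs, pbs      # committed / peak bucket depths
        self.tc, self.tp = cbs, pbs        # token counts start full
        self.last = time.monotonic()

    def _refill(self):
        now = time.monotonic()
        elapsed, self.last = now - self.last, now
        self.tc = min(self.cbs, self.tc + self.cir * elapsed)
        self.tp = min(self.pbs, self.tp + self.pir * elapsed)

    def mark(self, pkt_bytes):
        self._refill()
        if self.tp < pkt_bytes:            # exceeds the peak rate
            return RED
        if self.tc < pkt_bytes:            # exceeds the committed rate only
            self.tp -= pkt_bytes
            return YELLOW
        self.tp -= pkt_bytes               # within both rates
        self.tc -= pkt_bytes
        return GREEN

# Example: a UDP flow with a 3 Mbps baseline and a 6 Mbps higher expected
# throughput, 1 kB packets, and 3-packet bucket depths.
marker = TwoRateThreeColorMarker(cir=3e6 / 8, pir=6e6 / 8, cbs=3 * 1000, pbs=3 * 1000)
color = marker.mark(1000)
```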
The CBR source is always ready, and has an infinite amount of data to send. Its sending rate is 2 Mbps in excess of the higher expected throughput of the associated UDP flow; for example, if the associated UDP flow is supposed to attain a baseline expected throughput of 3 Mbps, the CBR source sends at 8 Mbps. Consequently, packets of the associated UDP flow span all 3 packet marking precedence levels. Moreover, the CBR source sends with random delay jitters within plus or minus 10% of the nominal inter-packet times.

The original components and configurations of the other 24 sending nodes, associated with the TCP flows with baseline expected throughputs of 1 and 10 Mbps, and those of the other 24 receiving nodes simply remain unchanged.

For all the end-to-end network paths, each path is associated with a unique pair of either a TCP or a UDP flow and a traffic conditioner. All the conditioners are also configured to embrace twice the complete permutation of all the roundtrip transit delays of the paths and the baseline expected throughputs of the TCP and the UDP flows, where the 6 roundtrip transit delays being considered are still 30, 60, 70, 120, 160, and 200 msec, and the 4 baseline expected throughputs being considered are still 1, 3, 5, and 10 Mbps. The completely replicated set of TCP and UDP flows also constitutes a control simulation run for verifying the results.

Only Adaptive REDn is used at the bottleneck gateway. The maximum average queue length threshold is set to 3420 packets: assuming that the RTT of the longest end-to-end network path is 500 msec, the maximum number of packet markings is R · RTT · 2·maxp = 3420 packets, where R is the transmission rate of the bottleneck link and maxp is the maximum packet marking probability. The middle average queue length threshold is set to 513 packets. This number of packets accounts for approximately 15 msec of queuing delay at the bottleneck gateway, and the delay should be deemed short for most applications. The minimum average queue length threshold is set to 256 packets. This value is half of the middle threshold value so that a persistent increase in the load is not falsely inferred from a typical increase in the average queue length caused by normal transient packet bursts. The minimum actual queue length threshold is also set to 256 packets.
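The threshold arithmetic above follows directly from the stated parameters; the short sketch below reproduces the figures for this 273.6 Mbps bottleneck. The function and variable names are mine, the 1 kB packet size stated for the UDP packets is assumed as the unit, and the 15 msec target delay and the halving rule are simply the guidelines quoted above.

```python
# Reproduce the Adaptive REDn queue-length thresholds quoted above for the
# 273.6 Mbps bottleneck; names and structure are illustrative only.

def redn_thresholds(link_bps, rtt_s, max_p, pkt_bytes=1000, target_delay_s=0.015):
    pkts_per_rtt = link_bps * rtt_s / (8 * pkt_bytes)
    max_th = pkts_per_rtt * 2 * max_p                      # R * RTT * 2 * maxp, in packets
    mid_th = link_bps * target_delay_s / (8 * pkt_bytes)   # about 15 msec of queuing
    min_th = mid_th / 2                                    # half the middle threshold
    min_actual_th = min_th                                 # minimum actual queue length threshold
    return round(max_th), round(mid_th), int(min_th), int(min_actual_th)

print(redn_thresholds(273.6e6, 0.5, 0.1))                  # -> (3420, 513, 256, 256)
```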
The sole simulation scenario being considered is the B-U-O scenario with both packet drop and ECN: packet marking for the TCP flows is performed with ECN, whereas that for the UDP flows is performed with packet drop. The simulation run lasts for 50 seconds. All the UDP and the TCP flows are enabled and active for the entire simulation run. Since the bottleneck link capacity is provisioned according to the total baseline expected throughput of all the active flows, the simulation run denotes the baseline case, and each active flow is supposed to attain its baseline expected throughput.

[Figure 4.17: Per-Flow Throughputs with Adaptive REDn; panels: (a) 30-msec, (b) 120-msec, (c) 200-msec roundtrip transit delay]

The bottleneck link bandwidth was distributed with an extreme bias against the TCP flows during the entire simulation run, as revealed by the per-flow received throughputs depicted in Figure 4.17. The UDP flows consistently took approximately 87% of the bandwidth, and the TCP flows merely shared the remaining bandwidth according to their throughput expectations and per-color max-min unweighted fairness: the received throughputs of the UDP flows were approximately equal to their respective sending rates, whereas, incidentally, only the TCP flows expecting a throughput of 1 Mbps attained it.

The effective maximum packet marking probabilities essentially remained fixed at 0.1 since the average queue lengths stayed well above the middle thresholds for almost the entire simulation run, as depicted in Figure 4.18. If any of the average queue lengths had approached its maximum threshold, the fraction of arriving packets marked, with the maximum packet marking probability of 0.1, would have been approximately 20%. Nevertheless, even this maximum packet drop ratio could have been insufficient to limit the bandwidth utilization of the UDP flows to their fair share of approximately 42%.

This simple simulation reveals the inherent shortcoming of Adaptive REDn in handling multiplexed TCP and UDP traffic. The unfairness in bandwidth distribution stems from the fact that, in response to a congestion notification, TCP reduces the amount it sends at least by half roughly once per window of data sent, whereas UDP, using no congestion control, does not reduce its sending rate at all while active.
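The 87% and 42% figures quoted above follow directly from the configuration of this section; the short check below spells out the arithmetic. The flow counts and rates are those stated earlier, while the variable names are mine.

```python
# Quick check of the UDP bandwidth shares reported above (all rates in Mbps).
link = 273.6                                   # bottleneck link rate

# 12 UDP flows at each baseline (3 and 5 Mbps); each CBR source sends 2 Mbps
# above the higher expected throughput, i.e. at 8 and 12 Mbps respectively.
udp_sending = 12 * 8 + 12 * 12                 # = 240 Mbps offered by UDP
print(udp_sending / link)                      # ~0.88, the "approximately 87%" above

# Fair share if every flow received only its baseline expected throughput:
udp_baseline = 12 * 3 + 12 * 5                 # = 96 Mbps
tcp_baseline = 12 * 1 + 12 * 10                # = 132 Mbps
print(udp_baseline / (udp_baseline + tcp_baseline))   # ~0.42, the "approximately 42%" above
```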
[Figure 4.18: Average Queue Lengths with Adaptive REDn; panels: (a) Green Packets, (b) Yellow Packets, (c) Red Packets]

4.3.1 Recent Work

Discussions about the increasing use of unresponsive transport protocols and TCP-unfriendly congestion controls for transmitting data over the Internet, especially realtime continuous media, have intensified lately [21, 20]. The negative impacts range from extreme unfairness in the distribution of shared network resources against competing TCP flows to the potential for congestion collapse, especially from undelivered packets. Below is a summary of recent findings from research efforts towards congestion control algorithms for gateways to handle the unresponsive and the TCP-unfriendly flows, and for end-hosts to make the associated transport protocols TCP-friendly.

The congestion control algorithms for gateways are usually classified according to their methods of identifying the unresponsive and the TCP-unfriendly flows, namely examining the packet marking history and sampling the traversing flows.

RED with Preferential Drop (RED-PD) uses the packet marking history to detect high-bandwidth flows during congestion, and preferentially drops packets from these flows to control their bandwidth utilizations [44]. It identifies as high-bandwidth flows those flows that have received packet markings over multiple sampling intervals, where the intervals are of variable lengths depending on the instantaneous ambient packet marking probability. It drops additional packets from each of these high-bandwidth flows, on top of the ambient packet markings, according to the ratio of the number of packet drops received by the high-bandwidth flow being policed to the average number of packet drops received among all the high-bandwidth flows.

CHOKe uses random flow sampling to detect high-bandwidth flows during congestion, and preferentially drops packets from these flows to control their bandwidth utilizations [52]. It is essentially a combined version of RED and random early drop: each arriving packet is matched against a randomly selected packet in the queue. If they belong to the same flow, both packets are dropped; otherwise the arriving packet is enqueued, subject to ambient packet marking.

SRED uses statistical flow sampling to detect high-bandwidth flows, but no algorithm has yet been given for preferential packet drop or marking [51]. The flow of each arriving packet is matched against a flow randomly selected from a partial list of recently traversed flows, composed by statistical flow sampling. If a match is found, the number of hits of the selected flow is increased by 1; otherwise the new flow replaces the selected one with a static probability. SRED claims that most of the high-bandwidth flows, denoted by relatively high numbers of hits in the list, should be identified. Any of these algorithms could easily be incorporated into Adaptive REDn.

The congestion control algorithms for end-hosts are usually classified according to their methods of making the associated transport protocols TCP-friendly, or fair to TCP in competing for shared network resources. Binomial Congestion Control Algorithms [4] refer to a class of nonlinear window-based algorithms that embraces the generalization of the Additive-Increase-Multiplicative-Decrease (AIMD) algorithms [9, 22] on which TCP is based. In response to a packet marking event, the congestion window is decreased by an amount proportional to a power l of its current size; otherwise it is increased by an amount inversely proportional to a power k of its current size, where l and k are configuration parameters. TCP is equivalent to an instance of the algorithms with k equal to 0 and l equal to 1. The infinite number of instances that satisfy k + l = 1, k > 0, and l > 0 are proved to be TCP-friendly and to converge on fairness, as well as implicit fairness, given that all the active flows are fed back synchronized notifications during congestion, such as in a network environment with gateways running RED or its variants.

TCP-Friendly Rate Control (TFRC) is a control-equation-based algorithm that explicitly gives the maximum sending rate of the flow as a function of the steady-state packet marking event rate for the flow [23, 32]. The control equation used is the one that gives the maximum steady-state sending rate of TCP, and thus a TFRC-enabled transport protocol should be friendly to TCP over sufficiently long time scales. Given their key characteristics of TCP friendliness, the individual adjustability of the rates of change of the sending rate, and parameterization-independent implicit fairness convergence, these congestion control algorithms are deemed to enable the transport protocols for transmitting both layered and non-layered realtime continuous medium data.
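As a concrete illustration of the binomial family described above [4], the sketch below shows a single window update; the increase and decrease constants (alpha, beta) and the per-RTT pacing are illustrative choices rather than values prescribed by this thesis or by [4].

```python
def binomial_update(cwnd, k, l, marked, alpha=1.0, beta=0.5):
    """One binomial congestion-control step [4].

    k = 0, l = 1 reduces to TCP-style AIMD: add alpha per RTT, halve on a
    packet marking event. Instances with k + l = 1 (k > 0, l > 0) form the
    TCP-friendly family discussed above."""
    if marked:
        return max(1.0, cwnd - beta * cwnd ** l)   # decrease proportional to cwnd^l
    return cwnd + alpha / cwnd ** k                # increase inversely proportional to cwnd^k

# Example: the k = l = 0.5 instance (which satisfies k + l = 1), first after a
# loss-free round trip and then after a marking event.
w = 20.0
w = binomial_update(w, 0.5, 0.5, marked=False)
w = binomial_update(w, 0.5, 0.5, marked=True)
```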
Chapter 5

Conclusions and Further Research

The dynamic adaptive algorithms presented in this thesis greatly enhance the effectiveness of some of the key congestion control algorithms for gateways in providing congestion avoidance and loose statistical QoS assurances on per-flow bandwidth distribution, queuing delay, and loss ratio. The token bucket depth adjustment algorithm for TCP allows a token-bucket-based traffic conditioner, such as the 2-Rate 3-Color Marker, to set the depths dynamically at runtime according to the attributes of its client TCP flow, attaining precise traffic conditioning while keeping the configuration minimal. The resultant throughput-based deterministic traffic conditioning for TCP, performed according to the expected throughputs yet independent of the burstiness of the client flow, allows the flow to effectively request the best possible throughput and delay from the network regardless of the level of congestion. The minimum actual queue length threshold configuration parameters for RED and its variants, which complement the minimum average queue length threshold parameters, help verify the existence of congestion to avoid over-marking and to further assure a minimum throughput at the gateway. The middle threshold algorithm for Self-Configuring RED and its variants greatly improves the sensitivity of the dynamic maximum packet marking probability adjustment algorithm so that the appropriate maximum packet marking probabilities are inferred and adopted sufficiently early to reconcile the tradeoff between queuing delay and bandwidth distribution at the gateway. Along with these algorithms, guidelines on the proper configuration of RED and its variants are also given in this thesis to help simplify the task and tune the algorithms for the desired performance.

This thesis claims that the resultant architecture of Adaptive REDn Gateway and Adaptive Token Bucket Traffic Conditioner for TCP, with throughput-based deterministic traffic conditioning performed at the boundary and delayed-policing applied in the interior of the network, enables the implementation of an assured rate network service for TCP in the Internet, providing a short queuing delay and an assured bandwidth distribution at each gateway. The overall performance of the mechanisms is evaluated with numerous simulations in a wide variety of scenarios comprising network topologies with different numbers of bottleneck links, TCP flows with different throughput expectations and RTTs, and traffic aggregates with different numbers of active flows. All the simulation results certainly corroborate the claim.

A number of areas for further research on the algorithms are of interest. One area concerns fairness among flows conditioned with rate-based and throughput-based conditioning when they are multiplexed and compete for shared network resources. A related aspect is per-aggregate traffic conditioning.
Another area concerns enhancing the middle threshold algorithm with a subroutine for dynamically adjusting, at runtime, the maximum packet marking probabilities associated with the middle thresholds, which are currently static parameters. The adjusted values should be strictly greater than the respective effective maximum packet marking probabilities. Another aspect of interest is expanding the versatility of Adaptive REDn for handling flows of other transport protocols, such as TFRC-enabled protocols, UDP, and so forth.

Glossary

Abbreviations

ACK  Acknowledgment
AF  Assured Forwarding
CBR  Constant Bit Rate
cwnd  Congestion Window
DACK  Delayed Acknowledgment
DiffServ  Differentiated Services
ECN  Explicit Congestion Notification
EWMA  Exponential Weighted Moving Average
FIFO  First In First Out
FTP  File Transfer Protocol
IETF  Internet Engineering Task Force
IP  Internet Protocol
IRTF  Internet Research Task Force
ISP  Internet Service Provider
kB  Kilobyte
Mbps  Megabits Per Second
msec  Millisecond
MSS  Maximum Segment Size
PDB  Per-Domain Behavior
PHB  Per-Hop Behavior
QoS  Quality of Service
RED  Random Early Detection
REDn  Random Early Detection with Multiple Packet Marking Precedence Levels
RFC  Request for Comments
RIO  Random Early Detection with In-or-Out Bit (2 Packet Marking Precedence Levels)
RTO  Retransmission Timeout
RTT  Roundtrip Time
SACK  Selective Acknowledgment
sRTT  Smoothed Roundtrip Time
TCP  Transmission Control Protocol
TSW  Time Sliding Window
UDP  User Datagram Protocol
WWW  World Wide Web

Bibliography

[1] M. Allman and A. Falk. On the Effective Evaluation of TCP. ACM Computer Communication Review, 29(5):59-70, Oct. 1999.
[2] M. Allman, S. Floyd, and C. Partridge. Increasing TCP's Initial Window. RFC 2414, IETF, Sept. 1998.
[3] M. Allman, V. Paxson, and W. R. Stevens. TCP Congestion Control. RFC 2581, IETF, Apr. 1999.
[4] D. Bansal and H. Balakrishnan. Binomial Congestion Control Algorithms. To appear in Proceedings of IEEE INFOCOM 2001, Anchorage, AK, USA, Apr. 2001.
[5] S. Blake et al. An Architecture for Differentiated Services. RFC 2475, IETF, Dec. 1998.
[6] O. Bonaventure and S. D. Cnodder. A Rate Adaptive Shaper for Differentiated Services. Internet Draft, IETF, July 2000.
[7] B. Braden et al. Recommendations on Queue Management and Congestion Avoidance in the Internet. RFC 2309, IETF, Apr. 1998.
[8] L. Breslau, D. Estrin, K. Fall, S. Floyd, et al. Advances in Network Simulation. IEEE Computer, 33(5):59-67, May 2000.
[9] D. Chiu and R. Jain. Analysis of the Increase and Decrease Algorithms for Congestion Avoidance in Computer Networks. Computer Networks and ISDN Systems, 17(1):1-14, June 1989.
[10] k. claffy. Internet Measurement and Data Analysis: Topology, Workload, Performance, and Routing Statistics. National Academy of Engineering 1999 Conference Workshop, Apr. 1999.
[11] D. D. Clark and W. Fang. Explicit Allocation of Best-Effort Packet Delivery Service. IEEE/ACM Transactions on Networking, 6(4):362-373, Aug. 1998.
[12] D. D. Clark, S. Shenker, and L. Zhang. Supporting Real-Time Applications in an Integrated Services Packet Network: Architecture and Mechanism. ACM Computer Communication Review (SIGCOMM '92 Conference Proceedings), 22(4):14-26, Oct. 1992.
[13] K. Fall and K. Varadhan, editors. The ns Manual. The VINT Project, Sept. 2000.
[14] W. Fang, N. Seddigh, and B. Nandy. A Time Sliding Window Three Color Marker (TSWTCM). RFC 2859, IETF, June 2000.
[15] W. Feng, D. D. Kandlur, D. Saha, and K. G. Shin. A Self-Configuring RED Gateway. In Proceedings of IEEE INFOCOM '99, volume 3, pages 1320-1328, New York, NY, USA, Mar. 1999.
[16] W. Feng, D. D. Kandlur, D. Saha, and K. G. Shin. Adaptive Packet Marking for Maintaining End-to-End Throughput in a Differentiated-Services Internet. IEEE/ACM Transactions on Networking, 7(5):685-697, Oct. 1999.
[17] V. Firoiu and M. Borden. A Study of Active Queue Management for Congestion Control. In Proceedings of IEEE INFOCOM 2000, volume 3, pages 1435-1444, Tel Aviv, Israel, Mar. 2000.
[18] S. Floyd. Connections with Multiple Congested Gateways in Packet-Switched Networks Part 1: One-way Traffic. ACM Computer Communication Review, 21(5):30-47, Oct. 1991.
[19] S. Floyd. TCP and Explicit Congestion Notification. ACM Computer Communication Review, 24(5):10-23, Oct. 1994.
[20] S. Floyd. Congestion Control Principles. RFC 2914, IETF, Sept. 2000.
[21] S. Floyd and K. Fall. Promoting the Use of End-to-End Congestion Control in the Internet. IEEE/ACM Transactions on Networking, 7(4):458-472, Aug. 1999.
[22] S. Floyd, M. Handley, and J. Padhye. A Comparison of Equation-Based and AIMD Congestion Control. Technical report, Work in progress, May 2000.
[23] S. Floyd, M. Handley, J. Padhye, and J. Widmer. Equation-Based Congestion Control for Unicast Applications. ACM Computer Communication Review (SIGCOMM 2000 Conference Proceedings), 30(4):43-56, Oct. 2000.
[24] S. Floyd and T. Henderson. The NewReno Modification to TCP's Fast Recovery Algorithm. RFC 2582, IETF, Apr. 1999.
[25] S. Floyd and V. Jacobson. On Traffic Phase Effects in Packet-Switched Gateways. Internetworking: Research and Experience, 3(3):115-156, Sept. 1992.
[26] S. Floyd and V. Jacobson. Random Early Detection Gateways for Congestion Avoidance. IEEE/ACM Transactions on Networking, 1(4):397-413, Aug. 1993.
[27] S. Floyd, J. Mahdavi, M. Mathis, and M. Podolsky. An Extension to the Selective Acknowledgment (SACK) Option for TCP. RFC 2883, IETF, July 2000.
[28] S. Floyd and K. K. Ramakrishnan. TCP with ECN: The Treatment of Retransmitted Data Packets. Internet Draft, IETF, Nov. 2000.
[29] M. Goyal, A. Durresi, R. Jain, and C. Liu. Performance Analysis of Assured Forwarding. Internet Draft, IETF, Feb. 2000.
[30] M. Goyal, A. Durresi, P. Misra, C. Liu, and R. Jain. Effect of Number of Drop Precedences in Assured Forwarding. In Proceedings of IEEE GLOBECOM '99, Rio de Janeiro, Brazil, Dec. 1999.
[31] R. Guerin, S. Kamat, V. Peris, and R. Rajan. Scalable QoS Provision Through Buffer Management. ACM Computer Communication Review (SIGCOMM '98 Conference Proceedings), 28(4):29-40, Oct. 1998.
[32] M. Handley, J. Padhye, S. Floyd, and J. Widmer. TCP Friendly Rate Control (TFRC): Protocol Specification. Internet Draft, IETF, Nov. 2000.
[33] J. Heinanen et al. Assured Forwarding PHB Group. RFC 2597, IETF, June 1999.
[34] J. Heinanen and R. Guerin. A Single Rate Three Color Marker. RFC 2697, IETF, Sept. 1999.
[35] J. Heinanen and R. Guerin. A Two Rate Three Color Marker. RFC 2698, IETF, Sept. 1999.
[36] J. C. Hoe. Start-Up Dynamics of TCP's Congestion Control and Avoidance Schemes. Master's thesis, Massachusetts Institute of Technology, 1995.
[37] J. Ibanez and K. Nichols. Preliminary Simulation Evaluation of an Assured Service. Internet Draft, IETF, Aug. 1998.
[38] V. Jacobson. Congestion Avoidance and Control. ACM Computer Communication Review (SIGCOMM '88 Conference Proceedings), 18(4):314-329, Aug. 1988.
[39] V. Jacobson. Berkeley TCP Evolution from 4.3-Tahoe to 4.3-Reno. In Proceedings of the Eighteenth Internet Engineering Task Force, pages 363-366, Vancouver, BC, Canada, Aug. 1990.
[40] V. Jacobson. Modified TCP Congestion Avoidance Algorithm. IRTF end2end-interest Mailing List, Apr. 1990.
[41] V. Jacobson. Notes on using RED for Queue Management and Congestion Avoidance. Presentation notes, NANOG Meeting 13, Dearborn, MI, USA, June 1998.
[42] V. Jacobson, B. Braden, and D. Borman. TCP Extensions for High Performance. RFC 1323, IETF, May 1992.
[43] S. Keshav. An Engineering Approach to Computer Networking: ATM Networks, the Internet, and the Telephone Network. Addison-Wesley Publishing, 1997.
[44] R. Mahajan and S. Floyd. Controlling High-Bandwidth Flows at the Congested Router. Technical report, Work in progress, Nov. 2000.
[45] M. Mathis, J. Mahdavi, S. Floyd, and A. Romanow. TCP Selective Acknowledgment Options. RFC 2018, IETF, Oct. 1996.
[46] M. May, J. Bolot, C. Diot, and B. Lyles. Reasons Not to Deploy RED. In Proceedings of IEEE/IFIP IWQoS '99, pages 260-262, London, UK, June 1999.
[47] S. McCreary and k. claffy. Trends in Wide Area IP Traffic Patterns: A View from Ames Internet Exchange. ITC Specialist Seminar on IP Traffic Modeling, Measurement, and Management, Sept. 2000.
[48] V. Misra, W. Gong, and D. Towsley. Fluid-Based Analysis of a Network of AQM Routers Supporting TCP Flows with an Application to RED. ACM Computer Communication Review (SIGCOMM 2000 Conference Proceedings), 30(4):151-160, Oct. 2000.
[49] J. Nagle. Congestion Control in IP/TCP Internetworks. ACM Computer Communication Review, 14(4):11-17, Oct. 1984.
[50] K. Nichols, S. Blake, F. Baker, and D. L. Black. Definition of the Differentiated Services Field (DS Field) in the IPv4 and IPv6 Headers. RFC 2474, IETF, Dec. 1998.
[51] T. J. Ott, T. V. Lakshman, and L. H. Wong. SRED: Stabilized RED. In Proceedings of IEEE INFOCOM '99, volume 3, pages 1346-1355, New York, NY, USA, Mar. 1999.
[52] R. Pan, B. Prabhakar, and K. Psounis. CHOKe, A Stateless Active Queue Management Scheme for Approximating Fair Bandwidth Allocation. In Proceedings of IEEE INFOCOM 2000, volume 2, pages 942-951, Tel Aviv, Israel, Mar. 2000.
[53] J. Postel. User Datagram Protocol. RFC 768, IETF, Aug. 1980.
[54] J. Postel. Transmission Control Protocol. RFC 793, IETF, Sept. 1981.
[55] K. K. Ramakrishnan and S. Floyd. A Proposal to add Explicit Congestion Notification (ECN) to IP. RFC 2481, IETF, Jan. 1999.
[56] K. K. Ramakrishnan, S. Floyd, and D. L. Black. The Addition of Explicit Congestion Notification (ECN) to IP. Internet Draft, IETF, Jan. 2001.
[57] V. Rosolen, O. Bonaventure, and G. Leduc. A RED Discard Strategy for ATM Networks and its Performance Evaluation with TCP/IP Traffic. ACM Computer Communication Review, 29(3):23-43, July 1999.
[58] S. Sahu, P. Nain, D. Towsley, C. Diot, and V. Firoiu. On Achievable Service Differentiation with Token Bucket Marking for TCP. ACM Performance Evaluation Review (SIGMETRICS 2000 Conference Proceedings), 28(1):23-33, June 2000.
[59] N. Seddigh, B. Nandy, and J. Heinanen. An Assured Rate Per-Domain Behavior for Differentiated Services. Internet Draft, IETF, Feb. 2001.
[60] S. Shenker, L. Zhang, and D. D. Clark. Some Observations on the Dynamics of a Congestion Control Algorithm. ACM Computer Communication Review, 20(5):30-39, Oct. 1990.
[61] W. R. Stevens. TCP/IP Illustrated, Volume 1: The Protocols. Addison-Wesley Publishing, 1994.
[62] J. Touch. TCP Control Block Interdependence. RFC 2140, IETF, Apr. 1997.
[63] J. S. Turner. New Directions in Communications (or Which Way to the Information Age). IEEE Communications, 24(10):8-15, Oct. 1986.
[64] H. Wang and K. G. Shin. Refined Design of Random Early Detection Gateways. In Proceedings of IEEE GLOBECOM '99, volume 1B, pages 769-775, Rio de Janeiro, Brazil, Dec. 1999.
[65] G. R. Wright and W. R. Stevens. TCP/IP Illustrated, Volume 2: The Implementation. Addison-Wesley Publishing, 1995.
[66] L. Zhang and D. D. Clark. Oscillating Behavior of Network Traffic: A Case Study Simulation. Internetworking: Research and Experience, 1(2):101-112, Dec. 1990.
[67] L. Zhang, S. Shenker, and D. D. Clark. Observations on the Dynamics of a Congestion Control Algorithm: The Effects of Two-Way Traffic. ACM Computer Communication Review (SIGCOMM '91 Conference Proceedings), 21(4):133-148, Sept. 1991.

Appendix A

Additional Simulation Results

This chapter presents additional simulation results supplementing those in Chapter 4 to further corroborate that the architecture of Adaptive REDn Gateway and Adaptive Token Bucket Traffic Conditioner for TCP enables a short queuing delay and an assured bandwidth distribution at each gateway. The explanations of these results are essentially similar to those in Chapter 4, and further explanation is given only where needed. The organization of this chapter is analogous to that of Chapter 4: the results for the single-bottleneck-link network are presented in Section A.1, and those for the dual-bottleneck-link network in Section A.2.

A.1 Simulation Scenarios with One Bottleneck Data Link

The single-bottleneck-link simulation network, the configurations of the mechanisms, and the simulation scenarios involved are described in detail in Section 4.1 in Chapter 4.

A.1.1 Adaptive REDn with ECN in Scenario B-U-O

The simulation results include the per-flow received throughputs, the average queue lengths and the overall average queuing delay at the bottleneck gateway, the bottleneck link utilization, and the effective maximum packet marking probabilities. No packet retransmission took place at all during the entire simulation run since ECN signaling was used instead of packet drop, and no RTO was triggered. Consequently, all the received throughputs, depicted in Figure A.1, were 100% goodputs.

[Figure A.1: Per-Flow Throughputs with Adaptive REDn; panels (a) to (f): 30-, 60-, 70-, 120-, 160-, and 200-msec roundtrip transit delays]
[Figure A.2: Average Queue Lengths and Overall Queuing Delay with Adaptive REDn; panels: (a) Green Packets, (b) Yellow Packets, (c) Red Packets, (d) Overall Queuing Delay]

[Figure A.3: Bottleneck Link Utilization with Adaptive REDn]

[Figure A.4: Effective Maximum Packet Marking Probabilities of Adaptive REDn; panels: (a) Green Packets, (b) Yellow Packets, (c) Red Packets]

A.1.2 Adaptive REDn with Packet Drop in Scenario B-O-U

The combination of simulation scenarios B-U-O and B-O-U gives a complete continuum of variations in the number of active flows for a thorough evaluation of the performance of Adaptive REDn. The simulation results include the per-flow received throughputs, the average queue lengths and the overall average queuing delay at the bottleneck gateway, the bottleneck link utilization, the packet marking ratio, and the effective maximum packet marking probabilities. The average steady-state early packet drop ratio, or the packet marking ratio, varied between 0 and 6.50697e-4, as depicted in Figure A.8. All the packet drops stemmed solely from packet marking, not queue overflow.

[Figure A.5: Per-Flow Throughputs with Adaptive REDn; panels (a) to (f): 30-, 60-, 70-, 120-, 160-, and 200-msec roundtrip transit delays]

[Figure A.6: Average Queue Lengths and Overall Queuing Delay with Adaptive REDn; panels: (a) Green Packets, (b) Yellow Packets, (c) Red Packets, (d) Overall Queuing Delay]

[Figure A.7: Bottleneck Link Utilization with Adaptive REDn]

[Figure A.8: Early Drop Ratio with Adaptive REDn]
[Figure A.9: Effective Maximum Packet Marking Probabilities of Adaptive REDn; panels: (a) Green Packets, (b) Yellow Packets, (c) Red Packets]

A.1.3 Adaptive REDn with ECN in Scenario B-O-U

The simulation results include the per-flow received throughputs, the average queue lengths and the overall average queuing delay at the bottleneck gateway, the bottleneck link utilization, and the effective maximum packet marking probabilities. All the received throughputs, depicted in Figure A.10, were 100% goodputs.

[Figure A.10: Per-Flow Throughputs with Adaptive REDn; panels (a) to (f): 30-, 60-, 70-, 120-, 160-, and 200-msec roundtrip transit delays]

[Figure A.11: Average Queue Lengths and Overall Queuing Delay with Adaptive REDn; panels: (a) Green Packets, (b) Yellow Packets, (c) Red Packets, (d) Overall Queuing Delay]

[Figure A.12: Bottleneck Link Utilization with Adaptive REDn]

[Figure A.13: Effective Maximum Packet Marking Probabilities of Adaptive REDn; panels: (a) Green Packets, (b) Yellow Packets, (c) Red Packets]

A.2 Simulation Scenarios with Two Bottleneck Data Links

The dual-bottleneck-link simulation network, the configurations of the mechanisms, and the simulation scenarios involved are described in detail in Section 4.2 in Chapter 4.

A.2.1 Adaptive REDn with ECN in Scenario B-U-O

The simulation results include the per-flow received throughputs, the average queue lengths and the overall average queuing delays at the bottleneck gateways, the bottleneck link utilizations, and the effective maximum packet marking probabilities.
All the received throughputs, depicted in Figures A.14 and A.15, were 100% goodputs.

[Figure A.14: Per-Flow Throughputs of Flows Traversing Two Adaptive REDn Gateways; panels (a) to (f): 30-, 60-, 70-, 120-, 160-, and 200-msec roundtrip transit delays]

[Figure A.15: Per-Flow Throughputs of Flows Traversing One Adaptive REDn Gateway; panels (a) to (f): 30-, 60-, 70-, 120-, 160-, and 200-msec roundtrip transit delays]

[Figure A.16: Average Queue Lengths and Overall Queuing Delay at the First Adaptive REDn Gateway; panels: (a) Green Packets, (b) Yellow Packets, (c) Red Packets, (d) Overall Queuing Delay]

[Figure A.17: Average Queue Lengths and Overall Queuing Delay at the Second Adaptive REDn Gateway; panels: (a) Green Packets, (b) Yellow Packets, (c) Red Packets, (d) Overall Queuing Delay]

[Figure A.18: Utilizations of Bottleneck Links with Adaptive REDn; panels: (a) First Bottleneck Link, (b) Second Bottleneck Link]
[Figure A.19: Effective Maximum Packet Marking Probabilities of the First Adaptive REDn Gateway; panels: (a) Green Packets, (b) Yellow Packets, (c) Red Packets]

[Figure A.20: Effective Maximum Packet Marking Probabilities of the Second Adaptive REDn Gateway; panels: (a) Green Packets, (b) Yellow Packets, (c) Red Packets]

A.2.2 Adaptive REDn with Packet Drop in Scenario B-O-U

The simulation results include the per-flow received throughputs, the average queue lengths and the overall average queuing delays at the bottleneck gateways, the bottleneck link utilizations, the packet marking ratios, and the effective maximum packet marking probabilities. The average steady-state early packet drop ratio, or the packet marking ratio, varied between 0 and 5.14688e-4 for the first bottleneck gateway and between 0 and 4.58186e-4 for the second bottleneck gateway, as depicted in Figure A.26. All the packet drops at the gateways stemmed solely from packet marking, not queue overflow.

[Figure A.21: Per-Flow Throughputs of Flows Traversing Two Adaptive REDn Gateways; panels (a) to (f): 30-, 60-, 70-, 120-, 160-, and 200-msec roundtrip transit delays]

[Figure A.22: Per-Flow Throughputs of Flows Traversing One Adaptive REDn Gateway; panels (a) to (f): 30-, 60-, 70-, 120-, 160-, and 200-msec roundtrip transit delays]
[Figure A.23: Average Queue Lengths and Overall Queuing Delay at the First Adaptive REDn Gateway; panels: (a) Green Packets, (b) Yellow Packets, (c) Red Packets, (d) Overall Queuing Delay]

[Figure A.24: Average Queue Lengths and Overall Queuing Delay at the Second Adaptive REDn Gateway; panels: (a) Green Packets, (b) Yellow Packets, (c) Red Packets, (d) Overall Queuing Delay]

[Figure A.25: Utilizations of Bottleneck Links with Adaptive REDn; panels: (a) First Bottleneck Link, (b) Second Bottleneck Link]

[Figure A.26: Early Drop Ratios with Adaptive REDn; panels: (a) First Bottleneck Gateway, (b) Second Bottleneck Gateway]

[Figure A.27: Effective Maximum Packet Marking Probabilities of the First Adaptive REDn Gateway; panels: (a) Green Packets, (b) Yellow Packets, (c) Red Packets]

[Figure A.28: Effective Maximum Packet Marking Probabilities of the Second Adaptive REDn Gateway; panels: (a) Green Packets, (b) Yellow Packets, (c) Red Packets]

A.2.3 Adaptive REDn with ECN in Scenario B-O-U

The simulation results include the per-flow received throughputs, the average queue lengths and the overall average queuing delays at the bottleneck gateways, the bottleneck link utilizations, and the effective maximum packet marking probabilities. All the received throughputs, depicted in Figures A.29 and A.30, were 100% goodputs.
[Figure A.29: Per-Flow Throughputs of Flows Traversing Two Adaptive REDn Gateways. Panels (a) 30-msec, (b) 60-msec, (c) 70-msec, (d) 120-msec, (e) 160-msec, and (f) 200-msec roundtrip transit delay; each plots the throughputs of the 10-Mbps, 5-Mbps, 3-Mbps, and 1-Mbps flows together with the expected values versus time (seconds).]

[Figure A.30: Per-Flow Throughputs of Flows Traversing One Adaptive REDn Gateway. Panels (a) 30-msec, (b) 60-msec, (c) 70-msec, (d) 120-msec, (e) 160-msec, and (f) 200-msec roundtrip transit delay; each plots the throughputs of the 10-Mbps, 5-Mbps, 3-Mbps, and 1-Mbps flows together with the expected values versus time (seconds).]

[Figure A.31: Average Queue Lengths and Overall Queuing Delay at the First Adaptive REDn Gateway. Panels (a) Green Packets, (b) Yellow Packets, and (c) Red Packets plot the average queue length (packets) against the expected value and the minimum and maximum thresholds; panel (d) plots the overall queuing delay (seconds) over time.]

[Figure A.32: Average Queue Lengths and Overall Queuing Delay at the Second Adaptive REDn Gateway. Panels (a) Green Packets, (b) Yellow Packets, and (c) Red Packets plot the average queue length (packets) against the expected value and the minimum and maximum thresholds; panel (d) plots the overall queuing delay (seconds) over time.]
[Figure A.33: Utilizations of Bottleneck Links with Adaptive REDn. Panels (a) First Bottleneck Link and (b) Second Bottleneck Link plot the measured utilization against the expected value over time (seconds).]

[Figure A.34: Effective Maximum Packet Marking Probabilities of the First Adaptive REDn Gateway. Panels (a) Green Packets, (b) Yellow Packets, and (c) Red Packets; each plots the dynamic and the static maximum marking probability versus time (seconds).]

[Figure A.35: Effective Maximum Packet Marking Probabilities of the Second Adaptive REDn Gateway. Panels (a) Green Packets, (b) Yellow Packets, and (c) Red Packets; each plots the dynamic and the static maximum marking probability versus time (seconds).]
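The dynamic and static curves in Figures A.27, A.28, A.34, and A.35 contrast an effective maximum marking probability that is adjusted at run time, per packet colour, with its configured value. Purely as an illustration of that general behaviour, the sketch below uses a Self-Configuring-RED-style rule that scales the maximum probability up when the average queue length exceeds the maximum threshold and down when it falls below the minimum threshold; the class, scaling factors, bounds, and thresholds are assumed for illustration and are not the adaptation rule defined by Adaptive REDn in this thesis.

    # Illustrative sketch only: a Self-Configuring-RED-style adaptation of the
    # maximum marking probability, kept per packet colour (green/yellow/red).
    # The factors alpha/beta, the bounds, and the update trigger are assumed
    # values; they are not the Adaptive REDn rule evaluated in this thesis.

    class AdaptiveMaxP:
        def __init__(self, static_max_p, min_th, max_th, alpha=3.0, beta=2.0):
            self.static_max_p = static_max_p   # configured (static) value
            self.max_p = static_max_p          # effective (dynamic) value
            self.min_th, self.max_th = min_th, max_th
            self.alpha, self.beta = alpha, beta

        def update(self, avg_queue_len):
            """Called periodically with the average queue length."""
            if avg_queue_len > self.max_th:
                self.max_p = min(0.5, self.max_p * self.alpha)   # mark more aggressively
            elif avg_queue_len < self.min_th:
                self.max_p = max(1e-4, self.max_p / self.beta)   # back off

    max_p = {c: AdaptiveMaxP(static_max_p=0.1, min_th=500, max_th=1500)
             for c in ("green", "yellow", "red")}
    max_p["red"].update(avg_queue_len=1800)
    print(max_p["red"].max_p)   # roughly 0.3, raised above the configured 0.1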
