UBC Theses and Dissertations

A window-based synchronous scheme for network congestion control/avoidance. Yonghua Liang, 1991.

A Window-based Synchronous Scheme For Network Congestion Control/Avoidance

by Yonghua Liang
B.Eng., Tsinghua University, China, 1987

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF SCIENCE in THE FACULTY OF GRADUATE STUDIES (DEPARTMENT OF COMPUTER SCIENCE)

We accept this thesis as conforming to the required standard

THE UNIVERSITY OF BRITISH COLUMBIA
August 1991
© Yonghua Liang, 1991

In presenting this thesis in partial fulfilment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the head of my department or by his or her representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

Department of Computer Science
The University of British Columbia
Vancouver, Canada

Abstract

Congestion control and/or avoidance in computer networks has attracted a great deal of attention in the last several years. Many schemes have been proposed, each with its own advantages and disadvantages. For example, the Slow Start scheme does not consume any additional network resources, but its fairness and throughput are not satisfactory, since it does not provide any mechanism to synchronize window adjustments of the sources and does not attempt to prevent packets from being dropped. The DEC Bit scheme does not perform well in two-way traffic situations. In this thesis, several representative schemes are reviewed and discussed, and a new synchronous congestion control/avoidance scheme with explicit and implicit feedbacks (SEIF) is proposed which produces high network throughput and good fairness at modest overhead.
The performance of these schemes is compared through extensive simulations under a variety of situations. Based on the results of the simulations, the performance of the SEIF scheme is found to be satisfactory in meeting our design requirements.

Contents

Abstract
Contents
List of Tables
List of Figures
Acknowledgements

1 Introduction
  1.1 Overview of Congestion in Computer Networks
    1.1.1 What is Network Congestion?
    1.1.2 Why Can Congestion Happen?
    1.1.3 How To Deal With Network Congestion?
    1.1.4 Fairness Issue
    1.1.5 Components of a Congestion Control/Avoidance Scheme
    1.1.6 Performance Indices of a Congestion Control And/Or Avoidance Scheme
  1.2 Motivation and Objective of Thesis

2 Related Works
  2.1 Source Quench Based Scheme
    2.1.1 Overview
    2.1.2 Discussion
  2.2 BBN's Rate Based Scheme
    2.2.1 Overview
    2.2.2 Discussion
  2.3 The Slow-Start Scheme
    2.3.1 Overview
    2.3.2 Discussion
  2.4 DEC Binary Feedback Scheme
    2.4.1 Overview
    2.4.2 Discussion
  2.5 The Tri-S Scheme
    2.5.1 Overview
    2.5.2 Discussion
  2.6 Summary

3 SEIF - A New Congestion Control/Avoidance Scheme
  3.1 General Philosophies
  3.2 Detailed Design of the Explicit Feedback Mechanism
    3.2.1 Workload Measure
    3.2.2 The Feedback Policy
    3.2.3 Workload Control
  3.3 The Synchronous Congestion Control/Avoidance Scheme with Explicit and Implicit Feedbacks
  3.4 Summary

4 Performance of SEIF's Design Considerations
  4.1 Introduction to the Network Simulator
  4.2 Comparison Among Different Feedback Policies
  4.3 Comparison of Feedback Generation Filtering Mechanisms
  4.4 Comparison of Byte vs Packet Oriented Buffer Management
  4.5 Performance Comparison of Different Cut Back Parameters
  4.6 Summary

5 Comparison of SEIF to Other Schemes
  5.1 Comparison of the SEIF Scheme, the Slow Start Scheme, and the DEC Bit Scheme
  5.2 Performance Comparison Between the SEIF Scheme and the Tri-S Scheme
  5.3 Summary

6 Conclusions
  6.1 Future Works

List of Tables

4.1 Results of Different Feedback Thresholds
4.2 Results of Set 1 of Experiments
4.3 Results of Set 2 of Experiments
4.4 Static Feedback Policy vs Dynamic Feedback Policy
4.5 Filtering vs Nonfiltering
4.6 Bytes vs Packets
4.7 Result of Exp. 1
4.8 Result of Exp. 2
4.9 Fairness Variances of Exp. 2
4.10 Result of Exp. 3
4.11 Fairness Variances of Exp. 3
5.1 One Source
5.2 Two Sources
5.3 Three Sources
5.4 Four Sources
5.5 One Source
5.6 Two Sources
5.7 Results 1 of Class 2
5.8 Fairness Variances: Results 1
5.9 Results 2 of Class 2
5.10 Results 2 of Class 2
5.11 Results 3 of Class 2
5.12 Results 3 of Class 2
5.13 Telnet Workloads I
5.14 Telnet Workloads I
5.15 Telnet Workloads II
5.16 Telnet Workloads II
5.17 The SEIF Scheme vs The Tri-S Scheme
5.18 Fairness Variances of the SEIF and Tri-S Schemes

List of Figures

1.1 Throughput vs Workload (without congestion control)
3.1 Queue Length vs Time
4.1 Topology One
4.2 Topology Two
4.3 Window vs Time: Congestion Avoidance
4.4 Window vs Time: Congestion Control
4.5 Window vs Time: Without Feedback Generation Filtering
5.1 Class 1 Topology
5.2 Window vs Time: Source 1 of the DEC Bit scheme
5.3 Window vs Time: Source 2 of the DEC Bit scheme
5.4 Window vs Time: Source 1 of the Slow Start scheme
5.5 Window vs Time: Source 2 of the Slow Start scheme
5.6 Window vs Time: Source 1 of the SEIF scheme
5.7 Window vs Time: Source 2 of the SEIF scheme
5.8 Class 2 Topology 1
5.9 Class 2 Topology 2
5.10 Topology One Simulated By The Tri-S Scheme
5.11 Topology Two Simulated By The Tri-S Scheme

Acknowledgements

First of all, I would like to thank my supervisor, Dr. Samuel T. Chanson, for his advice, patience, and support. Only under his direction was I able to solve one problem after another successfully in the course of my thesis research.
His care and help for me went far beyond academic concerns. I would also like to thank Dr. Son Vuong, who served as the second reader of my thesis. I also thank Guo Yong, my research partner, for exchanging useful ideas with me on modifying the REAL simulator. Discussions with Ann Lo were always constructive, and I hope to have such opportunities again in the future. Thanks should go to Wenjing Zhu as well, who provided very important information on how and where to get the simulator package. I am deeply indebted to my lovely wife, Ping Wang, without whose love, understanding, care, and support this thesis would not have been possible. Last but not least, I am sincerely thankful to my parents, elder sister, and younger brother back in China. Without their love and support, my stay in Canada would have been difficult, let alone finishing this thesis.

Chapter 1

Introduction

In this chapter, we present the motivation of this thesis. First of all, we describe what the network congestion problem is and how it should be dealt with, including an introduction to network fairness. Following this, the components and performance indices of a congestion control/avoidance scheme are introduced. The chapter ends with a statement of the goal of this thesis.

1.1 Overview of Congestion in Computer Networks

1.1.1 What is Network Congestion?

Under light workload, network throughput increases linearly with workload. If workload keeps increasing, after a certain point the gain in throughput starts to become smaller. If workload increases still further, network throughput will begin to drop. A network is congested when its throughput drops dramatically despite increasing workload. Congestion is undesirable for networks, since network resources are kept busy while the effective throughput is low. Congestion occurs when demand for network resources (e.g., CPUs and links) exceeds what the network can supply.
Without a congestion control and/or avoidance scheme in place, the relationship between throughput and workload is illustrated in Figure 1.1.

Network congestion control can be exercised at many levels [11]. In this thesis, we address the end-to-end flow control policy, which ensures a fast sender does not overwhelm a slow receiver. However, flow control deals only with traffic on a specific path and usually does not take the network situation into consideration. Communication paths often share network resources, of which the most critical are the routers. Uneven distribution of network load among the routers can result in network congestion. There are many ways to deal with network congestion; here are two of them. One is to extend flow control to take the network situation into consideration, e.g., adjusting the flow control window based on the workload situation of the network as well as the number of buffers available at the end points. The other is to perform admission control at the network boundary; that is, a network does not admit traffic when it is under heavy load. So far, no international standard has been set for congestion control at the transport level. It is the objective of this thesis to address the problem of flow control at the transport level taking network congestion into account.

1.1.2 Why Can Congestion Happen?

When demand exceeds supply in a network, queues start to build up in some routers (or switches). If demand exceeds supply for a long enough period of time, the buffers in the routers fill up. Once the routers are saturated, they start to drop packets, which leads to timeouts at the sending sources. Sources retransmit packets when timeouts occur, thus exacerbating the situation. More packets are dropped at the routers and the cycle repeats. When a network is saturated, it spends most of its time dropping and retransmitting packets, so the effective throughput is very low.

[Figure 1.1: Throughput vs Workload (without congestion control)]

1.1.3 How To Deal With Network Congestion?

A brute-force solution, which is ineffective, is simply to increase or upgrade network resources, such as replacing slower processors with faster ones, adding more buffers to routers, and introducing links with higher bandwidth. More buffers, however, can cause longer network delay; if the round trip delay cannot be estimated precisely, timeout events will occur frequently at senders, who will retransmit packets causing more timeouts. Faster processors and higher-bandwidth links cannot solve the problem either: no matter what resources are used, if there is no appropriate algorithm to manage them, bottlenecks will occur whenever demand exceeds supply locally because of imbalance in resource utilization. Therefore congestion can still happen. Even if increasing some resources solves the problems a network currently faces, what if the demand for network resources increases again in the future? Thus this is not an effective solution.

To solve the congestion problem, mechanisms must be implemented in networks so that limited resources can be allocated appropriately. When demand exceeds supply, the only thing a network can do is ask its users to reduce their demands. One way to do this is to ask some users to shut down their traffic totally, while the other users are allowed to use network resources as much as they want. However, this results in unfair usage of network resources and is therefore undesirable; it is not the way a good congestion control and/or avoidance scheme should work. A successful scheme must be able to reduce network workload when it is too heavy. Meanwhile, to maximize the utilization of network resources, it should also be capable of increasing the offered workload to keep the resources from idling.
This could be done either by informing the sending sources explicitly of the release of resources or by letting the sources detect the release themselves. Congestion control/avoidance can be performed at more than one level in a network. In this thesis, we focus on the transport layer and the network layer.

1.1.4 Fairness Issue

A congestion control and/or avoidance scheme should ensure all users equal access to network resources. Intuitively, a reasonable way is to ask all the network users to reduce their demand by a certain percentage rather than asking some of them to shut down totally. In this way, network fairness can be achieved, though service quality for individual users is degraded. If a network assigns different priorities to users, more complicated mechanisms are needed; in this thesis, we assume all network users have the same priority. Some mechanisms have been proposed for implementation in routers to enforce fair sharing of network resources. Among them are the Fair Queueing Algorithm [7] and the Random Drop Algorithm [9]. The basic idea of the Fair Queueing Algorithm is to maintain a separate queue for each communication path sharing a router and to serve these queues in a round-robin manner. The Random Drop Algorithm selects a packet randomly from a queue to drop if the queue is full and dropping a packet is unavoidable. The performance of the Fair Queueing Algorithm is better than that of the Random Drop Algorithm; on the other hand, its overhead is much higher. We will adopt the Random Drop Algorithm in our scheme. To work effectively, these router mechanisms have to operate in conjunction with schemes implemented in the sources. Without the cooperation of the sources, even though network fairness can be guaranteed, a great amount of network resources would be wasted.
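The Random Drop idea mentioned above can be illustrated with a short sketch. The function and queue representation here are ours, not taken from [9]; it is a minimal illustration, not the published algorithm.

```python
import random

def enqueue_random_drop(queue, packet, capacity, rng=random.Random(42)):
    """Append the arriving packet; if the queue now exceeds its capacity,
    drop one packet chosen uniformly at random (possibly the new arrival),
    so no single flow is systematically penalized.  Returns the dropped
    packet, or None if nothing was dropped."""
    queue.append(packet)
    if len(queue) > capacity:
        return queue.pop(rng.randrange(len(queue)))
    return None
```

Because the victim is chosen uniformly, a flow occupying most of the queue is also the most likely to lose a packet, which is the rough fairness property the scheme relies on.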
1.1.5 Components of a Congestion Control/Avoidance Scheme

A congestion control and/or avoidance scheme consists of three components:

- congestion detection
- information feedback
- traffic control

To control and/or avoid congestion, we first have to know when a network is congested or about to become congested. There are two classes of congestion detection methods: utilization-based and queue-length-based methods. Utilization-based methods use the utilization of routers as a measure of the network workload level. When the utilization of a router exceeds a certain value, the paths sharing the router are assumed to be congested. Queue-length-based methods use the queue length of waiting packets in the routers as a measure of workload intensity. If the queue length of a router exceeds a certain value, the paths sharing the router are assumed to be congested. There are two kinds of queue-length-based methods: one uses the average queue length to measure network workload, while the other uses the instantaneous queue length.

Once the workload level of a network is known, there are two ways to deal with the information. One way is to send feedback carrying the information back to the users; the other is to do nothing. The former is the so-called explicit feedback mechanism, and the latter implicit feedback. In one extreme, the router can calculate the exact amount of traffic that each user sharing the router should put into the network in a certain period of time and send the information back to the users. The users can then regulate their traffic flows according to the information received. In this way, the network has fine control over its own workload. This is one form of explicit feedback. The potential problem with this approach is that the overhead of performing congestion control and/or avoidance could be very high.
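The average-queue-length variant of queue-length-based detection described above can be sketched as follows. The exponentially weighted average, the weight, and the threshold value here are our illustrative choices, not parameters from any particular cited scheme.

```python
def make_avg_queue_monitor(threshold=1.0, weight=0.1):
    """Queue-length-based congestion detection: maintain an exponentially
    weighted average of the instantaneous queue length and flag congestion
    once the average crosses the threshold.  The weight and threshold are
    illustrative values only."""
    state = {"avg": 0.0}

    def sample(instant_qlen):
        # Smooth out transient bursts before comparing to the threshold.
        state["avg"] = (1 - weight) * state["avg"] + weight * instant_qlen
        return state["avg"] > threshold

    return sample
```

The instantaneous-queue-length variant is the degenerate case `weight=1.0`, which reacts faster but is far more sensitive to short bursts.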
At the other extreme, even when a router is congested, it does not send any feedback to the users, letting them detect the congestion themselves instead. In this case, the network consumes no resources to perform congestion control. When the users detect that something out of the ordinary has occurred (e.g., through timeouts or negative acknowledgements), they adjust the traffic they put into the network to rectify the situation. The sources can control their traffic flows by adjusting either their window sizes or their transmission rates, depending on what they use to perform flow control. In this thesis, we consider window-based control methods only.

1.1.6 Performance Indices of a Congestion Control And/Or Avoidance Scheme

To evaluate a congestion control and/or avoidance scheme, several aspects should be examined. These include the effective network throughput, the scheme's adaptiveness to network situations, whether the policy is fair, and the overhead of executing it. The most serious consequences a congested network faces are reduced effective throughput and long round-trip delay. A good congestion control and/or avoidance scheme should produce high network throughput and moderate delay even under high workload. It should also adapt to network situations. When the load at some points in the network is approaching saturation, the scheme should be able to inform the sources involved to reduce their workload in an attempt to prevent a large number of packets from being dropped, so that as few network resources as possible are wasted. Likewise, sources should be allowed to make use of released network resources as soon as possible, so that network resources can be used more effectively. In fact, adaptiveness is related to good throughput. A congestion control and/or avoidance scheme is not a good one if it cannot ensure its users a fair share of network resources.
The overhead of a scheme is another important concern. If a scheme tries to do too much and consumes too many resources, there will not be much network capacity left for transferring data. There is a trade-off between the overhead of a scheme and how much it can do to reduce congestion.

1.2 Motivation and Objective of Thesis

This thesis is motivated by the deficiencies of some existing congestion control/avoidance schemes. For example, some schemes consume a significant amount of network resources and perform well in one-way traffic situations but not in more complicated ones. Other schemes consume very little resources, but their performance is poor. These are discussed in some detail in Chapter 2. Our goal was to develop a congestion control/avoidance scheme which produces high network throughput under a variety of load conditions and at the same time addresses the fairness problem, without consuming too many network resources. The contribution of this thesis is that it reviews some representative existing congestion control/avoidance schemes, evaluates their performance, identifies their deficiencies, and proposes a new congestion control/avoidance scheme which addresses problems that the existing schemes did not address properly. These problems include unfair sharing of network resources and significantly degraded throughput under two-way traffic situations. Our congestion control/avoidance scheme differs from the others in several aspects. It provides networks with a mechanism to synchronize the window adjustments of their affected users, in conjunction with an appropriate window adjustment algorithm, which can ensure network fairness to a certain extent. Our explicit feedback threshold minimizes the number of dropped packets, while our feedback generation filtering algorithm prevents redundant feedbacks from being sent.
Both of them help to reduce the amount of wasted network resources and improve network throughput. A byte-oriented buffer management algorithm ensures high network throughput under heavy workload, even in two-way traffic scenarios. The thesis is supported by simulation results.

The rest of the thesis is organized as follows. Chapter 2 outlines related works: several representative schemes are reviewed and their pros and cons discussed. We describe our new scheme in Chapter 3. The results of experiments that support the design decisions are presented in Chapter 4. In Chapter 5, performance comparisons are made between our scheme and the other schemes using various scenarios, and the reasons behind the performance differences are discussed. Finally, Chapter 6 concludes the thesis with suggestions for future work.

Chapter 2

Related Works

Network congestion has been studied for decades. Many schemes have been proposed and studied (see for example [1], [13], [15], [16], [17], [18], [19], [20], [21], [22], [23], [24], [25], [26], [28], [29], [31], [32], [34]). We will review and discuss the Source Quench Based Scheme [2], BBN's Rate-Based Scheme [3], the Slow-Start Scheme [6], the DEC Binary Feedback Scheme [5], and the Tri-S Scheme [10] in the following sections. These are selected because they are considered representative and/or are the most popular schemes.

2.1 Source Quench Based Scheme

2.1.1 Overview

The source quench based scheme proposed in [2] for TCP/IP networks tries to solve two problems: the so-called small-packet problem and the general network congestion problem. When a transport packet carries a small amount of user data, the proportion of wasted bandwidth becomes high. For example, with a packet header of 40 bytes, 1 byte of user data results in an overhead of 4000%. To solve this problem, the scheme proposed in [2] does not allow any more packets to leave a source while there is an outstanding packet from the source (i.e., stop and wait).
During the period of withholding packets, the source compacts the newly arrived user data, if there is any, into one single packet and sends the compacted packet out once the acknowledgement for the pending packet returns. This scheme is called Source Quench Based because it makes use of Source Quench messages to carry information on network load from the routers to the sources. A Source Quench message is one of a dozen types of Internet Control Message Protocol (ICMP) messages, each of which is encapsulated in an IP packet [11]. A Source Quench message is sent by a router if any packet is dropped by the router because its buffers are full. Although a source node is expected to slow down its transmission of packets, no standard has been set as to exactly what a source should do when it receives a Source Quench message. In fact, without the participation of the transport layer, there is little a source node can do, because there is no flow control mechanism at the IP layer. The transport layer must be informed of network congestion, and transmission of data at the transport layer must be slowed down to prevent performance collapse. A small modification was made to the original ICMP policy as to when to send Source Quench messages: instead of sending a Source Quench message only when a packet is dropped, a router sends the message when its queue is 50% full, to avoid congestion.

2.1.2 Discussion

This scheme solves the small-packet problem but may produce negative side effects on the network. Accumulating packets while a host is waiting for an acknowledgement and then sending them out at once when the acknowledgement is back is equivalent to keeping quiet for a round trip delay and then sending a whole window of packets out back to back. Stop-and-wait control therefore increases the burstiness of network traffic.
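The accumulate-and-flush behaviour described above can be sketched as follows. The class and method names are ours, chosen for illustration; the sketch only models the buffering logic, not a real transport.

```python
class CoalescingSender:
    """Sketch of the small-packet remedy: while one packet is outstanding,
    buffer new user data; when the ACK arrives, flush the buffered bytes
    as a single packet (which is then itself outstanding)."""

    def __init__(self, send):
        self.send = send          # callable that transmits a payload
        self.outstanding = False  # is a packet awaiting its ACK?
        self.pending = bytearray()

    def write(self, data: bytes):
        if self.outstanding:
            self.pending += data  # coalesce while waiting for the ACK
        else:
            self.outstanding = True
            self.send(bytes(data))

    def on_ack(self):
        if self.pending:
            payload, self.pending = bytes(self.pending), bytearray()
            self.send(payload)    # flush: one packet back in flight
        else:
            self.outstanding = False
```

Note how this reproduces the burstiness criticized in the discussion: everything written during one round trip leaves as a single back-to-back transmission when the ACK returns.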
The larger the individual packet size, the worse the consequence. Apparently, this scheme did not take network fairness into consideration in solving the network congestion problem. There are two window sizes a source can use: the one advertised by its destination, and 0. A source sends packets using the window size advertised by its destination until it receives a Source Quench message, at which point it acts as if its window size were 0. The source resumes its normal operation after it has received 10 acknowledgements. With this scheme, sources using different window sizes (advertised by their destinations) have different shares of network resources: the larger the window size a source adopts, the bigger its share. The setting of a fixed number of acknowledgements that a source has to receive to resume its normal operation is inappropriate. Instead, a dynamic setting should be used, with the value depending on the number of outstanding packets at each source. If a fixed number such as 10 is used, what happens if a source has only 5 outstanding packets when it receives a Source Quench message? The source would never have a chance to resume sending packets. The feedback algorithm also has no mechanism to prevent routers from sending excessive feedback messages. If one arriving packet finds the queue to be more than 50% full, the chance that packets immediately after it also find the queue more than 50% full is high, because of the bursty nature of network traffic. The consequence is that network resources are wasted passing unnecessary feedbacks, and sources receiving excessive feedbacks will overestimate the loading of the network. Both aspects work against improving network throughput.

2.2 BBN's Rate Based Scheme

2.2.1 Overview

Utilization of resources is used as the measure of network workload in this scheme. Control information can either be propagated through the network by packets of a separate high-priority protocol or tagged onto the routing packets. The congestion control scheme is implemented not at the hosts but at all the packet switches (routers) of the network, i.e., admission control. It is the boundary switches that control the traffic from the hosts to the store-and-forward network. To implement this scheme, cooperation of the switches is required. To understand how this scheme works, some terminology used in the scheme is introduced below.

- Resource Ration is the maximum rate at which a single traffic flow may use the resource with minimal risk of congestion [3]. A resource ration advertised by a resource is determined by such factors as the capacity of the resource (e.g., speeds of CPUs and links, and size of buffers), the number of flows sharing the resource, and the load level of each flow. The rule for calculating the resource ration is as follows:

  R(0) = T;

  when U(t) < 97%:  R(t+1) = min{ T, (T / U(t)) R(t) };

  when U(t) >= 97%: R(t+1) = min{ T, (T / max{U(t), T + R(t)}) R(t) },

  where R(t) is the resource ration at time t, U(t) is the measured resource utilization over interval t, and T is the desired (target) resource utilization.

- Flow Ration is the minimum of the resource rations along the path between the source and destination of the flow. A source switch controls flows based on the flow rations.

The scheme works in the following way:

- Each switch computes its resource ration and delivers it to all the other switches of the network periodically by flooding.

- Upon receipt of the resource ration information from the other switches, a switch computes the flow ration for each flow identified by a source-destination switch pair. Flow controls are enforced by boundary switches in accordance with the flow rations.
2.2.2 Discussion

The scheme provides explicit feedback to the source switches to regulate their traffic rates. With more information about the load conditions of a network, traffic rates can be set more appropriately. With other congestion schemes, e.g., the DEC Binary scheme (see Section 2.4), a source can only tell whether the network is congested (or about to be congested). With that binary information, a source can make only simple decisions: increase, maintain, or decrease the traffic entering the network, with the amount to increase or decrease given by a fixed and simple formula. Therefore control is not very fine-grained. With the BBN scheme, source switches control traffic entering the network based on more detailed information on network status, so the scheme has finer control of network traffic. It can also guarantee network fairness on the basis of communication pairs identified by the source and destination switches. The responsiveness of the scheme is poor, however, because the ration information is tagged onto the routing packets. The interval between routing packets is relatively long, which means congestion control is applied at relatively long intervals. By working at the switch level instead of the host or transport user level, the network has fewer connections to be concerned with. This is because each switch may be handling more than one transport-level connection at the same time, and therefore less control traffic is produced. However, the resources consumed by the scheme are still significant
The effect of an ill-behaved host can be minimized, because networks would not accept more traffic from the ill-behaved host than from other normal hosts if they are attached to different network switches. However, the problem of normal hosts and an ill-behaved host being attached to the same switch is not yet resolved. Since the scheme controls flows at the switch level, there is no way to guarantee transport users a fair share of network resources, as there can be more than one user residing on one host machine and more than one host attaching to one network switch. The performance of this scheme is sensitive to the interval between successive utiliza-tion measurements. If it is too short, network would suffer from excessive fluctuation. Whereas, if it is too long, a sudden change in network workload would not be captured. Another problem is that the information passed among switches is based on the rate of resource utilization, thus switches must be able to convert this information to bit-per-second based information. Another problem, which is common to all rate based algorithms, is that it is hard to preserve the transmission rate set at a source throughout the whole network at the various routers since the routers have to deal with multiple flows at the same time. Thus the information fed back to the sources may be misleading. We will not consider rate based control mechanism in this thesis. CHAPTER 2. RELATED WORKS 15 2.3 The Slow-Start Scheme 2.3.1 Overview In this scheme, queue length is used as the measure of network workload. If the queue in a router is full and packets are still coming in, the router starts to drop packets which means network performance would start to drop if the workload is not reduced. There is no explicit feedback mechanism employed by this scheme. Retransmission timeout is considered as the signal that a network is congested. 
If a packet is dropped by a router due to lack of buffer space, there would be no acknowledgement for this packet. After a period of time, retransmission timeout occurs at the source. In today's networks, packet loss due to transmission error or packet damage is rare. Therefore the assumption that timeout means network congestion is made in the Slow Start scheme. This scheme controls network workload by adjusting window sizes adopted by the sources. To control or avoid network congestion, the algorithm operates in three states, i.e., increasing window size quickly, increasing window size slowly, and cutting back on window size. Besides controlling network congestion, the other purpose of this scheme is to get a traffic, flowing stably through a path that consists of links with different capacities. To start a transmission, a source sets its window size to 1. Upon receipt of an acknowledgement, its window size is increased by one. Now it can send two packets out at a time. When their acknowledgements are received, the window size becomes four. Thus the window size increases exponentially. With this mechanism, a flow will get to stable quickly. After its window size reaches a certain point, the source takes a more cautious atti-tude. Instead of increasing the window size by one for every acknowledgement it receives, CHAPTER 2. RELATED WORKS 16 it increases the window size by one only after the number of acknowledgements it has received is equal to its current window size. The idea behind this is to make use of the redundant resources of a network without causing too many packets drops. If a timeout occurs, a source reduces its window size to one in an attempt to quickly release the network from the congestion state. It is worth noting the difference between the current window size and the maximum window size. The former is equal to the number of outstanding packets allowed in the network by the congestion control/avoidance algorithm. 
The current window size changes dynamically. The latter is the maximum value of the flow control window size allowed by the end systems; it is static and limited by the amount of buffer space available at the receiving end. The current window size is always smaller than or equal to the maximum window size. Unless otherwise stated, window size will mean the maximum window size in this thesis.

2.3.2 Discussion

The most prominent advantage of this scheme is that it does not consume any additional network resources. However, the information it obtains about network status is limited and does not produce optimal performance.

The efficiency of this scheme relies on the accuracy of the estimated round trip time [27]. There are many factors that can affect the round trip delay [8], so the variance of the estimated value can be very large. This is the vital flaw of all pure timeout-based schemes [33].

This scheme cannot deal with the fairness problem effectively, because there is no mechanism to synchronize the behavior of the sources. Timeout is the only signal for packet loss, and routers do not always drop packets from all affected sources at the same time. Sometimes several packets from the same source get dropped while packets from other sources are not. Since the feedback signals are not necessarily synchronized, the sources might have different knowledge about network loads, which would result in different actions being taken. While some sources are reducing their window sizes, others may still be increasing theirs, which would contribute to an unfair share of network resources.

There is no mechanism in this scheme to carry information about network loads back to the sources promptly. By the time the sources are aware of network congestion, some packets have been dropped and network resources wasted.

2.4 DEC Binary Feedback Scheme

2.4.1 Overview

With this scheme, two layers of protocols are involved, i.e., the network layer and the transport layer.
The average queue length is used as the measure of network workload. A router monitors its average queue length all the time. Whenever the average queue length exceeds 1, it sets a special bit in the packets passing through it. Upon receipt of a network layer packet, the receiver copies the special bit, regardless of its value, from the received packet to a transport layer acknowledgement or packet which is going to be sent back to the source. In this way, the transport layer sources can obtain information about the workload situation in the network. After signal filtering, the sources determine what action should be taken: either increasing or decreasing the traffic they put into the network by increasing or decreasing their window sizes respectively. If the network is deemed to be saturated or approaching saturation, the traffic is decreased; otherwise, the traffic is increased. An Additive Increase/Multiplicative Decrease algorithm is employed to control the network load. Detailed analysis of this algorithm can be found in [4].

2.4.2 Discussion

The performance of this scheme is in between those of BBN's scheme and the Slow-Start scheme. It provides more information about the network than the Slow-Start scheme, but less than BBN's scheme; it consumes more network resources than the Slow-Start scheme, but fewer than BBN's scheme.

However, this scheme is also not very responsive. Imagine congestion occurring near a source. Although the affected router can detect it quickly, the source would not know about it until almost a round trip time has elapsed. During this period, the congestion could get worse.

When this scheme was designed, only simple topologies in which all sources share the same path, i.e. a chain, were considered [5]. In more complicated topologies, its performance is poor. The reason is that when the average queue length is calculated, acknowledgements are treated the same as data packets.
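One step of the Additive Increase/Multiplicative Decrease rule can be sketched as follows. The constants here (add one packet when uncongested, cut to 7/8 of the old window when congested) are the values commonly associated with the DEC Bit work; the text does not fix them, so treat them as illustrative.

```python
def aimd_update(window, congested, a=1, b=0.875):
    """One Additive Increase / Multiplicative Decrease step.

    `congested` is the filtered verdict from the feedback bits. The floor
    of one packet is an assumption of this sketch.
    """
    if congested:
        return max(1.0, window * b)   # multiplicative decrease
    return window + a                 # additive increase
```

The multiplicative decrease is what lets competing sources converge toward equal shares while the additive increase probes for spare capacity.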
Generally speaking, data packets are much larger than acknowledgements, and the times needed to transmit a queue of 5 large data packets and a queue of 5 small acknowledgements are significantly different. In scenarios where two-way traffic exists, this scheme would produce unfair use of network resources. These phenomena and some simple preliminary remedies were reported in [12]. We will show the dramatic performance degradation of this scheme in the face of two-way traffic in Chapter 5.

Because all routers have to calculate their average queue lengths all the time and sources have to filter feedback signals from the routers, the processing overhead is high in this scheme although the bandwidth overhead is low.

2.5 The Tri-S Scheme

2.5.1 Overview

Unlike the other schemes, which try to approach the optimal point of a network, Tri-S (Slow Start and Search) tries to operate a network at the optimal point [10]. It does not adjust the sources' workload as frequently as the other schemes do. The flow from a source is adjusted only when it detects a significant change in network workload, such as one or more sources joining or leaving the network.

The idea behind this scheme is to adjust the workload based on the value of the throughput gradient. A throughput gradient is the ratio of the increase in throughput to the increase in offered load. If the throughput gradient at a source is high, the network is under light workload and higher throughput can be achieved if the workload is increased; the source can then increase its output rate. On the contrary, if the throughput gradient is low, the network is already under heavy load, and little can be gained in throughput even if the workload is increased.

The real metric used by the scheme is the Normalized Throughput Gradient (NTG), defined as

    NTG = TG(Wn) / TG(W1),

where TG(Wn) and TG(W1) are the throughput gradients at window sizes Wn and W1 respectively.
TG(Wn) is in turn defined as

    TG(Wn) = (T(Wn) - T(Wn-1)) / (Wn - Wn-1),

where T(Wn) is the throughput at window size Wn.

In this scheme, a source is in one of the following states after it has established a connection with its destination:

1. The initialization state, in which the source sets its window size to one and increases its window size by one whenever an acknowledgement is received, until the maximum window size allowed by the destination is reached or a timeout occurs.

2. The decrease state, which is entered when a timeout occurs. In this state, the window size is reset to one. Then the NTG is checked whenever an acknowledgement is received. The window size is increased by one if the NTG is greater than a threshold NTGd (i.e. the network is lightly loaded). Otherwise the source enters the increase state described below.

3. The increase state, which is entered when the NTG is not over NTGd. In this state, the window size is increased by 1/(current window size) each time an acknowledgement is received. The NTG is checked only if the accumulated increase in window size is larger than one. If the NTG is less than NTGi, another threshold, the window size is reduced by one. Otherwise, nothing is done.

2.5.2 Discussion

This scheme provides a new method to obtain network workload information. Like the Slow-Start scheme, it does not consume any additional network resources except the processing overhead at the sources for calculating the NTGs.

The primary goal of this scheme is to operate the network at its optimal point. First of all, the optimal point of a network has to be determined. Without explicit feedbacks from a network, it is difficult to get exact information about the status of the network to calculate the optimal point.
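The two definitions above translate directly into code. The following sketch assumes the source keeps its window-size and throughput histories as parallel lists, with index 0 being the starting observation; this bookkeeping is our assumption, not part of the Tri-S paper.

```python
def throughput_gradient(T, W, n):
    """TG(Wn) = (T(Wn) - T(Wn-1)) / (Wn - Wn-1).

    T and W are parallel lists of observed throughputs and window sizes.
    """
    return (T[n] - T[n - 1]) / (W[n] - W[n - 1])

def normalized_throughput_gradient(T, W, n):
    """NTG = TG(Wn) / TG(W1)."""
    return throughput_gradient(T, W, n) / throughput_gradient(T, W, 1)
```

For example, if throughput rises from 0 to 1.0 over the first window increment but only from 1.9 to 2.0 over the third, the NTG at the third increment is 0.1, signalling a heavily loaded network.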
Even with explicit feedbacks, it is not certain that the information obtained about a network is accurate, which is why most congestion control/avoidance schemes are satisfied with operating the network near its optimal point. Under simple situations, e.g. only one user transferring a file over a path, it might be possible to obtain exact information about a network. Under more complicated situations, such as those in which more than one user shares the same path entirely or in part, or users send data in opposite directions, it is very difficult, if not impossible, to determine the optimal performance point for a network without explicit feedbacks. It is even more difficult to determine the optimal point in terms of fairness.

It is reported in [10] that even in a simple scenario in which two file transfer users share the same path, network fairness is not guaranteed with this scheme. What is guaranteed is Statistical Fairness, which means that the average share of the bottleneck resources for each user would be more or less equal when the same scenario is repeated a large number of times. However, for any independent run, the shares could be very unevenly distributed.

2.6 Summary

The Source Quench Based scheme is simple but not very effective. In addition, it does not consider the fairness issue when it tries to solve the congestion problem. BBN's scheme has finer control of data flow, but it is not responsive, since the interval for updating routing tables is relatively long for congestion control. The overhead in this scheme is high when the number of switches in the network is large. Furthermore, it only ensures fairness on the basis of each pair of communicating switches, which is not adequate in situations where multiple hosts are attached to one switch and/or multiple transport users reside on one host.
Both the Slow Start scheme and the Tri-S scheme have the problem that users may learn of network congestion asynchronously, which leads to unfair sharing of network resources. With the Slow Start scheme, a considerable amount of network resources would be wasted by dropping packets. With the Tri-S scheme, the information the sources have is more indirect than timeouts, and it is more difficult to synchronize responses at the sources using throughput gradients. In the DEC Bit scheme, synchronization of the sources can be more or less achieved in simple topologies, but in situations where two way traffic exists, the performance of the DEC Bit scheme is poor in terms of both throughput and fairness.

Chapter 3

SEIF - A New Congestion Control/Avoidance Scheme

In Section 3.1 of this chapter, the general philosophies of a new congestion control/avoidance scheme are outlined. The detailed design of the scheme is presented in Section 3.2, followed by a complete description of the scheme in Section 3.3.

3.1 General Philosophies

In light of the advantages and disadvantages of the schemes reviewed in Chapter 2, we would like to design a new scheme which incorporates the advantages and rules out the disadvantages of the explicit feedback schemes and the implicit feedback schemes. Our scheme is based on the following observations:

1. With pure explicit feedback schemes, too many network resources would be consumed by performing control, especially when the congestion occurs at a node that is far away from the source(s) of the congesting packets. In the extreme case, congestion could be made worse by the explicit feedbacks.

2. With a pure implicit feedback scheme, the source(s) of the congesting packets do not learn of the occurrence of the congestion soon enough to take action in time, so
that part of the network bandwidth is wasted by the sources, which continue to put spurious packets into the congested network when they should be reducing their output. This case is especially true when congestion occurs near the destination of the transmission. As we know, the earlier congestion is detected, the lower the cost of recovery, since congestion may grow rapidly if unchecked.

To combine explicit and implicit feedbacks, we need:

1. A traffic control algorithm which can make use of both explicit feedback signals and implicit feedback signals.

2. A feedback control algorithm which can determine when it is worthwhile to send explicit feedback.

With the new scheme, two layers of protocols are involved in congestion control, i.e., the network layer and the transport layer. The feedback messages sent by a congested router belong to the network layer, whereas the congestion control timers are located at the transport layer. The traffic control algorithm is implemented at the transport layer. For the traffic control algorithm to make use of explicit feedbacks, the network layer protocol entity at the source has to pass the corresponding signal up to the right transport layer protocol entity when it receives an explicit feedback from a router.

Our scheme is a combination of an implicit feedback scheme and an explicit feedback scheme. For the implicit feedback, we adopt the Slow-Start scheme, because its practical implementation in TCP/IP protocols has been demonstrated to avoid network performance collapse under heavy load and to ensure network fairness to a certain extent. Although its performance could be improved further, it is still a good scheme considering its low overhead. For explicit feedback, there is no existing scheme which produces satisfactory performance and integrates well with the Slow-Start scheme.
The explicit feedback scheme is supposed to take over in those situations where the Slow-Start scheme does not work effectively. Among these are situations in which congestion occurs at a router which is very close to the affected sources. With the Slow-Start scheme, it will be a long time before the sources detect packet loss, whereas if an explicit feedback scheme is used, the sources will be notified soon after congestion is indicated and can react to the situation promptly. Thus better performance can be expected. Meanwhile, the network resources consumed by the feedback messages would not be much, since they only have to travel a short distance in the network. In the next section, we describe the explicit feedback mechanism used in our new scheme.

3.2 Detailed Design of the Explicit Feedback Mechanism

Decisions have to be made concerning the parameterization of workload and what constitutes congestion, what kind of feedback mechanism to use, and what kind of workload control algorithm to adopt.

3.2.1 Workload Measure

There are many metrics which could be used to measure network workload, among them queue length, resource utilization, and throughput gradient. To select a proper measure, we should consider such aspects as the overhead in collecting and evaluating the relevant data, and the usefulness of the information.

In our opinion, the throughput gradient, which a source calculates based on the amount of increase in workload and throughput, is not a good metric for us. Although it does not cost any additional network resources to carry the necessary data back to the sources, the processing overhead at the sources is nontrivial. Furthermore, since the sources are asynchronous, it is very likely that different sources will have different views of the network condition, which would have a negative effect on network fairness.
Utilization of routers could be used to measure the network workload, but it requires overhead to monitor and compute router utilization. Besides, messages containing utilization information have to be passed around the network, which consumes network bandwidth. Furthermore, in rate based schemes, the sources have to convert the utilization information to bit-per-second based information to perform traffic control. To do so, each source has to keep a database on the capacities of all network resources, and if any change is made to the network, the databases have to be updated. The cost of using utilization in rate based schemes is high.

Instead of average queue length, we choose instantaneous queue length as the measure of network workload, because it is less expensive to monitor. As will be shown later, careful use of this information can produce reasonably good results.

3.2.2 The Feedback Policy

One of the decisions our feedback policy has to make is when feedbacks should be sent. Some of the options are:

• send feedbacks when a packet is dropped,
• send feedbacks when a queue is a certain percentage full, and
• send feedbacks when a router predicts in some other way (perhaps based on the rate of incoming packets, the rate of serving packets in the queue, and the delay in sending explicit feedbacks to the sources) that overflow will occur in the future.

To select a policy, we carried out extensive simulation experiments; the results are shown in Chapter 4. They show that under a wide range of workload conditions, a good threshold for sending warning feedback messages is when the router's buffer space is 80% full.

Because we have adopted instantaneous queue length as the workload measure, a bursty packet arrival pattern may cause a router to send warning messages frequently and unnecessarily.
The unnecessary feedbacks not only consume network resources, but also give the sources wrong information about the situation at the routers. This is exactly what happened to the Source Quench Based scheme. To avoid the transient effects of network traffic, we need a filtering mechanism which can diagnose the real threats but ignore the false warnings.

Before we describe the filtering mechanism used, let us examine the traffic pattern of a network. To gain better insight into the relationship among network traffic flow, the traffic control algorithm, and the feedback policy, we select a simple topology for our analysis. We assume there is only one pair of users in the network, the source and the destination, with one bottleneck router between them. The traffic control algorithm will be presented later in this chapter, but its basic function is to have the source increase its window size when some number of acknowledgements are received, and reduce its window size when a congestion feedback message is received from the bottleneck router or some other congestion indication is present. Because of the window based traffic control algorithm at the source, the plot of bottleneck queue length at the router versus time looks like ocean waves, one peak after another, as shown in Figure 3.1. The points A, B, and C in Figure 3.1 correspond to the points in time at which the queue is 80% full. Without the filtering policy, the router will send feedbacks as long as the state of its queue falls in the region between A and B. As can be observed, after sending a feedback at point A, there is no need to send any more until point C is reached. Our filtering policy is that after sending a feedback, a router disables its feedback mechanism until its queue length drops below the feedback threshold (80%) and then reaches the threshold again.
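The filtering policy just described is a simple hysteresis rule, and can be sketched as follows. The class name and the byte-based interface are ours.

```python
class FeedbackFilter:
    """Hysteresis filter for explicit feedback, as described above.

    After a warning is sent, feedback stays disabled until the queue drops
    below the threshold and then crosses it again.
    """

    def __init__(self, buffer_bytes, threshold=0.80):
        self.limit = threshold * buffer_bytes
        self.armed = True  # may we send a warning right now?

    def on_queue_length(self, qlen_bytes):
        """Return True if a warning feedback should be sent now."""
        if qlen_bytes >= self.limit:
            if self.armed:
                self.armed = False  # disable until the queue drains
                return True
            return False
        self.armed = True  # queue fell below the threshold: re-arm
        return False
```

With a 1000-byte buffer the threshold is 800 bytes: the first crossing fires a warning, further samples above the threshold are ignored, and only after the queue dips below 800 bytes can a new crossing fire again.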
[Figure 3.1: Queue Length vs Time]

The explicit feedback messages are given a higher priority, since it is important to let the sources know as soon as possible that the router is about to become saturated. If a feedback message needs to be sent and the output line is busy, we put it at the head of the queue of the output line so that it is the next to be sent when the line is free.

Another factor affecting the performance of a feedback policy is the way buffers are managed. In some systems, it is assumed that each buffer is large enough to contain a packet of the maximum size allowed by the network and that one packet, no matter how small, occupies one buffer. We call this packet oriented buffer management. This method of managing buffers is simple but expensive in the sense that a large amount of storage space may be wasted. Under light load there is no problem with such a buffer management scheme; under heavy load, however, it can affect network performance adversely. We choose to adopt a byte oriented buffer management method, which allocates buffer space according to the actual sizes of the packets in bytes, so no buffer space is wasted. A router is said to be saturated only when the number of bytes left in the router is smaller than the size of the arriving packet. Even when a router starts to drop packets, there is still a chance that the next arriving packet is small enough to fit into the space left, which is impossible with packet oriented buffer management; in the latter scheme, once a router starts to drop packets, subsequent arriving packets will be dropped until some packets from the queue have been serviced. Table 4.6 in the next chapter illustrates the performance difference between byte oriented and packet oriented buffer management.
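The contrast between the two buffer management methods can be shown with a small admission-counting sketch. The function names and the numbers in the example are ours, chosen to illustrate how packet oriented management wastes most of each buffer on small packets such as acknowledgements.

```python
def byte_oriented_accepts(packet_sizes, capacity_bytes):
    """Count packets accepted when buffer space is charged by actual size."""
    used = 0
    accepted = 0
    for size in packet_sizes:
        if capacity_bytes - used >= size:
            used += size
            accepted += 1
    return accepted

def packet_oriented_accepts(packet_sizes, n_buffers):
    """Count packets accepted when every packet occupies one full buffer."""
    return min(len(packet_sizes), n_buffers)
```

With 4096 bytes of buffer space organized as four 1024-byte buffers, ten 40-byte acknowledgements all fit under byte oriented management, but only four fit under packet oriented management.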
If byte oriented buffer management is adopted, the decongesting algorithm at the router needs to be modified. Although our scheme aims to avoid congestion, dropping packets is still possible in a network because of the bursty nature of network traffic, and a router has to decide which packet should be dropped when its buffer is full. For fairness, we chose the Random Drop algorithm [9]: when a router needs to drop a packet, it randomly selects one from those in the queue as well as the arriving packet. If the arriving packet is selected to be dropped, there is no problem at all. If the packet selected is one of those in the queue, there might be a problem with byte oriented buffer management, because the space vacated might not be large enough to contain the arriving packet. When this happens, the router can either select another packet to drop or just drop the arriving one; in our algorithm, the routers do the latter. With packet oriented buffer management there is no such problem, since it is assumed that a buffer is large enough to contain a packet of any size.

In a two-way traffic situation, sending feedbacks for the forward traffic might cause some packets of the backward traffic to be dropped, although this would seldom happen. If it does happen, the router drops the feedback messages to prevent any negative effect on the backward traffic and relies on the implicit feedback scheme for congestion control. Although relying on implicit feedback might give rise to unfair sharing of the network, and the situation is even worse if some explicit feedback messages get through and some do not, doing so is still better than applying the random drop algorithm indiscriminately, because the latter would cause data packets to be dropped and generate more feedback messages recursively.
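The Random Drop rule under byte oriented buffer management, including the fallback of dropping the arrival when the vacated space is still too small, can be sketched as follows; the function name and the list-of-sizes queue model are ours.

```python
import random

def random_drop_enqueue(queue, capacity_bytes, pkt_bytes, rng=random):
    """Random Drop with byte oriented buffers, as described above.

    `queue` is a list of queued packet sizes in bytes. A victim is chosen
    uniformly from the queued packets plus the arriving one; if dropping the
    chosen queued packet still would not free enough room, the arrival is
    dropped instead. Returns True if the arriving packet is accepted.
    """
    free = capacity_bytes - sum(queue)
    if free >= pkt_bytes:
        queue.append(pkt_bytes)
        return True
    victim = rng.randrange(len(queue) + 1)  # index len(queue) = the arrival
    if victim == len(queue) or free + queue[victim] < pkt_bytes:
        return False                        # drop the arriving packet
    del queue[victim]                       # drop the chosen queued packet
    queue.append(pkt_bytes)
    return True
```

Passing an `rng` object makes the drop decision injectable, which is convenient for deterministic testing of the policy.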
However, this situation is difficult to simulate in the REAL simulator, and as it happens infrequently, the decision should not have much impact on the overall performance. If a more intelligent selective drop algorithm were implemented at the routers, it could be applied when feedback messages for the forward traffic cause packets of the backward traffic to be dropped; for example, dropping some redundant acknowledgements would not hurt anyone. However, the overhead of implementing such a selective drop algorithm would not be negligible.

Routers are in a good position to enforce network fairness. Fair Queueing [7] and Virtual Clock [30] are two of the algorithms proposed to enforce network fairness at the routers. We prefer an algorithm which is simple but which works. Since we are going to adopt an additive increase/multiplicative decrease algorithm at the sources, network fairness is achievable if the sources can be synchronized [4] [14]. In our scheme, the routers are the ones that send the synchronizing signals. When a router wants to send feedback messages, it sends them to all the sources which originate traffic through the queue. As such, all affected sources increase and decrease their window sizes at almost the same time, so a fair sharing of network resources can be achieved. Note, however, that there may be situations in which this synchronization is hard to achieve, e.g. when the sources are at different distances from the router.

To send feedbacks to the right sources, a router has to keep a table of the sources sharing its queue. A router that has a queue for each of its output links has to keep multiple tables. A router has to reset its table(s) of sources at the beginning of every routing interval and build up the table(s) while it routes packets. One simple way to build a source table is to check the source of each packet when it arrives. If the source is not in the table, it is inserted,
otherwise nothing needs to be done.

If feedback messages are sent for a reason other than a packet drop, all of them carry a sequence number of -1. If feedback messages are sent because a packet was dropped, only the feedback message sent to the source of the dropped packet contains the sequence number of the discarded packet, for retransmission; the other feedback messages sent to the sources sharing the router carry a sequence number of -1.

3.2.3 Workload Control

Our workload control algorithm is a modification of the one used in the Slow Start scheme. In the Slow Start scheme, there are two kinds of congestion indication, i.e. timeouts and the receipt of triple acknowledgements for a single packet, both of which are taken to mean the packet was dropped. In our scheme, there are three kinds of congestion indication: besides the two mentioned above, we also have explicit feedbacks from the routers.

Whereas with the Slow Start scheme a source's window is cut back to 1 if a feedback signal is received, in our scheme the window is reduced to a certain percentage of its old value. The reason is that when congestion at a router is detected, all the sources sharing the same bottleneck are asked to cut back their windows at the same time, so each source does not have to cut back as much as it does with the Slow Start scheme. Another reason is that each affected source is asked to reduce its load when a router is about to be congested, not after the congestion has occurred. We chose 50% as the percentage cut back (i.e. binary exponential decrease), because this value is a good compromise between the desire to keep network throughput high and the requirement of a quick convergence to a fair share of network resources. Extensive simulation supports this empirical decision (see Section 4.5 and Tables 4.8 to 4.10).
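The resulting cut-back rule can be stated in one line; the floor of one packet in this sketch is our assumption.

```python
def react_to_congestion(window):
    """Cut the window to 50% of its old value on any congestion indication
    (timeout, triple duplicate acks, or explicit router feedback), instead
    of resetting it to 1 as the Slow Start scheme does."""
    return max(1, window // 2)
```

Because all sources sharing the bottleneck apply this halving at nearly the same time, the aggregate load drops sharply while each individual source gives up only half of its window.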
Since we combine the explicit feedback scheme with the implicit feedback scheme, a source might receive both explicit and implicit feedbacks for a single dropped packet if the explicit feedback message cannot get back to the source and disable the implicit feedback in time. The implicit feedback can be either a timeout event or the receipt of three consecutive identical acks. The latter is used by the Slow Start scheme to implement its Fast Retransmission mechanism. This mechanism was motivated by the observation that timeout occurs quite a while after a packet is dropped; some auxiliary mechanism must be provided to detect packet drops so that the sources can reduce their load sooner and fewer network resources are wasted. If the parameter is set to three, as it was in the Slow Start scheme, the current window size of a source has to be greater than three for the Fast Retransmission mechanism to serve its intended purpose.

If a source receives both implicit and explicit feedbacks, it will not get its fair share of network resources and its throughput rate will suffer. Therefore, the source must be made to react to only one of these congestion signals. A source that receives an explicit feedback with a sequence number less than zero simply cuts back its window size, because no packet from it has been dropped. If the sequence number is greater than 0, the source takes one of two actions. If the dropped packet has already been retransmitted due to implicit feedback, the source does nothing. Otherwise the source cuts back its window size, retransmits the packet if its window is open, and disables implicit feedback for the dropped packet. If an implicit feedback such as a timeout event is present, a source reacts as if it had received an explicit feedback with a sequence number greater than zero.
In addition, the source marks the retransmitted packet as "retransmitted due to implicit feedback".

3.3 The Synchronous Congestion Control/Avoidance Scheme with Explicit and Implicit Feedbacks

Our scheme is a Synchronous congestion control/avoidance scheme with Explicit and Implicit Feedbacks (SEIF). From now on, it will be referred to as the SEIF scheme. The SEIF scheme consists of two major algorithms, i.e. the algorithm implemented at the routers and the algorithm implemented at the sources. The two algorithms are as follows:

• Router Algorithm

Each router monitors its output queue(s).

- If the length of any queue (in bytes) exceeds the feedback threshold, which is currently set at 80% of the total buffer space (in bytes) of that queue, and the explicit feedback mechanism is not disabled,
  * then the router sends feedbacks to all the sources which originate traffic through this queue in the current routing interval, and disables the explicit feedback mechanism until its queue length drops below the feedback threshold and then reaches the threshold again;
  * else the router does nothing.
- If the buffers are full, the router randomly selects a packet to drop. If the selected packet is smaller than the one that is just arriving,
  * then the router drops the arriving packet,
  * else the router drops the selected packet.

• Source Algorithm

Our traffic control algorithm at the sources is based on that of the Slow Start scheme, although some modifications have been made. A source can be in any one of the following states during its lifetime.

- The Initialization State

When a source first starts data transfer, it starts with a window size of 1. It increases its window size by one upon receipt of an acknowledgement.
The source continues its operations in this state until its window size reaches a transition point, whose value is one half of the default maximum window size of the protocol implementation, or a congestion control/avoidance signal (feedback) is received. If the window size reaches the transition point, the source enters the Stable State. If a congestion control/avoidance signal is received, the source enters the Cut Back State.

- The Stable State

In this state, a source slows down the increase of its window size. Instead of increasing its window exponentially, it increases its window size linearly, i.e. the window is increased by one only when a whole window of acknowledgements has been received. This is an attempt to absorb the network resources left without saturating the network. If a congestion control/avoidance signal is received, the source enters the Cut Back State.

- The Cut Back State

In this state, the network would be saturated if the offered load were not reduced. A source cuts back its window size by half and then enters the Stable State.

3.4 Summary

A new congestion control/avoidance scheme has been proposed in this chapter. Both explicit and implicit feedbacks are used in the SEIF scheme to perform congestion control/avoidance. The main objectives of SEIF are high network throughput with only moderate overhead, and fair sharing of network resources. The first objective is achieved by minimizing the number of packets dropped and avoiding unnecessary feedbacks. The fairness problem is attacked by synchronizing the control actions at the sources.

Chapter 4

Performance Of SEIF's Design Considerations

In this chapter, we present a performance study on the design of the SEIF scheme using simulation. In Section 4.1, the simulator package used is briefly described.
In the remaining sections, performance studies concerning the use of alternative policies and threshold values, such as feedback policies, filtering policies, buffer managements, and cut back parameters of window sizes, are described.

4.1 Introduction To The Network Simulator

The network simulator used is the REAL package from the Department of Computer Science, University of California at Berkeley. REAL is based on the Nest simulation testbed from Columbia University. To use the REAL network simulator, it is not necessary to know a lot about Nest. With the simulator, a network is modeled as a graph, where network nodes, which could be sources, destinations, or routers, are mapped to the vertices, while the output links connecting the network nodes are mapped to the edges. To simulate a network, one needs to specify the scenario of the network, which includes the topology of the network, the role of each node, the protocol each node simulates, the workload at each source, and the control parameters such as the maximum window size at the sources, the buffer sizes at the routers, and the capacities and latencies of the links, in a specification file in the NetLanguage provided. The simulator takes the specification file as its input. The simulator comes with functions that simulate 20 different protocols and workload types, 5 different queueing disciplines, and two types of destinations. Each of them is simulated by a node function.

The 20 source node functions are organized into five suites: the 'vanilla', JK, DEC, 'malicious', and 'miscellaneous' suites. The 'vanilla' suite implements the basic features of the transport layer protocols, e.g., window based flow control and timeout based go-back-n error control. It does not perform congestion control. The three node functions in this suite implement the ftp (file transfer protocol), telnet (virtual terminal protocol), and pulse workload types respectively.
The JK suite implements the TCP protocol with the embedded Slow Start congestion control scheme; that is, a source can adjust its flow control window dynamically as described in Section 2.3.1. There are five node functions in this suite. In addition to the three corresponding to those of the first suite, the other two are jk_mftp and jk_rate. jk_mftp is based on jk_ftp with some modifications, while jk_rate is an experimental hybrid window/rate congestion control source. The DEC suite implements four types of sources, i.e., ftp, telnet, pulse, and rate based sources. The other two suites will not be mentioned here, since they are not used in this thesis.

The five types of routers and the queueing disciplines they implement are as follows:

1. fcfs_router: implements FCFS queueing.

2. fq_router: implements the Fair Queueing algorithm.

3. fqbit_router: the same as fq_router, except that it does bit-setting on packets.

4. hrr_router: implements the HRR scheme[13].

5. dec_router: implements DEC bit-setting and FCFS queueing.

As for the sinks, we only use one of them, i.e. sink.c. It receives data from any kind of source and sends an acknowledgement back for each data packet.

A large number of protocols can be simulated and their performance can be evaluated and compared using the REAL package. It is helpful to study the performance of various protocols and identify their deficiencies with the simulator. What interests us most is that the simulator can simulate the two representative congestion control/avoidance schemes, i.e., the Slow Start scheme and the DEC Bit scheme. To compare the performance of the SEIF scheme with these schemes, we only need to write two new node functions describing the policy for our source and router. The NetLanguage in which scenario specification files are written is easy to use. The simulator is easy to use and modify. The simulator has its disadvantages, however.
Because Nest does not provide a direct timer mechanism, to implement a timer, a source node of REAL sends a packet addressed to itself which returns after the timeout value. In fact, this is done by the output channel of the source. When the channel detects this packet type from the source, it holds the packet for the period of time specified in a field of the packet and then sends it back to the source. Such an emulated timer is very inconvenient to use, because once the timer is set, it cannot be reset or disabled. To achieve the effect of disabling a timer, the owner of the timer has to assign every timer packet a unique id. This can be done by increasing the timer id by one and stamping the id on a timer packet before sending it out. When the owner receives a timer packet, it checks the current id against that on the packet. If they do not match, the timer has been disabled; otherwise, it is the real one. REAL supports only one timer for each node at a time.

To simulate more realistic networks, several universal control parameters of REAL, such as buffer_size, ftp_pkt_size, telnet_pkt_size, and ack_size, should be localized. Once these control parameters are set in a specification file, during the whole simulation, all the routers have the same buffer size, all the sources use the same ftp packet size or telnet packet size, and all the sinks use the same acknowledgement size.

Another shortcoming of the simulator is that it does not keep track of protocol processing overhead (i.e., the overhead is assumed to be zero), as the output links are assumed to be the primary bottlenecks. Therefore its results are biased towards complicated, high overhead algorithms, such as the DEC Bit algorithm.

To simulate the SEIF scheme with this simulator, we have to write our own source node function and router node function.
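The timer-id workaround described earlier in this section can be sketched as follows. This is an illustrative Python model of the idea only, not REAL's actual C code; the class and field names are invented for the example.

```python
class TimerOwner:
    """Models a node that 'disables' emulated timers via unique ids.

    The in-flight timer packet cannot be recalled, so disabling a timer
    simply advances the current id; a returning packet whose stamped id
    is stale is then ignored.
    """

    def __init__(self):
        self.current_id = 0

    def set_timer(self):
        # Each new timer packet is stamped with a fresh id.
        self.current_id += 1
        return {"timer_id": self.current_id}

    def disable_timer(self):
        # Advance the id so the outstanding packet becomes stale.
        self.current_id += 1

    def on_timer_packet(self, pkt):
        # The timer "fires" only if the returning packet's id is current.
        return pkt["timer_id"] == self.current_id
```

Because only one id is kept, this also reflects REAL's restriction of one timer per node at a time.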
Our source node is based on the JK source node, since our traffic control algorithm is a modification of that of the Slow Start scheme. Besides the features provided by the JK source node, our source node also supports features such as the ability to make use of either explicit or implicit feedbacks (but not both), and the option to cut windows back to a value other than 1. Our router node is based on the fcfs_router with the explicit feedback mechanism added. Mechanisms to manage buffers in bytes and priority queues are also added to the simulator, which is further modified to allow for timers at routers. There are many other minor modifications which will not be listed here.

4.2 Comparison Among Different Feedback Policies

a) Reacting vs Predicting Feedbacks

As mentioned in Section 3.2.2, there are two major types of feedback policies - reacting feedback and predicting feedback. The shortcoming of the reacting feedback scheme is that a network is already in the congested state when feedbacks are sent. It is obvious that the sooner (potential) congestion is detected, the less effort is required to resolve the problem. Consequently, fewer network resources would be wasted and higher network throughput could be achieved. The feedback policy used by ICMP belongs to the first class; that is, a router sends feedbacks only when a packet is dropped. The current trend, however, is to apply predicting feedback policies. There are two kinds of predicting feedback policies, the static feedback policy and the dynamic feedback policy. The static policy has a static threshold above which feedbacks are sent. A dynamic policy does not have such a threshold; it decides whether or not to send feedbacks based on the current workload situation. Our scheme supports both. A router implementing our static feedback policy sends feedbacks when its queue is a certain percentage full.
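Combined with the disable/re-arm rule from the router algorithm in Section 3.3 (feedbacks stay disabled until the queue drains below the threshold and reaches it again), the static policy can be sketched as follows. This Python fragment is illustrative only; the 80% threshold is the value used in the thesis, while the class and method names are invented for the example.

```python
FEEDBACK_THRESHOLD = 0.8   # fraction of the queue's total buffer space (bytes)

class StaticFeedbackQueue:
    """Output queue that emits one feedback burst per threshold crossing."""

    def __init__(self, capacity_bytes):
        self.capacity = capacity_bytes
        self.length = 0          # current queue length in bytes
        self.armed = True        # explicit feedback mechanism enabled?

    def enqueue(self, pkt_bytes):
        """Account for an arriving packet; return True if feedbacks
        should be sent to all sources routing through this queue."""
        self.length += pkt_bytes
        if self.length <= FEEDBACK_THRESHOLD * self.capacity:
            return False
        if self.armed:
            self.armed = False   # disable until the queue drains below threshold
            return True
        return False

    def dequeue(self, pkt_bytes):
        self.length -= pkt_bytes
        if self.length < FEEDBACK_THRESHOLD * self.capacity:
            self.armed = True    # re-arm once below the threshold
```

With a 10 x 512 byte buffer, the first feedback burst is triggered by the packet that pushes the queue past 4096 bytes; further arrivals above the threshold trigger nothing until the queue has drained below 80% once.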
As can be seen, this policy is easy to implement and its overhead is low. A router implementing our dynamic feedback policy monitors the average arrival rate of packets in bits per second and the remaining empty buffer space in bytes. The router sends feedbacks when it predicts that its buffers would overflow in the future if no feedbacks were sent.

We compared the reacting feedback policy (congestion control) with our static (congestion avoidance) feedback policy first. The parameter we selected for our static feedback threshold was 80%, i.e. a router sends feedbacks when its queue is 80% full. The reason behind this choice is that if a larger parameter, such as 90%, is selected, some packet drops would not be avoidable because of the burstiness of network traffic and the delay between sending feedbacks and the resulting reduction in load by the sources. If a smaller parameter, such as 50%, is chosen, the feedback threshold would be reached so often that many unnecessary feedback messages would be sent. 80% was derived empirically as the best threshold over a wide range of traffic conditions and maximum buffer sizes.

We show one set of experiment results in Table 4.1. The topology for this set of experiments is similar to that depicted in Figure 4.2. The only difference is that there are only two pairs of users in this case. The two sources send data in opposite directions. Network parameters are as follows:

• Data Packet Size = 512 bytes
• Acknowledgement and Feedback Packet Size = 50 bytes
• Maximum Window Size = 30 packets
• Buffer Size = 10 x 512 bytes
• Bandwidth of links SR1 and DR6 = 128 kbps
• Latency of links SR1 and DR6 = 0
• Bandwidth of other links = 64 kbps
• Latency of other links = 3000 ms

From the experiment results, we can see that 80% is a good feedback threshold to use. It produced much higher throughput than the lower thresholds.
Although 90% produced slightly higher throughput than 80%, the possibility that 90% would cause packet loss is also higher than with 80%.

Threshold    Throughput
50%          79.05 kbps
70%          92.30 kbps
80%          101.72 kbps
90%          104.17 kbps

Table 4.1: Results of Different Feedback Thresholds

To observe the effects of different feedback policies, we have conducted a number of simulation experiments with different network topologies and parameters. We present the results of three sets of experiments. In the first two sets, we compare our static feedback policy with the reacting feedback policy. In the third set, we compare our static feedback policy with our dynamic feedback policy. We selected train topologies for almost all the experiments we performed, because this kind of topology is simple, so the effect of different algorithms can be observed clearly. Furthermore, we are only concerned with the sources sharing the same bottleneck when we study network congestion. Train topologies provide what we need. More complicated network topologies can eventually be reduced to train topologies.

For the first set of experiments, we selected the simple network topology depicted in Figure 4.1.

Si - Source i, Di - Destination i, Ri - Router i
Figure 4.1: Topology One

There are six routers, one source, and one destination in this simple network. The source simulated is doing file transfer, i.e., it sends packets out as long as its flow control window is open. The First Come First Served queueing discipline is implemented at the routers. Packets are dropped with the Random Drop decongesting mechanism when the buffers are full. The sink does not withhold any acknowledgement. That is, it sends an acknowledgement as soon as it receives a packet which is in order and has not been received before. All output links are full duplex.
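The Random Drop decongesting mechanism, together with SEIF's size rule from Section 3.3 (compare the randomly selected packet with the arriving one and drop the larger), can be sketched as follows. The Python fragment is illustrative only; packets are modeled as (id, size) tuples.

```python
import random

def random_drop(queue, arriving):
    """Random Drop with SEIF's size rule.

    Picks a random queued packet; if it is smaller than the arriving
    packet, the arriving packet is dropped, otherwise the selected
    packet is dropped and the arriving one takes its place.
    Returns the packet that was dropped.
    """
    victim = random.choice(queue)
    if victim[1] < arriving[1]:
        return arriving                        # arriving packet is dropped
    queue[queue.index(victim)] = arriving      # victim replaced by arriving
    return victim
```

The effect is that, under overload, the larger of the two candidate packets is always the one discarded, so small packets such as acknowledgements tend to survive.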
For the three experiments conducted in this set, the common network parameters are as follows:

• Data Packet Size = 512 bytes
• Acknowledgement and Feedback Packet Size = 50 bytes
• Maximum Window Size = 30 packets
• Buffer Size = 10 x 512 bytes

What distinguishes the three experiments are their assignments of link parameters. The following are the link parameter assignments for the three experiments:

1. Experiment 1
• Bandwidth of links SR1 and DR6 = 128 kbps
• Latency of links SR1 and DR6 = 0
• Bandwidth of other links = 64 kbps
• Latency of other links = 3000 ms

2. Experiment 2
• Bandwidth of link R3R4 = 64 kbps
• Bandwidth of other links = 128 kbps
• Latency of all links = 3000 ms

3. Experiment 3
• Bandwidth of link R3R4 = 64 kbps
• Latency of link R3R4 = 3000 ms
• Bandwidth of other links = 10 mbps
• Latency of other links = 0 ms

The links between the routers are the bottlenecks in Experiment 1, while link R3R4 is the bottleneck in Experiments 2 and 3. The simulation time is 80 seconds. The statistics of the first 20 seconds are ignored to remove the start-up effect. The simulation results are shown in Table 4.2.

                          Throughput (kbps)   No. Packets Dropped   Throughput Improvement
Experiment 1
  Congestion Control      59.62               20
  Congestion Avoidance    63.97               0                     7.30%
Experiment 2
  Congestion Control      58.25               30.33
  Congestion Avoidance    63.97               0                     9.82%
Experiment 3
  Congestion Control      57.23               46.67
  Congestion Avoidance    63.97               0                     11.78%

Table 4.2: Results of Set 1 of Experiments

For the second set of experiments, we selected a more complicated network topology, depicted in Figure 4.2.

Si: Source i, Di: Destination i, Ri: Router i
Figure 4.2: Topology Two

There are two more pairs of sources and destinations in scenario 2. Sources S1 and S2 send in the same direction, while Source S3 sends in the other direction. In this set, we also conducted three experiments.
The parameter settings of the three experiments are the same as those of their counterparts in set 1. The results of this set of experiments are listed in Table 4.3.

From the simulation results, the superiority of the predicting feedback policy is obvious. The amounts and ratios of throughput improvement of congestion avoidance range from 4.35 kbps to 11.58 kbps and from 5.67% to 11.88% respectively. More remarkably, the overhead of performing congestion avoidance is much lower. With congestion avoidance, no packet was dropped in most cases, whereas with congestion control, packets were dropped in all the situations simulated. Moreover, the network dropped up to 71 data packets in a period of 60 seconds. These packets had consumed a considerable amount of network resources before they were dropped.

                          Throughput (kbps)   No. Packets Dropped   Throughput Improvement
Experiment 1
  Congestion Control      95.03               25
  Congestion Avoidance    100.42              0                     5.67%
Experiment 2
  Congestion Control      101.56              26
  Congestion Avoidance    107.32              6.33                  5.67%
Experiment 3
  Congestion Control      97.44               71
  Congestion Avoidance    109.02              0                     11.88%

Table 4.3: Results of Set 2 of Experiments

Another benefit of congestion avoidance is that there is less traffic fluctuation in the network. The Window Size vs Time figures of the source in Experiment 1 of set 1 are shown in Figure 4.3 and Figure 4.4. They are the results of performing congestion avoidance (predicting feedback) and performing congestion control (reacting feedback) respectively.

Figure 4.3: Window vs Time: Congestion Avoidance

b) Static vs Dynamic Predicting Feedbacks

We now compare the static feedback policy with the dynamic feedback policy. There is no fixed threshold in the dynamic feedback policy upon which feedback decisions are made. Instead, a formula is used to determine when feedback should be sent.
(λ − μ) × Δt > b,     (4.1)

where λ is the average arrival rate of packets at a router, μ is the bandwidth of the output link of the router, Δt is the period of time from the point the router sends feedback to the point the router notices a reduction in packet arrival rate (which is approximately equal to the round trip delay between the router and the sources), and b is the amount of buffer space left when the formula is evaluated. If feedbacks are sent as soon as the condition is satisfied, the reduction in traffic will take place before the buffers overflow.

Figure 4.4: Window vs Time: Congestion Control

The packet arrival rate and the round trip delay between the router and the sources are hard to obtain accurately. We can only make estimations. In order to accommodate estimation error, we adopt the formula

(λ − μ) × Δt > b × α     (4.2)

instead, where 0 < α < 1. We experimented with different values for α and will only present results for α = 0.8 and α = 0.5 here. We selected scenario 2 again to perform our simulations. The network parameters were the same as those of Experiment 1 in set 2, except for the buffer size. We experimented with different buffer sizes, ranging from 2 x 512 bytes to 10 x 512 bytes. The results are listed in Table 4.4.

The dynamic feedback policy is more intelligent and its feedbacks should reflect network situations more accurately. We therefore expected better performance than with static feedbacks. However, the simulation results contradicted our expectations. In our opinion,
there are three reasons behind this. First of all, the average arrival rate of packets and the round trip delay between a router and the sources are not estimated accurately enough, and a more accurate algorithm is difficult to develop. Secondly, there is a limit to what a window based flow control algorithm can do. It does have control over how many packets are pumped into the network per round trip delay period, but not over the pattern of the packet stream or the size of each packet. With this algorithm, the sources may send out a whole window of packets back to back at the beginning of a round trip delay period and keep quiet for the rest of the period, which might invoke unnecessary feedbacks. Thus the network would be under-utilized. With the window based flow control algorithm, it is difficult to tune the system to obtain optimal performance at all times. The third reason is that network performance has been improved by the static feedback scheme to an extent where further improvement is difficult.

We chose the static feedback policy for its implementation simplicity, low overhead, and satisfactory performance.

Buffer Size       Static    Dynamic 1 (α = 0.8)   Dynamic 2 (α = 0.5)
10 x 512 bytes    100.42    101.65                98.94
8 x 512 bytes     94.48     97.74                 93.71
6 x 512 bytes     83.42     84.15                 83.54
4 x 512 bytes     71.22     69.15                 69.50
2 x 512 bytes     54.82     54.73                 54.09

Table 4.4: Static Feedback Policy vs Dynamic Feedback Policy (throughput in kbps)

4.3 Comparison Of Feedback Generation Filtering Mechanisms

A filtering mechanism determines whether a given feedback should be sent or discarded. Without the filtering mechanism, a router might send redundant feedbacks which consume additional network resources and lead to lower network throughput.
To demonstrate this point, we performed two experiments. The results are listed in Table 4.5.

• Experiment 1
1. Topology: as shown in Figure 4.1
2. Network Parameters: the same as those of Experiment 1 in Section 4.2

• Experiment 2
1. Topology: as shown in Figure 4.2
2. Network Parameters: the same as those of Experiment 2 in Section 4.2

As can be seen, with the filtering mechanism, network throughput is improved slightly. However, the difference in the amount of feedbacks generated by the two mechanisms is significant. Take Experiment 1 for example. With the filtering mechanism, the number of feedback messages generated in the period from the 20th second to the 80th second was 10. There were also 10 peaks in the region between the 20th second and the 80th second in Figure 4.3. That is, there was no redundant feedback message, as one feedback message corresponds to one peak. Without the filtering mechanism, the number of feedback messages was 16 and the number of peaks in the corresponding region in Figure 4.5 was 8. That is, two feedback messages were sent for each peak. Therefore half of the feedback messages were redundant.

                             Throughput
Experiment 1   nonfiltering  63.35 kbps
               filtering     63.97 kbps
Experiment 2   nonfiltering  102.35 kbps
               filtering     107.32 kbps

Table 4.5: Filtering vs Nonfiltering

Figure 4.5: Window vs Time: Without Feedback Generation Filtering

4.4 Comparison Of Byte vs Packet Oriented Buffer Management

In this section, we present the performance difference between Byte Oriented Buffer Management (BOBM) and Packet Oriented Buffer Management (POBM). Table 4.6 contains the comparison results. The topology we selected for this experiment was similar to the one shown in Figure 4.2; they differed only in the number of pairs of sources and destinations.
In this case, there were two pairs. The two sources sent packets in opposite directions. The motivation for selecting such a scenario is to combine packets of different sizes into one packet stream, so that the performance difference of the two methods of managing buffers can be observed. All network parameters except for the buffer size were the same for both experiments one and two. The buffer sizes were 20 x 512 bytes and 10 x 512 bytes for experiment one and experiment two respectively. The other parameters were identical to those of Experiment 1 in set 1 in Section 4.2.

               Buffer Size       Byte Oriented   Packet Oriented
Experiment 1   20 x 512 bytes    114.82 kbps     112.09 kbps
Experiment 2   10 x 512 bytes    101.72 kbps     81.24 kbps

Table 4.6: Bytes vs Packets

The performance of BOBM and POBM is as expected. In Experiment 1, where buffer
was not the limiting resource, there was no big performance difference between BOBM and POBM (2.44%). However, in Experiment 2, where buffer space was scarce, the performance difference was significant (25.21%).

4.5 Performance Comparison Of Different Cut Back Parameters

When congestion is predicted or detected, a window based control/avoidance scheme reduces the senders' window sizes by some cut back parameter. The smaller the cut back parameter, the smaller the window size becomes. Because of the exponential increase in window size in the initial or restart stage of the flow control algorithm, the difference in network throughput produced by applying different cut back parameters is not significant in a one way traffic situation. However, in a two way traffic environment, network throughput varies a lot depending on the value of the cut back parameter. The reason is that when a source is asked to reduce its window size by a router, it has the same number of outstanding packets in the network no matter what value is selected for the cut back parameter. The same number of acknowledgements will come back if no packet is dropped.

In one way traffic situations, there is no other traffic competing with the acknowledgements on their way back to their sources. The acknowledgements can come back quickly, triggering increases in the window sizes of the sources. Therefore, irrespective of the amount of window reduction, the sources are able to open up the window again quickly. The throughput difference between using a large cut back parameter and a small cut back parameter is not significant; in the examples that will be presented later, we cannot even detect the difference.

In two way traffic situations, however, because the acknowledgements of one data packet stream have to compete with the other data traffic for network resources on their way back to their sources, it takes a relatively long time for the sources to restore their window sizes to where they were before the cut back if a small cut back parameter is adopted. A small cut back parameter would keep the sources using small window sizes for a relatively long time, whereas a large cut back parameter would prevent the sources from falling into that state; thus higher throughput could be achieved.

Intuitively, it would appear that a small cut back parameter would drive the affected sources to converge to a fair share of resource usage sooner than a large one. This is however not true. Even though the reduction in window size is smaller for large cut back parameters, more feedbacks will be produced, which will drive the source windows to fluctuate within the same range of values in approximately the same period of time as a small cut back parameter would (see Table 4.8 and Table 4.10).
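The cut back parameter can be read as the fraction of the window that survives a cut back: 0.0 resets the window to the minimum of 1, 0.5 halves it, and 0.9 keeps 90% of it. A minimal illustrative sketch in Python (the function name and truncation rule are assumptions for the example):

```python
MIN_WINDOW = 1   # the minimum window size is 1

def cut_back(window, parameter):
    """Shrink a congestion window by the given cut back parameter.

    parameter = 0.0 resets the window to 1; parameter = 0.5 halves it;
    a large parameter such as 0.9 keeps 90% of the window, so more
    feedback rounds are needed for the same overall reduction.
    """
    return max(MIN_WINDOW, int(window * parameter))
```

Starting from a 30-packet window, a parameter of 0.5 yields 15 packets, while 0.9 yields 27 packets, which is why large parameters trigger more feedback messages in Tables 4.7 to 4.10.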
The only problem with a large cut back parameter is the overhead it would cause. The simulator takes the bandwidth overhead caused by sending feedbacks into consideration, but not the processing overhead. Therefore, the simulation results would favor smaller cut back parameters. To be fair with the other schemes against which our scheme will be compared, we selected 0.5 as our cut back parameter. As can be seen in Tables 4.7 to 4.10, this value can produce throughput as good as or better than 0.0 can, while causing about the same amount of feedback messages. The minimum window size is 1; thus, the window size would be reset to 1 if a cut back parameter of 0.0 were applied.

We present the results of three experiments here. We performed Experiment 1 to observe the effects of different cut back parameters on network throughput in one way traffic situations, Experiment 2 to observe the effects on the degree of network fairness, and Experiment 3 to observe the effects on network throughput in two way traffic situations.

• Experiment 1
— Topology: as depicted in Figure 4.1
— Network Parameters: the same as those of Experiment 1 of set 1 in Section 4.2, except that the latency of all links is 0 here.

• Experiment 2
— Topology: as depicted in Figure 4.2
— Network Parameters: the same as those of Experiment 1 of set 2 in Section 4.2, except that the latency of all links is 0 here and source S3 does not send any data.

• Experiment 3
— Topology: as depicted in Figure 4.2
— Network Parameters: the same as those of Experiment 1 of set 2 in Section 4.2, except that the latency of all links is 0 here.

The result of Experiment 1 is listed in Table 4.7. We see that, in a one way traffic situation, the throughputs produced by different cut back parameters are the same, and the amount of feedbacks produced by a cut back parameter larger than 0.7 is much higher than that produced by a smaller parameter.
The result of Experiment 2 is presented in Table 4.8, and the fairness variances are listed in Table 4.9. The fairness variance is a measure of how fairly network resources have been utilized by the competing sources. The definition is given in Section 5.1. We see that using different cut back parameters did not cause a significant difference in the degree of network fairness. The result of Experiment 3 is presented in Table 4.10. For the corresponding fairness variances produced by different cut back parameters, please refer to Table 4.11.

Cut Back    Network Effective   No. of Feedback   Bandwidth Overhead
Parameter   Throughput          Messages          of Feedbacks
0.0         64.00 kbps          5                 33.33 bps
0.3         64.00 kbps          5                 33.33 bps
0.5         64.00 kbps          5                 33.33 bps
0.6         64.00 kbps          6                 40.00 bps
0.7         64.00 kbps          7                 46.67 bps
0.8         64.00 kbps          10                66.67 bps
0.9         64.00 kbps          19                126.67 bps

Table 4.7: Result of Exp. 1

Cut Back    Throughput    Throughput    Network      No. of Feedback   Bandwidth Overhead
Parameter   of Source 1   of Source 2   Throughput   Messages          of Feedbacks
0.0         31.74 kbps    32.22 kbps    63.97 kbps   18                120.00 bps
0.3         32.15 kbps    31.81 kbps    63.97 kbps   18                120.00 bps
0.5         31.81 kbps    32.15 kbps    63.97 kbps   20                133.33 bps
0.6         32.02 kbps    31.95 kbps    63.97 kbps   22                146.67 bps
0.7         31.47 kbps    32.49 kbps    63.97 kbps   28                186.67 bps
0.8         31.33 kbps    32.63 kbps    63.97 kbps   38                253.33 bps
0.9         31.61 kbps    32.36 kbps    63.97 kbps   76                506.67 bps

Table 4.8: Result of Exp. 2

Cut Back Parameter   0.0      0.3      0.5      0.6      0.7      0.8      0.9
fv                   0.0576   0.0289   0.0289   0.0012   0.2601   0.4225   0.1406

Table 4.9: Fairness Variances of Exp. 2

We can see that there is a noticeable throughput difference between different cut back parameters in a two way traffic situation.
It seems that the total throughput in either direction is more or less equal, and that a larger cut back parameter would improve the sum of the bidirectional throughputs slightly, even though the overhead consumed by the feedback messages increases. We chose 0.5 as our cut back parameter because it produced high throughput without causing too much overhead.

Cut Back    Throughput    Throughput    Throughput    Network       No. of Feedback   Bandwidth Overhead
Parameter   of Source 1   of Source 2   of Source 3   Throughput    Messages          of Feedbacks
0.0         26.49 kbps    27.31 kbps    52.36 kbps    106.15 kbps   15                100.00 bps
0.3         26.62 kbps    25.87 kbps    52.36 kbps    104.86 kbps   17                113.33 bps
0.5         28.13 kbps    28.47 kbps    53.11 kbps    109.70 kbps   17                113.33 bps
0.6         27.65 kbps    27.78 kbps    55.02 kbps    110.46 kbps   20                133.33 bps
0.7         27.92 kbps    27.85 kbps    57.14 kbps    112.91 kbps   25                166.67 bps
0.8         28.81 kbps    29.08 kbps    56.73 kbps    114.62 kbps   35                233.33 bps
0.9         28.95 kbps    29.35 kbps    58.16 kbps    116.46 kbps   55                366.67 bps

Table 4.10: Result of Exp. 3

4.6 Summary

Following a brief introduction to the REAL network simulator package, we presented performance studies on the design considerations for the SEIF scheme. Performance differences of reacting and predicting feedback policies, and of static and dynamic predicting feedback policies, were illustrated. We saw that predicting feedback produced higher throughput than reacting feedback, for the reason that the latter dropped packets while the former did not. The maximum throughput improvement of predicting feedback over reacting feedback reached 11.88% in a two way traffic situation we simulated. The dynamic feedback policy did not bring about significant throughput improvement over the static one for three reasons.

Cut Back Parameter   0.0      0.3      0.5      0.6      0.7      0.8      0.9
fv                   0.1681   0.1406   0.0289   0.0042   0.0012   0.0182   0.0400

Table 4.11: Fairness Variances of Exp. 3
First of all, an accurate algorithm is yet to be developed to estimate the round trip delay between the congested router and the sources, and the average packet arrival rate at the router. Secondly, a flow control algorithm with finer control of network traffic than the window based algorithms is needed to fully utilize the information. Lastly, network performance has been improved by the static policy to an extent where further improvement is difficult.

The performance difference between using and not using feedback generation filtering policies was also studied. With the filtering policy, network throughput was slightly higher than without. The filtering policy did not produce redundant feedbacks, while half of the feedbacks generated by the nonfiltering policy were redundant. BOBM can bring about significant (up to 25.21% in our experiments) throughput improvement when buffers are the limiting resources. We also observed that different cut back parameters did not result in noticeable differences in the degree of fairness. A large cut back parameter drove the sources to a fair usage of network resources in about the same time as smaller ones would, by generating more feedbacks. However, larger parameters meant higher overhead. We selected 50% because it produced high throughput without causing too much overhead.

Chapter 5

Comparison of SEIF to Other Schemes

In this chapter, we compare the SEIF scheme with the DEC Bit scheme, the Slow Start scheme, and the Tri-S scheme in terms of network throughput and the degree of fairness. The comparison of the SEIF scheme with the DEC Bit scheme and the Slow Start scheme is presented in Section 5.1, while the comparison with the Tri-S scheme is presented in Section 5.2.

5.1 Comparison of The SEIF Scheme, The Slow Start Scheme, and The DEC Bit Scheme

Extensive simulation experiments have been carried out.
Four sets of typical results are presented here to compare the performance of the three schemes under various situations. Networks with different topologies, different network parameters, and different types of workload were simulated. The experimental results presented fall into three classes: class 1, class 2, and class 3 contain results for one way traffic, two way traffic, and telnet type workloads respectively.

1. Class 1: one way traffic

We experimented with different numbers of pairs of sources and destinations sharing the same path. The results are presented in Tables 5.1 to 5.4. The topology used for this experiment is depicted in Figure 5.1. The network parameters are as follows:

• Data Packet Size = 512 bytes
• Ack Packet and Feedback Size = 50 bytes
• Maximum Window Size = 30 packets
• Buffer Size = 20 x 512 bytes
• Bandwidth of links between routers = 64 kbps
• Bandwidth of the other links = 128 kbps
• Latency of all the links = 0 ms

The bottlenecks in the four experiments are the links between the routers.

Si: Source i; Di: Destination i; Ri: Router i
Figure 5.1: Class 1 Topology

                      DEC Bit Scheme  Slow Start Scheme  SEIF Scheme
Source 1 Throughput   61.24 kbps      61.17 kbps         64.00 kbps
Network Throughput    61.24 kbps      61.17 kbps         64.00 kbps

Table 5.1: One Source

                      DEC Bit Scheme  Slow Start Scheme  SEIF Scheme
Source 1 Throughput   31.06 kbps      33.27 kbps         31.88 kbps
Source 2 Throughput   31.27 kbps      29.70 kbps         31.09 kbps
Network Throughput    62.33 kbps      62.96 kbps         63.97 kbps
Fairness Variance     0.010           3.191              0.010

Table 5.2: Two Sources

                      DEC Bit Scheme  Slow Start Scheme  SEIF Scheme
Source 1 Throughput   20.75 kbps      21.62 kbps         21.09 kbps
Source 2 Throughput   20.89 kbps      18.20 kbps         21.71 kbps
Source 3 Throughput   20.82 kbps      22.91 kbps         21.16 kbps
Network Throughput    62.46 kbps      62.74 kbps         63.97 kbps
Fairness Variance     0.003           3.947              0.076

Table 5.3: Three Sources

                      DEC Bit Scheme  Slow Start Scheme  SEIF Scheme
Source 1 Throughput   12.97 kbps      17.77 kbps         16.03 kbps
Source 2 Throughput   12.97 kbps      13.04 kbps         15.84 kbps
Source 3 Throughput   18.50 kbps      16.98 kbps         16.04 kbps
Source 4 Throughput   18.50 kbps      14.70 kbps         16.11 kbps
Network Throughput    62.94 kbps      62.49 kbps         64.00 kbps
Fairness Variance     7.644           3.494              0.010

Table 5.4: Four Sources

We introduce a measure, fairness variance, to evaluate the fairness of different schemes. The fairness variance is defined by the formula below:

    fv = (1/n) * Σ (x_i − x̄)²,  i = 1, ..., n

where x_i is the throughput of source i, n is the number of sources involved, and x̄ is the average throughput among the sources. Note that only the sources competing for the same resources should be considered when the fairness variance is calculated. For example, two fairness variances should be calculated in a two way traffic situation, one for the sources sending in each direction.

In the above 4 experiments, the throughput of the SEIF scheme is the highest among the three schemes. In terms of fairness, our scheme is always better than the Slow Start scheme, because the Slow Start scheme does not use any synchronization mechanism to coordinate traffic reduction at the affected sources. In Experiment 3, the DEC Bit scheme is slightly better than the SEIF scheme, but it is much worse than the SEIF scheme in Experiment 4. The reason is that with the DEC Bit scheme, the routers set congestion indications which more or less serve the purpose of synchronization.
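The fairness variance defined above is simply the population variance of the per-source throughputs. A minimal sketch of the computation (our own code, not the simulator's):

```python
def fairness_variance(throughputs):
    """Population variance of per-source throughputs (the fv measure).

    Pass in only the sources competing for the same resources,
    e.g. one call per traffic direction in a two way traffic scenario.
    """
    n = len(throughputs)
    mean = sum(throughputs) / n
    return sum((x - mean) ** 2 for x in throughputs) / n
```

For the DEC Bit column of Table 5.4, fairness_variance([12.97, 12.97, 18.50, 18.50]) gives about 7.645, which matches the reported 7.644 up to rounding of the published throughputs.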
However, the synchronization is not strong; that is, the sources sometimes have the same information about network conditions, but sometimes they do not. In the latter case, the sources would use different window sizes. This point will be illustrated later. The reason why the Slow Start scheme produced lower throughput is that more packets were dropped than with the other schemes, thus wasting network bandwidth. The network is under-utilized with the DEC Bit scheme as it tries to maintain the queue lengths at the routers at a low level.

To further observe the throughput and fairness differences of the three schemes, the results of two more experiments, 5 and 6, are presented in Table 5.5 and Table 5.6 respectively. The link related parameters were set for the two experiments as follows:

• Bandwidth of link between router 3 and router 4 = 64 kbps
• Bandwidth of the other links = 128 kbps
• Latency of all the links = 3000 microseconds

The bottleneck in the two experiments was the link between router 3 and router 4.

                      DEC Bit Scheme  Slow Start Scheme  SEIF Scheme
Source 1 Throughput   63.76 kbps      59.67 kbps         63.97 kbps
Network Throughput    63.76 kbps      59.67 kbps         63.97 kbps

Table 5.5: One Source

The effect of asynchronous window reduction on the performance of the Slow Start scheme and the DEC Bit scheme is more obvious in Experiment 6. The window size vs. time curves for source 1 and source 2 are shown in Figures 5.2 and 5.3 for the DEC Bit scheme, Figures 5.4 and 5.5 for the Slow Start scheme, and Figures 5.6 and 5.7 for the SEIF scheme. Window adjustment for the Slow Start scheme is obviously asynchronous (Figure 5.4 and Figure 5.5). This is also the case for the DEC Bit scheme: with this scheme, one of the windows fluctuated between 2 and 3, while the other fluctuated between 3 and 4 (Figure 5.2 and Figure 5.3).
With the SEIF scheme, window adjustment was synchronous and the two windows fluctuated in the same range of values, from 3 to 7 (Figure 5.6 and Figure 5.7).

                      DEC Bit Scheme  Slow Start Scheme  SEIF Scheme
Source 1 Throughput   36.11 kbps      34.20 kbps         31.47 kbps
Source 2 Throughput   27.72 kbps      26.83 kbps         32.49 kbps
Network Throughput    63.83 kbps      61.03 kbps         63.97 kbps
Fairness Variance     17.60           13.58              0.26

Table 5.6: Two Sources

Figure 5.2: Window vs Time: Source 1 of the DEC Bit scheme
Figure 5.3: Window vs Time: Source 2 of the DEC Bit scheme
Figure 5.4: Window vs Time: Source 1 of the Slow Start scheme
Figure 5.5: Window vs Time: Source 2 of the Slow Start scheme
Figure 5.6: Window vs Time: Source 1 of the SEIF scheme
Figure 5.7: Window vs Time: Source 2 of the SEIF scheme

2. Class 2: two way traffic

We present the results of some simulations of networks with two way traffic. The network topologies are depicted in Figure 5.8 (scenario one) and Figure 5.9 (scenario two). We present the results of 8 experiments with scenario one and of 3 experiments with scenario two. In the first four experiments with scenario one, all routers have 20 x 512 bytes of buffer associated with each of their output links. The figure is 10 x 512 bytes for the other four experiments.
The other network parameters are as follows:

• Data Packet Size = 512 bytes
• Ack Packet and Feedback Size = 50 bytes
• Maximum Window Size = 30 packets
• Bandwidth of the links between the routers = 64 kbps
• Bandwidth of the other links = 128 kbps
• Latency of the links between the routers = 3000 ms
• Latency of the other links = 0

Si: Source i; Di: Destination i; Ri: Router i
Figure 5.8: Class 2 Topology 1

Si: Source i; Di: Destination i; Ri: Router i
Figure 5.9: Class 2 Topology 2

In Experiments 1 and 5, there are two sources that send data in opposite directions. In Experiments 2 and 6, there are three sources; two of them send data in one direction, while the other sends data in the reverse direction. In Experiments 3 and 7, there are four sources; three of them send data in one direction, while the other sends data in the reverse direction. In Experiments 4 and 8, there are five sources; three of them send data in one direction, while the other two send data in the reverse direction.

The results of Experiments 1 to 4 are presented in Table 5.7; the corresponding fairness variances are listed in Table 5.8. The results of Experiments 5 to 8 are presented in Table 5.9; the corresponding fairness variances are listed in Table 5.10. The results of Experiments 9, 10, and 11 are listed in Table 5.11; the corresponding fairness variances are listed in Table 5.12. The common network parameters for the latter three experiments are as follows:

• Data Packet Size = 1000 bytes
• Ack Packet and Feedback Size = 50 bytes
• Buffer Size = 22 x 1000 bytes
• Bandwidth of the link between routers 4 and 5 = 64 kbps
• Bandwidth of the other links = 96 kbps
• Latency of all the links = 0

The maximum window sizes are 15, 30, and 60 for Experiments 9, 10, and 11 respectively.
Buffer Size = 20 x 512 bytes; throughputs in kbps

                     Source 1  Source 2  Source 3  Source 4  Source 5  Network
Exp. 1  Slow Start   49.90     49.79     -         -         -         99.70
        DEC Bit      27.44     27.44     -         -         -         54.89
        SEIF         57.41     57.41     -         -         -         114.82
Exp. 2  Slow Start   26.26     24.90     43.05     -         -         94.21
        DEC Bit      11.95     11.88     37.61     -         -         61.44
        SEIF         27.03     27.10     50.79     -         -         104.93
Exp. 3  Slow Start   18.46     14.62     19.24     44.28     -         96.60
        DEC Bit      6.89      11.88     11.88     30.72     -         61.37
        SEIF         19.90     18.91     18.91     54.14     -         111.75
Exp. 4  Slow Start   19.40     16.90     15.62     22.36     25.80     100.09
        DEC Bit      12.15     12.15     12.08     12.15     12.08     60.62
        SEIF         17.20     17.20     17.07     25.05     25.12     101.65

Table 5.7: Results 1 of Class 2

Buffer Size = 20 x 512 bytes

                     fv1     fv2
Exp. 2  Slow Start   0.4624  -
        DEC Bit      0.0012  -
        SEIF         0.0012  -
Exp. 3  Slow Start   4.0776  -
        DEC Bit      5.5334  -
        SEIF         0.2178  -
Exp. 4  Slow Start   2.4641  2.9584
        DEC Bit      0.0011  0.0012
        SEIF         0.0038  0.0012

Table 5.8: Fairness Variances: Results 1

Buffer Size = 10 x 512 bytes; throughputs in kbps

                     Source 1  Source 2  Source 3  Source 4  Source 5  Network
Exp. 5  Slow Start   44.60     45.65     -         -         -         90.25
        DEC Bit      27.44     27.44     -         -         -         54.89
        SEIF         50.86     50.86     -         -         -         101.72
Exp. 6  Slow Start   23.94     22.85     42.78     -         -         89.57
        DEC Bit      11.95     11.88     37.61     -         -         61.44
        SEIF         25.74     25.12     49.56     -         -         100.42
Exp. 7  Slow Start   16.84     17.50     15.36     41.82     -         91.52
        DEC Bit      6.89      11.88     11.88     30.72     -         61.37
        SEIF         16.61     16.70     16.63     50.22     -         100.17
Exp. 8  Slow Start   12.22     17.43     19.05     25.92     19.89     94.50
        DEC Bit      12.15     12.15     12.08     12.15     12.08     60.62
        SEIF         15.82     15.56     16.04     22.66     22.46     92.55

Table 5.9: Results 2 of Class 2

Buffer Size = 10 x 512 bytes

                     fv1     fv2
Exp. 6  Slow Start   0.2970  -
        DEC Bit      0.0012  -
        SEIF         0.0961  -
Exp. 7  Slow Start   0.8006  -
        DEC Bit      5.5334  -
        SEIF         0.0015  -
Exp. 8  Slow Start   8.4908  9.0902
        DEC Bit      0.0011  0.0012
        SEIF         0.0385  0.0100

Table 5.10: Fairness Variances: Results 2

Buffer Size = 15 x 1000 bytes for Experiment 9, 30 x 1000 bytes for Experiment 10, 60 x 1000 bytes for Experiment 11; throughputs in kbps

                     Source 1  Source 2  Source 3  Source 4  Network
Exp. 9  Slow Start   20.24     15.25     20.99     57.44     113.92
        DEC Bit      22.00     18.53     18.80     41.20     100.53
        SEIF         21.07     20.53     19.60     58.00     119.20
Exp. 10 Slow Start   18.13     18.37     20.51     45.95     102.96
        DEC Bit      22.00     18.53     18.80     41.20     100.53
        SEIF         20.77     20.19     20.00     56.32     117.28
Exp. 11 Slow Start   23.12     17.25     16.43     46.35     103.15
        DEC Bit      22.00     18.53     18.80     41.20     100.53
        SEIF         20.19     20.61     20.05     54.94     115.79

Table 5.11: Results 3 of Class 2

                     Fairness Variance
Exp. 9  Slow Start   6.4900
        DEC Bit      2.4838
        SEIF         0.3686
Exp. 10 Slow Start   1.1446
        DEC Bit      2.4838
        SEIF         0.1073
Exp. 11 Slow Start   8.8762
        DEC Bit      2.4838
        SEIF         0.0566

Table 5.12: Fairness Variances: Results 3

It can be seen that the performance of the DEC Bit scheme is poor in two way traffic situations in terms of both throughput and fairness. Again, note that when computing network fairness, we only compare the throughput of those sources competing for the same resources; in this case, the throughput of the sources sending data in the same direction was compared. The reason behind the low throughput of the DEC Bit scheme in two way traffic situations is that it treats the acknowledgements, whose sizes are relatively small, the same as data packets when calculating the average queue length. This leads to under-utilization of the network. Network fairness cannot be ensured with this scheme since it does not provide a synchronization mechanism. The performance of the Slow Start scheme is as expected for the reasons given in Section 2.3.2: the results are not fair and the throughputs are not high.

The SEIF scheme produces high throughput and fair shares of network resources in all the experiments with one exception. In the experiment where three sources sent data in one direction and the other two sent data in the opposite direction, the throughput of the SEIF scheme is a little lower than that of the Slow Start scheme. The reason is that the Slow Start scheme does not consume any network bandwidth to perform congestion control; its throughput does not degrade much as the number of users sharing the network increases. The SEIF scheme, in contrast, consumes a certain amount of network resources when it performs congestion control/avoidance, an amount proportional to the number of network users. In one way traffic situations, the throughput of the SEIF scheme would not degrade much since sending feedbacks does not consume network bandwidth that is used to transfer data.
In two way traffic situations, sending feedbacks will consume some of the bandwidth used to transfer data. When the number of network users increases to a certain point, the negative effect of sending too many feedbacks becomes obvious. A guideline for the remedy to this problem is presented in Chapter 6.

3. Class 3: telnet type workload

In all the experiments reported so far, we simulated only ftp type workloads; that is, the networks were put under constantly heavy workloads since all the sources kept sending data through the networks as long as their flow control windows were open. In the experiments reported in this section, we simulated networks under telnet type workloads. A telnet workload is modelled by a random process of packet arrivals whose inter packet delays are exponentially distributed. By setting different average inter packet delays and packet sizes, workloads of different intensities can be obtained. For instance, small packet sizes plus long average inter packet delays result in light workloads. Under this type of workload, the SEIF scheme also performs well.

We use the topology in Figure 5.9 again for our experiments. For this class of experiments, we present 8 sets of results. Results 1 to 4 are contained in Table 5.13 (for fairness variances see Table 5.14), while results 5 to 8 are in Table 5.15 (for fairness variances see Table 5.16). The common parameters of the 8 experiments are as follows:

• Ack Packet and Feedback Size = 50 bytes
• Buffer Size = 22 x 1000 bytes
• Bandwidth of the link between routers 4 and 5 = 64 kbps
• Bandwidth of the other links = 96 kbps
• Latency of all the links = 0

Experiments 1 to 4 all used a data packet size of 512 bytes and average inter packet delays of 0.1, 0.064, 0.01, and 0.004 second respectively.
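The telnet workload model just described, a packet arrival process with exponentially distributed inter packet delays, can be sketched as follows. The generator below is our own illustration, not the REAL simulator's code:

```python
import random

def telnet_arrival_times(n_packets, mean_delay, seed=0):
    """Packet arrival times whose inter packet delays are exponentially
    distributed with the given mean (in seconds).

    mean_delay controls the workload intensity, e.g. 0.1 for a light
    workload or 0.004 for a heavy one, as in the experiments above.
    """
    rng = random.Random(seed)
    t, times = 0.0, []
    for _ in range(n_packets):
        t += rng.expovariate(1.0 / mean_delay)  # exponential inter packet delay
        times.append(t)
    return times
```

Combined with a chosen packet size (512 or 50 bytes above), this yields the full range of workload intensities used in the Class 3 experiments.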
Experiments 5 to 8 had a data packet size of 50 bytes and average inter packet delays of 0.1, 0.064, 0.01, and 0.004 second respectively. The experiment results show that the SEIF scheme performed as well as the Slow Start scheme and the DEC Bit scheme when the data packet size is small. When the data packet size is large, the advantage of the SEIF scheme is very pronounced.

Data Packet Size = 512 bytes; Inter Packet Delay = 0.1, 0.064, 0.01, 0.001 second for Experiments 1 to 4; throughputs in kbps

                     Source 1  Source 2  Source 3  Source 4  Network
Exp. 1  Slow Start   19.43     17.78     18.50     38.41     94.11
        DEC Bit      19.13     18.81     18.83     32.66     89.43
        SEIF         20.83     19.54     19.82     37.61     97.83
Exp. 2  Slow Start   19.09     15.59     18.96     49.34     102.99
        DEC Bit      19.29     19.09     18.42     38.24     95.04
        SEIF         20.45     19.33     18.79     49.75     108.33
Exp. 3  Slow Start   20.11     16.66     16.74     41.12     94.63
        DEC Bit      18.91     21.78     16.53     41.15     98.37
        SEIF         19.67     19.70     18.75     50.90     109.02
Exp. 4  Slow Start   18.34     17.04     18.54     39.10     93.02
        DEC Bit      19.11     24.44     13.99     40.07     97.62
        SEIF         20.17     18.42     19.35     51.09     109.02

Table 5.13: Telnet Workloads I

                     Fairness Variance
Exp. 1  Slow Start   0.4562
        DEC Bit      0.0214
        SEIF         0.3070
Exp. 2  Slow Start   2.6249
        DEC Bit      0.1384
        SEIF         0.4780
Exp. 3  Slow Start   2.5851
        DEC Bit      4.6071
        SEIF         0.1944
Exp. 4  Slow Start   0.4422
        DEC Bit      18.2029
        SEIF         0.5111

Table 5.14: Fairness Variances of Telnet Workloads I

Data Packet Size = 50 bytes; Inter Packet Delay = 0.1, 0.064, 0.01, 0.001 second for Experiments 5 to 8; throughputs in kbps

                     Source 1  Source 2  Source 3  Source 4  Network
Exp. 5  Slow Start   3.95      4.06      4.04      4.00      16.05
        DEC Bit      4.06      4.13      3.92      4.05      16.15
        SEIF         3.93      3.83      4.04      4.11      15.91
Exp. 6  Slow Start   6.31      6.34      6.21      6.17      25.04
        DEC Bit      6.24      6.19      6.08      6.19      24.71
        SEIF         6.18      6.24      6.22      6.24      24.88
Exp. 7  Slow Start   13.10     13.59     13.73     22.05     62.47
        DEC Bit      14.01     13.95     13.84     19.32     61.12
        SEIF         13.68     13.88     13.74     22.69     64.00
Exp. 8  Slow Start   14.00     12.80     14.27     21.27     62.34
        DEC Bit      11.93     12.93     12.38     25.77     63.00
        SEIF         13.03     13.64     13.51     23.82     64.00

Table 5.15: Telnet Workloads II

                     Fairness Variance
Exp. 5  Slow Start   0.0023
        DEC Bit      0.0076
        SEIF         0.0074
Exp. 6  Slow Start   0.0031
        DEC Bit      0.0045
        SEIF         0.0006
Exp. 7  Slow Start   0.0730
        DEC Bit      0.0050
        SEIF         0.0070
Exp. 8  Slow Start   0.4082
        DEC Bit      0.1672
        SEIF         0.0688

Table 5.16: Fairness Variances of Telnet Workloads II

5.2 Performance Comparison Between The SEIF Scheme and The Tri-S Scheme

We do not have the node function that simulates the control algorithm of the Tri-S scheme. To compare the SEIF scheme with the Tri-S scheme, we ran simulations of the SEIF scheme with exactly the same topologies and parameter values used in [10]. Both the data obtained from our experiments and the data reported in [10] are listed in Table 5.17 (for fairness variances see Table 5.18). We simulated all the situations described in [10] except the one in which one of the sources stops sending data during a session while the other source keeps sending, because this feature is not available in the REAL package. The topologies used are shown in Figures 5.10 and 5.11. In Figure 5.10, all the sources share the same path, whereas the paths used by the sources in Figure 5.11 partially overlap with one another.

Si: Source i; Di: Destination i; Ri: Router i
Figure 5.10: Topology One Simulated By The Tri-S Scheme
Figure 5.11: Topology Two Simulated By The Tri-S Scheme

The parameters are summarized in the following:

• Data Packet Size = 512 bytes
• Ack Packet and Feedback Size = 50 bytes
• Maximum Window Size = 85 packets
• Buffer Size = 30 x 512 bytes
• Bandwidth of links between two routers = 500 kbps
• Bandwidth of the other links = 1 mbps
• Latency of links between two routers = 50 ms
• Latency of the other links = 0 ms

Data in the first 10 seconds of simulation was excluded to reduce bias due to startup.

1. Experiment 1: a single pair of source and destination.
2. Experiment 2: two pairs of sources and destinations sharing the same path.
3. Experiment 3: two pairs of sources and destinations sharing parts of the same path; the sources start sending at the same time.
4. Experiment 4: two pairs of sources and destinations sharing parts of the same path; source one starts sending at the beginning of the simulation, while source two starts sending 9 seconds after the simulation is started.

                              Tri-S Scheme    Our Scheme
Exp. 1  Network Throughput    57.90 kbytes/s  62.50 kbytes/s
Exp. 2  Network Throughput    58.54 kbytes/s  62.50 kbytes/s
        Source 1 Throughput   29.11 kbytes/s  31.35 kbytes/s
        Source 2 Throughput   29.43 kbytes/s  31.15 kbytes/s
Exp. 3  Network Throughput    57.82 kbytes/s  62.50 kbytes/s
        Source 1 Throughput   23.71 kbytes/s  31.03 kbytes/s
        Source 2 Throughput   34.11 kbytes/s  31.47 kbytes/s
Exp. 4  Network Throughput    60.63 kbytes/s  62.50 kbytes/s
        Source 1 Throughput   24.50 kbytes/s  30.95 kbytes/s
        Source 2 Throughput   36.13 kbytes/s  31.55 kbytes/s

Table 5.17: The SEIF Scheme vs The Tri-S Scheme

              Tri-S Scheme  SEIF Scheme
fv : Exp. 2   0.0256        0.0100
fv : Exp. 3   27.0400       0.0484
fv : Exp. 4   33.8142       0.0900

Table 5.18: Fairness Variances of the SEIF and Tri-S Schemes

As can be seen, the SEIF scheme outperformed the Tri-S scheme in all four situations in terms of both throughput and network fairness.

5.3 Summary

In this chapter, performance comparisons were made among the DEC Bit scheme, the Slow Start scheme, and the SEIF scheme under various conditions. The results showed that the SEIF scheme produced better performance than the other schemes in terms of throughput and fairness. Experimental results showed that the Slow Start scheme produced relatively low throughput and unfair sharing of the network because it wasted network resources during congested periods and did not provide a mechanism to synchronize the behaviors of the sources. The DEC Bit scheme performed well in some of the simple topologies but not in others, because the sources sometimes do not have the same information concerning network congestion. In general, its throughput was relatively low since it tried to maintain short queues at the routers. Its performance was poor in two way traffic situations because it treated acknowledgement packets the same as data packets, leading to further under-utilization of network resources.
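The ack-counting problem attributed to the DEC Bit scheme above can be made concrete. With the packet sizes used in our experiments, a queue holding one 512-byte data packet and three 50-byte acknowledgements looks four packets long when every packet counts as one unit, but only about 1.3 data-packet equivalents long when weighted by bytes. A sketch of the two measures (our own naming, not DEC Bit's actual code):

```python
def queue_length_in_packets(packet_sizes):
    """DEC Bit style: every queued packet, ack or data, counts as one unit."""
    return len(packet_sizes)

def queue_length_in_data_packets(packet_sizes, data_packet_size=512):
    """Byte-weighted alternative: a 50-byte ack adds only 50/512 of a unit,
    so reverse-direction acks inflate the queue measure far less."""
    return sum(packet_sizes) / data_packet_size
```

A scheme that signals congestion off the packet-count measure sees this ack-heavy queue as nearly full and throttles the sources, which is one way to view the under-utilization reported above.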
The SEIF scheme performed very well in a wide variety of situations because the routers synchronized window adjustments at the sources, sent feedbacks before buffers were full to avoid dropping packets, and filtered feedbacks to prevent redundant feedbacks from being sent. The adoption of a proper cut back parameter value also contributed to the good performance of the SEIF scheme.

We also compared the SEIF scheme with the Tri-S scheme. Because a node function simulating the Tri-S scheme is not available, we performed some simulations with exactly the same network topologies and experimental parameters as those used in [10] and then compared our performance results with those reported in [10]. The comparison showed that the SEIF scheme is better than the Tri-S scheme in both throughput and fairness for the scenarios used in [10].

Chapter 6

Conclusions

In this thesis, we have described a new scheme for network congestion control and/or avoidance. The scheme, known as SEIF (for Synchronous Congestion Control/Avoidance Scheme with Explicit and Implicit Feedbacks), consists of two parts, the router algorithm and the source algorithm, which are responsible for information feedback and traffic control respectively. Simulation results have shown the performance of the SEIF scheme to be better than that of the other schemes under various conditions. We have simulated the SEIF scheme in a wide range of scenarios in which different network topologies, link capacities and latencies, buffer sizes, maximum flow control window sizes, types of workload, etc., were used. Unlike the Slow Start scheme, the DEC Bit scheme, and the Tri-S scheme, the SEIF scheme produced high network throughput and fair sharing of network resources among the network users in almost all the situations simulated. The Slow Start scheme did not consume any additional network resources in performing congestion control but produced modest throughput and poor network fairness.
This is mainly because it does not prevent packets from being dropped and does not provide any explicit mechanism to synchronize the behaviors of the users. The DEC Bit scheme tries to consume as little network bandwidth as possible, though its processing overhead is nonnegligible. It performed quite well in some of the one way traffic situations but poorly in others, and it produced low throughput in all the two way traffic situations simulated. This is because it does not treat acknowledgement packets differently from data packets, which usually are much larger than the former. Its selection of feedback threshold and feedback filtering mechanism maintains the queue lengths at the routers at a low level, which also contributes to the low throughput of the scheme in two way traffic situations. The Tri-S scheme faces the same problem as the Slow Start scheme in lacking an effective synchronization mechanism; it can only ensure statistical fairness among its users.

The good performance of the SEIF scheme is due to the application of a proper feedback mechanism and traffic control algorithm. By sending feedback messages to all the sources involved when the queue length reaches 80% of the total amount of buffers, a minimum of packets (none in most cases) are dropped and the sources can react to the network conditions at approximately the same time. Our feedback generation filtering mechanism prevents the routers from sending redundant feedback messages. This, plus the linear increase and exponential decrease traffic control algorithm and the proper cut back parameter, results in high network throughput and fair sharing of network resources with moderate overhead.

6.1 Future Work

To further reduce the overhead of the SEIF scheme, we need an accurate algorithm to determine when it is worthwhile to disable the explicit feedback scheme and rely on the implicit feedback mechanism for congestion control.
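The router-side behavior credited above for SEIF's performance, namely feedback to all the sources involved once the queue reaches 80% of the buffers, with redundant feedbacks filtered out, can be sketched as follows. The class, its names, and the exact filtering rule (at most one feedback per source until the congestion episode ends) are our own simplification, not the simulator's code:

```python
FEEDBACK_THRESHOLD = 0.8  # fraction of buffer occupancy that triggers feedback

class SEIFRouter:
    """Sketch of the SEIF router-side feedback logic described above."""

    def __init__(self, buffer_size):
        self.buffer_size = buffer_size
        self.queue = []          # (source, size) of queued packets
        self.notified = set()    # filtering: sources already sent a feedback

    def occupancy(self):
        return sum(size for _, size in self.queue)

    def enqueue(self, source, size):
        """Queue a packet; return the sources to send feedback to (if any)."""
        self.queue.append((source, size))
        if self.occupancy() < FEEDBACK_THRESHOLD * self.buffer_size:
            return []
        # Threshold reached: synchronously notify every source with packets
        # in the queue, filtering out those already told to cut back.
        targets = {s for s, _ in self.queue} - self.notified
        self.notified |= targets
        return sorted(targets)

    def dequeue(self):
        source, size = self.queue.pop(0)
        if self.occupancy() < FEEDBACK_THRESHOLD * self.buffer_size:
            self.notified.clear()  # episode over; re-arm the filter
        return source, size
```

Because feedback fires at 80% occupancy rather than on overflow, the sources are told to cut back while buffer space remains, which is why packets are rarely dropped.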
A potential problem with the SEIF scheme is that when the algorithm mentioned above is not accurate, the overhead of explicit feedback might be high when the number of sources sharing the bottleneck in a two way traffic situation is large. This is not a problem in one way traffic situations, because in such situations the feedback messages do not affect the data packet traffic flowing in the other direction. Network throughput would suffer if too many feedback messages were sent in two way traffic situations. One way to prevent this from occurring is to slow down the increase in window sizes at the sources. Slower increase in window sizes means that the feedback threshold would be reached less often, and thus fewer feedback messages would be sent. To do this, the traffic control algorithm at the source would have to be modified. In the Stable State of the original algorithm, a source increases its window size by one when a full window of acknowledgements is received. To further slow down the increase, the source should wait for more than one window of acknowledgements before it increases the window size by one. This number should be dynamic, depending on the number of sources involved. Preliminary experiment results show that when the number is set to two instead of one, far fewer feedback messages are generated, yet network throughput and the degree of fairness do not change much. The relationship among the number of sources sharing the same bottleneck, the value used to slow down window size increase, and the overhead of performing congestion control/avoidance needs to be studied further, so that intelligent routers could be developed to determine the optimal slow down value on the fly based on the frequency of crossing the feedback threshold and the overhead of performing congestion control/avoidance.
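The source-side modification proposed above can be sketched as follows. The sketch, in our own naming rather than the simulator's code, combines the Stable State linear increase, slowed by a factor k, with the exponential decrease applied on explicit feedback (interpreting the cut back parameter of 0.5 selected in Chapter 4 as the fraction of the window retained):

```python
CUT_BACK = 0.5  # fraction of the window retained on an explicit feedback

class SEIFSource:
    """Stable-state window control: linear increase slowed by a factor k,
    exponential (multiplicative) decrease on feedback."""

    def __init__(self, window, k=2):
        self.window = window
        self.k = k       # k = 1 is the original algorithm; k = 2 halves growth
        self.acks = 0    # acknowledgements seen since the last increase

    def on_ack(self):
        self.acks += 1
        if self.acks >= self.k * self.window:  # k full windows of acks
            self.window += 1                   # linear increase
            self.acks = 0

    def on_feedback(self):
        self.window = max(1, int(self.window * CUT_BACK))  # exponential decrease
        self.acks = 0
```

A dynamic scheme would raise k as the number of competing sources grows, so that the feedback threshold at the bottleneck router is crossed less often.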
Another problem the SEIF scheme might face is that the sources might receive duplicate explicit feedbacks if there is more than one bottleneck on their communication paths. So far we have assumed there is only one bottleneck per communication path; nevertheless, in a large long-haul network, multiple bottlenecks are possible on a single communication path. There are two simple methods to prevent this from occurring. One is to equip the sources with the capability of reacting only to the feedback messages from the router that sent the most feedback messages in a certain period of time. The other is to prevent the routers from sending duplicate feedback messages for the same data packets. This can be done by making use of a special bit in each data packet. When a router wants to send feedbacks, it checks the special bits of the packets in its queue. If the bit of a packet is set, the router does not send feedback to the source of the packet; otherwise, feedback is sent and the special bits are set for all the packets from the same source in the queue. In this way, the packets will not cause further feedbacks along the path to their destinations.

Another interesting research direction is to experiment with using the distance from a router to the source to determine whether explicit feedbacks should be sent. Intuitively, it is useful to send explicit feedback when the router is close to the source, since the amount of network bandwidth used will be small and the feedbacks will reach the source quickly. On the other hand, if the router is very far away from the source in a long-haul network, it may be wise to use implicit feedback instead. In addition to the distance, the traffic intensity along the path to the source should also be considered.

Bibliography

[1] Raj Jain, "A Delay-Based Approach for Congestion Avoidance in Interconnected Heterogeneous Computer Networks", ACM Comp. Comm. Review, 19(5), Oct. 1989.
[2] John Nagle, "Congestion Control in IP/TCP Internetworks", ACM Comp. Comm. Review, 14(4), Oct. 1984.

[3] John Robinson, Dan Friedman, and Martha Steenstrup, "Congestion Control in BBN Packet-Switched Networks", ACM Comp. Comm. Review, 20(1), Jan. 1990.

[4] D. M. Chiu and R. Jain, "Analysis of Increase and Decrease Algorithms for Congestion Avoidance in Computer Networks", Computer Networks and ISDN Systems, Vol. 17, pp. 1-14, 1989.

[5] K. K. Ramakrishnan and Raj Jain, "An Explicit Binary Feedback Scheme for Congestion Avoidance in Computer Networks with a Connectionless Network Layer", Proceedings, ACM SIGCOMM '88, Stanford CA, Aug. 1988, pp. 303-313.

[6] Van Jacobson, "Congestion Avoidance and Control", Proceedings, ACM SIGCOMM '88, Stanford CA, Aug. 1988, pp. 314-329.

[7] A. Demers, S. Keshav, and S. Shenker, "Analysis and Simulation of a Fair Queueing Algorithm", Proceedings, ACM SIGCOMM '89, Austin TX, Sept. 1989, pp. 1-12.

[8] Lixia Zhang, "Why TCP Timers Don't Work Well", Proceedings, ACM SIGCOMM '86, Aug. 1986.

[9] Allison Mankin, "Random Drop Congestion Control", Proceedings, ACM SIGCOMM '90, Philadelphia, Pennsylvania, Sept. 1990, pp. 1-7.

[10] Zheng Wang and Jon Crowcroft, "A New Congestion Control Scheme: Slow Start and Search (Tri-S)", ACM Comp. Comm. Review, 21(1), pp. 32-43, Jan. 1991.

[11] Andrew S. Tanenbaum, Computer Networks, 2nd ed., Prentice-Hall, Inc., 1989.

[12] Rick Wilder, K. K. Ramakrishnan, and Allison Mankin, "Dynamics of Congestion Control and Avoidance of Two-Way Traffic in an OSI Testbed", ACM Comp. Comm. Review, 21(2), April 1991.

[13] Kalmanek, Kanakia, and Keshav, "Rate Controlled Servers for Very High Speed Networks", Proceedings, IEEE GlobeCom '90, Dec. 1990.

[14] F. Wong and J. Marca, "Fairness in Window Flow Controlled Computer Networks", IEEE Transactions on Communications, 37(5), May 1989.

[15] B. T. Doshi and H. Q. Nguyen, "Congestion Control in ISDN Frame-Relay Networks", AT&T Technical Journal, Nov./Dec. 1988.
[16] Raj Jain, "Congestion Control in Computer Networks: Issues and Trends", IEEE Network Magazine, 4(3), May 1990.

[17] G. Galassi, G. Rigolio, and L. Verri, "Resource Management and Dimensioning in ATM Networks", IEEE Network Magazine, 4(3), May 1990.

[18] C. Anthony Cooper and Kun I. Park, "Toward a Broadband Congestion Control Strategy", IEEE Network Magazine, 4(3), May 1990.

[19] T. Nishida, M. Murata, H. Miyahara, and K. Takashima, "Congestion Control in Interconnected Local Area Networks", Chapter 6 of "Local Area and Multiple Access Networks", Computer Science Press, Rockville, MD, 1986.

[20] Gregory G. Finn, "A Connectionless Congestion Control Algorithm", ACM Comp. Comm. Review, 19(5), Oct. 1989.

[21] R. Varakulsiripunth, N. Shiratori, and S. Noguchi, "A Congestion Control Policy on the Internetwork Gateway", Computer Networks and ISDN Systems, 11, 1986.

[22] Raj Jain, "A Timeout-Based Congestion Control Scheme for Window Flow-Controlled Networks", IEEE Journal on Selected Areas in Communications, SAC-4(7), Oct. 1986.

[23] J. Bolot, A. U. Shankar, and B. D. Plateau, "Performance Analysis of Transport Protocols over Congested Channels", Performance Evaluation, Vol. 11, April 1990, pp. 45-65.

[24] J. Bolot and A. U. Shankar, "Dynamical Behavior of Rate-Based Flow Control Mechanisms", ACM Comp. Comm. Review, 20(2), April 1990.

[25] Debasis Mitra and Judith B. Seery, "Dynamic Adaptive Windows for High Speed Data Networks: Theory and Simulations", Proceedings, ACM SIGCOMM '90, Philadelphia, Pennsylvania, Sept. 1990, pp. 30-40.

[26] S. Jamaloddin Golestani, "A Stop-and-Go Queueing Framework for Congestion Management", Proceedings, ACM SIGCOMM '90, Philadelphia, Pennsylvania, Sept. 1990, pp. 8-18.

[27] P. Karn and C. Partridge, "Improving Round-Trip Time Estimates in Reliable Transport Protocols", Proceedings, ACM SIGCOMM '87, Stowe VT, Aug. 1987, pp. 2-7.

[28] K. K. Ramakrishnan, "Analysis of a Dynamic Window Congestion Control Protocol in Heterogeneous Environments Including Satellite Links", Proceedings, IEEE Symposium on Computer Networking, Nov. 1986.

[29] G. Ramamurthy, "Dynamic Flow Control in a Computer Communication Network", Proceedings, IEEE Symposium on Computer Networking, Nov. 1986.

[30] Lixia Zhang, "Virtual Clock: A New Traffic Control Algorithm for Packet Switching Networks", Proceedings, ACM SIGCOMM '90, Philadelphia, Pennsylvania, Sept. 1990, pp. 19-29.

[31] Douglas E. Comer and Rajendra S. Yavatkar, "A Rate-Based Congestion Avoidance and Control Scheme for Packet Switched Networks", Proceedings, IEEE International Conference on Distributed Computing Systems, June 1990.

[32] Raj Jain and K. K. Ramakrishnan, "Congestion Avoidance in Computer Networks with a Connectionless Network Layer: Concepts, Goals, and Methodology", Proceedings, IEEE Symposium on Computer Networking, Washington DC, 1988, pp. 134-143.

[33] Allison Mankin and Kevin Thompson, "Limiting Factors in the Performance of the Slow-start TCP Algorithms", Proceedings, USENIX, Winter 1989.

[34] S. Shenker, L. Zhang, and D. D. Clark, "Dynamics of a Congestion Control Algorithm", ACM Comp. Comm. Review, 20(5), Oct. 1990, pp. 30-39.
