UBC Theses and Dissertations

A study of bandwidth allocation mechanisms on sonet-ring-based transport networks Zhang, Dingliang 1999

A STUDY OF BANDWIDTH ALLOCATION MECHANISMS ON SONET-RING-BASED TRANSPORT NETWORKS

by

DINGLIANG ZHANG
Ph.D., The University of British Columbia, 1995
M.Sc., The University of British Columbia, 1991
B.Sc., Hangzhou University, Hangzhou, Zhejiang, China, 1982

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF SCIENCE IN THE FACULTY OF GRADUATE STUDIES (Department of Computer Science)

We accept this thesis as conforming to the required standard

The University of British Columbia
April 1999
© D. ZHANG, April 1999

In presenting this thesis in partial fulfillment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the head of my department or by his or her representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

Computer Science
The University of British Columbia
2366 Main Mall
Vancouver, BC
Canada V6T 1Z4

Abstract

A new SONET-based network simulator was designed and implemented to study the performance of different routing mechanisms on transport networks based on the SONET ring architecture. The simulator incorporates some unique features of the SONET technology. Three static routing schemes were evaluated on the simulator at different levels of traffic load. The hop-by-hop routing (HHR) scheme uses the physical network as a fat bit pipe to transport the data traffic from one node to its neighboring node. Utilizing the Digital Cross-connect System of the SONET technology, virtual topology routing (VTR) schemes establish multiple-hop point-to-point links across a SONET ring in order to reduce overall nodal processing time.
The simulation results show that virtual topology routing schemes have an advantage in network latency over the hop-by-hop routing scheme only when the bursty traffic load is well below the network capacity. In medium to high traffic load conditions, increased queuing delay counteracts the reduced overall nodal processing time in the VTR schemes, resulting in high network latency, while the HHR scheme has the advantages of high bandwidth utilization and low packet-drop rates. To overcome the drawbacks of the static routing schemes, dynamic bandwidth-allocation mechanisms combining VTR and HHR principles were suggested. The mechanisms allocate bandwidth dynamically between VTR and HHR regions in response to changes in traffic patterns on a SONET ring. Both a centralized model and a distributed model were proposed. For the centralized model, the problem of optimal effective bandwidth allocation on the Unidirectional Path Switched Ring (UPSR) architecture was formulated, and a solution based on a greedy algorithm with cost O(n²m + m log m) was provided for a UPSR architecture with n nodes and m flows. A heuristic bandwidth optimization algorithm based on the solution for UPSR was developed and evaluated numerically for the Bidirectional Path Switched Ring (BPSR) architecture. The possibilities of extending the heuristic algorithms to other SONET-ring-based architectures were also explored. For the distributed model, a mechanism was proposed to maintain the tension between bandwidth utilization and network latency in a local scope rather than in a global scope. By resolving problems early and locally, the intensive computation of global bandwidth optimization and the synchronization of global reconfiguration may be avoided.
Table of Contents

Abstract
List of Figures
List of Tables
Acknowledgments

Chapter 1 General Introduction
  1.1 SONET Overview
    1.1.1 Terminal Multiplexer, Add-drop Multiplexer and Digital Cross-connect System Devices
    1.1.2 Payload Pointers and Floating Payloads
    1.1.3 SONET Rings
  1.2 Architectures of SONET-Based Networks
  1.3 Motivation
  1.4 Organization of the Thesis

Chapter 2 Related Work
  2.1 Spare Channel Assignment in Self-Healing Networks
  2.2 Virtual Topology and Virtual Path
  2.3 Mechanisms and Protocols
  2.4 Concluding Remarks

Chapter 3 Design and Implementation of SONET-based Transport Network Simulator
  3.1 Introduction
  3.2 General Design
  3.3 Event and Scheduler Objects
    3.3.1 NSEvent Object
    3.3.2 PDU Object
    3.3.3 NEEvent Object
    3.3.4 Scheduler and EventQueue Objects
  3.4 Network Element Objects
    3.4.1 NENode and SonetNode Objects
    3.4.2 DCSMatrix Object
    3.4.3 ForwardModule Object
    3.4.4 ForwardTable Object
    3.4.5 PDUQueue Objects
    3.4.6 TrafficGenerator and TrafficSink Objects
    3.4.7 Aggregater and Segregater Objects
    3.4.8 NELink and SonetLink Objects
    3.4.9 SonetChannel Object
    3.4.10 NEMonitor Object
    3.4.11 SonetRing Object
  3.5 RandomVariables Package
  3.6 Graphic User Interface Objects
  3.7 Some Implementation Issues
    3.7.1 Memory Management
    3.7.2 Performance Issues
  3.8 Summary

Chapter 4 Static Bandwidth Allocation Schemes on SONET Rings
  4.1 Simulations
    4.1.1 Configuration of the Simulated Network
    4.1.2 Bandwidth Distribution of Routing Schemes
    4.1.3 Generation of Traffic Patterns
    4.1.4 Measurement Metrics
  4.2 Results and Discussion
    4.2.1 Performance in the Low Traffic Load Condition
    4.2.2 Performance in the Medium Traffic Load Condition
    4.2.3 Performance in the Heavy Traffic Load Condition
  4.3 Summary

Chapter 5 Dynamic Bandwidth Allocation Models
  5.1 Overview of the Dynamic Bandwidth Allocation Scheme
  5.2 A Centralized Model for Dynamic Bandwidth Allocation
    5.2.1 Optimization Algorithm for Bandwidth Allocation on UPSR
      Optimal Bandwidth Assignment on UPSR
      Heuristic Bandwidth Allocation Algorithm for UPSR
    5.2.2 Optimization Algorithm for Bandwidth Allocation on BPSR
      Extension of Bandwidth Assignment from UPSR to BPSR
      Evaluation of the Optimization Algorithms for BPSR
    5.2.3 Extension of Heuristic Algorithm from Single Ring Configuration to Multiple Rings with Matched Nodes
      Unidirectional Path Switched Double Ring with One Matched Node
      Multiple UPSRs with Two or More Matched Nodes
      Bi-directional Path Switched Rings (BPSR) with One Matched Node
  5.3 Distributed Model
    5.3.1 Reducing Network Latency
    5.3.2 Increasing Bandwidth Utilization
  5.4 Summary

Chapter 6 General Conclusion and Future Work
  6.1 General Conclusions
  6.2 Future Work

References

List of Figures

Figure 3-2 Schematic View of a SONET Node Structure
Figure 4-1 Bandwidth Distribution (Clockwise) of the Three Static Routing Schemes
Figure 4-2 Distribution of Physical Link Utilization in Low Traffic Load Condition
Figure 4-3 Statistics on Flow Network Latency in Low Traffic Load Condition
Figure 4-4 Distribution Pattern of Minimum Network Latency and Path Lengths of Traffic Flow 0-14
Figure 4-6 Drop Rate per Node in Medium Traffic Load Condition
Figure 4-5 Distribution of Physical Link Utilization at Medium Traffic Load Condition
Figure 4-8 Distribution of Physical Link Utilization in Medium Traffic Load Condition
Figure 4-7 Statistics on Network Latency in Medium Traffic Load Condition
Figure 4-9 Drop Rate per Node in Heavy Traffic Load Condition
Figure 4-10 Statistics on Flow Network Latency in Heavy Traffic Load Condition
Figure 5-1 Bandwidth Allocation Algorithm for UPSR
Figure 5-2 An Example of Bandwidth Assignment on a 5-node SONET UPSR Network
Figure 5-3 Heuristic Bandwidth Allocation Algorithm for BPSR
Figure 5-4 Unidirectional Path Switched Double Ring with One Matched Node
Figure 5-5 Unidirectional Path Switched Rings (UPSR) with Two Matched Nodes
Figure 5-6 Bi-directional Path Switched Double Rings with One Matched Node
Figure 5-7 Establishment of Direct Point-to-Point Link to Reduce Network Latency
Figure 5-8 Synchronization of Point-to-Point Link Splitting Operations (1)
Figure 5-9 Synchronization of Point-to-Point Link Splitting Operations (2)

List of Tables

Table 4-1 Compositions of Three Traffic Patterns
Table 5-1 Bandwidth Assignment for the UPSR Example
Table 5-2 Partial Link Utilization Values in Bandwidth Assignment for the UPSR Example
Table 5-3 Evaluation of Heuristic Optimization Algorithms for Bandwidth Allocation on BPSR

Acknowledgments

I would like to express my sincere appreciation to my research supervisor, Professor Gerald Neufeld, for his inspiration, encouragement and guidance during my studies in the Department of Computer Science. My gratitude is extended to the other faculty members of this department who have taught me throughout my undergraduate and graduate studies here. Without their excellent teaching, it would have been impossible for me, as a layman with little knowledge of bits and bytes in computer science three years ago, to conduct the research presented in this thesis today. The staff of this department are also thanked for their excellent services. My present and former co-workers in the research group are also thanked for their help and enlightening discussions during my tenure. In particular, I would like to thank Mr.
Mark McCutcheon for his help in setting up research equipment in the Protocol Lab, and Lining Tian for his help in installing the LBL NS network simulator. Daniel and Norton are also remembered for their company in the Protocol Lab on late nights and weekends. Finally, I would like to thank my wife, Weihua Chen, for her kind understanding, invaluable encouragement and full support during my years of graduate study.

Chapter 1 General Introduction

The exponential growth of the Internet in the last few years has put great pressure on network infrastructures for high bandwidth and fast access. At the endpoints of the networks, desktop systems are becoming faster and cheaper. In the local area network (LAN) environment, new technologies, such as 100-Mbps Ethernet and gigabit Ethernet [Ethernet], are being built to address bandwidth limitations. Faster network technologies of higher capacity are required for connecting these LANs to transport the data traffic generated by bandwidth-demanding and latency-critical applications over wide area networks (WANs). Synchronous optical network (SONET) provides a means of relief from this growing bandwidth problem in the WAN environment [Goralski97, BW97, Nonei96]. Currently, SONET technology can provide bandwidth as high as tens of gigabits per second (Gbps). In spite of the high network bandwidth, how to route the data traffic efficiently with minimal network latency remains an interesting research topic. In this thesis, different network bandwidth-allocation mechanisms on a SONET-ring-based transport network are studied.

1.1 SONET Overview

SONET defines the standard for transmission of digital signals over optical fibers. The basic unit of SONET is a frame that is transmitted every 125 µs. The frame sizes are different at different levels of the SONET signal digital hierarchy, resulting in different transmit rates.
At electrical synchronous transport signal level one (STS-1), for example, the frame structure is an 810-octet frame (90 columns × 9 rows), which is transmitted on optical fiber as an optical carrier level one (OC-1) signal at a speed of 51.84 Mbps (810 bytes/frame × 8 bits/byte ÷ 125 µs/frame). In a SONET-based network, digital signals from various types of network services can be mapped into the payload envelope of a virtual tributary (VT) transmitted at different speeds. Low-speed digital signals are synchronously multiplexed into high-speed signals and eventually converted to a base format of an STS-1 signal. Several synchronous STS-1s can be byte-multiplexed together to form an electrical STS-n signal. The electrical STS-n signal is directly converted to an optical OC-n signal and then transmitted over an optical network.

SONET has several unique features. First, its multiplexing system is synchronous. With the help of Add/Drop Multiplexer (ADM) and Digital Cross-connect System (DCS) devices, low-speed signals can be identified, extracted, inserted, or tunneled through a SONET node without demultiplexing the high-speed SONET signals. Secondly, SONET supports a floating mode for VT mapping operations. It allows a payload to begin anywhere in the payload envelope section of a SONET frame. The time justification technique not only minimizes the delay and buffer size, but also makes it possible to transport plesiochronous network traffic. Finally, the commonly deployed SONET architecture, the SONET ring, is perhaps the most distinctive feature of SONET networks. A SONET ring can provide a certain degree of fault tolerance with redundant bandwidth for protection. In the rest of this section, these unique features are briefly discussed.

1.1.1 Terminal Multiplexer, Add-drop Multiplexer and Digital Cross-connect System Devices

At an entry point of a SONET-based transport network, external traffic from different transmission media is gathered and converted to the SONET format.
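The STS-1 line-rate arithmetic quoted above (810 bytes/frame × 8 bits/byte ÷ 125 µs/frame) generalizes to any STS-n level, since every level keeps the same 125 µs frame period. A few lines of Python reproduce the figures; the helper name is mine, chosen for the sketch:

```python
FRAMES_PER_SECOND = 8000     # one frame every 125 microseconds
STS1_FRAME_BYTES = 810       # 90 columns x 9 rows

def sts_rate_mbps(n: int) -> float:
    """Line rate of an STS-n / OC-n signal in Mbps."""
    return n * STS1_FRAME_BYTES * 8 * FRAMES_PER_SECOND / 1e6

print(sts_rate_mbps(1))      # 51.84 (OC-1)
print(sts_rate_mbps(3))      # 155.52 (OC-3)
print(sts_rate_mbps(192))    # 9953.28 (OC-192, roughly the "tens of Gbps" range)
```

Note that the rate scales strictly linearly with n because STS-1 frames are byte-interleaved rather than re-framed.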
These low-speed digital signals from different sources are multiplexed into high-speed signals and transmitted over SONET links at high speed. At an end point of the SONET network, the high-speed SONET signals are demultiplexed to the low-speed signals and eventually converted to a format which can be transmitted over the destination transmission media. The multiplex/demultiplex function is usually implemented with a Terminal Multiplexer (TM) device.

A TM device can be thought of as an ADM device operating in the terminal mode. They differ only in their locations in the network and in their functions. An ADM device is located in the middle span of a SONET network and has two network connections. It is used to aggregate or to segregate SONET traffic at various speeds. It may connect to several terminal multiplexers to add or to drop network traffic. Depending on the configuration of a SONET network, signals at different speeds, from the DS-0 channel to the OC level, can be dropped or added as desired [Goralski97].

A DCS device is an even more sophisticated version of the ADM. The heart of the DCS device is a cross-connect matrix, which can be configured to switch signals from an input port to an output port. All ports can be connected through the matrix. The granularity of the traffic to be switched or cross-connected can be as fine as the DS-0 level [Goralski97]. Thus, a DCS device can be configured for an individual channel to allow incoming digital signals to be dropped and local traffic to be inserted. It can also be configured to let the incoming traffic tunnel directly through the node at which the DCS device is located.

1.1.2 Payload Pointers and Floating Payloads

A SONET frame is divided into two sections: overhead and payload. For example, the first three columns of an STS-1 frame form the Transport OverHead (TOH). The rest of the frame, called the SONET Payload Envelope, or SPE for short, is used for carrying user data and its associated Path Overhead (POH).
By providing adequate overhead, SONET allows each network element in the transmission system to communicate with a centralized management system, or to distribute the management responsibility to the network elements throughout the network.

Among all SONET overheads, the concept of payload pointers is one of the great innovations in the SONET technology. As mentioned above, SONET is a synchronous network in the sense that it transmits 8000 frames every second. All SONET components and payloads should be synchronized precisely. A strictly synchronous network requires that all components can be traced back to a common reference clock. This is very expensive to achieve in the real world. SONET is often called a mesochronous network because it only guarantees that all network clocks are timed from the same clock source. There are phase differences and jitters across the network. Non-SONET signals are plesiochronous, meaning that the network components use separate clocks. In any case, the network-timing signal has jitter and wander within predefined limits. The use of payload pointers enables SONET to transport plesiochronous network traffic.

The payload pointer allows the payload to float within an SPE of the SONET frame. The three bytes (H1, H2 and H3) of the pointer are located in the fourth row of the TOH of an STS-1 frame. The first 4 bits of the H1 byte, called the new data flag, indicate the introduction of a new pointer value. The last 2 bits of the H1 byte and the entire H2 byte constitute the actual pointer. Their values indicate the relative position of the first byte of the payload within the SPE. The H3 byte is allocated to compensate for SPE timing variation. Because of network time jittering, more than 783 bytes may be ready to be sent out within a 125-microsecond period. If the excess is more than 8 bits, it is carried in the H3 byte. This is called negative timing justification.
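The H1/H2 layout just described (a 4-bit new data flag plus a 10-bit payload offset) amounts to two bit operations. The sketch below illustrates only that split; real pointer processing also involves the justification signalling, which is omitted here, and the example values are my own:

```python
def decode_pointer(h1: int, h2: int):
    """Split the H1/H2 bytes of an STS-1 into the new-data flag and the
    10-bit offset (0..782) of the first payload byte within the SPE."""
    new_data_flag = (h1 >> 4) & 0x0F    # first 4 bits of H1
    offset = ((h1 & 0x03) << 8) | h2    # last 2 bits of H1 + all of H2
    return new_data_flag, offset

# Example: flag pattern 0b0110, payload starting 522 bytes into the SPE.
h1 = (0b0110 << 4) | (522 >> 8)
h2 = 522 & 0xFF
print(decode_pointer(h1, h2))           # (6, 522)
```

Because the offset can name any of the 783 SPE byte positions, the payload "floats": a one-byte slip in timing only changes the pointer value, not the frame alignment.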
On the other hand, if fewer than 783 bytes are available to fill the SPE, positive time justification is needed. A stuff byte in the position next to the H3 byte is transmitted to compensate for the time jittering. Thus, during transmission across the network, if any variations occur in the timing, the pointer value need only be increased or decreased to compensate for the variation.

The use of a payload pointer in SONET has two effects. First, positive and negative timing justifications compensate for slight time jitter across the network. Second, in the floating mode, the actual payload can begin anywhere within the SPE in the frame. The presence of extra bytes, or a shortfall of bytes, can be accommodated in the synchronous multiplexing process by changing the position of the payload within the SPE. All of the constituent parts of SONET payloads can easily be found, even when the payload source is totally plesiochronous. In addition, because a payload can start at any position in the SPE, only a small buffer is needed and low latency is expected.

1.1.3 SONET Rings

In principle, SONET links can be deployed in a straight point-to-point fashion. They can also be deployed in multi-point patterns that make grooming and hubbing easier for service providers. However, the ability of SONET to be deployed in ring architectures has become the defining feature of SONET to date. SONET rings are typically deployed as local and long-haul backbone networks. In a SONET ring, each node is connected to two adjacent nodes by means of at least one set of fibers operating in a duplex fashion. A SONET ring may provide redundant bandwidth and redundant network equipment for protection. A SONET ring may be characterized by three main features: the number of fibers per link, the direction of the signal, and the level of protection switching. The most commonly deployed SONET rings are the 2-fiber Unidirectional Path Switched Ring (2-fiber UPSR) and the 2-fiber Bidirectional Line Switched Ring (2-fiber BLSR).
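To make the difference in signal direction concrete: on a unidirectional ring, working traffic travels one way around regardless of where the destination sits, while a bidirectional ring can take the shorter of the two arcs. The toy helpers below are my own illustration (the 16-node ring size is just an example, not a parameter from the thesis):

```python
def unidirectional_hops(src: int, dst: int, n: int) -> int:
    """Hops on an n-node unidirectional ring: traffic travels one way only."""
    return (dst - src) % n

def bidirectional_hops(src: int, dst: int, n: int) -> int:
    """Hops on an n-node bidirectional ring: take the shorter direction."""
    clockwise = (dst - src) % n
    return min(clockwise, n - clockwise)

# On a 16-node ring, node 0 to node 14:
print(unidirectional_hops(0, 14, 16))   # 14 hops the long way around
print(bidirectional_hops(0, 14, 16))    # 2 hops the short way
```

The gap between the two numbers grows with ring size, which is one reason path length matters so much for the routing schemes compared later.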
Currently, only a few 4-fiber BLSRs are deployed in large-scale WANs.

1.2 Architectures of SONET-Based Networks

In the OSI seven-layer reference model, SONET is a physical layer protocol at layer 1. IP (Internet Protocol) is a connectionless protocol at layer 3. A data link layer (layer 2) is required to interface the SONET protocol with IP. ATM (Asynchronous Transfer Mode) technology, as a high-speed switching technology running on top of SONET, once attempted to replace IP, which used relatively slow routing techniques [KW95]. After realizing it is impossible to replace IP in the existing Internet, industry attempted to adapt ATM to the Internet architecture [PST95, RFC1932]. However, ATM performs many functions that overlap with those of other layers. For example, IP and ATM each perform routing functions. In addition, its small cell size leads to inefficiency in terms of fragmentation/reassembly and a high overhead/payload ratio.

In the last few years, new IP switching technologies have been developed. Examples are Cisco's tagged switching [Cisco96] and Ipsilon's flow-labelled switching [NMLU97]. Network research interest has shifted from the IP over ATM over SONET architecture to IP over SONET directly, or IP over PPP (Point to Point Protocol) over SONET (IP/PPP/SONET) [RFC1619, RFC1661, RFC1662].
This renewed interest in the IP/PPP/SONET architecture is reflected in the recent appearance of Internet drafts related to the architecture [FC97, MDAVL97, AAKWW97, Simpson98, Narten98, Malis98]. In terms of transport efficiency, IP over ATM over SONET is much less efficient than IP over PPP over SONET. ATM consumes 15 to 20 percent of bandwidth as overhead. IP over SONET is believed to become a dominant WAN technology in the future Internet.

1.3 Motivation

In the existing IP over PPP over SONET architecture, there is a one-to-one correspondence between logical and physical connections among routers. SONET does not provide QoS, traffic or bandwidth management. The simplest configuration for a SONET-based transport network would be one logical point-to-point link per physical link. The routing scheme could be very simple since no point-to-point link spans more than one link on the SONET ring. When a packet arrives at a node of the SONET ring, its header is examined at layer 3. If the packet reaches its destination node or its TTL (time to live) expires, it is dropped from the ring; otherwise, the packet is forwarded to the next node. This simple routing scheme is called the Hop-by-Hop Routing (HHR) scheme. The HHR scheme is expected to have high bandwidth utilization.
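The HHR rule just described reduces to a small forwarding loop. The sketch below (the function name and return shape are mine, not taken from the thesis simulator) counts the per-hop layer-3 decisions a packet incurs on a unidirectional ring:

```python
def hhr_forward(src: int, dst: int, n: int, ttl: int):
    """Hop-by-hop routing on an n-node unidirectional ring: at each node the
    packet header is examined, then the packet is delivered, dropped on TTL
    expiry, or forwarded to the neighbouring node."""
    node, hops = src, 0
    while node != dst:
        if ttl == 0:
            return ("dropped", node, hops)
        node = (node + 1) % n    # forward to the next node on the ring
        ttl -= 1
        hops += 1
    return ("delivered", node, hops)

print(hhr_forward(0, 14, 16, ttl=64))   # ('delivered', 14, 14)
print(hhr_forward(0, 14, 16, ttl=5))    # ('dropped', 5, 5)
```

Each iteration of the loop stands for one stop-process-requeue-retransmit cycle, which is exactly the per-hop cost that the next paragraph identifies as HHR's latency penalty.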
However, it may have relatively long network latency because each packet has to be stopped, processed, possibly queued and retransmitted.

Another routing approach is the so-called Virtual Topology Routing (VTR) scheme [CGK92, CGK93]. As discussed in the previous sections, the use of ADM and DCS devices allows a packet to be sent from its source node directly to its destination node without being processed for routing purposes at intermediate nodes. By creating a direct point-to-point link between a source node and a destination node, the packet need not be processed at layer 2 and/or layer 3 in the intermediate nodes. This results in two potential benefits:

• Reduced processing delay and queuing delay at intermediate nodes may result in low network latency for long-haul traffic flows across multiple intermediate nodes; and

• Reduced load on the intermediate nodes may in turn reduce the network latency for the traffic flows originating from or destined to these nodes because of shorter queuing times.

However, when a direct link is established specifically for transporting packets from a particular source node to a particular destination node, the bandwidth associated with the link is dedicated. The bandwidth cannot be used for transporting other packets within the network span between the two nodes. Thus, the advantage is its potential low network latency. The disadvantage is bandwidth fragmentation and therefore possible low utilization. Some point-to-point links may have under-utilized bandwidth while others are overloaded.

In an ideal VTR scheme, a virtual meshed network based on an n-node SONET ring allows each node to have (n-1) direct links connected to every other node.
Adequate bandwidth is allocated to each direct link for the traffic flows that travel through the link. However, it is not practical to deploy such a virtual meshed network. First, a meshed network is not easily constructed on a SONET ring with limited granularity of managed bandwidth. Second, a meshed network incurs high management overhead. Finally, a meshed network may suffer from high segmentation of bandwidth, resulting in under-utilization of bandwidth.

From a meshed network, many possible virtual topologies can be derived. Thus, it is possible to configure a logical network on a SONET-based transport network which is optimized both for bandwidth utilization and network latency with limited granularity of managed bandwidth. Conversely, virtual topologies of the network have a significant impact on its performance in terms of network latency and utilization. In the logical network, not all flows have minimum network latency, but they are optimized within the limited capacity of the network. It is also possible to change the virtual network topology on a physical network and to distribute bandwidth among the virtual point-to-point links dynamically to accommodate changing network traffic patterns. Therefore, dynamic bandwidth-allocation mechanisms are needed.

In this thesis, the effects of different routing schemes on the performance of a SONET-ring-based transport network are studied. A new SONET-based network simulator, incorporating some of the unique features of the SONET technology, is designed and implemented. A number of static routing schemes are examined in the network simulator with different levels of traffic load. Finally, a dynamic bandwidth-allocation scheme is proposed. In this scheme, bandwidth is dynamically allocated for virtual links to accommodate changing traffic load.

1.4 Organization of the Thesis

A short review of previous research on bandwidth allocation mechanisms is presented in the next chapter.
In Chapter 3, the design of the new network simulator is illustrated, together with a discussion of some implementation issues. This is followed by a discussion of the simulation results of different static routing schemes at different levels of traffic load in Chapter 4. The advantages and disadvantages of the static routing schemes are discussed based on the experimental results. A dynamic hybrid routing mechanism, combining both hop-by-hop routing and virtual topology routing approaches, is proposed in Chapter 5. Finally, Chapter 6 summarizes the work accomplished in this thesis and suggests future work that may be extended from this research.

Chapter 2 Related Work

Optimally routing traffic over a network is closely related to the allocation of network resources, including bandwidth. At the planning and design stage, a planner or designer must carefully synthesize a cost-effective network, not only with proper capacities to meet the requirements of traffic demands and quality of service but also with some redundant resources to provide a certain degree of robustness. During the operation of a given network, there should be a robust routing mechanism for handling temporary bursts of traffic in some segments of the network. In addition, in the case of a network element failure, there should be a restoration mechanism in place to recover from the failure. There is an inherent trade-off among routing efficiency, protection or robustness, and utilization of network resources. In this chapter, previous research that is closely related to this thesis is reviewed.

2.1 Spare Channel Assignment in Self-Healing Networks

In high-speed transport networks, a network element failure can cause a large loss of data even in a short outage. Thus, it is necessary to make any network service interruption as short as possible. One approach is to assign each working channel a spare channel.
When failures occur in a working channel, the associated spare channel is automatically activated. This scheme is simple but not efficient. An alternate approach is the M:N protection channel sharing scheme, where M spare channels are allocated for N working channels. If a failure or degradation occurs on any working channel, affected traffic can be rerouted over a protection channel via an inter-shelf protection loop and a dedicated protection shelf at each end of the span. The increased reliability comes at the cost of redundant network resources.

One of the early research topics is how to derive a restorable network in a cost-effective way. This is usually called the spare channel assignment problem: given a network topology, including a working-channel assignment that satisfies all of the end-to-end demands, the assignment of spare channels should minimize the total number of spare channels and yet assure line/path restoration in the case of any network failure. This problem can be formulated as a linear programming problem, which gives an optimal solution. However, the computation of an optimal solution to spare channel assignment is prohibitively time-consuming. In practice, an approximation or heuristic approach has to be used instead. The Iterative Cutsets Heuristic (ICH) algorithm [SNH90] adopts an approximate, iterative approach guided by heuristics to dramatically reduce the size of the constraint set. It uses successive solutions of a linear program to detect missing constraints, gradually improving the constraint set until the network is restorable. For some networks with a special topology, e.g., a doubly-looped network [KCKK91], a simple algorithm can be developed. While such a special topology might be theoretically interesting, it is not practically useful.

Interestingly, Grover et al.
[GBV9i]  developed a Spare Link Placement Algorithm (SLPA) which  reversed the objective function and constraints, trying to maximize the restorability benefit obtained for each spare channel added to the network. Each time an additional spare channel is added to a span of the network, the working capacity in each span cut is rerouted using an efficient k-shortest link disjoint paths algorithm and redundant spare channels are removed. The network synthesized in this way yields nearoptimal results. A comparison study of the two approaches  [GBV9i|  shows that ICH gives better result than  SLPA. However, ICH needs more execution time than SLPA, and has exponential complexity in the worst case. SONET ring is a very important class of network architecture. Carpenter, Cosares and Saniee [CCS97]  formulated a demand routing and slotting problem (DRSP) for SONET ring architecture. Based on  the assumptions that all physical links on SONET ring have the same capacity and that network traffics cannot bifurcate, they developed a heuristic solution that minimizes the capacity of the SONET ring with given node-to-node demands. The solution has two phases: •  Solving the slotting problem which is equivalent to optimally coloring the vertices of an associated circular arc graph.  •  Solving the routing problem which determines a routing for a given set of demands.  The algorithms are used as a part of a SONET Toolkit developed for network planning and designing. [CDSW95]  With the help of the SONET toolkit, both the time-cost for planning and thefinancial-costof the  network being planned is greatly reduced. [CDSW95] 2.2 V i r t u a l T o p o l o g y a n d V i r t u a l P a t h Chlamtac et. al.  [CGK92, CGK93]  proposed the Lightnet architecture based on a packet-switching opti-  9  cal network. 
The principle of the architecture is the construction and use of a virtual topology composed of virtual links implemented in the wavelength domain, embedded in the original, general topology. A virtual link is implemented by a light path, which is a pre-established transmission path maintaining the same wavelength throughout its route. Along a light path, there are two types of switches on the nodes. An active switch runs a routing algorithm to decide which node it should forward a frame to. A passive switch simply forwards a frame to the next node. Using the concept of a virtual topology, both the latency of the frames along a light path and the processing time other frames spend on the tandem nodes are reduced. The authors focused mainly on how to embed different regular virtual topologies (e.g., a hypercube) in the physical topology using the least amount of network resources. They proposed an algorithm that provides bounds on the number of wavelengths (channels), switch sizes, and the average number of switching stages per packet transmission. Besides the hypercube topology, other types of virtual topologies and their variants that have been studied include the triangularly-arranged network [ME90], the bi-layered ShuffleNet [AHK93] and the bi-directional Manhattan street network [PK95, BC87].

The use of embedded regular topologies reduces the average number of processing stages a packet has to traverse in the network and provides inherent load balancing. With a smaller number of stages, the number of service instances per packet is reduced. The total number of packets processed per unit of time is increased. This increased efficiency of the architecture is supported by the simulation results.

An arbitrary virtual topology may be embedded in a given physical fiber network. The design of a virtual topology over a physical network can be viewed as an optimization problem. Mukherjee divided this optimization problem into several sub-problems [MBRM96]:

1. determine a good virtual topology

2.
route the light paths over the physical topology

3. assign wavelengths optimally to the various light paths

4. route packet traffic on the virtual topology

By relaxing the constraint of a limited number of wavelengths on nodes, subproblems 2 and 3 are ignored. A good virtual topology is searched for using the iterative approach of simulated annealing. The algorithm starts with an initial random configuration for the virtual topology. Node-exchange operations are used to arrive at neighboring configurations. In a node-exchange operation, adjacent nodes in the virtual topology are examined for swapping. Neighboring configurations that give better results than the current solution are accepted automatically. Solutions that are worse than the current one are accepted with a certain probability, which is determined by a system control parameter. The probability with which these failed configurations are chosen decreases as the algorithm progresses in time, so as to simulate the cooling process associated with annealing. The probability of acceptance is based on a negative exponential factor and is inversely proportional to the difference between the current solution and the best solution obtained so far.

A flow deviation algorithm is used to route traffic on the selected virtual topology. The algorithm is based on the notion of shortest-path flows. It first calculates the linear rate of increase in the delay with an infinitesimal increase in the flow on any particular channel. Some of the flows may be deviated from the ideal shortest path. An iterative algorithm determines how much of the original flow needs to be deviated. It should be noted that partially deviated flows imply out-of-order packet delivery, which may have an adverse impact on the performance of some reliable protocols using positive acknowledgment.
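The annealing acceptance rule described above can be sketched as follows. This is a minimal illustration, not the cited implementation; the class and method names are hypothetical.

```java
import java.util.Random;

// Sketch of the simulated-annealing acceptance rule described above.
// Worse solutions are accepted with probability exp(-delta / T), where
// T (the system control parameter, or "temperature") decreases over
// time to simulate the cooling process of annealing.
public class AnnealingAcceptance {
    private final Random rng = new Random(42);

    // delta = candidateCost - currentCost; temperature = control parameter
    public boolean accept(double delta, double temperature) {
        if (delta <= 0) {
            return true; // better (or equal) solutions are always accepted
        }
        // worse solutions are accepted with a negative-exponential probability
        return rng.nextDouble() < Math.exp(-delta / temperature);
    }

    public static void main(String[] args) {
        AnnealingAcceptance a = new AnnealingAcceptance();
        System.out.println(a.accept(-1.0, 10.0)); // improvement: always accepted
        // a large cost increase at a very low temperature is essentially
        // never accepted, since exp(-delta/T) underflows to zero
        System.out.println(a.accept(1000.0, 0.001));
    }
}
```

As the temperature parameter is lowered over successive iterations, the search increasingly behaves like pure hill-climbing, which is the cooling behavior the text describes.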
Given its iterative approach and complex procedures, it is hardly a scalable algorithm, although no time-efficiency analysis of the algorithm was given by the authors.

Ramaswami and Sivarajan [RS95] considered the problem of routing connections in a reconfigurable optical network using wavelength division multiplexing. Each connection between a pair of nodes in the network is assigned a path through the network and a wavelength on that path, such that connections whose paths share a common link in the network are assigned different wavelengths. The authors derived an upper bound on the carried traffic of connections (equivalently, a lower bound on the blocking probability) for any routing and wavelength assignment (RWA) algorithm in such a network. The bound scales with the number of wavelengths and is achieved asymptotically by a fixed RWA algorithm. Although computationally intensive, the bound can be used as a metric against which the performance of different RWA algorithms can be compared for networks of moderate size.

Gersht et al. [GS89] believed that an optimal routing algorithm that maximizes link residual capacity in a circuit-switching network is relatively robust to a network failure. They [GK93] also explored the properties of real-time configurable SONET digital cross-connect systems, which enable dynamic bandwidth allocation for adapting to bandwidth demand dynamics as well as for emergency restoration purposes in the case of network-element failures. A static approach does not leverage the real-time reconfiguration feature of SONET network elements to adapt to dynamic demands. It may lead to inefficient utilization of the underlying bandwidth. Dynamic bandwidth allocation and spare capacity assignment allow efficient bandwidth utilization and satisfy the requirement of full failure recovery.
Under the dynamic approach, the boundary between spare and working capacities is movable in real time according to short-term demand and load provisioning. The paper describes a scheme for optimal real-time bandwidth allocation and path restoration in a meshed SONET network in response to (1) demand dynamics and (2) link and/or node failure(s). They proposed a Network Level Control Protocol to implement optimal admission and fully restorable bandwidth allocation. They introduced a parallel algorithm to overcome the combinatorial complexity of the optimization process caused by a large number of failure scenarios. While the proposed architecture is attractive, the computational complexity of the optimization is too high for it to be of practical use.

Recently, research interest shifted to topics related to ATM technology. In an ATM network, a connection (virtual circuit) must be set up between two nodes. To set up a connection, an appropriate path must be identified and sufficient resources must be allocated at every intermediate node along the selected route. The network needs to decide whether the request for a connection is accepted or rejected. There are two approaches to allocating resources, at different granularity levels. One is the Virtual Circuit (VC) approach, in which the network has to find a path and allocate resources for each connection. The VC approach can serve as a simple and efficient paradigm for network admission control, but it has drawbacks, especially when the mechanism is scaled up. When a large number of connection setup and teardown requests occur in the network, it suffers from relatively long network access delays and heavy computational requirements on intermediate switching nodes. An alternative is the Virtual Path (VP) approach, in which a single VP allocation request can reserve a chunk of bandwidth that can satisfy many calls simultaneously.
Once a VP is set up, subsequent calls from the same source to the same destination can be admitted into the network and multiplexed onto the VP, wholly under the local control of the source node. Thus the network access delays will be substantially shorter. In addition, VP connect requests will occur much less frequently than VC connect requests. The reduced number of routing and connect requests reduces the amount of computation needed. However, these potential benefits come at the price of potential under-utilization of network resources. Which approach is more appropriate for implementation in a given network depends heavily on the traffic load.

The principle of the VP approach in an ATM-based circuit-switched network is analogous to that of the virtual topology approach in a packet-switched optical network. Both deal with the trade-off between the performance of networks and the utilization (throughput) of network resources. Therefore, the bandwidth allocation mechanism and the VP assignment policy are closely related to each other. Like the virtual topology approach, the VP approach can be static or dynamic.

In order to meet the requirements of different classes of services, a bandwidth management framework has been proposed for ATM-based B-ISDN. The framework consists of a network model and a bandwidth allocation strategy [LZPFBE96]. A network can be partitioned into a core network and edge networks. Each edge network is connected to the core network with an edge node. The network bandwidth is allocated in such a way that each VP is semi-permanently allocated a certain amount of bandwidth in the core network. Unused bandwidth is dynamically shared among different VPs based on traffic estimation.

When demands overwhelm bandwidth, packets are thrown away in a packet-switched network, while requests for virtual circuit setup are rejected at the edge of a circuit-switched network.
In an ATM-based network, an admission control protocol is responsible for setting up and tearing down virtual circuits for users. In a large and complex system, the call setup delay could be fairly large. One approach to solving this problem is trading some network utilization for short setup delay. A virtual path is established and dedicated to a specific source-destination pair. Network resources are reserved for a specific virtual path, greatly simplifying the setup of virtual circuits contained in that virtual path. One of the drawbacks of this approach is that the bandwidth allocated for a VP may be unused or insufficient, resulting in unstable behavior of the network. The other implication is potential unfairness resulting from greedy users of that VP. Some control mechanism is needed to allocate bandwidth for VPs in an effective and fair way. One mechanism for allocating bandwidth to VPs fairly and efficiently is to impose an interaction among the users on their individual strategies, e.g., a cost function, which reflects the inherent trade-off between user performance and the limited availability of resources. A user seeks to optimize its own performance and minimize a cost function [LOP97].

2.3 Mechanisms and Protocols

Sakauchi et al. [SNH90] developed a distributed control mechanism which is applicable to both line and path restoration in SONET-based networks. For path restoration, the mechanism is divided into four phases:

1. Release phase: when a path failure is detected, one node (SENDER) at the destination end of the path sends a message to the other (CHOOSER), releasing the resources used by the failed path for the other to use.

2. Broadcast phase: the SENDER sends help messages along all possible routes to query the available resources on each tandem node.

3.
Selection phase: upon reception of the help messages, the CHOOSER selects a restoration route from the possible routes found in the broadcast phase and sends a return message to the SENDER.

4. Confirmation phase: upon reception of the return message, the SENDER cross-connects to the spare channel and sends a link message to the CHOOSER. Each intermediate node confirms the spare channels and cross-connects the downstream channel with the upstream channel. When the CHOOSER receives the link message, it switches over the failed channel to the restoration channel. The restoration process is completed.

A double-searching self-healing algorithm was also developed to perform bi-directional restoration in case both the upstream and downstream paths fail [FY94]. The restoration mechanism is divided into five phases:

1. Initial phase: two Senders at the ends of the failed network elements broadcast search messages to the neighboring nodes.

2. Search phase: each node receiving the search message selects an alternate VP, reserves the resources, decrements the hop count, and then broadcasts the search message to neighboring nodes.

3. Response phase: after repetition of the search phase, a collision of two search messages occurs at some node. The node combines the two messages into one, called a response message, and sends it back to the original Sender.

4. Acknowledge phase: after receiving the response messages, both originators of the search messages select alternate VPs.

5. Reset phase: after completing the restoration, the node states are reset to the no-failure state.

Murakami and Kim [MK96] introduced a two-step restoration mechanism to achieve fast restoration. Upon failure, a fast restoration scheme is executed for real-time recovery from the failure. After the restoration is completed, an optimal traffic distribution is recomputed for the new network topology and the flow is again changed to the newly calculated optimal solution.
This scheme seems a good compromise between the requirement of fast restoration and the time needed for computing an optimal configuration. However, it may easily fail in a scenario where two consecutive failures take place in the network. Murakami and Kim [MK94] argued that the design process and the installation of an additional facility should be initiated whenever the dynamic VP reconfiguration process cannot maintain survivability at the desired level due to a growth of traffic demand over time. They posed a survivable virtual path routing (SVPR) problem: to find a VP configuration and bandwidth assignment in response to a dynamic change of the network environment such that a self-healing algorithm can succeed.

Ramaswami and Segall [RS97] proposed distributed protocols for setting up and taking down connections in a wavelength-routed optical network. The protocols are similar to distributed control protocols for connection management in non-optical communication networks. Connection setup involves reservation of a wavelength and route determination on each node. In order for the protocols to achieve high performance and to handle link failures and wavelength failures on a link, the controllers on each node must keep a consistent view of the state of each connection in a connection switch table. All resources taken up by a connection are released once the connection is taken down, even in the case of a partial failure in a segment of the network. The controllers communicate via out-of-band or in-band (dedicated-bandwidth) channels. The connection update procedure is invoked when a timer expires or a change in traffic demand occurs. Topology update messages are broadcast periodically. Thus, point-to-point links can be set up and taken down dynamically.
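The broadcast/search phase shared by the self-healing algorithms reviewed above amounts to a hop-count-limited flood from a Sender toward a Chooser over the surviving spans. The following is a minimal sketch of that idea; the graph representation, class and method names are illustrative assumptions, not the published protocols.

```java
import java.util.*;

// Minimal sketch of a hop-count-limited restoration flood: a help/search
// message spreads breadth-first from the SENDER, decrementing a hop count
// at each tandem node, until it reaches the CHOOSER or the count expires.
public class RestorationFlood {
    private final Map<Integer, List<Integer>> adj = new HashMap<>();

    public void addLink(int a, int b) {
        adj.computeIfAbsent(a, k -> new ArrayList<>()).add(b);
        adj.computeIfAbsent(b, k -> new ArrayList<>()).add(a);
    }

    // Returns true if a help message starting at sender can reach chooser
    // within maxHops hops over the surviving links.
    public boolean canRestore(int sender, int chooser, int maxHops) {
        Set<Integer> visited = new HashSet<>();
        Queue<int[]> frontier = new ArrayDeque<>(); // {node, hopsLeft}
        frontier.add(new int[]{sender, maxHops});
        visited.add(sender);
        while (!frontier.isEmpty()) {
            int[] cur = frontier.poll();
            if (cur[0] == chooser) return true;
            if (cur[1] == 0) continue; // hop count exhausted at this node
            for (int next : adj.getOrDefault(cur[0], List.of())) {
                if (visited.add(next)) {
                    frontier.add(new int[]{next, cur[1] - 1});
                }
            }
        }
        return false;
    }

    public static void main(String[] args) {
        RestorationFlood net = new RestorationFlood();
        // a 4-node ring 0-1-2-3-0 whose span 0-1 has failed;
        // the only restoration route is 0 -> 3 -> 2 -> 1 (3 hops)
        net.addLink(1, 2);
        net.addLink(2, 3);
        net.addLink(3, 0);
        System.out.println(net.canRestore(0, 1, 3)); // true
        System.out.println(net.canRestore(0, 1, 2)); // false: needs 3 hops
    }
}
```

The real protocols additionally reserve spare channels and carry route state in the messages, but the reachability test above is the core of the broadcast phase.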
2.4 Concluding Remarks

In general, spare-capacity placement for a self-healing network, the virtual topology routing problem in a SONET-based network and virtual path routing in a circuit-switched network are all formulated as linear programming problems. These problems are NP-hard, i.e., their optimal solutions are not scalable. Therefore, most researchers sought sub-optimal solutions, using either approximate or heuristic approaches at some stages of their solutions. Most algorithms so developed are still computationally expensive. Perhaps because of the expensive computation, most studies have concentrated on static bandwidth allocation, applicable either to the design stage or to the expansion of a network. There has been a suggestion of a real-time dynamic bandwidth allocation approach to accommodate changing network traffic patterns [GK93]. The lack of a fast optimization algorithm for bandwidth allocation (less than tens of seconds) [GRM97] and/or of a good network traffic forecast model makes such a real-time scheme unrealistic.

Our interest is mainly in developing an efficient routing mechanism particularly for a SONET ring architecture, given the network physical topology and network resources (bandwidth capacities). The HHR approach may have potentially high bandwidth utilization but also has high processing overhead on each node. Building a virtual topology on top of the physical topology alleviates this inefficiency but may cause bandwidth fragmentation and thus under-utilization. The argument for the VC approach vs. the VP approach in ATM-based networks offers another perspective on routing traffic at different degrees of granularity. The VC approach operates at a fine granularity (per-connection) level and therefore has high network utilization, but it also has high latency and an inefficient algorithm.
The VP approach operates at a relatively coarse granularity (a set of VCs) level and has potentially lower utilization but a more efficient routing mechanism.

In short, we may need both approaches, Hop-by-Hop Routing and Virtual Topology Routing, in one system. The network resources are partitioned between the two paradigms, and yet the resources should be reallocated dynamically, depending on the traffic load and distribution pattern. To realize such a dynamic architecture, fast optimization algorithms and protocols are necessary for real-time response to rapid changes in network traffic. This thesis addresses the problem by proposing a dynamic bandwidth allocation scheme combining both the HHR and VTR approaches and by developing a heuristic bandwidth allocation algorithm for a specific type of network architecture: the SONET-ring-based transport network.

Chapter 3 Design and Implementation of SONET-based Transport Network Simulator

To investigate and compare the effects of different bandwidth allocation schemes, a new SONET-based transport network simulator has been designed and implemented in this research project. In this chapter, the design of the simulator is described, followed by a discussion of some implementation issues.

3.1 Introduction

A few network simulators are publicly available for research purposes. The LBL Network Simulator, NS, developed by the Network Research Group at the Lawrence Berkeley National Laboratory [LBLNS, Floyd95], is widely referenced in academic research. The scope of NS is primarily focused on the activities of the transport layer and mainly concerned with the congestion and error recovery mechanisms of several flavors of TCP over networks. Lower-level elements of the network are not simulated in detail [PRNS].
The NIST ATM Simulator is a simulator developed for simulating the behavior of ATM networks, for network planning and for performance assessment of ATM and its associated technologies [AtmNS]. Our research is mainly concerned with bandwidth allocation schemes on SONET-based transport networks, facilitated by the configurable DCS (Digital Cross-connect System) devices (at the physical layer) on SONET nodes. Neither simulator has many features that match the needs of this research project. In particular, NS does not provide a detailed simulation of protocols lower than layer 3 in the OSI model. The NIST ATM Simulator, as the name itself indicates, is specific to the simulation of ATM activities. There is no easy way to extend either of these two simulators for our research without non-trivial restructuring. Only a limited number of the features in these packages are useful for this research.

Given that both the LBL NS and NIST ATM simulators did not meet our needs, we decided to build a new network simulator for SONET-based transport networks. While some data structures and mechanisms are borrowed from the NS package and other projects, the simulator is designed specifically for the simulation of SONET-based networks. It is implemented in a single language, Java, which includes an easy-to-use AWT utility to construct the GUI (Graphical User Interface) for the simulator.

3.2 General Design

The design of our simulator adopts an object-oriented style to mimic the SONET-based network. Figure 3-1 illustrates the major class objects that are used in the simulator and their relationships. In general, a network is composed of many network elements (NetworkElement objects). The concept of a network element is a very general one. Any component that is used to construct the network infrastructure can be considered a network element, including hardware components (e.g., SonetNode and SonetLink objects) as well as software components (e.g., ForwardModule objects).
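The core class relationships just described can be sketched as a small hierarchy. This is a simplified reconstruction for illustration only; the member names are assumptions, not the thesis code itself.

```java
// Simplified reconstruction of the simulator's class relationships:
// every component is a NetworkElement, and concrete processing elements
// additionally implement NSEventHandler to process events.
abstract class NetworkElement {
    protected final String name;
    protected final double handlingDelay; // time delay for handling an event

    NetworkElement(String name, double handlingDelay) {
        this.name = name;
        this.handlingDelay = handlingDelay;
    }
}

abstract class NSEvent {
    double time;            // simulated time at which the event occurs
    NSEventHandler handler; // the object that will handle this event
}

interface NSEventHandler {
    void handle(NSEvent event);
}

// Conceptual elements such as nodes serve as containers and do not
// handle events themselves...
class SonetNodeSketch extends NetworkElement {
    SonetNodeSketch(String name) { super(name, 0.0); }
}

// ...while concrete elements such as the forwarding module do.
class ForwardModuleSketch extends NetworkElement implements NSEventHandler {
    ForwardModuleSketch(String name, double delay) { super(name, delay); }

    public void handle(NSEvent event) {
        // consult the forwarding table, add the handling delay,
        // and reschedule the PDU for the next element (omitted here)
    }
}
```

The split mirrors the design described in Section 3.4: container classes such as SonetNode hold concrete event-handling components such as ForwardModule.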
Like most other network simulators, our simulator uses an event-driven model. There are two major types of objects in this event-driven model: NSEvent objects and the NSEventHandler objects that handle them. Most of the NetworkElement objects are also NSEventHandler objects. Two categories of NSEvent are simulated: (1) PDU and (2) NetworkEvent. PDU objects are the packets traveling along the network and are the main event type the simulator deals with. For example, data-type PDU objects are generated by a TrafficGenerator object attached to a source node. After they enter the simulated network, the PDU objects are handled by network elements along the path until they reach the destination node or are discarded in the case of network congestion or network element failure. NetworkEvent objects represent the events caused by certain network conditions, including network element failures and situations such as the number of packets in a queue exceeding the high or low water level.

The execution of the simulator is structured around a Scheduler object. An NSEvent object is associated with an NSEventHandler object and assigned the time at which the event occurs. The scheduler provides interfaces to allow the user to schedule an NSEvent or to cancel a previously scheduled NSEvent. It stores the NSEvent objects in an NSEventQueue in the order of the times at which the events would occur. When an NSEvent occurs, the scheduler passes the NSEvent object to its associated NSEventHandler. The handler then handles the event by executing the instructions of its handle() function.

A simulation starts with instantiating the network element objects necessary for constructing the network. The network is then configured to form the desired physical topology. NetworkEvent instances are scheduled, and TrafficGenerator and TrafficSink instances are associated with NetworkElement instances.
A PDU is scheduled once generated and is handled by the NetworkElement instances along the path. Each NetworkElement adds delay to the packet, collects statistical data from the packet if it is monitored, and reschedules the PDU event for the next NetworkElement to handle, until the packet arrives at the destination node and leaves the network.

3.3 Event and Scheduler Objects

In this simulator, two categories of NSEvent are simulated: (1) PDU and (2) NetworkEvent. There are two types of PDU objects: data packets and management packets. NetworkEvents are the events caused by certain network conditions such as network element failures or traffic congestion. Data-type PDU objects are generated by TrafficGenerator objects and registered in the event queue of the simulator together with an NSEventHandler instance, to which the TrafficGenerator instance is attached, as the event handler.

3.3.1 NSEvent Object

NSEvent is an abstract class. It holds the properties common to all events: each event has a time at which it should occur and an associated NetworkElement object that handles the event. It is extended by two subclasses: PDU and NetworkEvent.

3.3.2 PDU Object

The PDU (Protocol Data Unit) event is the major object class in the simulator. It has two parts: a PDUHeader and data. The data part is represented by its length rather than the actual bits and bytes. To measure packet delay, a time field is also included in the class to record the time when the packet is generated. PDUHeader follows the conventions of most protocols, but in a simpler format. The source and destination addresses are each represented by an integer. A sequence number is used to measure the out-of-order delivery of packets. If dynamic routing or load balancing is used, out-of-order delivery is inevitable.

3.3.3 NEEvent Object

The NEEvent (Network Element Event) class represents events occurring in network elements.
The type of an NEEvent must be specified when it is instantiated. Currently, we have implemented transmitting and receiving events for the SonetChannel class.

3.3.4 Scheduler and EventQueue Objects

A scheduler is the driver of an event-driven model. It decides when and which event is to be handled by which event handler. In general, it maintains either a virtual clock or a real clock. In this simulator, the scheduler uses relative virtual time stored in a variable clock, initialized to zero at the beginning of a simulation and incremented as the simulation proceeds. In the animation mode, the clock is incremented by a predetermined number of ticks (milliseconds per tick) in each animation step. All events that occurred before the current clock time are handled in the order of occurrence, and the overall result is displayed graphically. In the non-animation mode, events are handled in the order of occurrence. After all events occurring at a particular clock time are handled, the clock is advanced to the time of the next earliest event.

The heart of a scheduler is an event queue. The EventQueue object is used to store the scheduled events in time order. When running the simulation, the scheduler dequeues an event from the front of the EventQueue and calls the registered event handler to handle the event. New events are queued in the EventQueue object in the order of their occurrence times. The simplest data structure to implement the EventQueue is a linked list, as in the EventList class. EventHeap was also implemented to improve performance. It uses a heap as the underlying data structure, which reduces the cost of insertion from O(n) to O(log n).

3.4 Network Element Objects

A network is composed of many network elements. The NetworkElement abstract class includes the general properties of a simulated network element, such as connectivity to other NetworkElement objects, the time delay for handling events, etc.
This abstract class is extended by two types of network elements: those that implement the NSEventHandler interface and those that do not. In general, class objects representing conceptual network elements, such as NENode, NELink, SonetNode and SonetLink, do not implement the NSEventHandler interface. They serve as convenient containers for the concrete network elements, such as DCSMatrix and ForwardModule objects. The concrete network elements implement the NSEventHandler interface and actually handle events.

3.4.1 NENode and SonetNode Objects

A network node in this simulator is split into two classes, the NENode base class and its subclass SonetNode, to accommodate future extension of this package to include other, non-SONET nodes and links. It may be useful to study the interoperability of the SONET-based network with others. For NENode, each node is assigned a unique identification number, a 32-bit integer arbitrarily chosen to represent the node address. NENode also includes some operations for graphical display.

SonetNode is a major class in this simulator. As illustrated in Figure 3-2, it uses many other network element objects, such as DCSMatrix, ForwardModule, PDUQueue, etc. A SonetNode object may have multiple links, in either the upstream or the downstream direction. In the current implementation, the number of links is equal to the number of DCS devices in both the upstream and downstream directions. SonetNode also has an Aggregater object as a traffic input and a Segregater as a traffic output. It implements operations to configure these components, but it does not handle events.

3.4.2 DCSMatrix Object

The DCSMatrix object simulates the Digital Cross-connect System on a SONET node. It basically contains a matrix (a multidimensional array) of pointers to NetworkElement objects.
A number of operations are available to allow a user to configure the device to direct incoming packets from the upstream link to the ForwardModule or to cut directly through the node to the downstream link.

3.4.3 ForwardModule Object

The ForwardModule object is a component of a SonetNode object. It has a ForwardTable (forwarding table) and a number of operations to update the forwarding table. If a DCS device on the node is in the add-drop mode, packets from the upstream link are received by the ForwardModule object and forwarded to the appropriate output port(s) after consulting the ForwardTable.

[Figure 3-2: Schematic View of a SONET Node Structure]

3.4.4 ForwardTable Object

ForwardTable is a table that stores the route information. It is implemented using a LinkedList and provides the necessary wrappers for the table operations. Each entry in the list represents a row in a forwarding table. We allow multiple paths for each destination address. Therefore, each table entry contains a destination address and one or more paths to reach the destination, for the purpose of load balancing. If more than one path is available, they are listed in order of increasing cost. Each path is specified by a DCS index and a channel index on that particular DCS device.

3.4.5 PDUQueue Objects

Each out-to-trunk port is associated with a PDUQueue. The PDUQueue maintains a list of PDUs waiting to be transmitted through the out-to-trunk port. It has a maximum length and high-water and low-water levels. PDUQueue has two subclasses, DropTailQueue and REDQueue. The former is implemented as a simple drop-tail queue and the latter implements the more complicated RED gateway algorithm [FJ93].
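The ForwardTable structure described above can be sketched as follows: each destination maps to one or more (DCS index, channel index) paths kept in order of increasing cost. This is a minimal sketch; the Path class and method names are illustrative assumptions rather than the simulator's actual code.

```java
import java.util.*;

// Sketch of the ForwardTable: one entry per destination address, holding
// one or more paths sorted by increasing cost so the cheapest is tried
// first (alternates can be used for load balancing or failover).
public class ForwardTableSketch {
    static class Path {
        final int dcsIndex, channelIndex;
        final double cost;
        Path(int dcsIndex, int channelIndex, double cost) {
            this.dcsIndex = dcsIndex;
            this.channelIndex = channelIndex;
            this.cost = cost;
        }
    }

    private final Map<Integer, List<Path>> table = new HashMap<>();

    public void addPath(int destination, Path p) {
        List<Path> paths =
            table.computeIfAbsent(destination, k -> new ArrayList<>());
        paths.add(p);
        // keep paths in order of increasing cost
        paths.sort(Comparator.comparingDouble((Path x) -> x.cost));
    }

    // returns the lowest-cost path for a destination, or null if unknown
    public Path lookup(int destination) {
        List<Path> paths = table.get(destination);
        return (paths == null || paths.isEmpty()) ? null : paths.get(0);
    }

    public static void main(String[] args) {
        ForwardTableSketch ft = new ForwardTableSketch();
        ft.addPath(7, new Path(0, 3, 2.0)); // destination 7 via DCS 0, channel 3
        ft.addPath(7, new Path(1, 0, 1.0)); // a cheaper alternate path
        System.out.println(ft.lookup(7).dcsIndex); // prints 1 (cheapest path)
    }
}
```

A real implementation would also need operations to remove stale paths when routes change, mirroring the update operations the ForwardModule exposes.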
3.4.6 TrafficGenerator and TrafficSink Objects

TrafficGenerator is an abstract class from which other traffic generator classes can be derived. It implements the methods commonly used in all types of traffic generators. Currently, three types of traffic flows are implemented: CBRTraffic (Constant Bit Rate traffic), ExponentialTraffic (traffic with exponentially distributed on/off times) and ParetoTraffic (traffic with Pareto-distributed on/off times). They represent three of the most commonly observed traffic patterns on the Internet. CBRTraffic objects generate traffic at a constant bit rate. An ExponentialTraffic object implements an on/off traffic source with exponentially distributed on and off times, parameterized by the average burst time, average idle time, burst rate and packet size. ParetoTraffic implements an on/off traffic source with average on and off times taken from a Pareto distribution, parameterized by the average burst time, average idle time, burst rate, Pareto shape parameter and packet size.

The TrafficSink object is the counterpart of the TrafficGenerator in a flow. It is a simple class that receives PDU data packets from its traffic source. The implementation is trivial: it collects statistics from the received packets if the flow is monitored and disposes of the PDU data packets.

3.4.7 Aggregater and Segregater Objects

Aggregater and Segregater are used to aggregate and segregate traffic at the access points of a transport network. In the current implementation, they are used to facilitate the association of different types of traffic flows with a single node.

3.4.8 NELink and SonetLink Objects

The relation between NELink and SonetLink is similar to that of NENode and SonetNode. NELink is the more general object. A link has a limited bandwidth, which is used to calculate the transmission delay. The distance of a link is used to calculate the propagation delay. The SonetLink object implements a unidirectional link.
It should be noted that, in SONET networks, the two links between a pair of nodes may follow geographically different routes for a certain degree of failure tolerance; they therefore have different link lengths and different propagation delays. It is thus natural to model a bi-directional link as a pair of unidirectional links.

In the SONET digital hierarchy, a group of low-speed signals is byte-interleave multiplexed into a high-speed signal. The group of low-speed signals can be modeled as parallel virtual channels in a link. The SonetChannel class is used to simulate such virtual channels. A link object contains one or more SonetChannel objects.

3.4.9 SonetChannel Object

The SonetChannel object is a representation of a virtual channel in a SonetLink. Each channel is associated with a queue that stores the packets to be transmitted along that channel. A SonetLink object may have many SonetChannel objects, which share the same propagation delay and usually, but not always, have the same amount of bandwidth.

Since signals are transmitted in SonetChannel objects instead of the SonetLink object, transmission activity is handled at the SonetChannel level. When a packet arrives at a node, the forward table is consulted. If the packet needs to be forwarded through a particular channel, the status of that channel is checked. The packet is immediately transmitted if the channel is idle; otherwise, the packet is placed in the queue associated with the channel, waiting for transmission.

When a packet travels along a SonetChannel, there are three possible scenarios: the transmission time is shorter than, equal to, or longer than the propagation delay. To simulate all these scenarios, two additional NEEvent objects are created for each SonetChannel: transmitting and receiving. When a SonetChannel starts to transmit a packet, two events are scheduled:

1. A PDU event scheduled at the end of the propagation delay, when the first bit of the packet arrives at the downstream node.
When the PDU event occurs, if the channel is in the add-drop mode, it triggers the receiving NEEvent at the downstream end of the link. If the channel is in the cross-connect mode, it triggers the downstream link, which is connected to the downstream node of this link, to relay the transmission of the packet.

2. A transmitting NEEvent scheduled at the end of the transmission delay, when the last bit of the packet leaves the upstream node. When this NEEvent occurs, transmission of the next packet in the associated queue can be attempted. In the add-drop mode, a receiving NEEvent is scheduled at the time when the last bit of the packet is received, and the packet can then be delivered to the ForwardModule object resident on the downstream node.

With such a fine granularity of simulation, a link failure can be easily followed, and packets lost due to a failed link can be readily identified. However, two points should be noted. First, simulating a byte-interleaved SONET signal as independent virtual channels avoids the complicated details of SONET operations. This simplification is necessary to minimize computation cost while keeping the relative timing of the signals. Second, the concept of a SONET virtual channel models the processing of packets in different channels in parallel. Thus, two packets travelling along a link may be transmitted to or received from different SONET channels in the same link at the very same time. While this naive simplification may not reflect the real world, it is necessary to avoid further complicating the transmission algorithm and degrading the performance of the simulator.

3.4.10 NEMonitor Object

The NEMonitor object serves two purposes: simulating a monitored network element as in the real world, and tracing and recording simulation statistics. Some of the key network elements have a resident management module that monitors their operations for management purposes.
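The two-event scheme of Section 3.4.9 can be sketched with a simple time-ordered event queue. This is an illustrative reconstruction, not the thesis code: the scheduler, the Event interface and the logging are assumptions; only the event timings (first bit arrives after the propagation delay, last bit leaves after the transmission delay, full reception after their sum) come from the text.

```java
import java.util.PriorityQueue;

// Minimal sketch of the channel-event scheduling described above,
// driven by a time-ordered queue of scheduled events.
public class ChannelSim {
    public interface Event { void fire(double now); }

    private static class Scheduled implements Comparable<Scheduled> {
        final double time; final Event ev;
        Scheduled(double time, Event ev) { this.time = time; this.ev = ev; }
        public int compareTo(Scheduled o) { return Double.compare(time, o.time); }
    }

    private final PriorityQueue<Scheduled> queue = new PriorityQueue<>();
    public final StringBuilder log = new StringBuilder();

    public void schedule(double at, Event ev) { queue.add(new Scheduled(at, ev)); }

    // Start transmitting a packet at time `now`:
    //  - the first bit reaches the downstream node after propDelay,
    //  - the last bit leaves the upstream node after txDelay,
    //  - in add-drop mode the packet is fully received at propDelay + txDelay.
    public void transmit(double now, double propDelay, double txDelay) {
        schedule(now + propDelay, t -> log.append("firstBitArrives@" + t + ";"));
        schedule(now + txDelay,   t -> log.append("lastBitSent@" + t + ";"));
        schedule(now + propDelay + txDelay, t -> log.append("received@" + t + ";"));
    }

    // Drain the queue in time order, firing each event.
    public void run() {
        while (!queue.isEmpty()) {
            Scheduled s = queue.poll();
            s.ev.fire(s.time);
        }
    }
}
```

Because the three events are independent, the same code handles all three scenarios: when propagation is longer than transmission, the last bit leaves the upstream node before the first bit reaches the downstream one, and vice versa.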
These network management activities can be simulated with some extra effort. More importantly, an NEMonitor can be used to trace the performance of our model on the network traffic for a specific network element.

3.4.11 SonetRing Object

The ability of SONET to be deployed in a ring architecture is the defining feature of the SONET technology. Some of the nodes or links on a SONET ring may have different properties from others; however, the nodes and links in a SonetRing share many common properties. For example, the links between tandem nodes in a SONET ring have the same bandwidth capacity. The SonetRing object is a NetworkElement class object that mimics this unique network architecture.

In our hierarchical address scheme, the lower four bits are allocated for the node address on the same ring; thus, the maximum number of nodes on a SONET ring is 16. In the current implementation, a maximum of four DCSMatrix objects is allowed on a SonetNode, which limits the number of links on a SonetRing to four. This capacity covers the most common deployments of SONET networks in the real world; adding more DCSMatrix instances and associated sets of links is trivial. In the current implementation, neither line-switched protection nor path-switched protection is implemented.

3.5 Random Variables Package

A RandomVariables package is implemented in the simulator. RandomVariable is a general interface for generating random values. The random variable classes included in this package are:

• EmpiricalRandomVariable
• ExponentialRandomVariable
• HyperExponentialRandomVariable
• ParetoRandomVariable
• UniformRandomVariable

They are used in the TrafficGenerator class and its subclasses.

3.6 Graphic User Interface Objects

The Simulator class object serves both as a container for all components of the simulated objects and as a graphic user interface. As the container for the components, it maintains lists of nodes, links and rings.
As the graphic user interface, it provides a number of GUI components that let users of the simulator package edit the properties of the simulated network elements and traffic conditions. Other GUI object classes include:

• ForwardTableEditDialog
• SonetRingEditDialog
• SonetNodeEditDialog
• SonetLinkEditDialog
• FlowEditDialog

A Simulator object can run in either GUI or non-GUI mode.

3.7 Some Implementation Issues

Unlike the NS package, which is implemented in multiple languages, our simulator is implemented in a single language, Java. Java has a rich class hierarchy available, including the AWT package, which makes implementing a GUI easier than in a multiple-component package like NS. The GUI is particularly useful in the debugging process. Although Java provides high-level synchronization tools, the simulator is not implemented as a multi-threaded system. Multithreading does not help performance here; on the contrary, it might hurt performance because of the overhead associated with synchronization among threads, and it adds extra overhead to the design and implementation.

During the implementation and testing of the simulator, we encountered some problems pertaining to the implementation language, Java. We discuss two related problems, memory management and performance, and our approaches to them in the rest of this section.

3.7.1 Memory Management

Since the exact number of objects is unpredictable before a simulation, a large number of objects have to be dynamically allocated during the simulation. Java runs a garbage collector thread to recover the memory that is no longer referenced by the main thread. The garbage collector runs at a lower priority level. In IO-intensive application programs, this releases the programmer from the burden of memory allocation and deallocation and avoids memory leaks.
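A common remedy in this situation, and the one adopted later in this section, is to pool and recycle instances rather than rely on the collector. A hypothetical sketch, with a generic Pool class that is our own invention rather than the thesis code:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.function.Supplier;

// Sketch of a per-class object pool: released instances are handed
// out again instead of being left for a garbage collector that may
// be starved during a long computation-intensive run.
public class Pool<T> {
    private final Deque<T> free = new ArrayDeque<>();
    private final Supplier<T> factory;
    private int allocated = 0;

    public Pool(Supplier<T> factory) { this.factory = factory; }

    // Reuse a recycled instance if one is available; allocate otherwise.
    public T acquire() {
        T obj = free.poll();
        if (obj == null) { obj = factory.get(); allocated++; }
        return obj;
    }

    // Return an instance to the pool instead of dropping the reference.
    public void release(T obj) { free.push(obj); }

    public int totalAllocated() { return allocated; }
}
```

A caller that acquires and releases PDU-like objects through such a pool performs a bounded number of real allocations regardless of how many packets the simulation processes.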
In a computation-intensive program like this simulator, the garbage collector thread may never get a chance to run, so the programmer has to decide when to call the garbage collector before the program runs into an "out of memory" problem. In addition, running the garbage-collector thread adds performance overhead. Our solution is simply to take over the responsibility for memory management from the garbage-collector thread. For any class whose instances need to be dynamically created, a pool of instances is created to recycle used instances. This strategy not only solves the "out of memory" problem but also enhances performance.

3.7.2 Performance Issues

When a program is implemented in Java, one of the biggest concerns is its performance. We have attempted the following strategies to improve the performance.

• Using the Java JIT (Just-In-Time) compiler. The JIT compiles Java code into native machine instructions at run time. It has been postulated that, through dynamic recompilation at run time, Java performance can exceed that of C++. We compared the performance of the simulator with and without the JIT using JDK 1.2 beta4. The results show, however, that the JIT does not give any significant improvement in the performance of the simulator.

• Writing out logged data in batches. Many statistics need to be collected during a simulation. For example, the latency of each arriving PDU needs to be logged when statistics on the network latency of a traffic flow are collected. The latency values are not written out to disk each time a PDU object arrives at its destination node; rather, they are stored in memory and written out after a predefined time interval.

• Instantiating objects in batches.
If a large number of objects of a class are expected to be used during a simulation, a batch of objects is allocated even when only one object is needed at the time of allocation.

• Recycling used objects. Pools of used objects are created for the classes of dynamically allocated objects. A used object is reused when the simulator needs a new object of the same class. Thus, the overhead due to memory allocation is reduced.

It must be noted that network simulation is computationally intensive by nature. Simulating a minute of network activity can take tens of minutes, hours, or even days, depending on the bandwidth and complexity of the simulated network and traffic patterns.

3.8 Summary

When designing a simulator, one must evaluate the trade-offs among simulator speed, degree of correctness and accuracy of results, and ease of implementation. Because currently available network simulators have little overlap with the simulation of SONET-based transport networks, we chose to design
30  Chapter 4  Static Bandwidth Allocation Schemes On SONET Rings As described in Chapter 1, the simplest routing scheme on a SONET-ring -based transport network is perhaps the hop-by-hop routing (HHR) approach. It has high available bandwidth but suffers from relatively long network latency because each packet has to be stopped, processed, possibly queued and retransmitted at each node. The virtual topology routing (VTR) approach may reduce network latency for the packets that travels multiple links but result in low available bandwidth due to bandwidth fragmentation. An ideal meshed VTR network with adequate bandwidth for all virtual links is not easily constructed on a SONETringdue to limited granularity of manageable bandwidth. Furthermore, such a network suffers from high degree of bandwidth fragmentation and associated management overhead. Between this ideal VTR scheme and the pure HHR scheme is a spectrum of schemes, reflecting the trade-off between latency and bandwidth utilization. Under such routing schemes, the network may gain benefits of both VTR and HHR strategies to some extent. In this chapter, three static routing schemes are evaluated with different levels of traffic loads on the SONET-based transport network simulator. 4.1  Simulations  Network simulation involves intensive computation. With limited computation power, some simplifications are necessary in order to reduce the computation time. However, the simplifications should not lead to discrepancy in general observations from which the conclusions are drawn. In this section, our simplified simulations are described, together with the rationale behind the choice of the simulation conditions. 4.1.1 C o n f i g u r a t i o n o f t h e S i m u l a t e d N e t w o r k  The simulated transport network is a 16-node transport network based on an OC-1 bidirectional SONET ring with line rate of 51.84 Mbps in both directions. 
In each direction, 28 DS-1 channels with a raw channel bandwidth of 1.544 Mbps are simulated. The OAM&P (Operations, Administration, Maintenance and Provisioning) activities at the SONET physical layer are not simulated in this study, because they do not provide additional information to the theme of this study. It is recognized that an OC-1 ring has a small capacity compared to the currently available SONET technology; however, simulating networks with higher capacity requires more computation power. The purpose of our study is to examine the effects of different routing schemes on the performance of a network, and with proportionally reduced traffic loads, OC-1 capacity is a reasonable simplification within the context of SONET technology. The neighboring nodes on the simulated SONET ring are 100 kilometers apart.

On each node, a drop-tail queue with a limit of at most 20 packets is allocated for each channel. When the number of packets in a drop-tail queue reaches the limit, the next arriving packet is simply dropped. The operation of a RED queue is more complicated [FJ93]. For each arriving packet, an average queue size is calculated. The packet is queued when the average queue size is lower than a minimum threshold, and dropped if the average queue size is higher than a maximum threshold. If the average queue size is between the minimum and maximum thresholds, the packet is dropped with a certain probability, which is a function of the average queue size. Thus, the operation of a RED queue is more computationally intensive than that of a drop-tail queue. With cooperative sources, RED queues are effective in congestion avoidance and in avoiding global synchronization of bursty traffic. In our simulation, aggregate traffic is generated according to predefined traffic patterns (see Section 4.1.3). It does not respond to packet dropping, and therefore packet drops have no impact on the simulated traffic patterns.
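The RED drop test just described can be sketched as below. This is a simplified illustration of the algorithm in [FJ93], not the simulator's REDQueue: the full algorithm also scales the drop probability by the count of packets accepted since the last drop, a refinement omitted here, and all parameter names are ours.

```java
import java.util.Random;

// Simplified sketch of the RED drop decision: an exponentially
// weighted average queue size is compared against two thresholds,
// with a linearly interpolated drop probability in between.
public class RedQueue {
    private final double minTh, maxTh, maxP, weight;
    private double avg = 0.0;   // running average queue size
    private final Random rng;

    public RedQueue(double minTh, double maxTh, double maxP,
                    double weight, long seed) {
        this.minTh = minTh; this.maxTh = maxTh;
        this.maxP = maxP; this.weight = weight;
        this.rng = new Random(seed);
    }

    // Returns true if the arriving packet should be dropped.
    public boolean shouldDrop(int instantaneousQueueLen) {
        avg = (1 - weight) * avg + weight * instantaneousQueueLen;
        if (avg < minTh) return false;   // below min threshold: always queue
        if (avg >= maxTh) return true;   // above max threshold: always drop
        double p = maxP * (avg - minTh) / (maxTh - minTh);
        return rng.nextDouble() < p;     // probabilistic drop in between
    }

    public double averageQueueSize() { return avg; }
}
```

The averaging weight controls how quickly the drop decision reacts to bursts; with a small weight, short bursts pass through while sustained congestion raises the average and the drop probability.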
Because the generated traffic does not respond to packet dropping, a RED queue may result in a slightly higher packet drop rate than a drop-tail queue. However, the difference in packet drop rate is not expected to have a significant impact on the evaluation of the network performance of the different routing schemes.

4.1.2 Bandwidth Distribution of the Routing Schemes

The bandwidth distribution for clockwise transmission, depicted in Figure 4-1, is the same as for counter-clockwise transmission. The nodes are numbered in clockwise order. Physical links are also numbered sequentially clockwise; the physical link from node 0 to node 1 is numbered link 0. Each line in the diagram represents a direct link, and the thickness of a line reflects the relative bandwidth allocated for that link. Bandwidth is distributed symmetrically for the links around the SONET ring. The minimum managed bandwidth is the DS-1 channel capacity (1.544 Mbps).

[Figure 4-1: Bandwidth Distribution (Clockwise) of the Three Static Routing Schemes. Legend for the Virtual Topology Routing Scheme 2 (VTRS2) panel:

  DS-1 channels per Virtual Link | Virtual Links per Physical Link | Virtual Link Length (hops)
  4                              | 1                               | 1
  4                              | 2                               | 2
  4                              | 4                               | 4  ]

For the first routing scheme, HHRS, all 28 channels in each direction are allocated to the HHR region, carrying traffic from one node to its tandem nodes. For the other two routing schemes, VTRS1 and VTRS2, 24 of the 28 channels in each direction on a physical link are allocated to the VTR region, while the remaining 4 channels are allocated to the HHR region. For VTRS1, all virtual links in the VTR region span 2 physical links, and each virtual link has 12 channels. For VTRS2, virtual links span 2 and 4 physical links, and all point-to-point links are allocated 4 channels.

In all routing schemes, an extremely simple routing algorithm is used. When a packet arrives at a node, the header of the packet is examined.
If the packet is destined for this node, it is delivered; otherwise, it is dispatched, according to a simple forward table, to the channels allocated to a particular virtual link for transmission. If the traffic load exceeds the virtual link bandwidth, the packet is queued. Each link may have more than one channel, and each channel is associated with a drop-tail queue. Packets are uniformly distributed over the queues associated with the channels of a virtual link, so that the difference in queue lengths is at most one packet. When a packet arrives at a virtual link of multiple channels, it is dispatched to the next idle channel. If all channels are busy, it is dispatched to the queue associated with the first channel, and the next arriving packet is dispatched to the queue associated with the next channel. All channels operate in parallel using the same clock and all packets have the same length, so for a particular virtual link the packets are effectively queued and transmitted on a first-come-first-serve basis. For a packet with multiple paths in the same direction, load balancing is applied only if the queue of its preferred path overflows.

On any source or intermediate node, a packet can be forwarded clockwise or counter-clockwise to reach its destination node. Circular routing occurs when a packet is forwarded clockwise to the next node, where it is in turn forwarded counter-clockwise back to the original node. To avoid this problem, the forward tables are constructed such that each node forwards a packet along the direction with the shorter distance from the node to the packet's destination node; the possibility of circular routing is thus eliminated. Bidirectional load balancing is applied only to packets of cross-ring traffic, whose source-destination distance is eight physical links on the simulated 16-node SONET ring. For the cross-ring traffic, the weight functions of the clockwise path and the counter-clockwise path are the same.
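The direction choice that rules out circular routing can be sketched as follows. The class and method names are illustrative assumptions; only the rule itself (forward along the shorter ring direction, with clockwise arbitrarily preferred on a tie) comes from the text.

```java
// Sketch of shorter-direction forwarding on a ring: each node picks
// whichever ring direction gives the fewer hops to the destination.
public class RingRouting {
    public static final int CLOCKWISE = 0;
    public static final int COUNTER_CLOCKWISE = 1;

    // Hop count going clockwise from `from` to `to` on a ring of
    // `ringSize` nodes (normalized to the range 0..ringSize-1).
    public static int clockwiseHops(int from, int to, int ringSize) {
        return ((to - from) % ringSize + ringSize) % ringSize;
    }

    // Prefer the shorter direction. On the 16-node ring a tie occurs
    // only for cross-ring traffic (8 hops each way), where clockwise
    // is arbitrarily preferred, matching the text.
    public static int shorterDirection(int from, int to, int ringSize) {
        int cw = clockwiseHops(from, to, ringSize);
        return (cw <= ringSize - cw) ? CLOCKWISE : COUNTER_CLOCKWISE;
    }
}
```

Because every node applies the same monotone rule, a packet's remaining distance strictly decreases at each hop, which is exactly why circular routing cannot occur.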
When the clockwise path is congested, a new packet is dispatched to the counter-clockwise path for transmission. It must be noted that packet reordering is inevitable when load balancing is applied. (For simplicity, the details of multiplexing and demultiplexing at the SONET layer are not simulated; parallel transmission of packets over the multiple channels of a virtual link effectively simulates this functionality.)

4.1.3 Generation of Traffic Patterns

Generating traffic patterns that truly reflect the characteristics of real Internet traffic is not a trivial task [Paxson98]. Because of the diverse, complex and dynamic nature of the Internet, it is difficult to measure and characterize Internet traffic. It is even more difficult to model the traffic because of the different characteristics and changing popularity of Internet applications in such a dynamic environment. Early studies used the Poisson model, which has been used successfully in telephony networks, for modeling Internet traffic. Recent studies found that Internet traffic is structured and cannot be accurately described by the Poisson model [LTWW93]; instead, the Pareto distribution better captures the self-similar nature of Internet traffic [Paxson98].

For convenience of discussion, the term traffic flow, or flow for short, is used to describe the packets from the same source node to the same destination node. A traffic flow is characterized by its source/destination pair, its transmission rate (Mbps), its arrival rate and its type (Exponential, Pareto or CBR), together with the parameters describing the type. Traffic flows are used as the building blocks of a traffic pattern.

The traffic patterns used in our simulation are simplified. First, we assume traffic flows exist for every possible combination of source and destination nodes. Thus, each traffic pattern is composed of 240 traffic flows (n × (n − 1), where n = 16 nodes).
Second, although the Pareto model is currently thought to better capture the characteristics of Internet traffic [Paxson98], other types of flows are also possible. For example, reserved bandwidth usage may be modeled with CBR (constant bit rate), and a net-phone application may be modeled with the Poisson model; we therefore do not exclude the CBR and Exponential types from our choice. Finally, the transmission rates of the flows are controlled in a range so that the overall ratio of traffic load to bandwidth capacity is at the desired level. The traffic patterns are generated at three different overall traffic load levels.

Each flow is assigned a flow number in ascending order, first according to the source node and then according to the destination node, clockwise. Thus, the first 15 flows, originating from node 0 and destined to nodes 1, 2, ..., 15, are numbered 0 to 14, respectively; flow 0 travels from node 0 to node 1. The traffic type and volume of the flows are generated randomly. The proportions of the numbers of traffic flows of the three types are controlled in an arbitrary ratio (Pareto : CBR : Exponential = 2 : 1 : 1). The burst transmission rates of the traffic flows of each type are generated according to a uniform distribution in the range between the minimum and maximum transmission rates. The compositions of the three traffic patterns are tabulated in Table 4-1.

Table 4-1 Compositions of the Three Traffic Patterns

              CBR                          Pareto                       Exponential
  Traffic     number    Rate (Mbps)        number    Rate (Mbps)        number    Rate (Mbps)
  Pattern     of flows  min.     max.      of flows  min.     max.      of flows  min.     max.
  1           70        0.220    1.194     55        0.211    1.198     115       0.237    1.181
  2           71        0.406    2.308     63        0.427    2.360     106       0.447    2.386
  3           63        0.610    5.002     50        0.505    4.985     127       0.665    5.000

4.1.4 Measurement Metrics

The performance of the network is measured with respect to three metrics: (1) link utilization, (2) network latency and its variation, and (3) packet drop rate.
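Before turning to the metrics, the flow-numbering and type-ratio conventions of Section 4.1.3 can be sketched as below. All class and method names are our own, and the generator is illustrative only: it reproduces the enumeration (n × (n − 1) flows, ordered by source and then destination clockwise), the 2:1:1 type ratio, and the uniform rate range, not the thesis code.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// Illustrative traffic-pattern generator following the conventions
// described in Section 4.1.3.
public class TrafficPattern {
    public enum Type { PARETO, CBR, EXPONENTIAL }

    public static class Flow {
        public final int src, dst;
        public final Type type;
        public final double rateMbps;
        Flow(int src, int dst, Type type, double rateMbps) {
            this.src = src; this.dst = dst;
            this.type = type; this.rateMbps = rateMbps;
        }
    }

    public static List<Flow> generate(int nodes, double minRate,
                                      double maxRate, long seed) {
        Random rng = new Random(seed);
        List<Flow> flows = new ArrayList<>();
        for (int src = 0; src < nodes; src++) {
            // Destinations enumerated clockwise from the source,
            // skipping dst == src, giving n * (n - 1) flows in total.
            for (int off = 1; off < nodes; off++) {
                int dst = (src + off) % nodes;
                int r = rng.nextInt(4);  // Pareto : CBR : Exponential = 2:1:1
                Type t = (r < 2) ? Type.PARETO
                                 : (r == 2 ? Type.CBR : Type.EXPONENTIAL);
                double rate = minRate + (maxRate - minRate) * rng.nextDouble();
                flows.add(new Flow(src, dst, t, rate));
            }
        }
        return flows;
    }
}
```

With 16 nodes this yields exactly the 240 flows of the text, with flow 0 running from node 0 to node 1.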
Link utilization is defined as the ratio of the bandwidth used for transporting the traffic load to the overall bandwidth capacity of the link. A link may be composed of one or more channels. In the simulation, the idle period of each channel is recorded. If a link is composed of k channels, the link utilization U is calculated according to the following equation:

    U = (k·Tm − Σ(i=1..k) Tidle,i) / (k·Tm)                (Eq. 4-1)

where Tm is the measuring time and Tidle,i is the recorded idle time of the i-th channel of the link during the measurement time.

There is a slight difference between network latency and network delay. Network delay is the measured time interval from the first bit in to the first bit out of the network. Network latency is the interval between the first bit in and the last bit out of the network (ITU definition; see [Goralski97]). In this study, the definition of network latency is used for the measurement of performance. It is the sum of the propagation delay and the nodal processing delay. Propagation delay is a consequence of the physical fact that electricity, light, or any other form of electromagnetic signal cannot travel faster than the speed of light. Nodal processing delay depends on two factors: the overall speed of the node (moving bits from an input port to an output port) and the load on the node. The more traffic there is, the slower the node will operate.

In this thesis, both minimum and average network latency are measured statistically for each flow. The network latency is measured in microseconds. The deviation from the average network latency, Dave, is an indicator of jitter in the network latency. It is calculated according to the following equation:

    Dave = sqrt( (1/(n−1)) · Σ(i=1..n) ((Li − Lave)/Lave)² )        (Eq. 4-2)

where Li is the network latency of each sample, Lave is the arithmetic average network latency of a particular traffic flow, and n is the total number of samples collected for the traffic flow.

Packet-dropping rates are computed for each node and for each flow. On each node, there are counters for received and dropped packets. The drop rate on a node is calculated as the ratio of the number of dropped packets to the number of received packets.

4.2 Results and Discussion

Three traffic patterns with different load levels are simulated on networks under the three routing schemes. Despite the optimizations made for the performance of the simulator (see Section 3.7) and the simplifications made for the simulation (see Section 4.1), each trial of the simulations required several tens of hours to a week, depending on the traffic load condition. For each trial, 15-180 MB of logged statistical data are processed. The results of the simulations are presented and discussed in the rest of this section.
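The two measurement formulas of Section 4.1.4 can be sketched as below. The class and method names are assumptions; the utilization arithmetic follows Eq. 4-1 directly, while the deviation formula is our reading of the partially garbled Eq. 4-2 (a relative standard deviation of the per-sample latencies) and should be treated as such.

```java
// Sketch of the two measurement metrics: link utilization (Eq. 4-1)
// and the relative deviation from average latency used as the
// jitter indicator (Eq. 4-2).
public class Metrics {
    // Eq. 4-1: utilization of a k-channel link over measuring time tm,
    // given the recorded idle time of each channel.
    public static double linkUtilization(double tm, double[] idleTimes) {
        double idleSum = 0.0;
        for (double idle : idleTimes) idleSum += idle;
        int k = idleTimes.length;
        return (k * tm - idleSum) / (k * tm);
    }

    // Eq. 4-2 (as reconstructed): sample standard deviation of the
    // latencies, normalized by the average latency.
    public static double latencyDeviation(double[] latencies) {
        int n = latencies.length;
        double ave = 0.0;
        for (double l : latencies) ave += l;
        ave /= n;
        double sum = 0.0;
        for (double l : latencies) {
            double d = (l - ave) / ave;
            sum += d * d;
        }
        return Math.sqrt(sum / (n - 1));
    }
}
```

For example, a two-channel link measured over 10 time units with 5 units of idle time per channel is 50% utilized, and a flow whose samples are all identical has zero latency deviation.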
4.2.1 Performance in the Low Traffic Load Condition

The overall bandwidth utilization of the physical links for the entire simulation period ranges from 15% to 35% under the three routing schemes (Figure 4-2). In such a light traffic condition, the volumes of the traffic flows are well below the capacity of the network most of the time, even at their bursts. The logged statistical data indicate that no packets were queued under HHRS during the entire simulation, and only a few packets of a limited number of flows were queued under VTRS1 and VTRS2. It is thus not surprising that no packet dropping occurred under any of the three routing schemes.

[Figure 4-2: Distribution of Physical Link Utilization in Low Traffic Load Condition. Panels: (a) Overall Physical Link Utilization for HHRS; (b) Overall Physical Link Utilization for VTRS1; (c) Overall Physical Link Utilization for VTRS2. Each panel plots clockwise and counter-clockwise utilization against link number (0-15).]

As shown in Figure 4-2, while the distribution patterns of the overall bandwidth utilization for the two VTR schemes are similar, the distribution pattern for HHRS is remarkably different from those of the VTR schemes. The similarity in the distribution patterns for the VTR schemes is easy to understand. The overall link utilization of a physical link is calculated as the ratio of the bandwidth used for transporting the packets to the bandwidth capacity; it is determined by the total number of packets transmitted along the link. Thus, the distribution of link utilizations is mainly determined by the traffic pattern.
Because no packet dropping is observed for any of the three routing schemes, the distribution patterns of overall link utilization should be similar for the same traffic pattern under all routing schemes, which is the case for the VTR schemes. The difference in the distribution patterns between HHRS and the VTR schemes may be attributed to the bidirectional load-balance mechanism, which applies only to the cross-ring traffic flows. For cross-ring traffic flows, the clockwise path is arbitrarily designated the preferred path over the counter-clockwise path, although both paths have the same weight function. For HHRS, all the bandwidth of a physical link (28 DS-1 channels in each direction) is used for transporting the packets of the various traffic flows. In the low overall traffic load condition, the preferred clockwise path almost always has bandwidth available for forwarding arriving packets; effectively, the bidirectional load-balance mechanism is never activated. Consequently, the clockwise link utilization is higher than the counter-clockwise link utilization. For the VTR schemes, only 12 DS-1 channels in VTRS1 and 4 DS-1 channels in VTRS2 are allocated in each direction as preferred paths for transporting cross-ring traffic flows (see Section 4.1.2 and Figure 4-1). The bidirectional load-balance mechanism is readily activated because of the bandwidth fragmentation phenomenon. Therefore, the effects on link utilization of the arbitration policy favoring the clockwise path over the counter-clockwise path become less significant than under HHRS.

The distributions of the flows' minimum network latency, average network latency and variation in average network latency are illustrated in Figure 4-3 (a), (b) and (c), respectively. For all three routing schemes, the distribution patterns of the flows' minimum and average network latency are repeated every 15 flows.
This repeated regular pattern is closely related to the way in which the traffic flows are numbered (see Section 4.1.3). In the traffic pattern, each node has 15 flows destined to the other 15 nodes, numbered sequentially clockwise.

[Figure 4-3: Statistics on Flow Network Latency in Low Traffic Load Condition. Panels: (a) Minimum Network Latency; (b) Average Network Latency; (c) Deviations from Average Latency; each plotted against flow number (0-240) for HHRS, VTRS1 and VTRS2.]

(Footnote: If the path for a cross-ring traffic flow were chosen at random between the clockwise and counter-clockwise paths, the effect on the margin between the clockwise and counter-clockwise link utilization in HHRS would probably be eliminated.)

Except for the cross-ring flows, all flows are forwarded only along their shorter path, either clockwise or counter-clockwise, on the simulated bidirectional SONET ring. The path lengths of the flows change regularly as the flow number changes. As an example, Figure 4-4 (a) and (b) shows the correspondence between the distribution pattern of the minimum network latencies and the flow-numbering pattern for the first 15 flows, which originate from node 0. Measured by the number of physical links, the path length increases with the flow number for the first seven flows (flows 0-6), which travel clockwise. The path length is greatest for the cross-ring traffic (flow 7). For the other seven flows (flows 8-14), the path lengths decrease as the flow number increases.
Before the differences in network latency among the three routing schemes are discussed, it is helpful to decompose the network latency into fixed and variable components. Network latency is composed of nodal processing delay and propagation delay. The propagation delay is a fixed value for a particular traffic flow once its route is determined. Nodal processing delay includes the time for a packet to be processed, queued and retransmitted (or delivered) by nodes. Queuing delay is highly variable; it depends on the volume and burstiness of the traffic flows. Packet processing time, including any necessary CRC computation, header examination and forwarding-table lookup, is almost constant and relatively insignificant compared to the other components of the network latency. Transmission delay for a packet at a single node is also a fixed value because the packet length is fixed in the simulation. For a flow that travels through multiple nodes, however, the overall processing and retransmission delay varies: it is determined by the number of nodes at which the packets are stopped, processed and retransmitted for routing purposes. The overall processing and retransmission delay for a flow is shorter in the VTR schemes than in the HHR scheme. Therefore, among the components, the overall processing and retransmission delay and the queuing delay are the main contributors to the difference in network latency between the HHR and VTR schemes.

In the low traffic load condition, queuing is not observed during the entire simulation for the HHRS and during most of the simulation period for VTRS1 and VTRS2. Queuing delay is therefore not expected to affect the minimum network latency of any flow; the variation in minimum network latency is mainly a result of the overall retransmission delay and propagation delay. In the VTR schemes, network latencies are significantly reduced for the flows that span two or more physical links, because those flows may cut through the intermediate nodes without being routed and retransmitted.
As shown in Figure 4-4 (a), under the HHRS the minimum network latency of the first 15 flows originating from node 0 increases linearly from flow 0 to flow 7 and decreases linearly from flow 7 to flow 14. In VTRS1, direct links spanning two physical links are established. Thus, packets of flow 1 are transmitted directly from node 0 to node 2 clockwise without being routed and retransmitted at node 1. The difference in network latency between flows 0 and 1 results from the difference in propagation delay, which is relatively small compared to the other components of network latency in the simulation. A similar analysis applies to the pairs of flows 2 and 3, flows 4 and 5, and flows 6 and 7, as well as to the traffic flows traveling counter-clockwise. In VTRS2, direct links spanning two and four physical links are established. Packets of flows 1 and 3 can be transmitted directly from node 0 to nodes 2 and 4 clockwise, respectively, without being routed and retransmitted at nodes 1, 2 and 3. The differences in minimum network latency among flows 0, 1 and 3 result from the differences in their propagation delays. Without a direct link from node 0 to node 3, packets of flow 2 have to be routed and retransmitted at node 2 or node 1. The extra retransmission delay results in a significant increase in the network latency.

[Figure 4-4: Distribution Pattern of Minimum Network Latency and Path Lengths of Traffic Flows 0-14. Panel (b) plots the minimum network latency of flows 0-14 for HHRS, VTRS1 and VTRS2. Panel (c) tabulates the number of intermediate nodes at which packets are processed:

    Flow Number                 Number of Intermediate Nodes
    clockwise  counter-cw       HHRS    VTRS1    VTRS2
        0          14             0       0        0
        1          13             1       0        0
        2          12             2       1        1
        3          11             3       1        0
        4          10             4       2        1
        5           9             5       2        1
        6           8             6       3        2
        7           7             7       3        1        ]
Thus, the minimum network latency is higher for flow 2 than for flows 0, 1 and 3. Similar phenomena are observed for flows 6, 8 and 12.

Average network latency is perhaps a better metric than minimum network latency for measuring the overall performance of the routing schemes. As seen from Figure 4-3(b), the average network latencies are the same as the corresponding minimum network latencies for all flows in the HHR scheme. This is also true in VTRS1 with the exception of only a few flows; for all of these exceptional flows, packet queuing at nodes is observed. In VTRS2, however, the average network latency is higher than the corresponding minimum latency for almost all flows. A similar trend can be observed in the standard deviations from the average network latency illustrated in Figure 4-3(c). There is no deviation from the average latency for any flow in the case of HHR, and only insignificant deviations for some flows in the case of VTRS1, but significant deviations in the case of VTRS2. The deviation of average network latency from the minimum latency, and the trend of increasing standard deviations from HHRS to VTRS1 to VTRS2, are mainly due to the queuing delay incurred when link bandwidth is overwhelmed by bursts of traffic. Because of the fragmentation of bandwidth among different virtual links in the VTR schemes, more so in VTRS2 than in VTRS1, packets are queued whenever the bursty traffic load exceeds the bandwidth allocated to a particular virtual link, even though the overall utilization remains at a low level (15% ~ 35%). The network latency in the VTR schemes is significantly reduced from that in the HHR scheme because of the reduced number of packet processing and retransmission delays. However, the fragmentation of bandwidth among different virtual links counteracts this reduction by adding queuing delay. Queuing delay is highly variable and is likely to cause latency jitter.
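The fixed latency components above can be checked with a short calculation. This is an illustrative Python sketch, not the simulator's code; it uses the parameters quoted in Section 4.2.3 (1,000-byte packets on 1.544 Mbps DS-1 channels, 100,000 m per physical link, and a signal speed of about two thirds of the speed of light), and the function name is invented:

```python
# Simulation parameters quoted in Chapter 4 of the thesis:
PACKET_BITS = 1000 * 8    # 1,000-byte packets
DS1_RATE = 1.544e6        # DS-1 channel rate, bit/s
LINK_METRES = 100_000     # length of one physical link
FIBRE_SPEED = 2e8         # ~2/3 of c in optical fibre, m/s

def min_latency_us(n_links, n_stops):
    """Queue-free lower bound on network latency (in microseconds) for a flow
    crossing n_links physical links and processed/retransmitted at n_stops
    intermediate nodes, plus the one transmission at the source node."""
    transmission = (n_stops + 1) * PACKET_BITS / DS1_RATE * 1e6
    propagation = n_links * LINK_METRES / FIBRE_SPEED * 1e6
    return transmission + propagation

# A two-link flow: HHRS stops once at the intermediate node, while a VTR
# scheme with a two-link virtual link cuts straight through it.
hhr_2link = min_latency_us(2, 1)
vtr_2link = min_latency_us(2, 0)
```

Each avoided stop saves one packet transmission time (about 5,181 µs here), which dwarfs the roughly 500 µs of propagation per link; this is why the VTR curves sit well below the HHRS curve in Figure 4-4 for multi-link flows, and why the extra propagation of a longer virtual link barely registers.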
4.2.2 Performance in the Medium Traffic Load Condition

In the medium traffic load condition, the overall physical link utilization ranges from 35% to 60% of the link capacity (see Figure 4-5). Similar to the simulation result in the low traffic load condition, the distribution pattern of overall link utilization for the HHRS is significantly different from those for the VTR schemes at the medium level of overall traffic load. As discussed in Section 4.2.1, this is attributed to the bidirectional load-balancing policy applied to the cross-ring traffic flows. The bidirectional load-balance mechanism is activated in the VTR schemes more readily than in the HHRS because of the bandwidth fragmentation phenomenon in the VTR schemes.

The bandwidth fragmentation in the VTR schemes also results in packet loss. Figure 4-6 shows the packet drop rate at each SONET node. No packet dropping is observed at any node for the HHRS, and some packet dropping (< 2%) is observed at only two of the nodes for VTRS1. For VTRS2, however, packet dropping is observed at all nodes except two, with packet loss rates ranging from 0.1% to 14%.

[Figure 4-6: Drop Rate per Node in the Medium Traffic Load Condition]

The distributions of minimum and average network latencies are shown in Figure 4-7(a) and (b); note the difference in the scale of the vertical axes between the two diagrams. While the minimum network latencies are close to those in the low traffic load condition, the average network latencies deviate dramatically from their corresponding minimum network latencies, especially in the case of VTRS2.
In the case of the HHRS and VTRS1, the average network latencies are comparable in magnitude to their corresponding minimum network latencies. However, the average network latencies in VTRS2 are much higher than the minimum network latencies for many flows.

[Figure 4-5: Distribution of Physical Link Utilization in the Medium Traffic Load Condition. Panels (a), (b) and (c) plot the clockwise and counter-clockwise utilization of links 0-15 for the HHRS, VTRS1 and VTRS2, respectively.]

The exceptionally high average network latencies for some of the flows in VTRS2 are mainly due to the high degree of bandwidth fragmentation among the virtual links. As the traffic load increases, more packets are likely to be queued at source and intermediate nodes when the bursty traffic load momentarily exceeds the available bandwidth, thereby introducing queuing delays. Queuing delay becomes the dominant component of the network latency; this holds for all three routing schemes. However, the high degree of bandwidth fragmentation in VTRS2 further reduces the bandwidth available to individual flows and increases the probability of queuing. The dramatic increase in average network latency from the HHRS and VTRS1 to VTRS2 is consistent with the increased packet drop rates in each case. Therefore, while the virtual topology routing strategy can potentially reduce the number of packet processing and retransmission delays, it can also increase queuing delay. Note that this is an observation on the average of all flows; any individual flow may still have lower latency even when the links are all congested.
4.2.3 Performance in the Heavy Traffic Load Condition

In the heavy traffic load condition, the overall link utilization for each link ranges from 60% to 100% of the link capacity (see Figure 4-8). At such a high level of traffic load, the network is easily congested by the bursty traffic under the static routing schemes. Obviously, when the overall link utilization reaches 100%, the link is congested during the entire simulation period.

When the network is overloaded with bursty traffic, packet drops are inevitable in all three routing schemes, including the HHRS. The drop rate is closely related to the link bandwidth utilization. From the HHRS to VTRS1 to VTRS2, the bandwidth utilization decreases and the packet drop rate increases as bandwidth fragmentation increases. Another factor contributing to the drop rate is the unused queues associated with channels in the VTR schemes. In the simulation, when a channel is cross-connected at a node, the queue associated with the channel is not used for other traffic flows, so the total amount of buffering available to the bursty traffic flows is reduced. If those queues were allocated to other traffic flows, the drop rate at the node would be expected to decrease while the queuing delay would further increase for the VTR schemes.¹

When the network bandwidth is overwhelmed by the heavy traffic load, more packets are queued
1  0.8  2  0.7S  ez  0.7 0.65 0.6 0.55  9 Link  1 0  11  1 2  1 3  1 4  1 5  N u m b e r  (b) Overall Physical Link Utilization for VTRS1 -1  r-  0.95 0.9  c l o c k w i s e  O.B5 |  c o u n t e r — c l o c k w i s e  0.8  V  '• 0.75  o.r  L  0.65 0.6 O.S5  7 Link  8 N u m b e r  9  1 0  _1  11  I  1 2  1 3  1 4  1 5  (c) Overall Physical Link Utilization for VTRS2 0.95  -  0.9  -  . O.SS  -  c l o c k w i s e  i  -  c o u n t e r — c l o c k w i s e ^  0.8  | 0 . 7 5 1  s  -  or0.65  -  0.6  -  0.55  2  3  4  5  Figure 4-8 Distribution of Physical Link Utilization in Medium Traffic Load Condition.  1.  It is assumed that each node has enough computation power to process traffic load within the capacity of the  SONET ring.  47  (b) Average Network Latency x 10  45  60  75  90  105  120  135  F l o w Number  1 50  1 65  180  210  225  240  210  225  240  (c) Deviation from Average Network Latency  30  Legend:  45  60  75  HHRS  Figure 4-7  90  105  120  135  F l o w Number  150  VTRSl  165  180  195  VTRS2  Statistics on Network Latency in Medium Traffic Load Condition 48  0.4 0.35  0.3 0.25  13  cr  _  0 2  <=>  0.15  o  0.1 0.05 0  Figure  4-9 Drop Rate per Node in Heavy Traffic Load Condition  delay is only 4,000 ms (8*100,000m/20,000,000m/s) . The longest path in the ring is 8 links apart. Overall 1  transmission delay ranges from 5182 \is to 41,450 p.s. Therefore, the overall network latency depends mainly on the variation of queuing delay and overall transmission delay. The minimum network latencies achieved in the low traffic load condition are no longer achievable for manyflowsin this overloaded traffic condition because of the queueing delay. The increased portion in minimum network latency is proportional to the number of nodes on which its packets are processed. For average network latency, a more reasonable measure for network performance, the situation is more complicated. For some flows, the average network latency in VTR is better than in HHR. 
For the others, the trend is reversed. In general, the differences in average latencies among the three routing schemes are not as dramatic as those in the medium traffic load condition. However, it is difficult to conclude that one routing scheme is better than another in terms of network latency.

[Figure 4-10: Statistics on Flow Network Latency in the Heavy Traffic Load Condition. Panel (b) Average Network Latency plotted against flow number for HHRS, VTRS1 and VTRS2.]

4.3 Summary

In this chapter, the performance of three different static routing schemes was examined. In the HHR scheme, a packet is processed and forwarded at every intermediate node until it reaches the destination node; the physical link between any two neighboring nodes is used as a big bit pipe for transporting packets between the two nodes. In the VTR schemes, virtual links are established among some nodes across the SONET ring. The bandwidth is statically allocated to the virtual links. The packets of a flow are not processed and retransmitted at intermediate nodes but cut directly through those nodes, utilizing the Digital Cross-Connect functionality of the SONET technology.

Different levels of traffic load were simulated under the three static routing schemes. In the low traffic load condition, the amount of traffic introduced by the bursty flows is below or comparable to the allocated bandwidth in all three schemes. The two VTR schemes have an advantage over the HHR scheme in that they achieve significantly lower network latency due to reduced overall packet processing and retransmission delays. When the traffic load increases, the queuing delay plays a more important role in the overall network latency.
The fragmentation of bandwidth among different virtual links results in long queuing delays and counteracts the reduction in the number of packet processing and retransmission delays in the VTR schemes. For some flows, the average network latencies in the VTR schemes are higher than those in the HHR scheme. The general tendency is that the more the bandwidth is fragmented, the higher the latency jitter. In addition, the packet drop rate at the nodes increases with bandwidth fragmentation in the medium and heavy traffic load conditions. In general, static bandwidth allocation schemes are not adequate for bursty traffic.

Chapter 5 Dynamic Bandwidth Allocation Models

In Chapter 4, static routing schemes on a SONET ring were examined. Both virtual topology routing (VTR) and hop-by-hop routing (HHR) schemes have their own merits and drawbacks. The HHR scheme uses all physical link capacity as a fat bit pipe for the transportation of data traffic, resulting in high link utilization when the traffic load is heavy but high network latency when the traffic load is light. The VTR scheme allocates bandwidth to individual virtual links; it reduces network latency when the traffic load is well below the link capacity but under-utilizes the link capacity when the traffic is heavy. In the medium traffic load condition, the performance of the network depends on the distribution of the traffic volume and the allocated bandwidths. With network traffic patterns changing in the real world, the network bandwidth should also be dynamically allocated to route the traffic with minimal network latency. In this chapter, a dynamic bandwidth allocation scheme combining both the VTR and HHR approaches is proposed.

5.1 Overview of the Dynamic Bandwidth Allocation Scheme

In a dynamic bandwidth allocation scheme, the network bandwidth resource is divided into two regions: an HHR region and a VTR region.
The boundary between the VTR and HHR regions needs to be adjusted adaptively in response to changing traffic patterns, so that the maximal amount of traffic can be routed over the ring with minimal network latency. To guarantee that management activity reaches all nodes across the network, a minimal amount of bandwidth is always allocated to the HHR region.

A dynamic bandwidth allocation scheme is expected to have a more sophisticated mechanism and higher management overhead than the static routing schemes. Bandwidth management protocols are needed for exchanging information on traffic load and for synchronizing the reconfiguration of the network. At each node, the configurations of both hardware (the DCS device at the SONET layer) and software (a dynamic forwarding table at the data link layer) need to be updated. The question is whether the improvement in performance outweighs the management overhead. The major challenge is to develop a correct, simple and yet efficient mechanism, including a fast optimization algorithm to compute the optimal or sub-optimal bandwidth configuration.

Either a centralized model or a distributed model may be used for the dynamic bandwidth allocation scheme. In the centralized model, a central manager collects the traffic load information, computes the optimal configuration of the point-to-point links, and distributes the new configuration. The distributed model enables each node or a set of nodes to make a reconfiguration decision. This is useful especially in cases where the change in traffic patterns is only regionally significant; in such cases, it may be sufficient to adjust the bandwidth allocation locally or regionally instead of globally.

5.2 A Centralized Model for Dynamic Bandwidth Allocation

In the centralized model, a central manager collects traffic information, computes the optimal configuration and dispatches the reconfiguration commands to each node.
The mechanism may be divided into three phases: (1) information collection, (2) optimal configuration computation and (3) reconfiguration dispatch.

In order to reconfigure the virtual topology of the SONET ring adaptively, information on real-time traffic patterns and network performance needs to be monitored by periodically exchanging messages between the control center and each node on the ring. Each node has a locally resident module that collects traffic load and performance statistics for the node. The manager may actively poll the nodes periodically. Alternatively, the mechanism may be triggered by a hot-spot node when congestion occurs or is about to occur. A counting mechanism is established at each node to monitor the traffic. Theoretically, there are (n-1) counters at each node in a ring with n nodes, each recording the traffic from this node to one of the other (n-1) nodes. In practice, there may be far fewer than (n-1) counters because the traffic pattern is usually not fully meshed.

After the information is collected, the central manager computes the optimal or near-optimal configuration. An empirical model may have to be created to predict the future traffic pattern from accumulated statistical data and the real-time traffic information. A simple and scalable algorithm is needed to compute the optimal or near-optimal bandwidth allocation within a predetermined time for a given traffic pattern.

If any reconfiguration is necessary, the manager then dispatches management messages to the relevant nodes. A node that needs to be reconfigured must use the interfaces provided by layer 1 (SONET) to physically reconfigure the SONET DCS devices and update its dynamic forwarding table. The reconfiguration of the nodes must be synchronized to avoid packet loss and incorrect delivery.

A key issue in the centralized model is to develop a simple and fast algorithm that can compute an optimal or near-optimal bandwidth allocation for a given traffic pattern.
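The information-collection phase can be sketched as follows. This is an illustrative Python sketch, not part of the thesis's design: the class and function names are invented, and it only models the per-destination counters at each node and a central manager assembling a ring-wide traffic matrix from one polling round:

```python
class NodeMonitor:
    """Per-node traffic counters: conceptually one counter for each of the
    other (n - 1) nodes, created lazily since traffic is rarely full-mesh."""

    def __init__(self, node_id):
        self.node_id = node_id
        self._bytes_to = {}          # destination node -> bytes sent this period

    def record(self, dst, nbytes):
        """Called by the node's data path for every packet it sources."""
        self._bytes_to[dst] = self._bytes_to.get(dst, 0) + nbytes

    def poll(self):
        """Return this period's counters and reset them for the next period."""
        snapshot, self._bytes_to = self._bytes_to, {}
        return snapshot


def collect(monitors):
    """Phase (1): the central manager polls every node and assembles the
    ring-wide traffic matrix {(src, dst): bytes} for the optimizer."""
    return {(m.node_id, dst): v
            for m in monitors
            for dst, v in m.poll().items()}
```

The matrix returned by `collect` is exactly the per-flow volume information that the optimization algorithms of Section 5.2.1 take as input; a prediction model could smooth successive matrices before they are handed to the optimizer.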
Different SONET architectures have different characteristics; therefore, various optimization algorithms may be employed to achieve particular optimization goals. In this subsection, optimization algorithms for some of the common SONET ring architectures are discussed. In the following discussion, we assume that the volume of a flow can be predicted based on real-time traffic information and accumulated statistical data.

5.2.1 Optimization Algorithm for Bandwidth Allocation on UPSR

Intuitively, a flow with a lower volume/bandwidth ratio is less prone to congestion. In other words, for a particular flow on a point-to-point link, the more bandwidth allocated to that link, the lower the probability that the link will be congested in the future. However, the more bandwidth allocated to point-to-point links, the higher the bandwidth fragmentation. Given the predicted volumes of the flows, the optimization goal for a UPSR is to maximize the bandwidth for all flows fairly. The optimization algorithm first uses a greedy algorithm to assign bandwidth to each of the traffic flows fairly within the network capacity. Then it uses some empirical criteria to determine whether to establish a virtual link for a particular flow.

Optimal Bandwidth Assignment on UPSR

For convenience of description, we define the following terms. For a flow i on a physical link j, v_{i,j} denotes its predicted volume and b_{i,j} denotes the assigned bandwidth. The flow's bandwidth utilization on link j, u_{i,j}, is defined as the ratio of the traffic volume to the assigned bandwidth:
    u_{i,j} = v_{i,j} / b_{i,j}    (Eq. 5-1)

If there are m flows traveling through physical link j with overall capacity C_j, the overall traffic volume on link j is

    V_j = Σ_{i=1..m} v_{i,j}    (Eq. 5-2)

The link utilization on link j is

    U_j = V_j / C_j    (Eq. 5-3)

and, if all of the link capacity is assigned to the flows, we should have

    C_j = Σ_{i=1..m} b_{i,j}    (Eq. 5-4)

For a traffic flow i that spans physical links 1 through n, we have

    v_{i,1} = v_{i,2} = ... = v_{i,n} = v_i    (Eq. 5-5)

The effective utilization of the assigned bandwidth is determined by the bottleneck bandwidth along the flow's path; thus the same amount of bandwidth should be assigned to a flow across all physical links in its path. We define the effective bandwidth for flow i as

    b_i = Min{ b_{i,1}, b_{i,2}, ..., b_{i,n} }    (Eq. 5-6)

and the effective bandwidth utilization as

    u_i = v_i / b_i = Max{ u_{i,1}, u_{i,2}, ..., u_{i,n} }    (Eq. 5-7)

For a given traffic pattern in a network, the values of v_{i,j}, V_j and U_j are fixed, while the values of b_{i,j}, b_i, u_{i,j} and u_i may vary under different bandwidth assignment schemes.

The bandwidth assignment problem is defined as follows. Let F be a set of flows to be routed on a unidirectional ring network G = (N, L) with a set of nodes N and links L (|L| = n). There is a limited bandwidth capacity C_j on each link j ∈ L. Suppose that all flows always take their fixed paths, and suppose further that no flow is preferred over another. How can the effective bandwidth utilization of each flow in F be minimized within the bandwidth capacity of G?

We propose a greedy algorithm as a simple solution to the problem. The algorithm performs bandwidth assignment link by link in order of descending U_j values, starting with the flows on the link l_max that has the highest link bandwidth utilization U_max.

To prove that the proposed solution is a greedy algorithm yielding an optimal solution, we first prove that the assigned bandwidth for the flows on link l_max is part of the global optimal solution.
The set of flows F is divided into the following three subsets:

    F_max: the set of flows that share the link l_max with one another;
    F_share: the set of flows that share some link l_j (l_j ≠ l_max) with a flow f ∈ F_max; and
    F_noshare: the set of the remaining flows, which share no link with any f ∈ F_max.

We shall prove the following:

1. the bandwidth assignment on l_max gives the maximum effective bandwidth for every f ∈ F_max; and
2. equal or more, but not less, effective bandwidth can be assigned to the flows that belong to F_noshare and F_share after the bandwidth assignment for the flows on link l_max.

First, the bandwidth assignment on l_max gives the maximum effective bandwidth and the minimum effective bandwidth utilization for f ∈ F_max. The flow bandwidth utilization for f ∈ F_max reaches its minimum because the entirety of the link bandwidth is fairly assigned to the flows such that

    u_{1,l_max} = u_{2,l_max} = ... = u_{m,l_max} = U_max    (Eq. 5-8)

The assigned bandwidth is also the effective bandwidth for any flow f ∈ F_max. This is obvious for any flow going through link l_max only. For any flow f ∈ F_max that travels multiple links, pick any link j (j = 1, ..., n; j ≠ l_max and U_j < U_max) that flow f also travels. Independent bandwidth assignments on link j and on l_max give:

    b_{f,j} = v_{f,j} / U_j  and  b_{f,l_max} = v_f / U_max    (Eq. 5-9)

Because U_j < U_max, we have b_{f,j} > b_{f,l_max}. From (Eq. 5-6), the effective bandwidth for flow f is

    b_f = b_{f,l_max}    (Eq. 5-10)

Thus, the bandwidth assignment on l_max gives the maximum effective bandwidth for any f ∈ F_max.

Obviously, the bandwidth assignment for any flow i ∈ F_noshare is not affected by the bandwidth assignment on link l_max, because a flow in F_noshare shares no link with any flow f ∈ F_max.
Consequently, we only need to prove that an equal or greater amount of effective bandwidth can be assigned to any flow i ∈ F_share after the bandwidth assignment on l_max. Choose any flow i ∈ F_share on some link j (j = 1, ..., n; j ≠ l_max and U_j < U_max) such that at least one flow f ∈ F_max also travels link j. If we perform the bandwidth assignment on link j before link l_max, any flow i (including f) on link j would be assigned the bandwidth

    b_{i,j} = v_{i,j} / U_j    (Eq. 5-11)

Now suppose instead that we perform the bandwidth assignment on link j after link l_max. We define the partial link utilization U'_j as the ratio of the to-be-optimized traffic volume V'_j = V_j − V^opt_j to the remaining link capacity C'_j:

    U'_j = V'_j / C'_j    (Eq. 5-12)

Obviously U'_j = U_j when none of the flows on link j has been optimized. Assume that only the flow f in F_max also travels on link j; then the remaining traffic volume and link capacity are

    V'_j = V_j − v_{f,j}    (Eq. 5-13)

    C'_j = C_j − b_{f,l_max} = C_j − b_f = C_j − v_f / U_max    (Eq. 5-14)

Thus, using v_{f,j} = v_f from (Eq. 5-5),

    U'_j = (V_j − v_f) / (C_j − v_f / U_max)    (Eq. 5-15)

From (Eq. 5-3), C_j = V_j / U_j, so

    U'_j = (V_j − v_f) / (V_j / U_j − v_f / U_max)    (Eq. 5-16)

Since U_max ≥ U_j,

    V_j / U_j − v_f / U_max ≥ V_j / U_j − v_f / U_j = (V_j − v_f) / U_j    (Eq. 5-17)

A comparison of (Eq. 5-16) and (Eq. 5-17) gives

    U_j ≥ U'_j    (Eq. 5-18)

Any flow i ∈ F_share on link j is then fairly assigned the effective bandwidth

    b'_{i,j} = v_{i,j} / U'_j    (Eq. 5-19)

A comparison of this b'_{i,j} value with the b_{i,j} value in (Eq. 5-11) gives

    b'_{i,j} / b_{i,j} = U_j / U'_j ≥ 1    (Eq. 5-20)

    b'_{i,j} ≥ b_{i,j}    (Eq. 5-21)

For the same traffic flow i ∈ F_share, b'_{i,j} is obtained by performing the bandwidth assignment on link j after link l_max, while b_{i,j} is obtained by performing the bandwidth assignment on link j first, before link l_max. Thus, an equal or greater amount of effective bandwidth is possible for a flow i ∈ F_share after the bandwidth assignment on link l_max is performed. Likewise, it can be shown that the effective bandwidth for f ∈ F_max would have been reduced if the bandwidth assignment had proceeded on a link j with U_j < U_max before link l_max. Thus, the effective bandwidth for all flows can be maximized fairly if we begin the bandwidth assignment with the flows on link
Thus, equal  after bandwidth assignment on link  is performed. Likewise, it can be shown that effective bandwidth for f e F  reduced if bandwidth assignment proceeds on link j with Uj < U  m a x  m a x  before link l  m a x  would have been . Thus, effective  bandwidth for allflowscan be maximized fairly if we begin bandwidth assignment with theflowson link 58  'max'  w m  c h has the highest partial link utilization U',  After optimal bandwidth assignment for flow f G F  M A X  on link l  m a x  , the optimization problem  for network G is reduced to a sub-problem of optimal bandwidth assignment for a set of flow F' = F - F  M  A  X  over residual network G ' = ( N , L ' ) with L ' = L - l  m a x  . Each link j has the remain-  ing capacity C j . The sub-problem has the same form as the original problem. By induction on the number of optimal assignment of effective bandwidth on the link with the highest partial link utilization, the algorithm is indeed a Greedy algorithm that yields the optimal solution. Heuristic Bandwidth Allocation Algorithm for U P S R The algorithm is given in Figure 5-1. In each optimization cycle, the algorithm starts with the computation of the partial link utilization for all of the to-be-optimized links of the UPSR in order to find the most bandwidth-contentious link. Each flow i on the link is assigned with bandwidth b = V j / U ' ;  m a x  from the remaining physical link capacity. Theflowson the link and the link itself are marked optimized. In the next optimization cycle, partial link utilization is recalculated for each of the to-be-optimized links with volumes of to-be-optimized flows and remaining link capacity. It must be emphasized here that bandwidth can only be assigned in multiples of the managed bandwidth unit (STS-n, VT or DS-n etc.). This has two side effects. 
First, the assignment of bandwidth at the granularity of the managed bandwidth unit for one link in the previous optimization cycle may cause an increase or decrease in the partial link utilization values of other links in this cycle. The order of the remaining links according to their U'_j values may change, depending on the granularity of the managed bandwidth and the volumes of the flows. The recalculation of the partial link utilization guarantees the correct determination of the most bandwidth-contentious link in each optimization cycle. Secondly, it is possible that a U'_max value found in this optimization cycle is higher than the U'_max found in the previous optimization cycle because of the limited granularity of the managed bandwidth. Therefore, the algorithm does not always yield the optimal solution.

After bandwidth is assigned to all flows, the flow bandwidth utilization is calculated from the volume of the flow and the assigned bandwidth. Before a virtual link is established for a particular flow, the algorithm first checks whether the flow bandwidth utilization is lower than a high-water-mark threshold. The rationale behind this check is that there should be some reasonable margin between the volume of the traffic flow and the capacity of the virtual link.

    repeat until all physical links are optimized {
        U'_max = 0; l_max = NULL;
        for (each not-optimized physical link j) {
            compute partial link utilization U'_j;
            if (U'_j > U'_max)
                U'_max = U'_j, l_max = j;
        }
        queue the to-be-optimized flows on link l_max in order of their path length;
        for (each flow i in the queue) {
            assign flow i the bandwidth b_i = v_i / U'_max;
            mark flow i as optimized;
        }
        mark link l_max as optimized;
    }
    for (each optimized flow i) {
        compute the flow bandwidth utilization u_i;
        if (u_i < maximum threshold)
            establish a VTR link with the assigned bandwidth;
    }

    Figure 5-1 Bandwidth Allocation Algorithm for UPSR
Therefore, if the flow bandwidth utilization is higher than the threshold, the flow is routed through the HHR region to take advantage of better bandwidth utilization. The threshold is an empirical parameter less than unity.¹

The optimization process is further illustrated by the following hypothetical example, in which 10 traffic flows are to be routed over a 5-node SONET UPSR network with 48 Mbps of bandwidth capacity on each physical link (Figure 5-2). In this simple example, the bandwidth assignment takes two optimization cycles, as shown in Table 5-1 and Table 5-2. In each cycle, partial link utilization U'_j is computed for each physical link that has not been optimized, and the bandwidth assignment is performed for the link with the maximum U'_j from the available bandwidth of that link. In the first optimization cycle, link EA has the highest U'_j value (Table 5-2). Consequently, the available bandwidth (48 Mbps) is distributed to the seven flows that travel along link EA (Table 5-1). The seven flows and link EA are then marked optimized. In the following second optimization cycle, partial link utilization U'_j values are computed for the remaining physical links using the volumes of the not-optimized flows and the remaining bandwidth capacities of the links.

¹ If link utilization is greater than unity, the capacity of the link is overwhelmed by the volume of the traffic load and the link is congested.

Figure 5-2 An Example of Bandwidth Assignment on a 5-node SONET UPSR Network (panels: Bandwidth Capacity, Traffic Pattern; bandwidth and traffic volumes are in units of Mbps)
The result indicates that link BC has the highest U'_j value (Table 5-2). Coincidentally, the three remaining flows all travel along link BC. Therefore, all flows are assigned bandwidth and all links are marked optimized after this optimization cycle. At this point, the bandwidth assignment completes.

Table 5-1 Bandwidth Assignment for the UPSR Example

Flow (src-dst)           A-D    A-E    B-C    B-A    C-A    C-B    D-B    D-C    E-A    E-C
Volume (Mbps)            2.15   2.34   5.07   5.11   5.49   8.87   5.81   3.80   4.13   5.69
Assigned BW (Mbps), cycle 1  -      -      -      6      6      10     7      5      6      8
Assigned BW (Mbps), cycle 2  6      7      16     -      -      -      -      -      -      -
Flow Utilization         0.359  0.334  0.317  0.851  0.915  0.887  0.830  0.761  0.688  0.711
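The cycle-1 utilization values can be checked directly from the traffic pattern of Figure 5-2 with a few lines of Python (a verification sketch; the clockwise node order A→B→C→D→E is assumed from the figure):

```python
# 5-node unidirectional ring A->B->C->D->E->A, 48 Mbps per physical link (Figure 5-2)
nodes = "ABCDE"
volumes = {("A", "D"): 2.15, ("A", "E"): 2.34, ("B", "C"): 5.07, ("B", "A"): 5.11,
           ("C", "A"): 5.49, ("C", "B"): 8.87, ("D", "B"): 5.81, ("D", "C"): 3.80,
           ("E", "A"): 4.13, ("E", "C"): 5.69}

def path_links(src, dst):
    """Physical links traversed going clockwise from src to dst."""
    links, i = [], nodes.index(src)
    while nodes[i] != dst:
        links.append(nodes[i] + nodes[(i + 1) % len(nodes)])
        i = (i + 1) % len(nodes)
    return links

# partial link utilization in cycle 1: total volume crossing a link / 48 Mbps
utilization = {link: 0.0 for link in ("AB", "BC", "CD", "DE", "EA")}
for (src, dst), volume in volumes.items():
    for link in path_links(src, dst):
        utilization[link] += volume / 48.0
```

This identifies link EA as the most bandwidth-contentious link, with U' ≈ 0.810, matching the cycle-1 row of Table 5-2.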
For example, on physical link DE,flowsB-A, C-A, C-B and D-B has overall utilization: = U  5.11 +5.49 + 8.87 + 5.81 6 + 6+10 + 7  =  25.28 = 0.766 33  (Eq. 5-22)  It is lower than the U j values of any of these four flows.  Now we analyze the complexity of the algorithm. Assume mflowsare to be routed over a UPSR with n physical links. For /-th optimization cycle, partial link utilization is calculated for each of remaining to-be-optimized flows on (n - i) links. The cost of U' calculations is in O(nm) per optimization cycle and 0(n m) overall in the worst case (lines 4-8). The in-order queuing offlowson the link with U ' 2  m a x  (line  10) is bound to O(mlogm) overall in the worst case. Each flow is involved in the bandwidth assignment (lines 12-15) only once in the entire algorithm, which incurs the cost of O(m). Obviously, the cost for determining whether to establish point-to-point link is again O(m). Therefore, the complexity of the algorithm is:  0(n m + mlogm + m + m) = 0(n m + mlogm) 2  2  . For a given UPSR, the complexity of  the algorithm is O(mlogm) .  62  5.2.2 Optimization Algorithm for Bandwidth Allocation on BPSR The most noticeable difference of the BPSR architecture from the UPSR architecture is that there are two possible point-to-point links, in opposite direction, between any two nodes in the BPSR. A traffic flow can go either clockwise or counter-clockwise, resulting in two paths for the same point-to-point link. The two routes may be different in path length and network latency. In addition to the goal for optimizing bandwidth distribution for the traffic flows, the dual-paths-for-single-link nature of the BPSR adds another optimization goal to our solution, i.e., minimizing the overall latency.  1 Extension of Bandwidth Assignment from UPSR to BPSR It is implausible to use case-by-case analysis for each direction for eachflowbecause it is an undesirable exponential algorithm with complexity of 0(2"). 
Some heuristic methods should be used instead. It is expected that such heuristic methods will give a sub-optimal result at a lower computational cost. A BPSR configuration may be viewed as two separate UPSRs, one in each direction. If we can determine the directions of the traffic flows heuristically, a near-optimal configuration may be produced by applying a Greedy algorithm similar to the one we have developed for the UPSR. Thus, the key issue in the algorithm for the BPSR is to find a way of determining the directions of the traffic flows on the SONET ring that will eventually result in an optimal or near-optimal configuration.

The first step of the algorithm is to determine the directions of all flows and divide them into two groups: clockwise traffic and counter-clockwise traffic. It is not too difficult to see that the two optimization goals for the BPSR architecture, minimizing overall latency and minimizing the overall load/capacity ratio, are in fact correlated. A traffic flow taking a shorter path takes up less volume of the network bit-pipe and is therefore less likely to cause congestion in the network, and vice versa. In other words, under the same traffic load, a configuration in which all traffic flows take their shortest paths (and therefore have the shortest propagation delay) is most likely the configuration with the lowest load/capacity ratio. While this is not always true, such a configuration may serve as a good starting point from which to approximate the solution by modifying the Greedy algorithm for the unidirectional ring architecture. Therefore, two approaches may be taken for the initial determination of the direction of a flow on the SONET ring: the shortest-path-first approach and the low-utilization-first approach.

¹ In the following discussion, we assume all inter-node distances on the BPSR are equal. The latency is normalized to the number of physical links.
In the shortest-path-first approach, the direction of each flow is determined by its shortest path. However, the overall bandwidth utilization is not necessarily minimized. Some of the links may have a high utilization exceeding unity (i.e., congested) while others have very low utilization. Further optimization may be performed by altering the directions of some flows at a congested link. Such an alteration of a flow's direction should not cause any new congestion in the other direction while it relieves congestion in one direction. In the shortest-path-first approach, a flow with a longer path and the volume closest to what is needed to relieve the congestion is selected as the candidate. Reversing the flow with the closest volume incurs the latency cost on the least volume of affected traffic. Reversing a flow with a longer path affects fewer links, because such a flow has a shorter path in the opposite direction of a ring architecture. In addition, reversing the flow with the longer path has less impact on overall latency. If we assume that network latency is proportional to the number of links a flow traverses, the difference in network latency becomes less significant as the difference between the lengths of the two paths (clockwise and counter-clockwise) shrinks. For a SONET ring with n links, if the clockwise path length of a flow is j links, the difference is j - (n - j) = 2j - n. Keep in mind that all flows under consideration initially take the shorter path, with path length i <= n/2 (i.e., their directions have not been changed). The difference in network latency approaches nil as i approaches n/2.
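This bookkeeping is simple enough to state in code (a sketch under the equal-spacing assumption; nodes are numbered 0..n-1 clockwise and latency is counted in hops):

```python
def path_lengths(n, src, dst):
    """Clockwise and counter-clockwise hop counts on an n-node ring."""
    cw = (dst - src) % n
    return cw, n - cw

def reversal_penalty(n, j):
    """Extra hops incurred by moving a flow from its shorter clockwise path of
    j <= n/2 links onto the counter-clockwise path: (n - j) - j = n - 2j."""
    return n - 2 * j
```

For a 16-link ring, a flow with j = 7 pays only 2 extra hops when reversed, while a two-hop flow would pay 12, which is why the longer-path flows are the preferred reversal candidates.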
It should be noted that for an odd number of nodes there is always a difference between the two paths, whereas for an even number of nodes there is no difference whether the traffic goes clockwise or counter-clockwise when the path length is exactly n/2.

The heuristic algorithm based on the shortest-path-first approach is shown in Figure 5-3. After the initial determination of flow directions, some flows' directions on congested links are reversed in order to resolve the congestion. The flows on a congested link are sorted first by descending path length and then by descending flow volume before being selected for reversal. The optimization stops as soon as the congestion is resolved.

On the other hand, the low-utilization-first approach attempts to minimize bandwidth utilization with less concern for network latency. The algorithm based on this approach is similar to the shortest-path-first algorithm (Figure 5-3) except for line 1 and line 9, reflecting the difference in the placement policy and the sorting-order policy between the two approaches. In the low-utilization-first approach, each flow takes the direction that leads to the lower utilization at the time of the determination. Further optimization in the low-utilization-first algorithm is similar to that in the shortest-path-first algorithm. However, the flows on the congested link are sorted first in order of longest path and then in order of heaviest volume. The flows are considered one by one until the congestion is relieved.

Evaluation of the Optimization Algorithms for BPSR

We first analyze the complexity of the shortest-path-first optimization algorithm for the BPSR in Figure 5-3. Assume the number of physical links is n and the number of flows is m. The placement of the flows in the ring data structure takes O(m). The computation of the link utilization (lines 3-7) takes O(nm) in the worst case. Sorting the congested flow list (line 9) takes O(m log m).
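A compact Python rendering of the shortest-path-first placement with one pass of congestion relief is given below. It is a sketch under the footnote's equal-spacing assumption: the feasibility test checks every link of the opposite path before a reversal, and candidates are taken longest path first, then heaviest volume first, as described above.

```python
def span(n, src, dst, direc):
    """Physical links used by a flow: clockwise flows use the cw fiber on the
    src -> dst arc; counter-clockwise flows use the ccw fiber on dst -> src."""
    out, i = [], (src if direc == "cw" else dst)
    stop = dst if direc == "cw" else src
    while i != stop:
        out.append(i)
        i = (i + 1) % n
    return out

def place_and_relieve(n, flows, cap):
    """flows: id -> (src, dst, volume). Shortest-path-first placement, then
    reversal of congested flows, longest path first and heaviest volume first."""
    direction, load = {}, {"cw": [0.0] * n, "ccw": [0.0] * n}
    for f, (s, d, v) in flows.items():
        direction[f] = "cw" if (d - s) % n <= n - (d - s) % n else "ccw"
        for link in span(n, s, d, direction[f]):
            load[direction[f]][link] += v
    congested = [f for f, (s, d, v) in flows.items()
                 if any(load[direction[f]][l] > cap for l in span(n, s, d, direction[f]))]
    congested.sort(key=lambda f: (-len(span(n, flows[f][0], flows[f][1], direction[f])),
                                  -flows[f][2]))
    for f in congested:
        s, d, v = flows[f]
        here = direction[f]
        there = "ccw" if here == "cw" else "cw"
        still = any(load[here][l] > cap for l in span(n, s, d, here))
        room = all(load[there][l] + v <= cap for l in span(n, s, d, there))
        if still and room:      # reverse only if it cannot congest the other fiber
            for l in span(n, s, d, here):
                load[here][l] -= v
            for l in span(n, s, d, there):
                load[there][l] += v
            direction[f] = there
    return direction, load
```

On a 4-node ring with 10 Mbps links and two overlapping flows, the heavier flow is reversed onto the counter-clockwise fiber and the overload disappears.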
In the worst case, all m flows are congested and each congested flow takes one or two passes over the n links to process. Thus, the attempt to relieve congestion by reversing flow directions (lines 12-32) is O(nm). Finally, the application of the optimization algorithm for the UPSR to the two rings of the BPSR takes O(n²m + m log m) (see the complexity analysis of the UPSR algorithm above). The overall complexity of the algorithm is bounded by the same order as that of the optimization algorithm for the UPSR: O(m + nm + n + m log m + nm + n²m + m log m) = O(n²m + m log m).

The sub-optimal solutions from the heuristic algorithms for the BPSR are compared with the optimal solution from the case-by-case analysis. The results are shown in Table 5-3. The case-by-case analysis is an exponential algorithm; limited by the available computation power, only a small number of flows on a small ring was tested to compare the approaches. Apparently, the shortest-path-first approach outperforms the low-utilization-first algorithm. The solutions from the shortest-path-first approach are very close to those from the case-by-case analysis in most cases. It should be noted here that the traffic flows are generated randomly in terms of source/destination pair and volume, which tends to give a uniformly distributed traffic pattern. From the data in Table 5-3, the low-utilization-first approach appears very limited. One of the drawbacks of this approach is that the determination of a flow's direction depends on the directions of the previous flows. Although the flows can be sorted in a particular order (e.g., by their volumes) before direction determination, such an ordering of flows may or may not give a good configuration.

 1  place all flows in the SONET ring according to their shortest paths;
 2
 3  for( each physical link ) {
 4      compute available bandwidth and link utilization factor;
 5      if( the link is congested )
 6          queue its flows in the congested flow list;
 7  }
 8
 9  sort the congested flow list in order from the heaviest volume and the longest path;
10  mark all flows in the congested flow list as to-be-optimized;
11
12  for( each to-be-optimized flow in the congested flow list ) {
13      flag = 1;
14      for( each link in the other direction ) {
15          if( available bandwidth < flow volume ) {
16              flag = 0;
17              break;
18          }
19      }
20      if( flag == 1 ) {
21          reverse the direction of the flow;
22          recalculate partial bandwidth utilization for each link along both paths;
23          if( the congestion is relieved ) {
24              mark the flows and links related to the congestion as optimized;
25              remove the relieved flows from the congested flow list;
26          }
27      }
28      else {
29          remove the flow from the congested flow list;
30      }
31      mark the flow as optimized;
32  }
33
34  mark all flows as to-be-optimized;
35  apply the Greedy algorithm for the UPSR to each direction of the BPSR;

Figure 5-3 Heuristic Bandwidth Allocation Algorithm for BPSR

Table 5-3 Evaluation of Heuristic Optimization Algorithms for Bandwidth Allocation on BPSR

Number of  Case   Latency-first              Utilization-first          Case-by-Case (b)
Flows (a)  No.    Optimality (c)  Congested  Optimality      Congested  Optimality      Congested
                                  Flows                      Flows                      Flows
10         1      10.131763       0          19.016645       0          10.056386       0
           2      12.128714       3          15.599754       2          11.923938       1
           3       7.427063       0           7.427063       0           7.438634       0
11         1      12.740566       1          12.791295       0          12.587412       1
           2      13.623420       3          13.776506       4          12.722558       3
           3       9.419052       0           9.463726       0           9.419053       0
12         1      12.777963       3          16.748877       2          12.686038       3
           2       9.966221       0           9.920314       0           9.692607       0
           3      10.374588       1          10.243771       0          10.243771       0
13         1      13.630909       2          14.769254       0          13.497295       1
           2      15.457264       3          21.143480       2          15.333348       4
           3      15.316359       2          23.357504       2          15.173657       2
14         1      14.160752       1          19.237741       4          14.097288       1
           2      14.613918       0          21.991215       0          14.537759       0
           3      15.025297       2          17.407854       2          14.809400       2
15         1      20.243656       3          22.726084       5          19.972858       3
           2      15.492043       1          18.892954       3          15.492044       1
           3      21.445099       4          30.673292       8          21.277084       2
16         1      22.435507       4          25.268618       8          22.299610       4
           2      19.989080       1          19.989079       1          19.914669       1
           3      19.075283       1          23.953600       6          18.961231       1
17         1      25.162674       5          28.148506       8          25.212437       7
           2      19.289131       3          19.150562       3          19.153793       3
           3      19.693415       3          24.188282       3          19.693417       3
18         1      25.397146       8          24.968666       6          24.968666       6
           2      22.125919       1          25.973032       6          21.893955       2
           3      23.211870       4          31.166794       8          23.320623       6
19         1      24.579937       3          35.925102       12         24.376532       4
           2      25.048611       7          36.830357       12         24.923361       7
           3      27.789974       10         27.680077       10         27.715565       10
20         1      28.755087       9          46.340038       16         28.814304       9
           2      28.831577       6          56.954418       15         28.790331       7
           3      25.332384       2          32.189476       8          25.312071       1

(a) Case-by-case analysis is an exponential algorithm. Limited by computation power, the evaluation was carried out on a small ring with five nodes, which still required hours of computation.
(b) The optimal configuration is the one with the minimal optimality and the minimal number of congested flows.
(c) Optimality is defined as the summation of (utilization × path length) over all traffic flows.

5.2.3 Extension of the Heuristic Algorithm from Single-Ring Configurations to Multiple Rings with Matched Nodes

The significant difference between a single-ring architecture and a multiple-ring architecture is the addition of inter-ring traffic, which may further complicate the bandwidth assignment problem. In this section, the possibility of applying the heuristic optimization algorithms of the previous sections to multiple-ring architectures is briefly discussed.

Unidirectional Path Switched Double Ring with One Matched Node

A Unidirectional Path Switched Double Ring (UPDR) with a single matched node is the simplest configuration of a multiple-ring architecture (Figure 5-4). In this configuration, there is only one possible path from a node in one ring to another node in the other ring. This problem is simpler than the one in the BPSR. Consequently, the heuristic algorithm based on the Greedy algorithm can easily be extended from the UPSR configuration to the UPDR configuration. This conclusion is applicable to multiple UPSRs connected to one another via single matched nodes.

Figure 5-4 Unidirectional Path Switched Double Ring with One Matched Node (single path for inter-ring traffic)

Multiple UPSRs with Two or More Matched Nodes

At first glance, there seem to be more paths for an inter-ring traffic flow in a configuration where two UPSRs are connected by two or more matched nodes. In the configuration illustrated in Figure 5-5, for example, two paths are possible for the flow from A to A'. However, a close look at these two paths reveals that path 2 does not have any advantage over path 1, either in terms of latency or in terms of utilization.
Both paths share links AB and B'A'. Unless the connecting link BB' becomes congested while CC' is under-utilized, path 1 is always the optimal choice. The same conclusion can be drawn from an analysis of the possible paths from D' to D. The path with the shortest length is always the optimal one. In general, in configurations with multiple UPSRs connected by shared nodes, no determination of flow direction is required before bandwidth allocation. The optimal path for a cross-ring flow is always the one with the shortest path length, and the Greedy algorithm we developed for bandwidth allocation can be applied directly.

Figure 5-5 Unidirectional Path Switched Rings (UPSR) with Two Matched Nodes (path 2 has no advantage over path 1: it travels the extra links BC and B'C')

Bi-directional Path Switched Rings (BPSR) with One Matched Node

In a configuration involving multiple BPSRs, the optimization algorithm becomes complicated. The direction of a flow must be determined before bandwidth can be allocated by the Greedy algorithm. The simplest configuration of this class of architectures is one in which two BPSRs have a single matched node. An inter-ring traffic flow AA' may go clockwise or counter-clockwise in each ring, resulting in four possible paths, each with possibly different available bandwidth and path length (Figure 5-6). The number of paths grows exponentially as the number of rings the flow traverses increases.

Figure 5-6 Bi-directional Path Switched Double Rings with One Matched Node (multiple paths for inter-ring traffic flow AA')

A Divide and Conquer strategy may be used to solve this problem. The first step is to decompose the inter-ring traffic paths. The path of an inter-ring flow may be divided into a number of sub-paths: an inter-ring path and intra-ring paths. For example, path AA' may be divided into the inter-ring path BB' and the intra-ring paths AB or ADCB, and B'A' or B'C'D'A'.
After decomposing inter-ring paths into intra-ring sub-paths, the direction of the flow in each ring, and therefore the sub-paths the flow travels, is determined independently, in a manner similar to the heuristic optimization algorithm for the BPSR architecture. Finally, the Greedy algorithm is applied to assign bandwidth for the entire network, including the inter-ring links.

5.3 Distributed Model

One of the drawbacks of the centralized model is its high management overhead. A locally significant change in a traffic pattern may trigger an unnecessary global reconfiguration. In the distributed model, a set of one or more nodes makes its own decisions to improve the network performance. The responsibility for bandwidth optimization is distributed to the nodes of the SONET ring. Nodes make decisions locally and in parallel. This not only reduces the management overhead but also shortens the response time to changes in network conditions.

Two principles are followed in this distributed model. The first is to maintain a tension between network bandwidth utilization and network latency. As mentioned in the previous chapters, the bandwidth resources are divided into a VTR region and an HHR region. Network latency may be reduced by establishing direct point-to-point links, moving more bandwidth from the HHR region to the VTR region, while bandwidth utilization may be increased by tearing down direct point-to-point links, moving bandwidth from the VTR region back to the HHR region. Furthermore, network latency may also be reduced by merging short links into a longer one, while bandwidth utilization may also be increased by splitting a long link into shorter ones.

The other principle in this distributed model is to seek a local network reconfiguration rather than a global solution at the outset. The motivation is to solve the problem with minimal reconfiguration effort and a short response time.
A global reconfiguration may give a better overall solution for the entire system but needs more time for message propagation, solution computation and synchronization. A local solution requires a short response time and avoids reconfiguration of the entire network. For a network with rapidly changing traffic patterns, it is important to reduce the response time and the reconfiguration overhead.

5.3.1 Reducing Network Latency

When a traffic flow that spans multiple physical links is introduced into a ring network, a direct point-to-point link from the source node to the destination node may or may not exist for the flow. If the direct link for the flow does not exist, the flow has to be routed in a hop-by-hop fashion. When the overall network traffic load is low, however, it may be favorable to establish a direct link for the flow. Similarly, when the overall network traffic load decreases after a burst period, some long-path flows may be unnecessarily routed in the hop-by-hop manner. In order to improve the network performance, we need a mechanism that merges the short links and establishes a long direct link to reduce the latency.

The establishment of a direct point-to-point link for a flow can be triggered by any intermediate node that is forwarding the packets of the flow. Every node has a local network management module that gathers statistics on network performance, including the forwarding rates of flows. When a flow's forwarding rate on the node falls into a predetermined range that favors establishing a direct link to let the flow cut through the node, the node triggers the process of establishing the direct link for the flow, provided this does not incur possible congestion on the node. The node first modifies its own forwarding table and selects a channel in the HHR region for forwarding the flow's packets. Then, it requests the upstream node to send the flow's packets on that specific channel.
Upon receipt of the request from the triggering node, the upstream node checks whether the bandwidth fragmentation caused by establishing a direct link would introduce any possible congestion for other flows. If so, the node rejects the triggering node's request but keeps a record of the request as a heuristic for future use. The process terminates when the triggering node receives the rejection. Otherwise, the upstream node updates its forwarding table, sends a positive ACK to the triggering node and starts to send the flow's packets via the specified channel. Upon receipt of the response from the upstream node, the triggering node reconfigures the DCS device from Add/Drop mode to Cross-Connect mode for the channel, letting the traffic flow cut through the node without involving the layer-2 functionality. This completes the process of establishing a direct link.

Consider the example illustrated in Figure 5-7. The intermediate nodes B, C, D and E detect that the forwarding rate for flow 1 has reached a certain threshold. The bandwidth utilization of links BC, DE and EF is not high. Consequently, nodes B, D and E send requests to their corresponding upstream nodes. However, node C detects that link CD is bandwidth-contentious, so it is not favorable to establish a direct link for flow 1 there. For this reason, node C rejects node D's request to establish a direct link, while node A and node D modify their forwarding tables and send the packets of flow 1 via the specified channel. When nodes B and E receive ACKs from their upstream nodes, they call the interface provided by the SONET layer to cross-connect the channel.

A long link can be established gradually by repeating the process described above. Therefore, not only can two physical links be joined to establish a direct link, but two short direct links can also be joined to form a longer direct link.
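The handshake can be condensed into a toy model. The names (`RingNode`, `handle_request`, `trigger`) and the single-bit congestion check are hypothetical simplifications; the real decision involves forwarding-rate statistics and fragmentation checks as described above.

```python
class RingNode:
    """Toy model of the direct-link establishment handshake."""
    def __init__(self, name, congested=False):
        self.name = name
        self.congested = congested     # stand-in for the fragmentation/congestion check
        self.dcs_mode = "add/drop"     # per-channel DCS mode
        self.upstream = None

    def handle_request(self, flow):
        """Upstream side: ACK unless establishing the link could congest other
        flows; a rejected request would still be recorded as a heuristic."""
        return not self.congested

    def trigger(self, flow):
        """Intermediate side: ask upstream to use the dedicated channel and, on
        a positive ACK, cross-connect so the flow bypasses layer 2 locally."""
        if self.upstream.handle_request(flow):
            self.dcs_mode = "cross-connect"
            return True
        return False
```

In the Figure 5-7 scenario, node B's trigger succeeds against node A, while node D's trigger fails against the congested node C and node D stays in Add/Drop mode.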
Consider scenario (5) in Figure 5-7: node C detects a favorable condition for establishing a direct link for flow 1 when the volume of flow 5 decreases. It repeats the above-described process. When the process is successfully completed, it also notifies the downstream node D, which had once requested link establishment for flow 1. Because the network traffic pattern changes rapidly, the current traffic condition on node D may differ from the previous condition. If the current condition is not favorable for VTR, node D silently drops the notification. Since whether this node cross-connects does not affect the upstream node, no reply is expected by the upstream node. However, if the condition is still favorable to VTR, then node D can also cross-connect the channel to create a direct link for flow 1 from the source node A to the destination node E.

Figure 5-7 Establishment of a Direct Point-to-Point Link to Reduce Network Latency:
(1) Nodes B, D and E detect a favorable condition for establishing a direct link for flow 1.
(2) Nodes B, D and E begin to forward the packets of flow 1 to the next node via a selected channel and request (REQ) their upstream nodes to do the same.
(3) Nodes A and D respond (ACK) to their downstream nodes' requests. Node C cannot do so (NACK) because of possible congestion for flow 5.
(4) Nodes B and E cross-connect the upstream and downstream links to establish direct links upon receipt of nodes A's and D's responses.
(5) Node C detects a favorable condition for establishing a direct link for flow 1 when the volume of flow 5 decreases.
(6) A similar process is repeated to establish a long direct link for flow 1. If successful, node C notifies node D.

5.3.2 Increasing Bandwidth Utilization

It is known that VTR is prone to packet drops and under-utilization of bandwidth when the traffic load becomes heavy. Establishing direct point-to-point links may reduce network latency but may cause bandwidth fragmentation.
A network with more fragmented bandwidth capacity is less resilient to traffic bursts. Consequently, bandwidth fragmentation should be reduced as the traffic load increases. A general solution for increasing bandwidth utilization is to split long direct links into shorter ones, decreasing the degree of bandwidth fragmentation and increasing bandwidth utilization.

The splitting mechanism is divided into two phases: a solution-searching phase and a reconfiguration phase. The solution-searching phase may be triggered by a network event on a node, such as the queue length for a link exceeding the high-water threshold. In general, when one link needs bandwidth and others have bandwidth available, the longer links are split into shorter ones so that the available bandwidth can be shared by all the flows traveling along the links. The Greedy algorithm discussed in Section 5.2 is generally applicable to this problem with some modification. In contrast to the centralized model, however, the node first checks its local information for a possible solution to the problem. Only if it fails to find a solution does it send a request to its neighboring nodes, together with its local traffic and network resource information. If a receiving node fails to find a solution, it in turn sends a request with the accumulated traffic and network-condition information to its neighboring nodes. Consequently, one requester chain may form in the UPSR architecture and two in the BPSR architecture. As soon as a node at the end of any requester chain finds a solution with the information collected along the chain, it sends a response back to the triggering node. The triggering node takes the first received solution, thereby initiating the reconfiguration phase.
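The solution-searching phase can be sketched as a walk along the requester chain. The data shapes here are hypothetical: each entry stands for one node's locally spare capacity, and the chain stops at the first node whose accumulated information satisfies the demand.

```python
def search_requester_chain(chain, demand):
    """chain: per-node spare capacity along one direction of the ring.
    Each node adds its local information to what was accumulated so far and
    tries to solve; the first node that succeeds answers the triggering node.
    Returns (hops, accumulated) on success, or None if the chain is exhausted."""
    accumulated = 0.0
    for hops, spare in enumerate(chain):
        accumulated += spare
        if accumulated >= demand:   # enough reclaimable bandwidth found
            return hops, accumulated
    return None                     # no solution found along this chain
```

The triggering node would then act on the first such answer it receives (in a BPSR, two chains race in opposite directions).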
The process starts when the triggering node sends reconfiguration request to the source node of the to-be-splitting link if they are not the same node. In the case that the to-be-split link has the available bandwidth, suspending transmission of packets temporarily likely does not incur any packet loss because of low load/capacity ratio on the link. Thus, forwarding table updating and DCS reconfiguration can be performed in a single operation. As illustrated in Figure 5-8, the source node of link A-E, node A, suspends 74  transmission of packets for flow 2 and sends a reconfiguration request to the downstream node B. On receiving the request, node B updates its forwarding table and reconfigures the DCS device from CrossConnect mode to Add/Drop mode for the channel as required. It then replies with an A C K to the upstream requester node. After receipt of B's A C K , it is safe for node A to resume transmission of packets on the link. Meanwhile, node B begins another synchronization with its downstream node by sending the recon-  Link BD has 90% load/capacity ratio Virtual Link Load Capacity  BD  AE  AF  9  5  5  10  10  10  Node A stops sendingflow2'spackets and sends a reconfiguration REQ to node B  On receiving the REQ, node B configures DCS from Cross-Connect mode to Add/Drop mode, update forwarding table, forward the reconfiguration REQ to node D and then send ACK to node A.  On receiving the ACK, node A resumes sending packets forflow2 via new link AB On receiving the REQ, node D configures DCS from Cross-Connect mode to Add/Drop mode, updates forwarding table and sends reconfigureration REQ to node D and A C K to node B  Sfliiltiif&a.-  J2=5feJ  -  •  I  •  On receiving the acknowledgment, node B may starts to forward packets of flowl's and flow 2's packets via link BD with higher bandwidth. 
Virtual Link Load Capacity  AB  BD  DE  AF  5  14  5  5  10  20  10  10  Flow Volume  Figure 5-8 Synchronization of Point-to-Point Link Splitting Operations (1)  75  figuration request to the next downstream node if necessary. When the to-be-split link needs bandwidth, suspending the transmission of packets on the link would likely incur packet-drop because of its high load/capacity ratio. To reduce the probability of packet loss, forwarding table updating and DCS reconfiguration are performed in two steps. As illustrated in Figure 5-9, node A, the source node of link A-E,firstsends request to the destination node (node C) of link A-  Link A E has 90% load/capacity ratio AE  AC  CE  Load  9  5  5  Capacity  10  10  10  Virtual Link  REQ1 Node A sends reconfiguration REQ1 to node C for forwarding table updating.  j  ACK1  On receiving REQUEST from node A, node C updates forwarding table for route flow 1 via link CE and then sends ACK to node A.  •  31  Node A sends reconfiguration REQ to node C for forwarding table updating.  ACK2 On receiving REQUEST from node A, node C configures the DCS from Cross-Connect mode to Add/Drop mode and sends ACK to node A flow 1 is routed via link CE at node C.  On receiving ACK from node C, node A adds sub-link of original A E to link AC  Virtual Link  AC  CE  Load  14  14  Capacity  20  20  Figure 5-9 Synchronization  Flow  1  Volume  9  of Point-to-Point Link Splitting Operations (2) 76  C to update the forwarding table on node C. After receiving the ACK for forwarding table updating, node A modifies its own forwarding table to temporarily re-route the high-volume traffic via the link A-C and sends the second request to node C, which can now safely reconfigure the DCS device from Cross-Connect mode to Add/Drop mode. After receiving ACK for the reconfiguration of DCS device, node A updates its own forwarding table to route both flow 1 andflow2 in link AC of increased bandwidth. 
If necessary, node C can start synchronization with the next downstream node immediately after sending the ACK for the reconfiguration of the DCS device. In this way, the reconfiguration is synchronized between the upstream node and the downstream nodes; a global synchronization is avoided, and the link idle time caused by reconfiguration is minimized.

5.4 Summary

The idea of the dynamic routing scheme is to partition the bandwidth resources between the HHR and VTR regions and to allocate bandwidth among virtual-topology links dynamically, in response to changes in traffic patterns on a SONET-ring network. We attempted to construct a simple scheme that enables the system to maintain a balanced tension between bandwidth utilization and network latency. When traffic patterns allow, network latency is reduced by establishing point-to-point links that deliver traffic flows from the source node to the destination node, or by merging short links into longer links, to minimize the involvement of the data-link layer at the intermediate nodes. When the traffic load increases, bandwidth utilization is increased by reallocating bandwidth for routing traffic in a hop-by-hop fashion, or by splitting long point-to-point links into shorter links to reduce bandwidth fragmentation.

Both centralized and distributed models were proposed. For the centralized model, optimization algorithms have been suggested for some of the SONET ring architectures. In particular, a heuristic algorithm is suggested for solving the NP-hard problem of optimally allocating bandwidth for flows on a BPSR architecture. The complexity analysis shows that it is a polynomial algorithm, and a numeric computation test indicates that it yields sub-optimal solutions that provide reasonable approximations to the optimal solution. For the distributed model, a mechanism is sketched for localizing the balance between network latency and bandwidth utilization and for synchronizing the reconfiguration process.
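The split-or-merge balance described above can be expressed as a simple per-link threshold rule. The 0.9 and 0.5 load/capacity thresholds below are assumed values for illustration only (the thesis does not prescribe them); the gap between the two thresholds acts as hysteresis, so that small fluctuations in load do not flip a link back and forth between the two actions.

```python
# Illustrative decision rule for the dynamic scheme: split a virtual
# link when its load/capacity ratio is high, merge it into a longer
# link when the ratio is low. The 0.9/0.5 thresholds are assumptions;
# the band between them is hysteresis against oscillation.

SPLIT_THRESHOLD = 0.9   # assumed: reallocate bandwidth above this ratio
MERGE_THRESHOLD = 0.5   # assumed: links this idle are merge candidates

def link_action(load, capacity):
    ratio = load / capacity
    if ratio >= SPLIT_THRESHOLD:
        return "split"   # free bandwidth by splitting into shorter links
    if ratio <= MERGE_THRESHOLD:
        return "merge"   # reduce nodal processing by merging into a longer link
    return "keep"        # hysteresis band: do nothing

for load in (9, 7, 4):
    print(load, link_action(load, 10))
# 9 -> split, 7 -> keep, 4 -> merge
```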
Chapter 6

General Conclusion and Future Work

6.1 General Conclusions

A new SONET-based network simulator was designed and implemented for investigating the performance of different routing schemes on transport networks based on the SONET ring architecture. Using this simulator, three static routing schemes were examined on a simulated SONET-based transport network against different levels of traffic load. The results show that the VTR schemes give low network latency, due to reduced nodal processing time, when the traffic load is well below the network capacity. However, they have bandwidth fragmentation problems caused by confining bandwidth to individual direct links. When the traffic flow is bursty or heavy enough to exceed the capacity of a virtual link, the network latency increases due to increased queuing delay and a high drop rate. On the other hand, the HHR scheme gives higher bandwidth utilization and a low drop rate, especially when the traffic load becomes heavy. It also gives low network latency, due to reduced queuing time, under heavy traffic load, though it has higher network latency than the VTR schemes when the traffic load is low.

A hybrid model combining the VTR and HHR approaches was suggested. The bandwidth is dynamically allocated between the VTR and HHR regions in response to changes in traffic patterns on a SONET ring. We attempted to construct a simple scheme that enables the system to maintain a balanced tension between bandwidth utilization and network latency. Both a centralized model and a distributed model were proposed. For the centralized model, a greedy algorithm with cost O(n²m + m log m) is proposed for optimal allocation of effective bandwidth on the Unidirectional Path Switched Ring architecture with n nodes and m flows. A heuristic bandwidth optimization algorithm based on the greedy algorithm is also proposed and evaluated numerically for the Bidirectional Path Switched Ring (BPSR) architecture.
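A hedged sketch of the greedy idea for the unidirectional ring: since every flow on a UPSR traverses the ring in the same direction, flows can be sorted once and granted a dedicated point-to-point link while every arc on their path still has spare capacity, with the remaining flows falling back to the HHR region. The sorting order, capacity accounting, and function names below are assumptions about the algorithm's general shape, not a transcription of the thesis pseudocode.

```python
# Greedy sketch for a unidirectional ring with n nodes. A flow from
# src to dst occupies every arc on the ring's single allowed direction.
# Flows are considered in decreasing volume and granted a dedicated
# virtual link if their whole path has spare capacity; otherwise they
# are left to the hop-by-hop (HHR) region. Illustrative assumptions.

def arcs(src, dst, n):
    """Arcs traversed from src to dst in the ring's single direction."""
    i = src
    while i != dst:
        yield i
        i = (i + 1) % n

def greedy_upsr(flows, n, arc_capacity):
    """flows: list of (src, dst, volume). Returns (vtr, hhr) lists."""
    spare = [arc_capacity] * n              # spare capacity per arc
    vtr, hhr = [], []
    for src, dst, vol in sorted(flows, key=lambda f: -f[2]):
        path = list(arcs(src, dst, n))
        if all(spare[a] >= vol for a in path):
            for a in path:
                spare[a] -= vol             # dedicate bandwidth on each arc
            vtr.append((src, dst, vol))
        else:
            hhr.append((src, dst, vol))     # falls back to hop-by-hop routing
    return vtr, hhr

vtr, hhr = greedy_upsr([(0, 2, 6), (1, 3, 5), (0, 3, 2)], n=4, arc_capacity=10)
print(len(vtr), len(hhr))   # -> 2 1
```

The sort dominates the m log m term, and checking the up-to-n arcs of each of the m flows gives the polynomial path-scanning term of the stated cost.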
The possibilities of extending the heuristic algorithms to other SONET-ring-based architectures were explored. For the distributed model, we proposed a mechanism to maintain the tension between bandwidth utilization and network latency in the local and regional scope. By resolving problems early and locally, the computation of a global bandwidth optimization and the synchronization of a global reconfiguration may be avoided.

There are a number of implications of using such a dynamic model. First, the increased complexity of the routing scheme will certainly increase the management overhead. The improvement in performance has to be carefully weighed against the management overhead. Second, the scheme must be carefully designed so that a channel's bandwidth allocation does not oscillate frequently between the HHR and VTR regions due to small fluctuations in the traffic load on a point-to-point link. All these implications must be carefully taken into account when the scheme is designed.

6.2 Future Work

In this project, we have only examined the performance of the routing schemes against different levels of traffic load distributed almost uniformly over the SONET ring. Many other factors affect the performance of the network, for example, the distribution of traffic flows in terms of load and source/destination pairs, the queue lengths, the packet lengths, etc. More studies are needed to fully understand the implications of all these factors for network performance.

In the proposed dynamic bandwidth-allocation algorithm, we have not discussed how to integrate the self-healing algorithm. In principle, a network element failure and its resulting bandwidth shortages or outages can be regarded as congestion (100% utilization) at the data-link layer. We should be able to generalize the dynamic bandwidth-allocation mechanism to recover from such a scenario. However, this mechanism is constructed at the network layer.
If the system requires immediate recovery at the SONET level, the integration becomes more complicated and difficult.

The heuristic bandwidth-allocation algorithm has been extended to a number of simple SONET-ring architectures. There is no simple solution for complex structures such as multiple BPSRs with two or more matched nodes. The existence of two or more input/output nodes per ring adds the uncertainty of which shared node in a ring should be used for an inter-ring flow. It appears that one more step has to be added to the algorithm to determine which shared node should be used as the output to the next ring. If so, an index must be found as an indicator of the goodness of a shared node. Finding an inexpensive algorithm for optimal bandwidth allocation remains a challenge.

Both centralized and distributed models for the dynamic bandwidth-allocation mechanism have been suggested. However, neither has been evaluated in the network simulator. Although the framework of the protocol has been established, the detailed design and implementation is not a trivial task. The question of whether the management overhead outweighs the performance gain of the mechanism can only be answered after they are fully evaluated in the network simulator or, more practically, in a real network environment.

