IP Multicast in MPLS Networks by Faustina Chan B . S c , The University of British Columbia, 2000 A THESIS S U B M I T T E D IN P A R T I A L F U L F I L L M E N T O F T H E R E Q U I R E M E N T S F O R T H E D E G R E E O F Master of Science in T H E F A C U L T Y O F G R A D U A T E STUDIES (Department of Computer Science) We accept this thesis as conforming to the required standard The University of British Columbia Apr i l 2002 © Faustina Chan, 2002 In presenting t h i s thesis i n p a r t i a l f u l f i l m e n t of the requirements for an advanced degree at the University of B r i t i s h Columbia, I agree that the L i b r a r y s h a l l make i t f r e e l y a v a i l a b l e for reference and study. I further agree that permission for extensive copying of t h i s thesis for s c h o l a r l y purposes may be granted by the head of my department or by h i s or her representatives. I t i s understood that copying or p u b l i c a t i o n of t h i s thesis for f i n a n c i a l gain s h a l l not be allowed without my written permission. Department of The University of B r i t i s h Columbia Vancouver, Canada Abst rac t The Multiprotocol Label Switching (MPLS) is an advanced technology that enables IP networks to support traffic engineering efficiently. It speeds up packet forwarding by combining layer 3 routing with layer 2 switching. In M P L S , a label in the packet is used for making forwarding decisions and a path is pre-established to switch labeled packets at layer 2. Unfortunately, M P L S was originally designed for unicast IP traffic and there is as yet no complete definition for the support of multicast IP traffic. In this thesis, a new mechanism for M P L S to support IP multicast traffic is presented. It is motivated by the idea of a data-driven upstream label allocation scheme. The dense mode of the Protocol Independent Multicast (PIM-DM) is used as the signalling protocol to support multicast label switching. Multicast labels are allocated by upstream routers and distributed towards downstream routers. This process is triggered by the arrival of multicast traffic and no explicit control message is required to piggyback the label advertisement. The key objective is to improve the network scalability by using multicast label switching to forward IP multicast packets at layer 2 with minimal forwarding at layer 3. The support of multicast IP traffic in the M P L S network has been imple-mented under the Network Simulator (NS) from U C Berkeley. Our performance results show significant improvement on the network scalability in terms of the setup time for multicast label switching and the use of the label space. M P L S with IP multicast support plays an important role in the next-generation network. i i Contents Abstract ii Contents iii List of Tables vii List of Figures viii Acknowledgements ix 1 Introduction 1 1.1 Motivation 1 1.2 Problems 2 1.3 Overview 4 2 Background 6 2.1 NS 6 2.1.1 NS Components 6 2.1.2 N A M 8 2.1.3 Simple Simulation Example 8 2.2 M P L S 9 i i i 2.2.1 Traffic Engineering and Packet Switching 9 2.2.2 Overview of M P L S 11 2.2.3 Label Distribution 13 2.2.4 Label Creation 14 2.2.5 Label Setup 14 2.2.6 M P L S Network Simulator 14 2.3 R S V P 20 2.3.1 Data Flows 20 2.3.2 Reservation Style 21 2.3.3 R S V P Network Simulator with L S P support 22 2.4 P I M 24 2.4.1 P I M - D M 24 2.4.2 P I M - S M 25 2.4.3 Multicast Network Simulator 25 3 Design 28 3.1 Overview 28 3.2 Multicast Label Distribution 29 3.3 Multicast FECs 32 3.4 The Egress L S R . . . '. 
33 3.5 Mixed L 2 / L 3 Forwarding 34 4 Implementation 36 4.1 Overview 36 4.2 The Architecture of a M P L S Node 37 4.2.1 Label Classifier 37 iv 4.2.2 M P L S Unicast Classifier 40 4.2.3 Multicast Classifier 41 4.2.4 M P L S Replicator 41 4.3 New M P L S Header 42 4.4 Multicast Label Forwarding Table 43 4.4.1 Multicast Partial Forwarding Table 43 4.4.2 Multicast Label Information Base Table 44 4.5 Multicast Label Distribution 44 4.5.1 Data-driven Label Distribution 45 4.5.2 Request-driven Label Distribution using P I M - D M messages . 46 4.5.3 Request-driven Label Distribution using R S V P messages . . . 47 4.6 Share Label Binding for FEC(*,C?) 48 4.7 The Egress L S R 48 4.8 Mixed L 2 / L 3 Forwarding 50 4.9 Reverse Path Forwarding 51 4.10 Unicast and Multicast Labels 53 4.11 Path Restoration 53 5 Performance Evaluation 55 5.1 Simulation Environment 55 5.1.1 Simulation Topology 55 5.1.2 Simulation Traffic 56 5.1.3 Experimental Setup 56 5.2 FEC(5,C7) - one sender to one group 57 5.3 FEC(* ,G) - multiple senders to one group 59 5.4 FEC(5,*) - one sender to multiple groups 61 v 6 Conclusion and Future Work 63 6.1 Conclusion 63 6.2 Future Work 64 Bibliography 66 v i List of Tables 5.1 The Performance of Unicast and Multicast LSPs with F E C ( 5 , G ) . . 57 5.2 The Performance of Unicast and Multicast LSPs with FEC(*,C7) . . 59 5.3 The Performance of Unicast and Multicast LSPs with FEC(S,*) . . 61 vn List of Figures 2.1 A Simple NS OTcl Script File 9 2.2 The Label Switched Path in M P L S 12 2.3 The Architecture of a M P L S Node 15 2.4 Label Switching 17 2.5 A P I Innovation for Handling L D P Messages 19 2.6 The Establishment of a LSP using R S V P 22 2.7 The Architecture of a Multicast Node 26 3.1 Multicast Label Distribution 30 4.1 The Architecture of a new M P L S Node 38 4.2 New M P L S Header Format in NS 42 4.3 Multicast Label Forwarding Table 43 4.4 Share Label Binding Information for FEC(*,C7) . . . 49 4.5 The Reverse Path Forwarding Check 52 4.6 The Path Restoration Algorithm 54 vi i i Acknowledgements I would like to express my gratitude to my supervisor, Professor Alan Wagner, for his essential guidance, inspiration and suggestion of this thesis. I would also like to thank Professor Son Vuong for being my second reader and giving me useful comments for improvement. Finally, I would like to thank my parents for giving me valuable encouragement and full support during my years of graduate study. F A U S T I N A C H A N The University of British Columbia April 2002 i x Chapter 1 Introduction 1.1 Motivation The explosive growth of the Internet over the last few years has encouraged the de-velopment of new technologies that enable IP networks to support traffic engineering efficiently. Traffic engineering addresses the issue of performance optimization of IP networks in terms of resource utilization and traffic management. The Multipro-tocol Label Switching (MPLS) was introduced by the Internet Engineering Task Force (IETF) to perform traffic engineering efficiently in IP networks. M P L S plays an important role in routing, switching and forwarding packets through the next generation network. M P L S provides a new packet forwarding mechanism called label switching which combines the scalability and flexibility of layer 3 routing with the performance, Quality of Service (QoS) and traffic management of layer 2 switching. 
In M P L S , a path is first established using a signalling protocol; then a label in the packet header, rather than the IP destination address, is used for making forwarding decisions in the network. Unfortunately, M P L S was originally designed for unicast IP traffic, 1 the multicast operation of M P L S has yet to be defined. Before M P L S was created by I E T F , several technologies had already pro-posed their solution to perform layer 2 packet switching and started considering the IP multicast support in their solution. However, these technologies have their lim-itation with regard to multicast label switching: Toshiba's Cell Switch Router [10] only considers point-to-point A T M links and Cicso's Tag Switching [14] restricts label advertisement to piggybacking on multicast routing messages. Since the efficient support for IP multicast over label switching is still an open problem for research community, this thesis focusses on the issues associated with IP multicast deployment in M P L S networks. 1.2 Problems M P L S focussed on topology-driven downstream label allocation scheme for unicast traffic. The creation or the destruction of Label Switched Paths (LSPs) is triggered by the update of the routing table. Labels are assigned from downstream routers and distributed towards the upstream routers using control messages. As a result, label bindings exist before data transmission, so that all packets are switched at layer 2. However, applying the same label switching scheme for multicast traffic causes several problems. Unicast routing is very different from both dense-mode or sparse-mode mul-ticast routing. For unicast, routing tables are established based on hop-by-hop destination-based routing scheme. Unicast routing tables are created once the net-work topology information is obtained and multiple traffic that are traversed towards the same final destination can be aggregated to one entry in the routing table to obtain one single LSP. However, due to the granularity of multicast traffic, multicast 2 routing trees are either source-based or shared-based. Source-based tree uses a tree for each source and multicast group, while shared-based tree uses a tree for each multicast group. In dense-mode multicast routing, source-based trees are created in data-driven manner, so that it is impossible to topologically aggregate such trees. In sparse-mode multicast routing, shared-based tree may coexist with a source-based tree and it is difficult to always assign a common label to traffic from different sources on a branch of the shared tree. Aggregation of multicast trees with different multicast destination (group) addresses on one L S P is impossible. Downstream label binding is not suitable for multicast label switching. For multicast, an upstream router can have more than one downstream routers. Since routers bind labels independently, downstream label binding cannot ensure all down-stream routers bind the same label to the LSP. Some consistent algorithms for la-bel assignment or a per interface-based label binding are required if downstream routers are used to allocate multicast labels. Moreover, downstream label bind-ing requires explicit control messages to distribute label information back to the upstream routers. When there are a lot of multicast traffic which belong to huge number of multicast groups, high control messages processing overhead becomes the bottleneck of the network's scalability. 
For unicast, all LSPs are established before data transmission so that no packet is forwarded at layer 3. The disadvantage of this method is that labels are consumed even when no traffic exists. For multicast, since group membership infor-mation changes dynamically, creating all multicast LSPs before data transmission to prevent layer 3 processing does not perform efficiently. A l l multicast LSPs are required to change when a member joins or leaves a multicast group, although some of them may not be used. As a result, this method only gains the use of label space 3 without improving the processing time. 1.3 Overview In this thesis, we first describe a new mechanism for M P L S to support IP multicast, then present the implementation and the performance of multicast label switch-ing under a network simulation environment generated by the Network Simulator (NS) [18]. Multicast label switching is motivated by the idea of a data-driven upstream label allocation scheme. The dense mode of the Protocol Independent Multicast (PIM-DM) is used as the signalling protocol to support multicast label switching, since it creates the multicast routing tree in a data-driven manner. P I M - D M creates the source-based multicast routing tree by flooding the entire network with multicast data and then allows uninterested receivers to prune off the unwanted branches. This process is triggered by the arrival of multicast data and is repeated periodically to update the multicast group membership. Multicast LSPs are established at the same time when the multicast routing tree is built, and labels are allocated by the upstream routers and distributed towards the downstream routers. The upstream label allocation approach prevents the label binding consis-tency problem caused by downstream routers. Labels allocated from the upstream router can ensure that different labels are assigned to different downstream routers. Also, labels are encapsulated in the packet for data transmission. No explicit con-trol message is required to piggyback the label advertisement; therefore, high control messages overhead for exchanging labels is avoided and the L S P setup time is re-duced. The use of the data-driven approach in multicast label switching allows the 4 network has better scalability. Since multicast LSPs are created based on traffic demand, no pre-established L S P is required. As a result, the number of LSPs and required labels are minimized. When the multicast group membership changes dynamically, less number of LSPs are affected and the processing time to update the affected LSPs is reduced. Label binding information can be shared with the same multicast group to minimize the label space. Different multicast traffic that contain the same group of destinations have some common characteristics. They are transmitted to the same group of destinations through some common subpaths. Some upstream routers distribute packet copies to the same group of downstream routers. In this case, some label binding information can be shared for the same common subpaths so that the label space for LSPs with the same multicast group is minimized. To evaluate the performance of the proposed mechanism without construct-ing a real network, we tested it under the Network Simulator (NS) because NS supports both M P L S and P I M - D M functionalities. The rest of this thesis is organized as follows. Chapter 2 overviews some background knowledge. Chapter 3 discusses the design for real M P L S network to support IP multicast traffic. 
In Chapter 4, we describe how M P L S is extended to support multicast label switching under NS. Chapter 5 provides the performance evaluation of multicast label switching. Finally, in Chapter 6, we summarize the work accomplished and suggest future work that extends the results from this thesis. 5 Chapter 2 Background 2.1 N S NS is a network simulator targeted at networking research which was implemented by U C Berkeley in 1989 [18]. It is written in C++ and OTc l languages. C++ is used for detail protocol implementation while OTc l is used for simulation config-uration. NS can be considered as an OTcl script interpreter which contains event scheduler objects, network component objects and network setup modules [7]. To run a network simulation, a user writes an OTcl script that initiates an event sched-uler, configures the network topology using network objects and module functions in the library, and starts traffic sources through the event scheduler. 2.1.1 N S Components Event Scheduler The event scheduler is one of the major components in NS [7]. A n event in NS is an unique packet ID which contains a scheduled time and a pointer to an object that handles the event. The event scheduler keeps track of the simulation time and 6 executes all events in the queue at a given time by invoking appropriate network components. When a network component requires to spend some simulation time for handling a packet (i.e. a delay), it uses the event scheduler to issue an event for the packet and waits for that event to be executed before handling the packet any further. Another use of the event scheduler is as a timer. A timer uses the event scheduler in a similar way that a delay does. It measures a time value for a packet and performs an appropriate action on the packet at a given time but does not simulate a delay. Network Component Objects Network component objects, which contain compound network objects such as nodes and links, are used to handle packets. Network component objects can be separated into connectors or classifiers, based on the number of possible output data paths. The network objects that have only one output data path are under the connector class, and the objects that have multiple output data paths are under the classifier class. Connectors only generate data for one recipient. When a connector receives a packet, it performs certain operation according to its type and then either forwards the packet to its recipient or drops the packet. There are a number of different types of connectors in NS. Some examples are queue, delay, agent and trace. Classifiers, unlike connectors, can generate data for more than one recipient. Since a classifier can have more than one neighbor, when it, receives a packet, it requires further classification to determine which neighbor can accept the packet. Address classifier, multicast classifier and multipath classifier are a few examples of different types of classifiers. 7 Network Setup Modules Network setup modules are the functions that are used to initiate simulator objects [7]. Network setup modules also are called "plumbing" modules. The term "plumb-ing" is used for a network setup, because setting up a network requires the system to plumb all possible data paths among network objects by setting the pointer of a net-work object to the address of another appropriate network object. 
A new network object can be made by either implementing a new object or making a compound ob-ject from the object library and plumbing the data path through the object. Some of the network setup modules are: routing module, M P L S module and differentiated services module. 2.1.2 N A M The simulation results from NS can be displayed by a graphical user interface called Network Animation (NAM) [17]. N A M is a T c l / T k based animation .tool that is used for visualizing network simulation traces and real world packet traces. It supports topology layouts, packet animation and various techniques for data inspection. 2.1.3 Simple Simulation Example A simple NS OTcl script file is shown in Figure 2.1. To start a network simulation, a user writes a NS OTc l script file that includes topology configuration, traffic source setup and simulation scheduling. In the script file, a simulation setup is required first. It includes creating a simulation object to initiate the event scheduler, a new file to record the N A M trace data and a f i n i s h procedure to execute the trace file on N A M . Then, nodes and links are created to configure the network topology. Agents are initiated for different traffic management. Finally, a simulation scheduling is 8 set ns [new Simulator] # Create a NS simulator object set sf [open out.nam w] # Create a NAM trace file $ns namtrace-all $nf #The "finish"procedure proc finish (} { global ns n f $ns flush-trace close $nf # Close the NAM trace file exec nam out.nam & # Execute NAM on the trace file , exit 0 # Create nodes set NodeO [$ns node] set N o d e l [$ns node] set Node2 [$ns node] set Node3 [$ns node] # Create Links (nodel node2 bandwidth delay queue-type) $ns dup lex - l ink $NodeO $Node2 1Mb 10ms DropTa i l $ns dup lex - l ink S N o d e l $Node2 1Mb 10ms DropTa i l $ns dup lex- l ink $Node2 $Node3 1 M b 10ms DropTa i l # Create a CBR agent to generate constant bit rate traffic set Src [new A g e n t / C B R ] $ns attach-agent $NodeO $Src # Create a sink agent to receive traffic set Dst [new Agent /Nul l ] $ns attach-agent $ N o d e l $Dst # Connect the src and the dst together $ns connect $Src $Dst # Start simulation $ns at 0.1 "$Src start" # CBR agent starts generates Src at 0.1 sec $ns at 2.0 "$Src stop" # CBR agent stops Src at 2.0 sec $ns at 3.0 "finish" # Run the simulation $ns run Figure 2.1: A Simple NS OTcl Script File required to schedule the execution of different events (i.e. traffic transmission). 2.2 MPLS 2.2 .1 Traffic Engineering and Packet Switching Over the last few years, the Internet has grown rapidly from an experimental research network to an extensive public data network. The Internet has also encouraged the 9 development of a variety of new application and services in business and consumer markets. However, the demand placed on the network by these new application and services have been pushed to the limit of the resources of the existing Internet infrastructure. These have driven the demand for increased and guaranteed band-width requirements in the backbone of the network. In addition to the issue of the resource constraints, the explosive growth in the number of users and the volume of traffic make it difficult to support the quality of service (QoS) on the Internet. To solve these existing problems, traffic engineering and high speed packet forwarding are two research areas focussed on improving the network environment [2] [20]. 
Traffic engineering is used to optimize the utilization of network resources by moving the traffic flows away from the shortest path calculated by Interior Gateway Protocols (IGPs) and onto a less congested path. Although using the shortest path can conserve network resource, it causes congestion problems. Congestion problems may occur due to the shortest paths from different sources overlap at some links. Under-utilization or over-utilization of a resource may occur when the traffic exceeds the capacity of the shortest path while a longer path between these two routers is under-utilized. To balance the traffic load on different links and routers in the network, traffic engineering routes the traffic flows across the network based on the resources that the traffic flows require and the availability of the resources in the network. High speed packet forwarding allows a router to forward packets in high speed. Different techniques have been suggested to support that such as a Giga-bit router, the lookup of IP routing table in high speed with longest prefix matching and layer 2 packet switching. Layer 2 packet switching was used in Ipsilon's IP switch-ing [12], Cisco's Tag switching [14] and Toshiba's Cell Switch Router (CSR) [10]. 10 In order to do both traffic engineering and packet switching efficiently, the working group of the Internet Engineering Task Force (IETF) introduced Multipro-tocol Label Switching (MPLS) in 1997 [15]. M P L S extends routing with respect to packet forwarding, load balancing and path controlling. 2.2.2 Overview of M P L S In M P L S , packets are classified into different Forwarding Equivalence Classes (FECs). The Forwarding Equivalence Class (FEC) is a representation of a group of packets that contain the same traffic requirement [2]. A l l packets in such a group follow the same path to the destination. Each M P L S router is able to forward packets directly to a next hop by inserting a label in the packet as an index to specify the next hop. Therefore, when a packet enters the network, only one assignment of the packet to the F E C is required. Data transmission in M P L S occurs on Label Switched Paths (LSPs) which are sequences of labels at each node along the path from the source to the destina-tion. The routers that participate in the M P L S protocol can be classified into Label Switching Routers (LSRs) and Label Edges Routers (LERs). A L S R is a high-speed router located in the core of the M P L S network that is used to establish a Label Switched Path (LSP) or exchange the labels in the packets using the appropriate label signalling protocol. A L E R is a special kind of L S R located at the edge of the M P L S network and the access network. It is used to establish the LSPs, insert or remove a label in a packet header when the traffic enters or exits the M P L S network. The establishment of a L D P and how traffic traverses to a L S P are described in Figure 2.2. A LSP typically originates at the ingress L E R , traverses one or more core LSRs and then terminates at the egress L E R . 
The ingress L E R maps 11 (a) Setup a LSP Ingress (LER) Label Request (Dest = C ) O -Label Request (Dest = C ) Egress (LER) Labe l Mapp ing (Label = 5) Label Mapp ing (Label = 0) (b) Traffic traverses through the LSP IP data 5 IP data 0 IP data IP data Iif InLabel O i f OutLabel 3 - 4 5 I i f InLabel O i f OutLabe l 3 0 1 -Iif InLabel O i f OutLabel 2 5 1 0 Iif: incoming interface, InLabel: incoming label Oif : outgoing interface, OutLabel : outgoing label Figure 2.2: The Label Switched Path in M P L S 12 the incoming traffic onto the LSPs based on the FECs . A l l the packets that match a given F E C will be sent to the L S P corresponding to that F E C . This is done by inserting the appropriate label in the packet header. In M P L S , a label is used to identify the path that a packet should traverse. For example, as shown in Figure 2.2, an IP packet arrives at the ingress LSR A and its F E C is used to lookup the label for the L S P in the Label Information Base (LIB). The LIB is a forwarding table and is comprised of the FEC-to-label binding. LSR A then encapsulates the label 5 into the packet header and forwards it at layer 2. When LSR B receives the packet, it uses the label 5 in the header as the index to determine the outgoing interface and the new outgoing label for the packet. Then, the incoming label in the packet header is exchanged by the outgoing label and the packet is switched to the LSR C. Finally, the egress LSR C removes the label from the packet and forwards it as a regular IP packet. 2.2.3 Label Distribution Several different approaches to label distribution can be used depending on the requirements and the policies of the network. Label Distribution Protocol (LDP) is a new protocol defined by the I E T F for explicit signalling and management of the label space. It has been extended to support explicit routing based on QoS requirements. These extensions are captured in the constraint-based routing C R - L D P protocol. The Resource Reservation Protocol (RSVP) has also been enhanced to support piggybacked exchange of labels and resource reservation. Protocol Independence Multicast (PIM) is used for multicast label binding, and Border Gateway Protocol (BGP) is extended mainly for piggyback external labels for V P N s . 13 2.2.4 Label Creation Three techniques are used to establish labels for LSPs: topology-driven, data-driven or request-driven. The topology-driven method triggers the label creation before data transmission, the data-driven method uses the reception of a data flow to trigger the assignment and the distribution of a label, and the request-driven method uses the reception of the outgoing and incoming control messages to establish labels. 2.2.5 Label Setup A M P L S architecture allows different kinds of routing methods for setting up LSPs. These include hop-by-hop routing, explicit routing and constraint routing. Hop-by-hop routing is used when each L S R determines the next hop based on the F E C . This method is similar to the method that currently used in IP network. Explicit routing is similar to source routing. A n explicit path is specified before the establishment of the LSP. Each L S R follows the explicit path to assign and distribute labels. Explicit routing is mainly used for traffic engineering purpose. Constraint routing is used when each L S R selects the next hop based on different constraints like bandwidth, delay and priority. 
2.2.6 MPLS Network Simulator This section provides a general overview describing the architecture of a M P L S node, the label switching and the label distribution schemes in NS [18]. M P L S network simulator is developed to support various M P L S applications without constructing a real M P L S network. Ahn and Chun [2] [3] have provided modules which extend NS to allow it to simulate M P L S networks. 14 M P L S Node Node Entry entry PFT unlabel packet Addr Classifier Port Classifier MPLS Classifier label packet LIB ERB P F T : Partial Forwarding Table L I B : Label Information Base E R B : Exp l i c i t Rout ing Information Base 'LDP Agent) ldp_agent L3 Forwarding L2 Switching Figure 2.3: The Architecture of a M P L S Node To Another Node The Architecture of a M P L S Node In NS, a node is composed of classifiers and agents. A classifier is responsible for packet classification. When it receives a packet, it examines the packet's address and maps the value to an outgoing interface object which is the next hop of the packet. A n agent is responsible for generating or accepting traffic. It represents the endpoint where network layer packets are created or consumed. It is also used for handling different control messages. A node in NS uses different kinds of classifiers and agents for different purposes. To make a new node that supports label switching, MPLS classifier and LDP agent are inserted into the IP node to become a M P L S node. The architecture of a M P L S node for label switching is shown in Figure 2.3. The components of the M P L S node are: 15 Node Entry is the entry point of a node. It is the first element which handles packets arriving at the node. The Node instance variable, entry, contains the reference to the MPLS Classifier. MPLS Classifier is used mainly for classifying the incoming packet into labeled or unlabeled packet. It executes label operation and layer 2 switching for the la-beled packet. Layer 2 switching, also called label switching, means sending the labeled packet to its next hop directly. The Node instance variable, classifier, stores the reference to the MPLS Classifier. Addr Classifier is responsible for layer 3 forwarding according to the packet's des-tination address. Port Classifier is responsible for agent selection. The Node instance variable, Idp-agents, contains the reference to the LDP Agent. LDP Agent is an agent used for handling L D P ( C R - L D P ) messages such as request, mapping, withdraw, release and notification messages. Label Switching The algorithm of label switching and the structure of different M P L S tables are de-scribed in Figure 2.4. A M P L S node contains three tables to manage label switching information; Partial Forwarding Table (PFT) , Label Information Base (LIB), and Explicit Routing Information Base (ERB). The P F T table is a subset of forwarding table. It consists of a FEC, FlowID and LIBptr. The LIBptr in P F T is a pointer that indicates an entry of the LIB table. P F T is used to map the IP packet into its L S P at the ingress LSR. The E R B table contains information for the established ER-LSPs . It consists of a LSPID, FEC and LIBptr. 
The LIBptr in E R B is used 16 Simple algorithm and structure of tables for MPLS label switching at MPLS Classifier YES use LSP1D as index use LIBptr to lookup OutgtjHTg Label FEC LIBptr Figure 2.4: Label Switching if LIBptr is NULL, execute L3 forwarding otherwise, execute a Label PUSH Incoming Incoming Outgoing Outgoing LIBptr Interface Label Interface Label execute L2 switching to lookup the entry of the LIB table. The E R B is used to map the labeled packet into the E R - L S P by its LSPID in the packet header. The LIB table contains in-formation for the established LSPs. It provides label switching information for the labeled packets. It consists of an Incoming Interface, Incoming Label, Outgoing In-terface, Outgoing Label and LIBptr. For a labeled packet, its Incoming Interface and Incoming Label in the header are used as the index of the LIB to find out the corresponding Outgoing Interface and Outgoing Label. The LIBptr in the LIB is used only when a L S P is tunnelled inside another LSP. It is used to connect two LSPs together. When packets enter the M P L S network, they first arrive at ingress LSRs as a result all packets that ingress LSRs receive are unlabeled. Only intermediate LSRs or egress LSRs can receive labeled packets after the corresponding L S P has been established. 17 When a M P L S node receives a packet, the MPLS classifier first classifies the received packet into labeled or unlabeled packet. If the packet is unlabeled, the MPLS classifier uses the F E C in the packet header as an index to identify the P F T entry. If the LIBptr of the P F T entry is empty, it means that the L S P corresponds to the F E C does not exist. Then, the packet is passed to the Addr classifier and forwarded to the next hop based on layer 3 forwarding. If the next hop of the packet is itself, it means the packet has arrived at the destination. The packet is sent to the Port classifier for handling L D P messages. If the packet is unlabeled but its L S P exists, the MPLS classifier inserts a -label into the packet header by executing a label push operation for the packet. This operation pushes the outgoing label of the LIB entry pointed by a LIBptr of the P F T entry into it. Finally, it performs layer 2 switching on the packet. The packet is forwarded directly to the next hop indicated by the outgoing interface of the LIB entry. If the packet is labeled, MPLS classifier uses the label in the packet header as an index to identify the LIB entry. It then replaces the packet label with the outgoing label of the LIB entry by performing a label swap operation on the packet. If the label in the packet header is 0, it means that the packet has reached the egress L S R of the LSP. The MPLS classifier executes a label pop operation instead of a label swap operation. The label pop operation removes the label in the packet header. Finally, it performs layer 2 switching for the packet. Label Distribution: L D P and C R - L D P The M P L S simulator is able to use the signalling protocol L D P and C R - L D P for label distribution. 
The LDP agent uses L D P ( C R - L D P ) for handling L D P ( C R -18 MPLS Node get-request-msg get-cr-request-msg get-mapping-msg get-cr-mapping get—notification-msg get— withdraw-msg get-release-msg - \ A g e n t J V A g e n t / ^ send-request-msg send-cr-request-msg send-mapping-msg send-cr-mapping-msg send-notification-msg send-withdraw-msg send-release—msg ldp-trigger-by-control ldp-trigger-by-data ldp-trigger-by-explicit-route ldp-trigger-by-withdraw ldp-trigger-by-release Figure 2.5: A P I Innovation for Handling L D P Messages L D P ) messages. Both signalling protocols supports five types of messages: Request, Mapping, Withdraw, Release and N o t i f i c a t i o n messages. A l l these messages are sent over T C P so that the label binding information are reliably delivered. When a M P L S node receives a L D P message, it handles the message and determines the next node. Then it creates a new L D P message and sends it to an L D P agent attached to the next node. This sequence of the A P I invocation is described in Figure 2.5. When a M P L S node receives a L D P message, an A P I get-*-msg is invoked by the L D P agent according to the type of the L D P message. Then, the L D P agent handles the message by calling an A P I l d p - t r i g g e r - b y - * and selects the next node that will receive another L D P message. Once the next node is determined, the L D P agent calls an A P I send-*-msg, which belongs to the L D P agent that can communicate with an agent attached to the next node. Finally, the L D P agent creates a new L D P message and sends it to the next node. 19 2.3 RSVP The Resource Reservation Protocol (RSVP) is a network-control protocol that en-ables Internet applications to obtain special quality of service (QoS) for their data flows [5]. R S V P is not a routing protocol; instead, it operates with different uni-cast and multicast routing protocols such as P I M - D M , and installs the same routes that routing protocols generate. R S V P is receiver-oriented, rather than having the senders keep track of a large number of receivers, it allows the receivers keep track of their needs and initiate resource reservation [21]. This feature enables R S V P to support multicast data flows as effectively as unicast data flow. R S V P also provides several reservation styles for improving the utilization of network resources since the reservation for the same multicast group can be aggregated differently according to its requirement. Finally, the soft state nature of R S V P allows it support dynamic membership changes and automatically adapt to routing changes. These properties make R S V P deal gracefully and efficiently with large multicast groups. 2.3.1 Data Flows In R S V P , a data flow is a sequence of messages that have the same source, destination and quality of service [5]. R S V P data flows can be categorized by sessions. A session is a set of data flows with the same destination and transport-layer protocol. R S V P supports both unicast and multicast sessions, and treats each session independently. The flow of data packets in a single R S V P session can be handled by either multicast or unicast data distribution. Multicast data distribution forwards a copy of each data packet from a single sender towards multiple receivers. However, unicast data distribution involves only a single sender and a single receiver. Data packets in a particular session are forwarded to either an IP destination 20 address or a generalized destination port [5]. 
The IP destination address can be the unicast address or the group address of multicast traffic. The generalized destination port is a demultiplexing point in the transport or application protocol layer. Each sender and receiver can correspond to a unique Internet host. A single host, however, can contain multiple senders and receivers distinguished by the generalized port. Therefore, multiple senders can exist for a unicast destination address, in which case, R S V P can set up reservations for multipoint-to-point transmission. 2.3.2 Reservation Style R S V P provides several reservation styles which indicate how intermediate routers can aggregate reservation requests from receivers in the same multicast group [21]. There are three reservation styles: wildcard filter (WF), fixed filter (FF) and shared explicit filter (SE) [5]. A W F style specifies a shared reservation with a wildcard scope. Wi th a W F style reservation, a single reservation is created and shared by flows from all upstream senders. Reservations can be thought of as a shared pipe whose size is the largest of the resource requests from all receivers. This reservation is propagated upstream towards all sender hosts and is automatically extended to new senders as they appear. A F F style specifies a distinct reservation with an explicit scope. Wi th a F F style reservation, a distinct reservation is created for data packets from a particular sender. This reservation scope is determined by an explicit list of senders. A SE style indicates a shared reservation with an explicit scope. It creates a single reservation shared by explicit upstream senders. Unlike the W F style, the set of senders can be specified explicitly by the receiver making the reservation. 21 Path Message (Label-Request, Explicit Route Objects) Path(B,C,D) O Path(C,D) Path(D) \ Egress LSR D ) (Receiver) Resv(label=10) Resv(label=25) Resv(label=8) Reservation Message (Label Object) Figure 2.6: The Establishment of a L S P using R S V P 2.3.3 RSVP Network Simulator with LSP support We have extended the existing R S V P network simulator in NS [11] to support the label distribution and the establishment of LSPs. This section mainly describes how the base R S V P specification has been extended to allow setting up LSPs using R S V P as a signalling protocol [16]. The Establishment of L S P using R S V P The extended R S V P protocol establishes an LSP by distributing label binding in-formation to the LSRs in the L S P [16]. It is able to create an L S P along an explicit route to support traffic engineering requirements. It can also allocate resource reser-vation in the LSRs along the LSP. The basic flow for setting up an L S P using R S V P is shown in Figure 2.6. To establish an LSP, the ingress L S R uses the required traffic parameters of the session and the administrative policies in the network to determine an explicit route for the new L S P [6]. Then, it sends an R S V P Path message with a label request towards the egress L S R based on the explicit route. When the next L S R receives the Path message and determines that it is not the egress of the LSP, it modifies 22 the explicit route in the Path message and forwards the message to another L S R according to the route. Once the egress L S R receives the Path message, it allocates the required resources and selects a label for the new LSP. Then, it sends an R S V P Reservat ion message with the selected label back to the ingress LSR. 
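This hop-by-hop exchange can be sketched, for the topology of Figure 2.6 (ingress A, transit LSRs B and C, egress D), as a small C++ program. The message structs, the recursive call used to model downstream forwarding of the Path message, and the label allocator are all invented for this illustration; they are not the extended RSVP agent of the simulator, and the label values printed are arbitrary.

// rsvp_lsp_setup.cc -- illustrative sketch of LSP setup through RSVP signalling.
#include <cstdio>
#include <deque>
#include <map>
#include <string>

struct PathMsg { std::deque<std::string> explicit_route; };  // carries the label request
struct ResvMsg { int label; };                               // carries the label object

std::map<std::string, int> out_label;   // label each LSR learned from its downstream hop
int next_label = 8;                     // arbitrary per-hop label allocator

// Process a Path message at `node`: forward it along the explicit route, then
// return the Resv message (with a label) travelling back towards the ingress.
ResvMsg process_path(const std::string& node, PathMsg msg) {
    if (msg.explicit_route.empty())
        return ResvMsg{next_label++};            // egress LSR: allocate a label, answer with Resv
    std::string next_hop = msg.explicit_route.front();
    msg.explicit_route.pop_front();              // remove this hop from the explicit route object
    ResvMsg resv = process_path(next_hop, msg);  // the Path message continues downstream
    out_label[node] = resv.label;                // install the label chosen by the downstream LSR
    std::printf("%s installs outgoing label %d towards %s\n",
                node.c_str(), resv.label, next_hop.c_str());
    return ResvMsg{next_label++};                // advertise a fresh label to the upstream LSR
}

int main() {
    process_path("A", PathMsg{{"B", "C", "D"}}); // ingress A computed the explicit route A-B-C-D
    std::printf("LSP established: ingress A will push label %d\n", out_label["A"]);
}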
The L S P is established successfully when the ingress L S R receives the Reserva t ion message. Finally, the ingress L S R is able to use the LSP to forward traffic directly to the egress LSR. L S P related Objects Three new objects are defined for establishing L S P : label request, label and explicit route objects [4]. A label request object is included in the Path message to specify a label binding request for the path and indicates the network layer protocol that is to be carried over the path. When the receiver obtains the Path message with the label request object, it allocates a label and places the label inside the label object of the corresponding Reservat ion message. A l l the LSRs receive the Path messages store the label request objects in their path state blocks, so that the Path refresh messages will also contain the label request objects. A label object is placed in the Reservat ion message to specify a label that is assigned to the LSP. When the receiver accepts a label request object in the Path message, a label object must be included in the corresponding Reservat ion message. Since R S V P supports multicast data distribution, labels received in the Reservat ion messages oh different interfaces are always considered to be different even if the label values are the same. A n explicit route object is used to store the predetermined path for the LSP. 23 It is intended to be used only for unicast application and is not necessary to be placed in the Path message. When the explicit route object is included, the Path message is forwarded towards the egress L S R based on the path specified in the explicit route object, independent of the IGP shortest path. Each L S R receives an explicit route object is required to modify the route (i.e. removing itself from the route). If the explicit route object is empty, the L S R is the egress for the LSP. 2.4 PIM Protocol Independent Multicast (PIM) is a multicast routing protocol that supports multicast traffic over existing unicast routing protocols [1] [8]. P I M was developed to solve the scaling problems of existing multicast routing protocols. It contains two modes: dense mode (DM) and sparse mode (SM). P I M can be used to bind a multicast label and to set up multicast LSPs. A detailed description of the label distribution and the establishment of LSPs are presented in Chapter 3. 2.4.1 P I M - D M P I M - D M uses the concept of flooding and pruning to establish a multicast tree. In P I M - D M , the multicast tree is established with the arrival of multicast data from source to group. The multicast tree created is called a source tree (S,G), a tree with one source (S) and one multicast group (G). P I M - D M initially floods the network with multicast data. Nodes that do not want to receive the data of the specific multicast group are required to prune off the forwarding branch, so that P I M - D M no longer forwards data to them. The pruning process is repeated periodically; therefore, generating a very volatile tree structure. The prune state can be reset either when its lifetime expires or the receiver starts grafting the specific multicast 24 group. Both situations trigger the data to be forwarded down the previously pruned branch. 2.4.2 PIM-SM P I M - S M uses the concept of the Rendezvous Point (RP) to build a multicast tree. The Rendezvous Point is a centralized router that is used as the root of the mul-ticast tree. Senders and receivers are required to register the R P router before data transmission. 
Explicit join messages from the receivers are sent towards the R P router to propagate the multicast group membership. Multicast data from the senders are also sent to the R P router for future forwarding, so that the receivers are able to discover the sender of the data and start receiving the data for the group. Therefore, a multicast routing entry already exists in P I M - S M before the arrival of multicast data. The multicast tree for P I M - S M is called a shared tree denoted by (*,G), denoting one tree per multicast group. 2.4.3 Multicast Network Simulator Multicast network simulator has been implemented by extending NS [9]. It has been implemented to support both P I M - D M and P I M - S M . This section describes the architecture of a multicast node and the functionality of its components. The Architecture of a Multicast Node The architecture of a multicast node is shown in Figure 2.7. The major components of the multicast node are [19]: Node Entry is the entry point of a node. For multicast node, the entry point is a switch that looks at the first bit of the destination address to decide whether 25 Multicast Node Figure 2.7: The Architecture of a Multicast Node it is an unicast address or a multicast address. If it is a multicast packet, the switch forwards the packet to the multicast classifier; otherwise, it forwards the packet to the unicast classifier (address classifier). Multicast Classifier is responsible for classifying multicast packets according to their source and destination (group) addresses, and forwarding them to the appro-priate replicators. It maintains a hash table for mapping source-group pairs to slot numbers. The slot number is the location of a replicator. When a packet arrives containing an unknown source-group pair, the Multicast Classifier adds an new entry to its table and creates a new replicator for the new source-group pair. Then, it forwards the packet to the replicator. Replicator is used to produce copies of a packet. Each replicator is only responsible for copying packets from a specific source-group pair. It contains a table to 26 store all the outgoing link information. When a packet arrives, the Replicator replicates the packet and gives a copy to each of the nodes listed in the table. Addr Classifier is used in supporting unicast packet forwarding. When a packet arrives, it looks at the packet's destination address to figure out the outgoing interface and then applies L3 forwarding on it. 27 Chapter 3 Design 3.1 Overview M P L S has mainly focused on the label switching of unicast IP traffic; however, the label switching of multicast IP traffic has not yet developed [13]. The main difficulty for M P L S to support multicast label switching is how to bind labels to multicast F E C s and guarantee all downstream LSRs bind the same label to the F E C . The existing unicast label distribution mechanism is not suitable for multicast traffic; therefore, a new mechanism for M P L S supporting IP multicast is presented in this chapter [22]. In the proposed mechanism, P I M - D M is used as the signalling protocol to support multicast label binding and the establishment of multicast LSPs. The mul-ticast label forwarding table is created using the concept of flooding and pruning in P I M - D M . The multicast label distribution is triggered by traffic or control messages. Multicast labels are allocated by the upstream LSRs and distributed towards the downstream LSRs. 
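To make this concrete, the declarations below sketch one possible layout for the per-LSR multicast label state implied by this design; they are assumptions for illustration only, and the actual MPFT and MLIB tables used in the simulator are described in Chapter 4.

// Illustrative multicast label state kept by one LSR under the proposed scheme.
#include <string>
#include <vector>

struct MulticastFec {       // a multicast FEC is a (source, group) pair
    std::string source;
    std::string group;
};

struct OutBranch {
    int out_iface;          // one downstream branch of the (S,G) tree
    int out_label;          // label chosen locally (upstream allocation) for that branch;
                            // label 0 is later reserved for branches leading to plain IP routers
};

struct McastLabelEntry {
    MulticastFec fec;
    int in_iface;           // interface towards the upstream LSR
    int in_label;           // label that the upstream LSR allocated for this LSR
    std::vector<OutBranch> branches;
};

// One entry per (source, group) tree that currently passes through this LSR;
// entries appear with the first data packet and disappear with Prune messages.
using McastLabelTable = std::vector<McastLabelEntry>;

Removing a branch when a Prune arrives, or adding one on a Graft, then only touches the branches vector of the affected entry, leaving the bindings of other trees untouched.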
28 3.2 Multicast Label Distribution For unicast, "Downstream Label Distribution" is usually used to allocate labels. The upstream L S R initiates the establishment of a L S P by sending a label request message to the downstream LSR. When the downstream L S R receives the label request message, it is responsible for binding labels and notifying the upstream L S R about the binding via a label mapping message. Labels are distributed from the downstream L S R to the upstream LSR. However, this label distribution mechanism is not suitable for multicast label distribution. "Downstream Label Distribution" cannot ensure all downstream LSRs allocate the same label to the F E C because they distribute labels independently. In this case, "Upstream Label Distribution" is used to allocate multicast labels. That is, labels are assigned by the upstream LSRs and are distributed to the downstream LSRs. For multicast, an upstream L S R can have more than one downstream LSRs. If the network is large, using the topology to trigger label distribution before data transmission is not efficient. Since the time for all upstream LSRs to bind labels wil l be very long, the label binding time will become the bottleneck of the network's scalability. Therefore, the new label distribution mechanism for M P L S supporting IP multicast is triggered by traffic or control messages. P I M - D M is the signalling protocol used to support multicast traffic in a M P L S network. In P I M - D M , multicast routing trees are established in a data-driven manner. P I M - D M initially propagates the entire network with multicast traffic, then it prunes back paths based on uninterested receivers. This feature makes it suitable for "Upstream Label Distribution". The Multicast Label Forwarding Table is established in the same way as P I M - D M creates the multicast routing tree. It is used to store all the label binding information and is established with the arrival 29 IP data 22 IP data ( L S R G , after receive Prune M s g IP data IP data Source after receive Prune M s g Iif ILabel Oif OutLabel - 0 " 1 rz-0 - 2 18 Iif ILabel Oif OutLabel 0 18 1 6 0 18 2 4 Iif: incoming interface, ILabel : incoming label Oif : outgoing interface, O L a b e l : outgoing label Iif ILabel Oif OutLabel 0 4 1 0 Figure 3.1: Multicast Label Distribution Rl Receiver Rx Receiver R2 Receiver R3 Receiver of multicast packets from source to group. No explicit multicast routing message is required to create the label forwarding table. The upstream L S R assigns a label to each of the outgoing interface of the packet without considering the multicast group membership information. The downstream L S R rejects the label if it receives a prune message from the uninterested receivers. The Prune and Graft control messages in P I M - D M are used to notify the creation or the deletion of the multicast LSPs. A n example of multicast label distribution is illustrated in Figure 3.1. In this example, a multicast traffic with group G is initiated in source S and receivers 30 RI, R2 and R3 have already joined group G before traffic transmission has started. When a first multicast packet belonging to a new F E C arrives at an ingress LSR A, LSR A does not contain any information about the F E C . It invokes layer 3 forwarding to figure out all of its outgoing interfaces. When all outgoing interfaces of the F E C have been decided, LSR A assigns a label for each of them and stores the label binding in the label forwarding table. 
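The first-packet behaviour of LSR A can be sketched as follows; the stand-in routing lookup, the label allocator and all names are invented for illustration, and the data-driven distribution actually implemented in the simulator is described in Section 4.5.1.

// first_packet.cc -- illustrative data-driven label binding at an upstream LSR.
#include <cstdio>
#include <map>
#include <string>
#include <utility>
#include <vector>

using Fec = std::pair<std::string, std::string>;     // (source S, group G)
using Branches = std::vector<std::pair<int, int>>;   // (outgoing iface, outgoing label)

std::map<Fec, Branches> mcast_table;                 // this LSR's multicast label bindings
int next_label = 12;                                 // arbitrary label allocator

// Stand-in for the layer 3 multicast routing lookup (the PIM-DM oif list).
std::vector<int> l3_outgoing_interfaces(const Fec&) { return {1, 2}; }

void on_multicast_packet(const Fec& fec) {
    auto it = mcast_table.find(fec);
    if (it == mcast_table.end()) {
        // First packet of this FEC: consult L3 routing once, then bind one
        // upstream-allocated label to every outgoing interface.
        Branches branches;
        for (int oif : l3_outgoing_interfaces(fec))
            branches.emplace_back(oif, next_label++);
        it = mcast_table.emplace(fec, std::move(branches)).first;
    }
    // Every packet, including the one that triggered the binding, is replicated
    // and sent labeled, so downstream LSRs learn the labels from the data itself.
    for (const auto& [oif, label] : it->second)
        std::printf("copy of (%s,%s): iface %d, label %d\n",
                    fec.first.c_str(), fec.second.c_str(), oif, label);
}

int main() {
    on_multicast_packet({"S", "G"});   // triggers the L3 lookup and label binding
    on_multicast_packet({"S", "G"});   // pure L2 switching from the stored bindings
}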
After that when LSR A receives a packet with the same F E C , it is able to lookup the label binding information in the table and sends a copy of the packet to each of its outgoing interfaces based on layer 2 switching with the selected label encapsulated. In this case, LSR A forwards a copy with label 12 to outgoing interface 1 and another copy with label 18 to outgoing interface 2. When LSR C receives a labeled multicast packet from LSR A, it uses the packet label 18 to check its corresponding outgoing label information from the label forwarding table. If the table does not contain any information about the packet label, LSR C forwards the packet to layer 3 to find out all of its outgoing interfaces and allocates a label for each of them. Once the label binding is inserted into the table, LSR C sends a copy with label 6 to outgoing interface 1 and label 4 to outgoing interface 2 when a packet with label 18 is received. When LSR E receives a packet with label 4, it checks its label forwarding table and finds out the corresponding outgoing label. Since the outgoing label is 0, it means that it is the egress L S R of the F E C . The special label 0 is assigned to the outgoing interface to represent the outgoing interface is a connected IP router. LSR E then removes the M P L S header of the packet and forwards it to the receiver R3 by L3 forwarding. The receiver Rx is not a member of group G. When Rx receives a packet 31 with group G, it sends a Prune message to LSR B. LSR B then checks its label forwarding table and deletes the entry with the outgoing interface the same as the interface that receives the Prune message. If the deleted entry is the only entry in the table that matches the group G, another Prune message is sent towards the upstream LSR A to notify the deletion of a multicast LSP. If the receiver Rx wants to join the group G later, it sends a Graft message to LSR B. When LSR B receives the Graft message, it assigns a label to the new outgoing interface (i.e. the interface that receives the Graft message) and inserts a new entry into the table. Then, it checks its label forwarding table and determines whether another entry with the group G exists or not. If the new inserted entry is the only entry with group G, another Graft message is sent towards the upstream L S R to inform the creation of a multicast LSP. 3 . 3 Multicast FECs Multicast F E C is a source-group pairs. It is mainly based on the multicast group membership information. Therefore, three kinds of multicast F E C s are derived: one sender to one group FEC(5,C7), multiple senders to one group FEC(* ,G) and multiple sender to multiple groups FEC(*,*) . FEC(* ,G) and FEC(*,*) are the special kinds of multicast FECs , whereas FEC(*,*) is used to denote the set of all senders and groups, (i.e. F E C ( * , G i ) , FEC(*,G2), FEC(*,C7„)). Different kinds of packets which are forwarded to the same group of destinations are classified into FEC(* ,G) . These kinds of packets have some common characteristics. They are transmitted to the same group of destinations through some common subpaths. Some intermediate LSRs of their LSPs distribute copies to downstream LSRs through some common outgoing inter-32 faces. In this case, instead of having a separate multicast label tree for F E C ( S i , G ) , FEC(5 , 2 ,G) , FEC(S„,G) , a shared multicast label tree can be used for FEC(* ,G) . That is, for the same multicast group, LSRs assign the same label for the same outgoing interface. 
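This sharing rule can be expressed as a small allocation routine, sketched below with invented names and an arbitrary allocator; the simulator's handling of shared bindings for FEC(*,G) is described in Section 4.6.

// shared_binding.cc -- illustrative shared label binding for FEC(*,G).
#include <cstdio>
#include <map>
#include <string>
#include <vector>

struct Branch { int out_iface; int out_label; };
std::map<std::string, std::vector<Branch>> bindings_for_group;   // keyed by the group G
int next_label = 6;                                              // arbitrary label allocator

// Bind (or reuse) branch labels for FEC(source, group): the source is ignored
// whenever labels for the same group are already in the table.
const std::vector<Branch>& bind_fec(const std::string& /*source*/,
                                    const std::string& group,
                                    const std::vector<int>& out_ifaces) {
    auto it = bindings_for_group.find(group);
    if (it == bindings_for_group.end()) {
        std::vector<Branch> branches;
        for (int oif : out_ifaces)
            branches.push_back({oif, next_label++});   // fresh labels only for a new group
        it = bindings_for_group.emplace(group, branches).first;
    }
    return it->second;                                 // shared by every sender of group G
}

int main() {
    for (const Branch& b : bind_fec("S1", "G", {1, 2}))
        std::printf("FEC(S1,G): iface %d -> label %d\n", b.out_iface, b.out_label);
    for (const Branch& b : bind_fec("S2", "G", {1, 2}))   // the same labels are reused
        std::printf("FEC(S2,G): iface %d -> label %d\n", b.out_iface, b.out_label);
}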
The main purpose of having a shared multicast label tree for FEC(*,G) is to minimize the label space in the table. To create the multicast label forwarding table with shared label for FEC(* ,G) , a F E C group checking in the table is required before label binding: When a L S R receives a packet with a new F E C , it checks the F E C information in the label for-warding table. If there is an F E C in the table with the same group as the new F E C , no new label is allocated to the outgoing interface of the new F E C . The L S R uses the label binding information in the table to assign labels to the new F E C . As a result, F E C s with the same multicast group have the same label binding for the same outgoing interface. 3 . 4 The Egress LSR For multicast, it is important to determine who is the egress L S R in the multicast tree. However, a multicast L S P is point-to-multipoint and a L S R can have more than one outgoing interface. In this case, the egress L S R cannot be determined by matching the packet's F E C and its address since the F E C of the multicast packet is a source-group pair. There are two kinds of egress LSRs for multicast. One contains outgoing interfaces with only IP routers. The other one contains some outgoing interfaces with IP routers and some with M P L S routers. The first one is the regular egress LSR. It is required to pop off the M P L S header before executing L3 forwarding. The second one is the special case of the egress LSR. It is required to forward packets 33 to IP routers at L3 while switching packets to M P L S routers at L2. The multicast egress L S R is required to support both L2 switching and L3 forwarding at the same time. This feature is called "Mixed L2 /L3 Forwarding". It is described in the next section. 3.5 Mixed L2/L3 Forwarding For a point-to-point unicast LSP, each unicast IP traffic has one incoming and one outgoing interface so that it is either forwarded at L2 or at L3. However, multicast IP traffic can be forwarded to more than one outgoing interface. A multicast L S P is point-to-multipoint, and each multicast IP traffic can contain one incoming and multiple outgoing interfaces. In this case, each L S R is required to have the ability to forward to some branches on L2 and to some other branches on L3. This feature is called "Mixed L 2 / L 3 Forwarding". In order to support "Mixed L2 /L3 Forwarding", each L S R is required to determine which outgoing interfaces are connected to IP routers and which others are connected to M P L S routers. The multicast routing tree that is created by P I M -D M is allowed to split in branches in which only "IP branches" are forwarded at L3. In this case, L3 forwarding is required to ensure that the traffic which is not forwarded through " M P L S branches" in the multicast routing tree has already been switched to the M P L S routers in L2. For a LSR, outgoing interfaces that are connected to IP routers are called "Egress Interfaces". A special label 0 is assigned to the "Egress Interfaces", so that the L S R is able to use the label binding information in the multicast label forwarding table to figure out which are "Egress Interfaces". When the L S R receives a labeled packet, according to the label binding in the forwarding table, it performs 34 L3 forwarding to the "Egress Interfaces" and L2 switching to the others. 35 Chapter 4 Implemen ta t ion 4.1 Overview The new mechanism for M P L S supporting IP multicast has been implemented by extending the M P L S network simulator in NS. 
The primary purpose of this work is to use NS to simulate an MPLS network environment and demonstrate the performance of the mechanism without constructing a real MPLS network. This chapter describes our implementation for simulating MPLS-based IP multicast traffic in NS. A new architecture for an MPLS node, a new MPLS header and a new multicast forwarding table were added to NS for multicast label distribution. A new MPLS replicator was also created to perform multicast label operations and to replicate labeled traffic to all receivers. The capabilities of the extended MPLS network simulator are as follows:

1. The multicast forwarding table is created using the concept of flooding and pruning in PIM-DM.

2. The multicast LSP can be established by data-driven label distribution based on the arrival of multicast traffic.

3. The multicast LSP can be established by request-driven label distribution using PIM-DM messages.

4. The multicast LSP with resource reservation can be established by request-driven label distribution using RSVP messages.

5. LSRs support the "Mixed L2/L3 Forwarding" feature, which allows traffic to some nodes in a multicast tree to be forwarded at L3 while traffic to others is switched at L2.

6. The "Reverse Path Forwarding" check is used to eliminate unnecessary entries in the forwarding table and prevent data overflow in the MPLS network.

7. There is no conflict between unicast and multicast labels, since unicast and multicast label distribution are performed independently.

8. "Path Restoration" is used to generate an alternative multicast LSP to reroute multicast traffic around a failure in an LSP.

4.2 The Architecture of an MPLS Node

The architecture of an MPLS node that supports both unicast and multicast traffic is shown in Figure 4.1.

PFT: Partial Forwarding Table
LIB: Label Information Base
ERB: Explicit Routing Information Base
MPFT: Multicast Partial Forwarding Table
MLIB: Multicast Label Information Base

Figure 4.1: The Architecture of a new MPLS Node

4.2.1 Label Classifier

The label classifier is the main classifier in the MPLS node. It is used to classify the incoming packet into different categories, execute the appropriate operations according to the type of the packet, and then forward the packet to other classifiers or replicators. The label classifier is also responsible for managing different kinds of tables. It contains five tables that store information related to LSPs:

• Partial Forwarding Table (PFT), Label Information Base (LIB) and Explicit Routing Information Base (ERB) are used to maintain unicast LSP information.

• Multicast Partial Forwarding Table (MPFT) and Multicast Label Information Base (MLIB) are used to manage multicast LSP information.

The major operations performed in this classifier are:

Packet Classification: Packets are classified into four categories: unicast labeled, unicast unlabeled, multicast labeled and multicast unlabeled. Packets without an MPLS header are considered to be unlabeled packets. The label classifier uses the destination address of the packet to determine whether the packet is unicast unlabeled or multicast unlabeled. Packets with an MPLS header are labeled packets. The label classifier looks up the label flag in the MPLS header to identify whether the packet is unicast labeled or multicast labeled.

Unicast Label Binding: The label classifier is responsible for unicast label binding.
It allocates labels for unicast LSPs and stores the label binding information in the PFT, LIB and ERB tables.

Unicast Label Operation: Unicast label operations consist of uPush, uSwap and uPop. The uPush operation is performed in ingress LSRs to insert the MPLS header into the packet. The uSwap operation is executed in intermediate LSRs to swap the label inside the MPLS header, and the uPop operation is performed in egress LSRs to remove the MPLS header from the packet.

Unicast LSP Restoration: The label classifier is able to reroute a unicast packet around a link failure using one of three methods: drop, L3 forwarding or a secondary LSP. When a link failure is detected, the label classifier either drops the packet, executes L3 forwarding on the packet, or reroutes the packet through a secondary LSP.

Multicast Label Binding: The label classifier is responsible for multicast label binding. It assigns a label to each of the outgoing interfaces of the FEC and records the label binding information in the MPFT and MLIB tables.

Multicast LSP Restoration: The label classifier is able to reroute multicast packets to a secondary LSP when a link fails. A secondary LSP is created dynamically when a link failure is detected and is destroyed when the link has recovered.

4.2.2 MPLS Unicast Classifier

The MPLS unicast classifier is similar to the old MPLS classifier. It is responsible for classifying a packet as labeled or unlabeled. The MPLS unicast classifier does not contain any LSP information. All the LSP information is stored in the label classifier; therefore, the label operations are performed inside the label classifier. The unicast classifier only needs to check whether the packet includes an MPLS header or not. If the packet contains an MPLS header, the unicast classifier looks up the outgoing interface from the header and performs L2 switching on the packet directly. Otherwise, it passes the packet to the address classifier and lets the address classifier perform L3 forwarding based on the packet's destination address.

4.2.3 Multicast Classifier

The multicast classifier is the same as the one located in a multicast node. It is used to classify multicast packets based on their source and destination (group) addresses and forward them to the appropriate replicators. It maintains a table that maps source-group pairs to replicators. Each source-group pair has its own replicator for generating copies. When the multicast classifier receives a packet with an unknown source-group pair, it adds a new entry to its table and creates a new replicator for the new source-group pair. Finally, it forwards the packet to the replicator to distribute copies to all related outgoing links.

4.2.4 MPLS Replicator

The MPLS replicator is responsible for producing copies of a packet and assigning labels to the copies. Similar to the replicator in a multicast node, an MPLS replicator is only used for generating copies of packets of a specific source-group pair. The label classifier contains a table that stores information about MPLS replicators and source-group pairs. If a packet with an unknown source-group pair arrives, the label classifier creates a new MPLS replicator for its group and inserts an entry into the table. Besides making copies of the packet, the MPLS replicator is required to assign labels to the different copies. All label operations, such as push and pop, are executed inside the replicator.
When it receives a packet from the label classifier, it generates a copy for each outgoing interface and assigns a different label to each copy based on the label binding information obtained from the label classifier.

The MPLS replicator supports "Mixed L2/L3 Forwarding": it is able to forward packets at either L2 or L3. If the outgoing interface is connected to an IP router, the MPLS replicator executes L3 forwarding on the packet. If the outgoing interface is connected to an LSR, it performs L2 switching on the packet.

4.3 New MPLS Header

A new MPLS header is introduced for multicast label distribution in NS. The header format (FEC, Label Flag, Label, Interface, TTL, Mcast List) is illustrated in Figure 4.2.

Figure 4.2: New MPLS Header Format in NS

FEC indicates the forwarding equivalence class of the incoming packet. For a multicast packet, the FEC is a source-group pair (S,G). For a unicast packet, the FEC is the destination address.

Label Flag indicates the type of label, such as unlabeled, unicast or multicast label.

Label represents the value of the label.

Interface represents the outgoing interface for switching the packet to the next hop.

TTL provides the time-to-live value of the packet label.

Mcast List contains a list of outgoing interface-label pairs for the packet. It is used only for multicast labeled packets, to transfer the outgoing interface-label information from the label classifier to the MPLS replicator within an MPLS node. This field is not used for carrying information from one MPLS node to another (i.e. across the MPLS network).

Figure 4.3: Multicast Label Forwarding Table

4.4 Multicast Label Forwarding Table

The Multicast Partial Forwarding Table (MPFT) and the Multicast Label Information Base (MLIB) are used to store the multicast LSP information. The structure of these tables is described in Figure 4.3.

4.4.1 Multicast Partial Forwarding Table

The Multicast Partial Forwarding Table (MPFT) is a subset of the multicast forwarding table and is used to map an IP multicast packet into a multicast LSP at the ingress LSR. It consists of a FEC, Incoming Interface, Incoming Label, FlowID, MLIBptr, secMLIBptr and altPath. For multicast, the FEC is a source-group pair (S,G). Different sources can have the same destination group and the same source can have different destination groups. All of these are considered to be different FECs and each has a separate entry in the MPFT. For an unlabeled multicast packet, its FEC is used as an index into the MPFT to check whether an LSP exists or not. For a labeled multicast packet, its Incoming Interface and Incoming Label are used to find the MLIBptr, which is then used to obtain a list of outgoing interface-label pairs from the MLIB table. When a link failure occurs, a secondary LSP is used until the link has recovered. The altPath field indicates the use of the secondary LSP, and the secMLIBptr is used as the index into the MLIB table to obtain the secondary list of outgoing interface-label pairs.

4.4.2 Multicast Label Information Base Table

The Multicast Label Information Base (MLIB) contains all the label information needed for the establishment of multicast LSPs. It is used to provide label binding information for labeled packets. It consists of an Outgoing Interface, Outgoing Label, mLIBptr and SenderList.
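For concreteness, the sketch below restates the header and table fields just listed as C++ structures. The types and field names are assumptions chosen for readability; the simulator keeps the same information, but not necessarily in this exact layout.

    #include <vector>
    #include <utility>

    // New MPLS header (Figure 4.2): carries the FEC, label type, label value,
    // outgoing interface, TTL and, inside a node only, the Mcast List.
    struct MplsHeader {
        int fec_src, fec_group;   // FEC: (S,G) for multicast, destination for unicast
        enum { UNLABELED, UNICAST_LABEL, MULTICAST_LABEL } label_flag;
        int label;
        int interface;            // outgoing interface for L2 switching
        int ttl;
        std::vector<std::pair<int,int>> mcast_list;  // (oif, label) pairs, node-internal only
    };

    // MPFT entry (Section 4.4.1): maps a FEC or an (iif, ilabel) pair to MLIB state.
    struct MpftEntry {
        int fec_src, fec_group;
        int iif, ilabel;
        int flow_id;
        int mlib_ptr;       // index of the primary outgoing list in the MLIB
        int sec_mlib_ptr;   // index of the secondary (restoration) list, -1 if none
        bool alt_path;      // true while the secondary LSP is in use
    };

    // MLIB entry (Section 4.4.2): one outgoing interface-label pair plus the
    // senders that share it and a link to the next entry of the same group.
    struct MlibEntry {
        int oif, olabel;            // olabel == 0 marks an "Egress Interface"
        int next_mlib_ptr;          // next entry of the same group, -1 at the end
        std::vector<int> sender_list;
    };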
FECs with different sources but the same group share the same outgoing label on the same outgoing interface. The mLIBptr is used to link together all outgoing interfaces and labels of FECs with the same group, and the SenderList is used to store the list of senders that use the corresponding outgoing interface.

4.5 Multicast Label Distribution

This section describes the different kinds of multicast label distribution that are supported in NS.

4.5.1 Data-driven Label Distribution

Data-driven label distribution is triggered by the arrival of multicast packets. Since labels are only allocated when there is traffic on the multicast tree, it consumes less LSP setup time and fewer labels than the topology-driven method. This mechanism uses the concept of flooding and pruning in PIM-DM to create the MPFT and MLIB tables. An LSR initially floods the MPLS network with multicast data, creates entries and assigns labels for all its outgoing interfaces in the MPFT and MLIB tables. If a receiver is not a member of the group, it sends a Prune message back to the previous LSR. When an LSR receives a Prune message, it determines the FEC of the Prune message and deletes the corresponding entry from the MPFT and MLIB tables.

When a multicast packet with a new FEC arrives at an LSR, the label classifier identifies it as an unlabeled multicast packet. Since the LSP for this new FEC has not been established, the label classifier forwards the packet to the multicast classifier to determine all its outgoing interfaces and perform L3 forwarding. Once the outgoing interfaces are determined, the label classifier checks its MPFT, assigns a new label to each outgoing interface and stores the label binding information in both the MPFT and MLIB tables. After that, the label classifier is able to use the FEC of the packet as an index to find the list of interface-label pairs, encapsulate this information in the packet header and pass the packet to the MPLS replicator. For a labeled multicast packet, the label classifier uses the incoming interface and the packet label to look up the list of interface-label pairs. The label classifier cannot produce copies of the packet; instead, all label operations for multicast packets are performed in the MPLS replicator. When the MPLS replicator receives the packet, it extracts the list of interface-label pairs from the packet header. Then it performs a label mPop operation to delete the MPLS header of the packet before generating copies. Next, the MPLS replicator executes a label mPush operation on each copy, that is, it pushes an MPLS header onto each copy with the label assigned in the interface-label pairs. Finally, it distributes the copies to every interface in the list by L2 switching.

4.5.2 Request-driven Label Distribution using PIM-DM messages

The Prune and Graft control messages in PIM-DM are used to trigger the creation or deletion of a multicast LSP when a receiver joins or leaves a multicast group. This label distribution is called request-driven because whether the LSP is created or deleted depends on the reception of multicast control messages.

If a receiver joins a multicast group, a Graft message with the group information is sent to the previous upstream LSR. When an LSR receives a Graft message, the label classifier checks its MPFT and finds all FECs with the same group as the Graft message.
If the MPFT does not contain any FEC with this group, the label classifier sends another Graft message towards the upstream LSR to notify it about the establishment of a multicast LSP. Otherwise, for each of the FECs with this group, the label classifier assigns a new label to the new outgoing interface (i.e. the interface that received the Graft message) and inserts a new entry into the MLIB table.

If a receiver leaves a multicast group, it sends a Prune message with the group information to the previous upstream LSR. When the previous LSR receives the Prune message, the label classifier checks its MPFT and deletes the appropriate entry, namely the one whose outgoing interface matches the interface that received the Prune message. If the deleted entry is the only entry in the table that matches the group information in the Prune message, another Prune message is sent towards the upstream LSR to notify it of the deletion of a multicast LSP.

Every time a receiver joins a multicast group, a new set of labels is used to bind the multicast LSPs. Although labels are reclaimed after receivers leave the multicast group, there is no guarantee that the LSR allocates the same label for the LSP when the receiver joins the group again. Consequently, the multicast LSPs are very volatile when receivers join or leave a multicast group frequently.

4.5.3 Request-driven Label Distribution using RSVP messages

RSVP can also be used to support multicast traffic in the MPLS network. Since RSVP originally supports multicast traffic, it is able, with a label binding extension, to establish multicast LSPs easily. The same mechanism can be used with RSVP to allocate multicast labels. The key objective is to provide multicast LSPs that support QoS using RSVP.

Multicast LSPs with QoS support are established in a request-driven manner. The Path and Reservation control messages in RSVP are used to trigger the establishment of multicast LSPs. The Path message is used to initiate a request to set up a multicast LSP, and the Reservation message is used to notify the sender about the status of the LSP, i.e. whether the LSP is established or rejected. Label binding is triggered by the arrival of the Path message and is distributed from upstream to downstream LSRs. When a sender wants to set up a multicast LSP with QoS support, it starts binding labels to the corresponding outgoing interfaces and encapsulates the reservation information, as well as the selected labels, in a Path message. Then it sends the Path message towards the receivers. The intermediate LSRs also initiate label assignment on the arrival of the Path message. When the receivers receive the Path message, they reserve the resources for the LSP and reply to the sender with a Reservation message. Once the Reservation message arrives at the sender, the establishment of the LSP is complete. In this case, the LSP is established before data transmission, so that no packet is forwarded at layer 3.

4.6 Shared Label Binding for FEC(*,G)

FECs with the same multicast group can share labels for the same outgoing interface. For multicast label distribution, the MPFT and MLIB tables are used to manage multicast LSP information. The MPFT stores the FEC, incoming interface and incoming label of the LSP and is mainly used to look up the location of the corresponding label binding information in the MLIB table. Figure 4.4 shows how FEC(*,G) shares label binding information.
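To complement Figure 4.4, the following sketch shows how a particular FEC(S,G) could recover its own outgoing interface-label list from the shared entries: the entries reached through the mLIBptr chain are kept only if the FEC's sender appears in their SenderList. The chain representation and the helper below are assumptions for illustration, not the simulator's code.

    #include <algorithm>
    #include <utility>
    #include <vector>

    // Minimal MLIB entry for this sketch (see Section 4.4.2).
    struct MlibEntry {
        int oif, olabel;
        int next_mlib_ptr;            // next entry of the same group, -1 at the end
        std::vector<int> sender_list; // senders that really use this interface
    };

    // Walk the shared chain starting at mlib_ptr (taken from the FEC's MPFT entry)
    // and keep only the interfaces whose SenderList contains this FEC's sender.
    std::vector<std::pair<int,int>>
    outgoing_list(const std::vector<MlibEntry>& mlib, int mlib_ptr, int sender) {
        std::vector<std::pair<int,int>> result;  // (oif, olabel) pairs
        for (int i = mlib_ptr; i != -1; i = mlib[i].next_mlib_ptr) {
            const MlibEntry& e = mlib[i];
            if (std::find(e.sender_list.begin(), e.sender_list.end(), sender)
                    != e.sender_list.end())
                result.emplace_back(e.oif, e.olabel);
        }
        return result;
    }

With this layout, FECs of the same group store only one chain of shared entries, and the SenderList filter is what removes the outgoing interfaces that a given sender does not use.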
In order to share labels among the FECs with the same multicast group, their LSPs point to the same MLIB location in the MPFT table. The SenderList field in the MLIB table keeps the list of senders that share the corresponding outgoing interface-label pair. For FEC(*,G), to obtain its own list of outgoing interface-label pairs, the SenderList is checked to eliminate any unnecessary outgoing interfaces. Only the outgoing interfaces whose SenderList contains the FEC's sender are considered to be real outgoing interfaces.

MPFT:
FEC      IIF  IL  FlowID  MLIBptr  secMLIBptr  altPath
(S1,G1)  12   4   100     0        -           -
(S2,G1)  12   4   120     0        -           -
(S3,G1)  8    7   212     0        -           -
(S1,G2)  2    9   96      4        -           -
(S1,G3)  3    8   72      6        -           -
(S3,G3)  3    8   43      6        -           -
(FECs with the same group have the same MLIBptr.)

MLIB:
OIF  OL  mLIBptr  SenderList
8    10  1        S1, S2
12   9   2        S3
6    21  3        S1, S2, S3
5    0   -        S1
1    18  5        S1
4    6   6        S1
(FECs with the same group share some common label binding information.)

The resulting outgoing interface-label lists are:
FEC(S1,G1): (8,10), (6,21), (5,0)
FEC(S2,G1): (8,10), (6,21)
FEC(S3,G1): (12,9), (6,21)

Figure 4.4: Shared Label Binding Information for FEC(*,G)

4.7 The Egress LSR

For unicast packets, the LSP is point-to-point, so an LSR is connected either to another LSR or to an IP router. In NS, when a unicast LSR receives a packet with label 0, it is considered to be an egress LSR. It pops off the MPLS header and executes L3 forwarding for the packet. It is the responsibility of the "Penultimate Egress LSR" to determine when to assign label 0 to the packet. This process is called "Penultimate Hop Popping" [2]. The "Penultimate Egress LSR" is the previous upstream LSR of the egress LSR. It determines whether the next downstream LSR is the egress or not by looking at the address of the next hop and the FEC of the packet. If the FEC is the same as the address of the next hop, the next hop is the destination of the packet. The "Penultimate Egress LSR" allocates label 0 for the packet to notify the next hop to execute a label pop operation.

In the case of multicast, the LSP is point-to-multipoint and an LSR can have more than one outgoing interface. For multicast LSPs, the egress LSR cannot be determined by matching the FEC of the packet against its address, since the FEC of the multicast packet is a source-group pair. Therefore, "Penultimate Hop Popping" is not suitable for multicast label distribution. A similar approach is applied to multicast label distribution: instead of determining whether the next hop is an egress LSR, each LSR determines which of its interfaces are "Egress Interfaces". Outgoing interfaces that are connected to IP routers are "Egress Interfaces". Since all multicast label operations are executed inside the MPLS replicator, when an MPLS replicator receives an outgoing interface-label pair with label 0, the corresponding outgoing interface is considered to be an "Egress Interface". It pops off the MPLS header and executes L3 forwarding for that outgoing interface. In an LSR, the label classifier is required to decide when to allocate label 0 to an outgoing interface. In NS, an IP router that contains an MPLS module is an LSR. The label classifier uses this property to decide which interfaces are "Egress Interfaces" and allocates label 0 to them to inform the MPLS replicator that a label pop operation is required.

4.8 Mixed L2/L3 Forwarding

In an MPLS node, the MPLS replicator is able to perform "Mixed L2/L3 Forwarding".
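A minimal sketch of this per-interface decision is given below: for each (interface, label) pair in the Mcast List, label 0 marks an "Egress Interface", so the copy is sent without an MPLS header by L3 forwarding, while any other label is pushed onto the copy before L2 switching. The Packet type and the two transmit helpers are placeholders, not NS calls.

    #include <cstdio>
    #include <utility>
    #include <vector>

    struct Packet { int payload; };   // stand-in for an NS packet

    // Placeholder transmit hooks; the real simulator hands copies to the
    // connected classifier or link object instead of printing.
    void l3_forward(const Packet& p, int oif) {
        std::printf("L3 -> if %d (payload %d)\n", oif, p.payload);
    }
    void l2_switch(const Packet& p, int oif, int label) {
        std::printf("L2 -> if %d, label %d (payload %d)\n", oif, label, p.payload);
    }

    // Mixed L2/L3 forwarding in the MPLS replicator: the incoming MPLS header has
    // already been popped (mPop); each copy either stays unlabeled (label 0,
    // "Egress Interface") or gets a new header pushed (mPush) before switching.
    void replicate(const Packet& pkt, const std::vector<std::pair<int,int>>& mcast_list) {
        for (const auto& [oif, label] : mcast_list) {
            Packet copy = pkt;                 // one copy per outgoing interface
            if (label == 0)
                l3_forward(copy, oif);         // connected IP router: no MPLS header
            else
                l2_switch(copy, oif, label);   // connected LSR: push label, switch at L2
        }
    }

    int main() {
        replicate(Packet{1}, {{1, 18}, {2, 0}});  // one MPLS branch, one IP branch
        return 0;
    }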
As a result, the label classifier does not need to forward the packet to the multicast classifier for L3 forwarding and to the MPLS replicator for L2 switching at the same time. It passes the multicast labeled packet to the MPLS replicator regardless of the type of routers the outgoing interfaces are connected to. As before, the special label 0 is used to trigger the execution of L3 forwarding. The MPLS replicator performs a label pop operation before generating copies. No label push operation is required for the copies that are forwarded to "Egress Interfaces", so no MPLS header is included in those copies. A label push operation is executed only for the copies that are switched to interfaces connected to LSRs. Finally, the MPLS replicator executes L3 forwarding to the "Egress Interfaces" and L2 switching to the others.

4.9 Reverse Path Forwarding

The multicast forwarding table is created by flooding multicast packets throughout the MPLS network. However, this process results in LSRs receiving multiple copies of the packet. As a result of these copies, the multicast forwarding table contains unnecessary entries, such that one entry contains a particular interface as the incoming interface of a FEC while another contains that same interface as the outgoing interface of the same FEC. To eliminate unnecessary entries in the table, a "Reverse Path Forwarding" check is proposed. The advantage is that it minimizes the multicast traffic sent throughout the network by eliminating loops in the multicast forwarding table. The "Reverse Path Forwarding" check is shown in Figure 4.5.

The "Reverse Path Forwarding" check eliminates loops in the multicast forwarding table by ensuring that none of the interfaces of an LSR can be considered as both the incoming and outgoing interface of a FEC. When an LSR receives a multicast packet, the label classifier first checks whether the packet is labeled or not. If the packet is unlabeled but an LSP for its FEC already exists, the label classifier uses the FEC as the index into a list of interface-label pairs in the MLIB table. If the packet is labeled, the label classifier uses its label and the incoming interface that received the packet to find the list of interface-label pairs. Then the "Reverse Path Forwarding" check is performed to make sure that none of the outgoing interfaces in the list matches the incoming interface. If a match is found, the packet has come from a wrong incoming interface (i.e. this interface is not the one used to forward packets back to the source along the shortest path). The label classifier discards the packet and sends a Prune message back to the previous upstream LSR. When the previous LSR receives the Prune message, it deletes the entry which contains the same group information as the Prune message and the same outgoing interface as the interface that received the message. After this has occurred, all unnecessary entries in the table are removed and no LSR receives multiple copies of the packet any more.

(Figure 4.5 shows the per-LSR label forwarding tables of this example before and after the check. Iif: incoming interface, iL: incoming label, Oif: outgoing interface, oL: outgoing label.) Suppose LSR E receives the packet with label 16 from LSR C first; it distributes a copy on each of its two outgoing interfaces, one of which leads to LSR D. When LSR E later receives the packet with label 8 from LSR D, it finds a match, drops the packet and sends a Prune message back to LSR D. Similarly, suppose LSR D receives the packet with label 24 from LSR B first and distributes a copy to both LSR E and LSR F. When LSR D later receives the packet with label 32 from LSR E, it finds a match, drops the packet and sends a Prune message back to LSR E. When LSR D receives the Prune message, it deletes the entry with Oif = 1; when LSR E receives the Prune message, it deletes the entry with Oif = 1.

Figure 4.5: The Reverse Path Forwarding Check

4.10 Unicast and Multicast Labels

The assignment of labels to unicast LSPs is completely independent of the assignment of labels to multicast LSPs. There is no conflict between unicast and multicast labels, even when the same label value is allocated for a unicast LSP and for a multicast LSP, since a field in the MPLS header indicates the label type. This field is used by the label classifier to determine whether to perform multicast or unicast label binding on the packet. Different classifiers and replicators are used for unicast and multicast L2 switching and L3 forwarding. It is not possible to execute both multicast and unicast label binding on the same packet.

4.11 Path Restoration

In an MPLS node, the label classifier is able to generate an alternative multicast LSP to reroute multicast traffic around a single link failure in an LSP. That is, a multicast packet will be forwarded along the secondary LSP in the case of a link failure. The secondary path is established after a failed link has been detected and is deallocated once the link has recovered.

When a link failure is detected, PIM-DM notifies all LSRs to set up a secondary multicast LSP for the FEC that is affected by the link failure. PIM-DM also informs all LSRs about the updated information of the affected FEC. Figure 4.6 describes how to set up a secondary LSP.

1. The MPFT contains the affected FEC:
   1) The incoming interface of the FEC does NOT change:
      i) All outgoing interfaces of the FEC do NOT change:
         - no secondary LSP setup
      ii) The outgoing interfaces of the FEC change:
         a) No further packet forwarding:
            - no secondary LSP setup
            - the altPath field of the MPFT entry is set
         b) A new list of outgoing interfaces is obtained:
            - a secondary LSP is required
            - a new entry with a new label is inserted into the MLIB for each outgoing interface
            - the altPath field of the MPFT entry is set
            - the secMLIBptr points to the new MLIB location
   2) The incoming interface of the FEC changes:
      i) No further packet forwarding:
         - no secondary LSP setup
         - the altPath field of the MPFT entry is set
         - a new entry with the new incoming interface is inserted into the MPFT
      ii) A new list of outgoing interfaces is obtained:
         - a secondary LSP is required
         - a new entry with a new label is inserted into the MLIB for each outgoing interface
         - a new entry with the new incoming interface is inserted into the MPFT
         - the altPath field of the MPFT entry is set
         - the secMLIBptr points to the new MLIB location
2. The MPFT does NOT contain the affected FEC:
   - the LSR treats the affected FEC as a new FEC

Figure 4.6: The Path Restoration Algorithm
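Read as code, the decision in Figure 4.6 might look like the sketch below. It only classifies the cases; the helpers that would query PIM-DM for the updated interfaces and rewrite the MPFT/MLIB entries are assumed rather than taken from the simulator.

    enum class RestoreAction {
        NoSecondaryLsp,     // nothing to do, or only the altPath flag (and MPFT entry) is updated
        BuildSecondaryLsp,  // allocate new MLIB entries, set altPath and secMLIBptr
        TreatAsNewFec       // the MPFT has no entry for the affected FEC
    };

    // Inputs describing the affected FEC after PIM-DM has recomputed the route.
    struct FecUpdate {
        bool in_mpft;                 // does the MPFT already contain this FEC?
        bool iif_changed;             // did the incoming interface change?
        bool oifs_changed;            // did the set of outgoing interfaces change?
        bool still_forwards_packets;  // is there any outgoing interface left?
    };

    // Decision logic of Figure 4.6 (the table updates themselves are omitted).
    RestoreAction plan_restoration(const FecUpdate& u) {
        if (!u.in_mpft)
            return RestoreAction::TreatAsNewFec;        // case 2
        if (!u.iif_changed && !u.oifs_changed)
            return RestoreAction::NoSecondaryLsp;       // case 1.1.i
        if (!u.still_forwards_packets)
            return RestoreAction::NoSecondaryLsp;       // cases 1.1.ii.a and 1.2.i
        return RestoreAction::BuildSecondaryLsp;        // cases 1.1.ii.b and 1.2.ii
    }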
When the failed link is recovered, PIM-DM also notifies all LSRs about the link recovery and informs them to delete the secondary LSPs of the affected FEC. All LSRs then remove the corresponding entries of the affected FEC from both the MPFT and MLIB tables, and begin to use the original LSP to forward packets.

Chapter 5

Performance Evaluation

5.1 Simulation Environment

The MPLS Network Simulator with multicast traffic support has been implemented over NS version 2.1b8 by extending the existing MPLS module [2] and PIM-DM module [9]. Various kinds of multicast FECs are used to analyse and compare the simulation results of unicast label distribution and multicast label distribution.

5.1.1 Simulation Topology

In our simulation, we use the network topology generator of NS to generate a random network with 160 nodes and 202 links. In the network, 50 nodes are MPLS nodes and 110 nodes are source and destination nodes. Each node is connected to an adjacent node with a duplex link of 1 Mbit/s bandwidth and 20 ms delay. In the MPLS domain, 7 nodes are ingress LSRs, 13 nodes are intermediate LSRs and 37 nodes are egress LSRs.

5.1.2 Simulation Traffic

The simulation traffic used in the experiment is constant bit rate (CBR) traffic with a rate of 100 Kbit/s and a packet size of 50 bytes. The traffic is generated at a source node and is propagated to different numbers of destinations (receivers) in the topology. The source nodes are attached to ingress LSRs and the destination nodes are attached to egress LSRs. All source and destination nodes are IP nodes.

5.1.3 Experimental Setup

In the experiment, unicast and multicast LSPs are established by data-driven label distribution. The total setup time and the label space of both unicast and multicast LSPs are measured to analyse their performance under different circumstances.

For unicast, the setup time of a "multicast LSP" is the total setup time needed to establish all unicast LSPs, one for each receiver. The setup time of a unicast LSP is the sum of the processing time to create the PFT and LIB tables at all LSRs in the LSP and the transmission time of a packet from the ingress LSR to the egress LSR. The label space of a "multicast LSP" is the total space required to store all unicast LSP information, which is the total number of entries in the PFT and LIB tables of all LSRs in the LSP.

For multicast, the setup time of a multicast LSP is the sum of the time to create the multicast LSP and the time for the LSRs to process all prune messages from the uninterested receivers. That is, it includes the processing time to establish the MPFT and MLIB tables for the LSP, the transmission time of a packet from the ingress LSR to the furthest egress LSR, and the time to delete all unwanted entries in the MPFT and MLIB tables. If all receivers in the topology join the multicast group, no prune messages are needed and the setup time of the LSP is only the processing time to create the tables plus the transmission time of a packet through the LSP. The label space of a multicast LSP is the total space required to store the multicast LSP information, that is, the total number of entries in the MPFT and MLIB tables of all LSRs in the LSP.

5.2 FEC(S,G) - one sender to one group

In this case, a FEC with one sender and one group is used to compare the performance of unicast and multicast label distribution with different numbers of receivers in the group.
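Before presenting the per-FEC results, the setup-time measurement of Section 5.1.3 can be summarized in formula form (the notation is introduced here for convenience and is not taken from the thesis text). For a group with receivers $r_1, \dots, r_n$ served by unicast LSPs $\mathrm{LSP}_1, \dots, \mathrm{LSP}_n$:

    T_{\mathrm{setup}}^{\mathrm{unicast}} \;=\; \sum_{i=1}^{n} \Big( \sum_{l \in \mathrm{LSP}_i} t_{\mathrm{proc}}^{\mathrm{PFT,LIB}}(l) \;+\; t_{\mathrm{trans}}(\mathrm{ingress} \to \mathrm{egress}_i) \Big)

    T_{\mathrm{setup}}^{\mathrm{multicast}} \;=\; \sum_{l \in \mathrm{tree}} t_{\mathrm{proc}}^{\mathrm{MPFT,MLIB}}(l) \;+\; t_{\mathrm{trans}}(\mathrm{ingress} \to \mathrm{furthest\ egress}) \;+\; t_{\mathrm{prune}}

and the label space is the total number of PFT and LIB entries (unicast) or MPFT and MLIB entries (multicast) over all LSRs on the path or tree.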
Since a unicast FEC is based on one destination, establishing a "multicast LSP" for FEC(S,G) with unicast requires multiple unicast LSPs. A separate unicast LSP is required for each receiver in the group; therefore, the number of unicast LSPs is the same as the number of receivers in the multicast group. When there is only one receiver in the group, the multicast LSP for this group becomes point-to-point and performs like a unicast LSP. The simulation results are shown in Table 5.1.

              Unicast LSPs                 Multicast LSP
Receivers     Setup Time (s)  Label Space  Setup Time (s)  Label Space
1             0.08672         8            0.28728         10
5             0.438576        42           0.22536         38
10            0.938448        90           0.22536         59
50            4.761456        440          0.16416         129
100           9.83507         862          0.16416         177

Table 5.1: The Performance of Unicast and Multicast LSPs with FEC(S,G)

Result

As the number of receivers in the multicast group increases, multicast label distribution performs much better than unicast label distribution. Since PIM-DM is meant to be used in a network environment with a huge number of receivers, the mechanism used for establishing a multicast LSP benefits from a larger number of receivers in the multicast group. In our experiment, PIM-DM initially floods the multicast traffic to all 100 destination nodes in the topology. As more receivers join the multicast group, fewer prune messages are required to prune off the label binding information for the LSP. As a result, the LSP setup time decreases.

The multicast LSP setup time is also affected by the number of egress LSRs that propagate multicast traffic to the receivers in the group. The more egress LSRs are involved in delivering traffic to the set of receivers, the lower the LSP setup time, because less label binding information has to be pruned off. The multicast LSP setup time is bounded by the total number of egress LSRs involved in traffic transmission. In our experiment, there are only 37 egress LSRs in the topology. When all 37 egress LSRs are used to propagate traffic to the set of receivers, increasing the number of receivers in the multicast group does not affect the LSP setup time. For example, the multicast LSP setup times for 50 and 100 receivers are the same.

By using multicast label distribution, point-to-point unicast LSPs with the same source but different destinations can be reduced to a single point-to-multipoint multicast LSP. In this case, both the LSP setup time and the label space are greatly reduced. For instance, when there are 100 receivers in the multicast group, the LSP setup time decreases from 9.83 s to 0.16 s and the label space drops from 862 to 177.

Multicast label distribution is not suitable for a multicast group with a very small number of receivers. For example, when there is only one receiver in the multicast group, although the multicast LSP operates the same way as the unicast LSP, it requires more setup time and label space than the unicast one due to the high overhead of processing prune messages.

5.3 FEC(*,G) - multiple senders to one group

In this case, different senders are used to generate multicast traffic to the same group of receivers. By increasing both the number of senders and the number of receivers in the multicast group, the performance of unicast and multicast LSPs can be compared. The results are shown in Table 5.2.

                       1 Sender                     5 Senders                    10 Senders
          Receivers    Setup Time (s)  Label Space  Setup Time (s)  Label Space  Setup Time (s)  Label Space
Unicast   1            0.08672         8            0.55592         20           1.12712         26
          5            0.438576        42           2.933696        94           6.345876        122
          10           0.938448        90           6.190688        202          13.251618       258
          50           4.761456        440          29.533456       1004         61.956756       1292
          100          9.83507         862          61.79297        2018         129.12977       2586
Multicast 1            0.28728         10           1.41552         44           2.83104         78
          5            0.22536         38           1.2696          125          2.5596          224
          10           0.22536         59           1.2696          178          2.5596          317
          50           0.16416         129          0.99832         306          2.035117        515
          100          0.16416         177          0.995119        354          2.028714        563

Table 5.2: The Performance of Unicast and Multicast LSPs with FEC(*,G)

For unicast, to establish a "multicast LSP" for FEC(*,G), a separate unicast LSP is required for each sender-receiver pair. The number of unicast LSPs for FEC(*,G) is therefore the number of senders times the number of receivers in the group. For multicast, the number of multicast LSPs for FEC(*,G) depends on the number of senders that generate traffic.

Result

As the number of senders increases, the label space required to store all multicast LSPs increases more rapidly than the label space required to store all unicast LSPs. Since unicast label switching allows aggregation of flows, different traffic with the same destination can merge together and share the same LSP. In this case, some unicast LSPs can be aggregated to minimize the label space required in the LSRs. Although multicast label switching makes it possible to share some common labels for FEC(*,G), it cannot merge all these multicast LSPs into one single LSP; it only allows these LSPs to share labels on some common subpaths.

Multicast label distribution allows shared label binding for FECs with the same multicast group, which reduces the use of label space. However, shared label binding cannot improve the LSP setup time for FEC(*,G). For example, when there are 100 receivers, one sender generating traffic to one multicast group takes 0.16 s to establish the LSP, but five different senders generating traffic to the same multicast group require 0.99 s to set up five LSPs. Sharing label binding yields no saving in setup time for FEC(*,G).

Unicast label switching consumes a huge amount of setup time and label space when there is a large number of senders and receivers. For 10 senders and 100 receivers, it requires about 5 times more label space and 64 times more setup time compared to multicast label switching. Therefore, the MPLS network has better scalability when multicast label switching is supported.

5.4 FEC(S,*) - one sender to multiple groups

In this case, one sender generates traffic to different multicast groups, but all groups contain the same receiver membership. The key objective of this test case is to compare the difference between FEC(S,*) and FEC(*,G) for both unicast and multicast label distribution. For unicast, to establish a "multicast LSP" for FEC(S,*), a separate unicast LSP is required for each distinct receiver in all multicast groups, so the number of unicast LSPs is the number of different receivers in all groups. The number of multicast groups does not affect the number of unicast LSPs. For multicast, the number of multicast LSPs for FEC(S,*) is the number of multicast groups. The performance of FEC(S,*) is shown in Table 5.3.

                       1 Group                      5 Groups                     10 Groups
          Receivers    Setup Time (s)  Label Space  Setup Time (s)  Label Space  Setup Time (s)  Label Space
Unicast   1            0.08672         8            0.08672         8            0.08672         8
          5            0.438576        42           0.438576        42           0.438576        42
          10           0.938448        90           0.938448        90           0.938448        90
          50           4.761456        440          4.761456        440          4.761456        440
          100          9.83507         862          9.83507         862          9.83507         862
Multicast 1            0.28728         10           1.4364          50           2.8728          100
          5            0.22536         38           1.1268          190          2.2536          380
          10           0.22536         59           1.1268          295          2.2536          590
          50           0.16416         129          0.82077         645          1.64127         1290
          100          0.16416         177          0.82077         885          1.64127         1770

Table 5.3: The Performance of Unicast and Multicast LSPs with FEC(S,*)

Result

For multicast label distribution, FEC(S,*) requires more label space than FEC(*,G), but their LSP setup times are very similar. The mechanism used for multicast label binding works on a per-group basis: although all multicast groups have the same receiver membership, no label can be shared across different multicast groups.

Increasing the number of multicast groups does not affect the performance of unicast label distribution when all groups contain the same set of receivers. Only the number of different receivers (destinations) affects the LSP setup time and the label space of unicast label distribution.

Chapter 6

Conclusion and Future Work

6.1 Conclusion

The explosive growth of the Internet has driven the trend of combining layer 3 routing and layer 2 switching to improve its performance. In addition to the traditional data services currently provided over the Internet, new voice and multimedia services are being developed and deployed. MPLS was introduced mainly to improve packet forwarding, and it must support IP multicast traffic in order to meet the requirements of these new services. In this thesis, we described a mechanism for MPLS to support IP multicast traffic and we showed that the performance of the network can be improved by multicast label switching.

Multicast label switching is motivated by the idea of a data-driven upstream label allocation scheme. PIM-DM is used as the signalling protocol to support multicast label switching. Upstream LSRs are used to allocate label binding information, which prevents the label consistency problem caused by downstream label allocation. Label binding is triggered by the arrival of the first multicast packet to minimize the packet processing time at layer 3, so that all packets beyond the first one are switched at layer 2. Multicast traffic with different senders but the same multicast group can share some common labels to reduce the label binding information stored in LSRs.

We have implemented multicast over the MPLS network simulator in NS and have evaluated its performance. Our simulation results show significant improvement in network scalability in terms of the LSP setup time and the use of the label space. We observed that, due to the use of PIM-DM as the signalling protocol, multicast label switching performs much better when there are more receivers in the multicast group. Different senders generating multicast traffic to the same group of receivers perform better than a single sender generating traffic to multiple groups of receivers, because some label binding information can be shared within the same multicast group. However, multicast label distribution is not suitable for a multicast group with few receivers.

6.2 Future Work

The mechanism described in this thesis works on a per-group basis. It is not possible to minimize the label binding information for FEC(S,*) or FEC(*,*).
In the future, we would investigate how to reduce the use of labels for multicast traffic belonging to different multicast groups. Several approaches could be used.

One approach is to use a shared-mode multicast routing protocol like PIM-SM to generate a shared label forwarding table for all multicast groups. PIM-SM uses a centralized router as the root of the multicast routing tree. Both senders and receivers are required to register with the centralized router before data transmission. In this case, the centralized router obtains a current list of all senders and receivers in the multicast groups, so that it is able to create a pre-established shared LSP for all multicast groups. Multicast traffic would follow the shared LSP after passing through the centralized router.

Another approach is to gather the receiver information of all multicast groups at the ingress LSR. Each ingress LSR would be responsible for determining the common receivers in all groups and allocating common labels for them. Some explicit messages are required to notify the ingress LSR about the receiver information.

Applying the label stack mechanism to multicast label switching may also be useful for reducing the use of labels. A shared LSP for the multicast groups can be tunnelled inside the original multicast LSP of each FEC. In this case, another label is pushed when the multicast traffic enters the shared LSP and popped off when the traffic exits the shared LSP.

The mechanism used in this thesis is not suitable for a multicast group with few receivers. In the future, we would consider allocating multicast labels with other multicast routing protocols that are better suited to groups with few receivers. For example, PIM-SM could be used for multicast label distribution when there is a small number of receivers in the multicast group.

Bibliography

[1] A. Adams, J. Nicholas, and W. Siadak. Protocol Independent Multicast - Dense Mode (PIM-DM) Protocol Specification. Internet Draft, Nov 2001.

[2] G. Ahn and W. Chun. Design and Implementation of MPLS Network Simulator Supporting LDP and CR-LDP. IEEE International Conference on Networks (ICON 2000), Sept 2000.

[3] G. Ahn and W. Chun. Design and Implementation of MPLS Network Simulator Supporting QoS. IEEE Network, Feb 2001.

[4] D. Awduche, L. Berger, D. Gan, T. Li, G. Swallow, and V. Srinivasan. RSVP-TE: Extensions to RSVP for LSP Tunnels. Internet Draft, Aug 2001.

[5] R. Braden, L. Zhang, S. Berson, S. Herzog, and S. Jamin. Resource ReSerVation Protocol (RSVP). RFC 2205, Sep 1997.

[6] P. Brittain and A. Farrel. MPLS Traffic Engineering: A Choice of Signaling Protocols. Data Connection, Jan 2000.

[7] J. Chung and M. Claypool. NS by Example. http://nile.wpi.edu/NS/, accessed in April 2002.

[8] B. Fenner, M. Handley, H. Holbrook, and I. Kouvelas. Protocol Independent Multicast - Sparse Mode (PIM-SM) Protocol Specification. Internet Draft, Jan 2001.

[9] P. Huang. Protocol Independent Multicast (PIM) in Network Simulator. http://www.tik.ee.ethz.ch/~huang, accessed in April 2002.

[10] Y. Katsube, K. Nagami, and H. Esaki. Toshiba's Router Architecture Extensions for ATM: Overview. RFC 2098, Feb 1997.

[11] S. Murphy. Updated version of Marc Greis' RSVP/ns software. http://www.teltec.dcu.ie/~murphys/ns-work/rsvp/, accessed in April 2002.

[12] P. Newman, W. Edwards, R. Hinden, E. Hoffman, et al. Ipsilon's General Switch Management Protocol Specification. RFC 1987, Aug 1996.

[13] D. Ooms and W. Livens. IP Multicast in MPLS Networks. IEEE Conference (ATM 2000), June 2000.

[14] Y. Rekhter, B. Davie, D. Katz, E. Rosen, and G. Swallow. Cisco Systems' Tag Switching Architecture Overview. RFC 2105, Feb 1997.

[15] E. Rosen, A. Viswanathan, and R. Callon. Multiprotocol Label Switching Architecture. Internet Draft, April 1999.

[16] C. Semeria. RSVP Signaling Extension for MPLS Traffic Engineering. Juniper Networks, Sept 2000.

[17] UCB/LBNL/VINT. Network Animator. http://www.isi.edu/nsnam/nam/, accessed in April 2002.

[18] UCB/LBNL/VINT. Network Simulator. http://www.isi.edu/nsnam/ns/, accessed in April 2002.

[19] UCB/LBNL/VINT. Ns Manual. http://www.isi.edu/nsnam/ns/ns-documentation.html, accessed in April 2002.

[20] X. Xiao, A. Hannan, B. Bailey, and L. Ni. Traffic Engineering with MPLS in the Internet. IEEE Network, March 2000.

[21] L. Zhang, S. Deering, D. Estrin, S. Shenker, and D. Zappala. RSVP: A New Resource ReSerVation Protocol. IEEE Network, Sept 1993.

[22] Z. Zhang, K. Long, W. Wang, and S. Cheng. The New Mechanism for MPLS Supporting IP Multicast. IEEE APCCAS 2000, Dec 2000.
