RECEIVER-DRIVEN LAYERED MULTICAST USING ACTIVE NETWORKS

By Lechang Cheng
B.Sc. (Chemical Engineering), Zhejiang University, P.R. China, 1997
M.A.Sc. (Chemical Engineering), University of British Columbia, Canada, 2000

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF APPLIED SCIENCE in THE FACULTY OF GRADUATE STUDIES, DEPARTMENT OF ELECTRICAL & COMPUTER ENGINEERING

We accept this thesis as conforming to the required standard

THE UNIVERSITY OF BRITISH COLUMBIA
January 2003
© Lechang Cheng, 2003

In presenting this thesis in partial fulfillment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the head of my department or by his or her representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

Department of Electrical & Computer Engineering
The University of British Columbia
Vancouver, Canada
Date: January 2003

Abstract

This thesis introduces a new scheme of receiver-driven layered multicast using active networks (RLM-AN). To address the problem of client heterogeneity, a layered multicast (RLM) scheme is used: the video is encoded into a set of layers and each client subscribes to cumulative layers according to its available bandwidth. In addition to the basic data dissemination scheme, a distributed TCP-friendly congestion control mechanism is proposed. With the active nodes, the multicast tree is divided into a set of subsystems with a hierarchical structure, and the sending node in every subsystem is regarded as a pseudo server. End-to-end congestion control is performed in each of the subsystems. In every subsystem, the receiver estimates the packet loss rate and round trip time of its upstream link and computes a fair transmission rate based on the TCP throughput estimation function. It then calculates the subscription level and sends it to the pseudo server, which adjusts the transmission rate accordingly. Simulation studies show that with the proposed congestion control mechanism, RLM-AN flows have smoother and more TCP-friendly throughput and respond to congestion inside the network more quickly than RLM flows. A FEC-based error control mechanism is also presented for RLM-AN. Error control is performed in every subsystem: using the estimated packet loss rate, the receiver calculates the number of redundant packets needed for error recovery and sends this information to the pseudo server, which dynamically adjusts the error protection level. The intermediate active nodes also perform error recovery to eliminate the error propagation problem. Simulation results show that the error control mechanism greatly reduces the packet loss rate without significantly increasing the error recovery latency.
Table of Contents

Abstract
List of Tables
List of Figures
Acknowledgements
1 Introduction
   1.1 Motivation
   1.2 Our Approach
   1.3 Scope
2 Video Multicast over Internet
   2.1 Architecture of the Internet
   2.2 Video Streaming over Internet
   2.3 Internet Multicast Schemes
      2.3.1 Network Layer Support for Multicast
      2.3.2 Multicast Transport Protocols
   2.4 Multimedia Multicast using Active Networks
      2.4.1 Active Networks Approaches
      2.4.2 Multicast using Active Networks
3 Multicast Congestion Control and Error Control
   3.1 Internet Congestion Control
      3.1.1 TCP Congestion Control
      3.1.2 Design Components of Multicast Congestion Control
      3.1.3 Classification Criteria of Congestion Control Schemes
      3.1.4 Rate-based Multicast Congestion Control Algorithms
      3.1.5 Evaluation Criteria for Congestion Control Mechanisms
   3.2 Internet Error Control
      3.2.1 Existing Error Control Mechanisms in Multimedia Communications
4 Receiver-driven Layered Multicast Using Active Networks
   4.1 Background and Assumptions
      4.1.1 Layered Encoding
      4.1.2 FEC Encoding
   4.2 Architecture of the RLM-AN System
   4.3 Session Control and Data Dissemination
      4.3.1 Subscription and Unsubscription Process
      4.3.2 Data Dissemination
      4.3.3 Membership Management
   4.4 TCP-friendly Congestion Control
      4.4.1 Distributed Congestion Control
      4.4.2 Subscription Level Calculation
      4.4.3 Parameter Estimation
   4.5 FEC-based Error Control
      4.5.1 Design Goals
      4.5.2 Distributed FEC-based Error Control
      4.5.3 Calculation of the Error Protection Level
   4.6 Discussion and Performance Analysis
      4.6.1 Scalability
      4.6.2 Complexity and Flexibility
   4.7 Summary
5 Simulation Results
   5.1 Simulation Results for Congestion Control
      5.1.1 Scalability and TCP-friendliness
      5.1.2 Comparison with RLM
   5.2 Simulation Results of Error Control
      5.2.1 A Simple Topology
      5.2.2 Scalability
      5.2.3 Heterogeneity
   5.3 Summary
6 Conclusions and Future Work
   6.1 Future Work
References
Glossary of Acronyms

List of Tables

Table 4.1 Membership table of the active node R2 in Figure 4.1
Table 4.2 Storage requirement and computational requirement of the different function units of RLM-AN
Table 5.1 Parameters for Random Early Detection queuing in the simulations
Table 5.2 Number of redundant packets for different packet loss rates
Table 5.3 Simulation results for the simple topology in Figure 5.7
Table 5.4 End-to-end delay at the client in the topology in Figure 5.7
Table 5.5 Total throughput and effective throughput at the client in the topology in Figure 5.7

List of Figures

Figure 2.1 General architecture of the Internet
Figure 2.2 Application-specific processing with active networks
Figure 4.1 (a) Overall structure of the RLM-AN system; (b) logical multicast tree of the network topology in (a)
Figure 4.2 Subscription process and data dissemination of RLM-AN
Figure 4.3 Hierarchical structure of the logical multicast tree for the topology in Figure 4.1
Figure 4.4 Demonstration of the flexibility of RLM-AN
Figure 5.1 Topology used to demonstrate scalability and TCP-friendliness of RLM-AN
Figure 5.2 Average mean throughput of TCP and RLM-AN flows in the topology in Figure 5.1
Figure 5.3 Comparison of the throughput of a RLM-AN flow and a TCP flow when m=5
Figure 5.4 Network topology used in the comparison of RLM-AN with RLM
Figure 5.5 Throughput of RLM-AN, RLM and TCP flows in the topology in Figure 5.4
Figure 5.6 Throughput of RLM-AN and RLM at end node 6
Figure 5.7 A simple topology used in the simulation studies for the error control algorithm
Figure 5.8 Packet loss number versus time
Figure 5.9 End-to-end delay at the client E versus time: (a) without FEC; (b) with FEC
Figure 5.10 Number of packets received by the client E versus time: (a) without FEC; (b) with FEC
Figure 5.11 Topology used to demonstrate the scalability of the error control algorithm (the number of clients varies)
Figure 5.12 (a) Average packet loss rate of the first client versus the number of clients; (b) average end-to-end delay of the first client versus the number of clients
Figure 5.13 (a) Total throughput of the first client node versus the number of clients; (b) effective throughput of the first client node versus the number of clients
Figure 5.14 Network topology used in the demonstration of the scalability of the error control algorithm
Figure 5.15 Average packet loss rates at the client E versus the number of active nodes: (a) without FEC; (b) with FEC
Figure 5.16 (a) Average end-to-end delay at the client E versus the number of active nodes; (b) increase of the end-to-end delay at the client E versus the number of active nodes
Figure 5.17 (a) Average total throughput at the client E versus the number of active nodes; (b) average effective throughput at the client E versus the number of active nodes
Figure 5.18 Testing topology with heterogeneous clients
Figure 5.19 Average packet loss rates for the clients: (a) without FEC; (b) with FEC
Figure 5.20 (a) Average end-to-end delay for the clients; (b) increase of the end-to-end delay for the clients
Figure 5.21 (a) Total throughput for the clients; (b) effective throughput for the clients

Acknowledgements

I would like to thank my research supervisor Dr. M. R. Ito for giving me the opportunity to work under his guidance. I benefited a lot from his valuable advice and I am also grateful for his support over the past two years. Special thanks go to the following people: Joy Lin for her assistance in my thesis writing and her longtime encouragement, and Job Wang, Rainbow Wong and the other brothers and sisters in the Mandarin fellowship at Bethel M.B. Church for their precious support. I also want to thank my friends in the HP lab at the Department of Electrical & Computer Engineering for their valuable discussions and their patience in allowing me to occupy one of the PCs for such a long time. They made my study and research an enjoyable experience. Last, I would like to pay special tribute to my parents, sisters and brother for all they have done for me. I could do nothing without their love and support, and I dedicate this thesis to them.

Chapter 1 Introduction

1.1 Motivation

In recent years, there has been a blossoming of multimedia applications such as audio and video conferencing, video broadcasting, and collaborative video games. Consequently, more and more demand has been placed on multicasting services, which transmit multimedia stream data to multiple points in the Internet. Multicast has become a topic of intense research.
Many multicast schemes have been proposed [41]. In transmitting multimedia streams through the Internet, the heterogeneity of the end users is a problem that multicast schemes have to deal with: different end users have different processing abilities and different quality requirements due to limitations in network bandwidth. In [37], a layered multicast scheme with layered video coding was proposed as an effective solution to the heterogeneity of end users in the Internet. In this method, the video is encoded into a base layer and one or more enhancement layers. Each layer is multicast on its own group by the source, and the receivers subscribe to cumulative layers according to their available bandwidth.

For multicast schemes, there are two important issues which need to be addressed: congestion control and error control. The purpose of congestion control is to regulate the data transmission of the sender so that it does not send at a rate above the network capacity. The purpose of error control is to recover packet errors or packet losses so as to improve data transmission reliability and video quality. Multicast congestion control and error control raise new challenges such as scalability, adaptability, and fairness with TCP (Transmission Control Protocol) flows, which are dominant in current Internet traffic.

Multicast transport protocols can be classified into two categories: pure end-to-end schemes and router-assisted schemes. In a pure end-to-end layered multicast scheme, only the end systems, i.e., the senders and the receivers, are responsible for collecting information about the network condition, adjusting the sending rate, and performing error recovery. In layered multicast schemes, congestion control is usually initiated at the receivers. The receivers decide the number of layers they want to subscribe to according to the observed network conditions (e.g., loss rate, round trip time). In the foundational work on Receiver-driven Layered Multicast (RLM) by McCanne et al. [37], congestion control is conducted through so-called "join experiments". Each receiver starts by receiving the base layer. It subscribes to higher layers if the current layers are received successfully, and drops the newly added layer if the packet loss rate exceeds a predefined threshold. In [11], an error control scheme is proposed for layered multicast: FEC (Forward Error Correction) is used together with pseudo-ARQ (Automatic Repeat Request) to provide error protection for the source streams. The server transmits redundant packets through separate multicast groups and each client can join redundant layers according to its packet loss rate. In [29], a sender-adaptive and receiver-driven layered multicast is proposed. Each receiver estimates the available bandwidth using a packet-pair approach and joins or leaves groups according to the estimated bandwidth. It also sends feedback information to the server, and the sender adjusts the sending parameters accordingly.

The existing pure end-to-end layered multicast schemes have proved to be effective. However, they have several problems due to the lack of support from network routers. Firstly, each of the layers is transmitted through a separate multicast group, so the sender has to allocate several multicast groups. Secondly, the clients adapt their receiving rate by subscribing to or unsubscribing from layers.
These subscription and unsubscription actions can cause oscillation. Thirdly, each client independently makes its decision to subscribe to or unsubscribe from layers, and it is difficult to synchronize receivers behind the same bottleneck link [46].

Router-assisted multicast schemes have drawn much attention recently. However, router-assisted schemes usually require a long standardization process before they can be widely deployed. That is where active networks can play a role. Active networks add intelligence to the network nodes. With programmability in the network, new network protocols and services can be introduced into the current network. Furthermore, with the computation ability present in the intermediate nodes, network problems can be treated within the network rather than only at the network endpoints. Therefore, one can anticipate that multicast schemes using active networks can provide better solutions for multicast routing, quality of service support, congestion control, and error control.

Most of the current end-to-end layered multicast schemes (e.g., Receiver-driven Layered Multicast [37], Receiver-driven Layered Congestion Control [56], Fair Layered Increase/Decrease with Dynamic Layering [7]) are unfair to TCP traffic. The router-assisted scheme PLM (packet-pair receiver-driven layered multicast) requires that all routers in the network deploy a fair queuing mechanism that allocates each flow a fair bandwidth share, so it is not deployable in the current network. This research is aimed at designing a layered multicast transmission scheme using active networks. We intend to make the most of the computation capability of the active nodes to provide a TCP-friendly congestion control scheme, and we also intend to provide an error control mechanism to improve data transmission reliability.

1.2 Our Approach

In this thesis, a layered multicast scheme using active networks (RLM-AN) is proposed for real-time video streaming, where some packet loss is tolerated and the end-to-end delay is bounded. Active nodes inside the network not only relay the multicast streaming packets, but also perform congestion control and error control. The video server, the clients and the active nodes form a logical multicast tree.

For most current layered multicast schemes, congestion control is conducted in an end-to-end way. In RLM-AN, a distributed congestion control mechanism is proposed. With the active nodes, the overall logical multicast tree is divided into a set of subsystems with a hierarchical structure. Receiver-driven TCP-friendly congestion control is performed in each of the subsystems. For each subsystem, the root node is regarded as a pseudo server. The receiving node estimates a fair rate based on the packet loss rate and round trip time of the upstream link, then calculates the subscription level and sends it to the pseudo server. The pseudo server adjusts the transmission rate for each of the receiving nodes, and it also aggregates the feedback information from its child nodes so that the feedback explosion problem is avoided.
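To make this receiver-side step concrete, the sketch below shows one way a subscription level could be derived from an estimated fair rate. It is an illustration only: the layer rates and function names are made-up assumptions, and the exact calculation used by RLM-AN is given in Section 4.4.2.

```python
# Illustrative sketch only; the precise rule appears in Section 4.4.2.
# Per-layer rates of the video stream (bits/s) are assumed values.
LAYER_RATES = [64_000, 128_000, 256_000, 512_000, 1_024_000]

def cumulative_rate(level: int) -> int:
    """Total rate of layers 1..level (level 0 means no subscription)."""
    return sum(LAYER_RATES[:level])

def subscription_level(fair_rate: float) -> int:
    """Highest number of cumulative layers whose total rate fits the fair rate."""
    level = 0
    while level < len(LAYER_RATES) and cumulative_rate(level + 1) <= fair_rate:
        level += 1
    return level

# Example: the receiver reports this level to its pseudo server,
# which then forwards only the subscribed layers downstream.
print(subscription_level(500_000))   # -> 3 (64k + 128k + 256k = 448k <= 500k)
```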
A distributed FEC-based error control algorithm is also proposed for RLM-AN. In multicast error control, if some form of feedback is needed for error recovery, a single packet loss can trigger a large number of clients to send feedback messages to the server; this is known as the feedback explosion problem. In this error control algorithm, a source-coding-based FEC method is used to avoid the feedback explosion problem. The end-to-end FEC-based error control is also performed in each of the subsystems. In each subsystem, a receiving node estimates the packet loss rate of its upstream link, calculates an error protection level, and then feeds this information back to the pseudo server. The pseudo server adjusts the number of redundant packets sent to the receiving node accordingly. Moreover, the active node also performs local recovery in order to eliminate the error propagation problem. Generally, the active node receives redundant packets from its parent node. However, it is possible that the maximum error protection level required by its child nodes is higher than the error protection level of its upstream link. In order to make the most of the computation ability and to save bandwidth, the active node will also generate redundant packets when necessary.

One challenge for the error control algorithm in layered multicast is the dependency among different layers: the decoding of higher-layer packets relies on the packets of the lower layers in the same frame. In RLM-AN, a priority dropping mechanism is applied together with the error recovery mechanism so as to provide extra protection for the lower layers. Priority dropping is performed at the intermediate active nodes: when an active node detects congestion in the queues of its output interface, it drops packets of higher layers before packets of lower layers.

1.3 Scope

Chapter 2 presents a brief introduction to the areas related to video multicast over the Internet. Section 2.1 provides a general introduction to the IP network architecture. Section 2.2 presents a brief overview of the main aspects of video communication over the Internet. Section 2.3 discusses the network support for multicast, including the network layer support schemes and the multicast transport protocols. Section 2.4 introduces the basic concepts of active networks and gives some examples of multicast schemes using active networks. Chapter 3 gives an overview of network management related to multicast, with a focus on congestion control and error control. Section 3.1 discusses the general issues in Internet congestion control, including TCP congestion control, the main design aspects of multicast congestion control algorithms, the classification criteria of congestion control algorithms, and the existing rate-based multicast congestion control algorithms. Section 3.2 briefly describes the different approaches to error control for multicast. Chapter 4 presents the details of the layered multicast scheme using active networks (RLM-AN): Section 4.3 describes the group management and data dissemination mechanism, Section 4.4 presents the distributed TCP-friendly congestion control mechanism, and Section 4.5 presents the FEC-based error control mechanism. In Chapter 5, simulation studies are performed to evaluate the effectiveness of the proposed mechanisms. It consists of two parts: simulations on congestion control and simulations on error control. The simulations on congestion control focus mainly on scalability and TCP-friendliness. The simulations on error control mainly serve to demonstrate that the proposed error control can greatly reduce the packet loss rate without significantly increasing the end-to-end delay. Chapter 6 presents a short summary and discusses future research directions.
Our contributions include three aspects. Firstly, we design a TCP-friendly congestion control mechanism for layered multicast; it is scalable and has smoother throughput than TCP traffic. Secondly, a FEC-based error control mechanism is proposed. This mechanism can reduce the packet loss rate without significantly increasing the end-to-end delay, and with the computation ability of the active nodes inside the network it solves the problems of packet loss propagation and feedback explosion. Thirdly, simulations have been done to evaluate the performance of the congestion control mechanism and the error control mechanism.

Chapter 2 Video Multicast over Internet

The focus of this work is to present an efficient framework for transmitting streaming video to multiple receivers using active networks. The scheme is aimed at the scenario of real-time video streaming, where the end-to-end delay is bounded and some packet loss can be tolerated. In order to address the bandwidth heterogeneity of the clients, layered multicast, first proposed in [37], is used. The main idea of layered multicast is to encode the video stream into a cumulative set of layers, so that clients can subscribe to different numbers of layers according to their available bandwidth. In order to provide a data dissemination service with efficient congestion control and error recovery, active network techniques are used to make the most of the computation ability and local storage of nodes inside the network. The scheme includes four aspects: group management, the data dissemination scheme, congestion control, and the error control mechanism. As Chapter 3 will give an introduction to multicast congestion control and error control, this chapter presents a brief review of the basic architecture of the Internet, the main aspects related to video communication over the Internet, the existing multicast schemes, and active networks. Section 2.1 provides a general introduction to the IP network architecture. Section 2.2 presents an introduction to the main aspects of video communication over the Internet. Section 2.3 gives an overview of Internet multicast schemes. Section 2.4 introduces the basic concepts of active networks and gives some examples of multicast schemes using active networks.

Figure 2.1 General architecture of the Internet [51] (protocol stacks at two end users — application, transport, Internet, and physical layers — and at a network node).

2.1 Architecture of the Internet

The Internet can be viewed as a collection of sub-networks or autonomous systems that are connected together [53]. It is based on the concept of packet switching. In the Internet, there are two different elements: network nodes and end users. One end user generates a sequence of data packets and sends them to another end user; the data packets travel across the Internet through a set of network nodes. Most network protocols are based on the concept of layering. In order to reduce design complexity, the overall protocol is divided into non-overlapping layers. The upper layers rely on the lower layers for service; each lower layer hides its implementation details from the upper layers and provides an abstract service point. Figure 2.1 shows the layering structure of the TCP/IP protocol suite. It is comprised of four parts: the physical layer, the Internet layer, the transport layer, and the application layer.
The Internet layer is responsible for getting packets from the source machine to the destination machine, regardless of whether these machines are on the same network. It is the lowest layer that deals with end-to-end transmission. The protocols of the Internet layer are the Internet Protocol (IP), the Internet Control Message Protocol (ICMP) and routing protocols (e.g., the Border Gateway Protocol (BGP) [53] and Open Shortest Path First (OSPF) [53]). The IP protocol supports both unicast and multicast.

The task of the transport layer is to provide cost-effective data transport from the source machine to the destination machine, independent of the underlying physical networks in use. In the TCP/IP protocol suite, there are two different transport protocols: the connection-oriented Transmission Control Protocol (TCP) and the connectionless, unreliable User Datagram Protocol (UDP). The main functionality of TCP includes addressing, establishing and releasing connections, flow control, buffering, multiplexing, and error recovery [53].

The application layer is responsible for generating data packets and for initiating and controlling the data transmission. In some situations (e.g., active networks), the network nodes provide some application-layer processing, which can be used to improve the performance of application-level QoS control.

2.2 Video Streaming over Internet

The purpose of this project is to provide a scheme for multicasting real-time video streams over the Internet. It relies on UDP to transmit data packets. This section gives a brief overview of several key areas related to video streaming; a detailed survey of these areas can be found in [65]. The main aspects related to video streaming are video compression, video servers, application-level QoS control, media distribution services, and transport protocols for streaming.

Streaming servers are very important in providing video streaming to clients. Two major approaches are emerging for streaming multimedia content to clients. The first is to use a web server and the HTTP protocol to get the multimedia data to the client. The second is to use a specialized server for the video streaming task. A specialized streaming server typically consists of three parts: a communicator (e.g., transport protocols), an operating system, and a storage system.

Raw video generally contains a lot of redundant information and must be compressed before it can be sent out. Video compression schemes can be classified into two approaches: scalable and non-scalable video coding. A non-scalable coding scheme generates only a single data stream. A scalable coding scheme generates multiple sub-streams from the raw video: the basic sub-stream provides a video signal with the lowest quality and the other sub-streams provide enhancements to the quality of the basic video signal.

A number of transport protocols have been designed for multimedia communication between clients and streaming servers. They can be classified into two categories: network layer protocols such as the Internet Protocol (IP), and transport protocols such as the User Datagram Protocol (UDP) and the Transmission Control Protocol (TCP). A multicast transport protocol usually also has to address application-level QoS control. The objective of application-layer QoS control is to "avoid network congestion and maximize video quality in the presence of packet loss" [65].
The application-layer QoS control techniques include congestion control and error control; Chapter 3 gives an overview of current congestion and error control techniques. In order to provide efficient multimedia transmission, some form of network support is beneficial. Network support can reduce transport delay and packet loss ratio and improve the efficiency of streaming video over the Internet. Continuous media distribution services include network filtering, application-level multicast, and content replication [65].

2.3 Internet Multicast Schemes

The Internet Protocol supports both unicast and multicast. Unicast delivers a packet to a single destination. Multicast simultaneously delivers a packet to a set of hosts that want to receive it. Network support for multicast can be divided into two aspects: network layer support and multicast transport protocols. Many multicast transport protocols are available. The following sections give a short introduction to network layer multicast support and to multicast transport protocols.

2.3.1 Network Layer Support for Multicast

Current network layer support for multicast includes two methods: IP multicast and the MBone.

2.3.1.1 IP Multicasting

IP multicasting was first proposed in RFC 1112 [16] as a multicast-enabling extension to IP. It is based on the concept of the "host group". In IP multicasting, a host group is a set of receivers who want to receive a particular data stream. The membership of a host group is dynamic: hosts can join and leave the group at any time. A host group is identified by a class D IP address, and senders use the multicast address as the destination IP address. Multicast packets are delivered to all members of a host group with the same best-effort service as unicast IP datagrams. IP datagrams are replicated by multicast-enabled routers wherever the data paths diverge, so as to improve bandwidth efficiency. Multicast-enabled routers forward multicast packets to an attached local area network (LAN) if there are hosts on the LAN that are in the multicast group.

IGMP (Internet Group Management Protocol) [16] is the protocol used to regulate the membership of IP multicast groups. It is employed by routers to detect the presence of group members on the networks that are directly attached to the router, and it runs between hosts and multicast routers. When a host wants to receive packets from a multicast server, it sends a Join Group message to the multicast router. Routers periodically send IGMP queries to the directly attached sub-networks to discover which groups are active or inactive. Hosts can also leave a multicast group by sending a Leave Group message [16].
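At the host side, joining and leaving a group is normally triggered through the standard socket API, which causes the operating system to emit the corresponding IGMP messages. The following minimal sketch assumes a multicast-capable network; the group address and port are arbitrary examples, and details such as the bind address differ between operating systems.

```python
import socket
import struct

GROUP = "239.1.2.3"   # example class D (administratively scoped) group address
PORT = 5004           # example port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# Join the host group: the kernel issues an IGMP membership report.
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

data, sender = sock.recvfrom(2048)   # receive one multicast datagram

# Leave the group: the kernel issues the corresponding IGMP leave message.
sock.setsockopt(socket.IPPROTO_IP, socket.IP_DROP_MEMBERSHIP, mreq)
sock.close()
```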
2.3.1.2 MBONE

MBONE stands for Multicast Backbone. It is a virtual network built on top of the Internet, consisting of a set of connected sub-networks and routers that support the delivery of IP multicast traffic. In the Internet, the scheme of moving multicast packets by putting them inside regular unicast packets is called tunneling [47]. In the MBone, multicast-enabled subnets are connected with IP tunnels. IP tunnels are simply unicast routes; they allow multicast packets to travel through the non-multicast-capable parts of the Internet, and data delivered through an IP tunnel is not processed between the end points of the tunnel. Currently, the MBone is only available on subnets where multicast routers are installed. Hosts residing outside the MBone service area need to establish tunnels to their nearest multicast routers. The MBone has a relatively fixed topology, and multicast routers do not often change the tunnels connecting them to the MBone [47].

2.3.2 Multicast Transport Protocols

Multicast transport protocols have drawn intensive research effort due to the increase in multicast applications such as video conferencing and multipoint data distribution services. Early multicast mechanisms used to be designed as general solutions to multipoint communication problems. However, recent research shows that a single generic protocol cannot satisfy all types of multicast applications; therefore, recent multicast mechanisms are designed for specific applications. Usually, multicast transport protocols can be classified according to the following features:

• Group management: how the protocol manages multicast group membership.
• Data dissemination: the mechanism used to transmit data to the multicast receivers.
• Error recovery mechanism: how the protocol recovers from data loss, including how it requests retransmission of lost data and how it transmits the data that needs to be retransmitted.
• Flow control and congestion control: flow control regulates the data transmission of the sender so that it does not overflow the receiver's buffer; congestion control regulates the data dissemination of the sender so that it does not send at a rate that exceeds the capacity of the network.
• Target application: the application that the protocol was proposed or implemented to support.

Many multicast protocols have been proposed in the literature. By their target applications, multicast transport protocols can be classified into three categories: general-purpose protocols, protocols for multipoint interactive applications, and protocols for data dissemination services. General-purpose protocols include the Multicast Transport Protocol (MTP) [3], the Reliable Multicast Protocol (RMP) [59], and the Xpress Transport Protocol (XTP) [52]. Multicast protocols for interactive applications include the Real-time Transport Protocol (RTP) [48], Scalable Reliable Multicast (SRM) [19], and the Reliable Adaptive Multicast Protocol (RAMP) [29]. Examples of multicast transport protocols for data dissemination services include Muse [32], the Multicast Dissemination Protocol (MDP) [35], the Reliable Multicast Transport Protocol (RMTP) [33], and the Multicast File Transfer Protocol (MFTP) [39]. Interested readers are referred to [41] for a thorough review of multicast protocols.

2.4 Multimedia Multicast using Active Networks

Most current video multicast schemes are so-called end-to-end schemes: only the end hosts take care of network issues such as congestion control, error control, and quality of service support. However, some of these issues can be better handled inside the network. In active networks, the internal nodes have the ability to perform customized computation. With the computation power and storage provided by the active nodes, a more efficient multicast scheme can be designed with better congestion control and error control.

2.4.1 Active Network Approaches

The current IP network is based on the concept of protocol layering. The functionality of the network is broken into a hierarchy.
Upper layers rely on the services provided by lower layers, and the implementation of each layer is abstracted and hidden from the layers above it. Protocol layering has sped up the development of the network. However, today's network has some disadvantages due to its lack of flexibility. It is difficult to integrate new technologies and standards into the existing network infrastructure. Moreover, independent implementation may cause redundant functionality between different layers, which leads to poor performance. Also, the lower layers can provide only limited functionality without adequate information about the higher-layer implementation details [54].

In recent years, active networks have been proposed to bring intelligence to networks. Active networks give the internal nodes the power to perform customized computation on received packets. Figure 2.2 shows how an IP network with active nodes processes messages flowing through it. There are two primary approaches to active networks: programmable switches (the discrete approach) and capsules (the integrated approach). These two approaches differ in the way programs and messages are treated.

Figure 2.2 Application-specific processing with active networks [54] (a source, a traditional router, an active router, and a destination, with their protocol stacks).

• Programmable switches (the discrete approach): In the discrete approach, the processing of messages is separated from loading programs into the active node. Program code must first be injected into the internal active nodes; users can then send their packets to such programmable switches. When a programmable switch receives a packet, it examines the packet header and uses the appropriate program to process the content of the packet.

• Capsules (the integrated approach): In the integrated approach, every message is a capsule. Each capsule contains a program fragment and embedded data. When a capsule arrives at an active node, it is dispatched to an execution environment where the contents of the capsule are interpreted. In the integrated approach, a capsule also has the ability to modify the state of the active node as well as the state of the capsule [54]. (A small illustrative sketch is given at the end of this subsection.)

There are some other approaches to active network architectures. As the main concern of this research is to apply active networks to multicast, we only give this brief introduction; interested readers are referred to [54] for a survey of active network techniques.
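As a toy illustration of these ideas (closest to the discrete approach, where code is installed ahead of time and each arriving packet names the routine that should process it), the sketch below models an active node that dispatches incoming packets to installed handlers and keeps per-node state. All class and handler names are inventions for illustration and do not correspond to any particular active network platform.

```python
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Capsule:
    handler: str      # names the processing routine to run at the active node
    payload: bytes    # embedded application data

class ActiveNode:
    """Toy active node: dispatches each capsule to a previously installed
    handler from its execution environment and keeps per-node soft state."""
    def __init__(self) -> None:
        self.handlers: Dict[str, Callable[["ActiveNode", Capsule], None]] = {}
        self.state: Dict[str, object] = {}

    def install(self, name: str, fn: Callable[["ActiveNode", Capsule], None]) -> None:
        self.handlers[name] = fn      # discrete approach: code is loaded first

    def process(self, capsule: Capsule) -> None:
        self.handlers[capsule.handler](self, capsule)

def count_and_forward(node: ActiveNode, capsule: Capsule) -> None:
    node.state["packets_seen"] = node.state.get("packets_seen", 0) + 1
    # a real node would now forward the capsule toward the destination

node = ActiveNode()
node.install("count_and_forward", count_and_forward)
node.process(Capsule(handler="count_and_forward", payload=b"video data"))
print(node.state["packets_seen"])   # -> 1
```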
2.4.2 Multicast using Active Networks

A number of multicast services have been proposed for active networks. Multicast can benefit from the in-network computation ability and storage provided by active networks. In the following, we give some examples of existing multicast mechanisms using active networks.

• Data dissemination mechanism: In [66], a simple data dissemination mechanism using active networks was proposed. It uses the reverse path forwarding method to construct the multicast tree [53]. Each client sends subscribe active packets toward the server; the active subscribe packets travel along the reverse path. After receiving subscribe packets from a client, the intermediate active nodes set up forwarding pointers for the client or for downstream active nodes, and data packets can later be sent through these forwarding pointers. This scheme only provides a basic data dissemination mechanism; it does not provide a congestion control mechanism or an error recovery service.

• Error recovery using active services: In addition to data dissemination, the local storage and computational power of active nodes can be used to provide an active error recovery service for video multicast. In [26], the authors proposed an active reliable multicast scheme. To deal with packet loss, it provides caching, scoped retransmission, and NACK suppression. When data packets pass through an intermediate active node, the node buffers them. When a receiver detects a packet loss, it sends a negative acknowledgement (NACK) upstream. An intermediate active router that receives the NACK retransmits the packet if it has the lost packet in its cache.

This thesis proposes a layered multicast scheme using active networks. It not only provides basic data dissemination, but also proposes a congestion control algorithm and an error control service. As the error recovery service proposed in [26] is NACK-based, we present a distributed FEC-based error control mechanism which is adaptable to heterogeneous network conditions.

Chapter 3 Multicast Congestion Control and Error Control

The purpose of this research is to design a multicast system for real-time video streaming based on active networks. Basically, it is a transport protocol that relies on the IP service to transmit the video streaming packets. For any transport protocol, congestion control and error control are two important issues which need to be addressed. The Internet Protocol (IP) does not provide any mechanism for adjusting the transmission rate of end systems. If an end user continuously sends out data packets without regard to the actual network resources, it can cause network overload and packet loss. More seriously, it can lead to unfair bandwidth distribution and even congestion collapse. To avoid such situations, congestion control algorithms must be provided to adjust the transmission rate according to the network conditions. In the Internet, packet loss can result from link errors or network congestion. Therefore, a mechanism for recovering lost packets is also needed in order to improve reliability and to preserve video quality [36].

This chapter is organized as follows. Section 3.1 gives a general survey of congestion control mechanisms; the main design aspects and the classification criteria of congestion control algorithms are discussed, and existing multicast congestion control algorithms and evaluation criteria for multicast congestion control schemes are also presented. Section 3.2 briefly describes the different approaches to error control for multicast.

3.1 Internet Congestion Control

The Internet Protocol (IP) does not provide any mechanism for regulating the transmission rates of end systems. Therefore, if an end system keeps sending data at a higher rate than the network can handle, it can cause unfairness problems and even congestion collapse. The unfairness problem occurs because of the existing architecture of the Internet: most of the traffic in the Internet is TCP traffic [15], and TCP connections reduce their transmission rate in response to packet loss. However, protocols without a congestion control mechanism do not respond to packet loss and continue to send packets at a high rate.
Such flows will eventually use most of the available bandwidth and leave TCP connections with only a small portion of it. A more severe situation is called congestion collapse: it happens when an increase in the network load results in a decrease in the useful work done by the network. Congestion collapse was first reported in the mid-1980s [2]; it was largely due to TCP connections unnecessarily retransmitting packets that were either in transit or had already been received. One of the solutions to avoid unfairness and congestion collapse is to control the amount of data that can be sent into the network in a time interval. Therefore, congestion control is a very important issue for any transmission protocol. To quote: "The success of the Internet relies on the fact that best-effort traffic responds to congestion on a link by reducing the load presented to the network. Congestion collapse in today's Internet is prevented by the congestion control mechanisms in TCP." [2]

However, TCP is not suitable for real-time media streaming, especially for multicast schemes. Firstly, TCP retransmits lost packets to guarantee reliability. Retransmission of a lost packet often takes a round trip time; if the round trip time is large, it is very likely that the retransmitted packet will miss its play-out time. Secondly, TCP's throughput usually shows large fluctuations, which causes high delay jitter for the end users; in addition, sudden quality degradation of multimedia applications is very annoying to viewers. Thirdly, to perform congestion control, TCP senders rely on the receivers' acknowledgements to infer the network condition, so TCP is not suitable for large-scale multicast applications because of the acknowledgement explosion problem.

Nowadays, few multimedia applications use TCP directly; most use UDP as the underlying transmission protocol. However, in order to fairly share the bandwidth of the Internet and avoid congestion collapse, a congestion control mechanism must be provided for multimedia applications. As most Internet traffic is TCP, it is desirable to design the congestion control mechanism to be compatible with the rate-adaptation mechanism of TCP flows; that is, it has to be TCP-friendly. The issue of multicast congestion control has received great attention in the literature. The following subsections give a brief survey of multicast congestion control. As this work aims to design a congestion control algorithm that deals fairly with TCP flows, the transmission behavior of the TCP congestion control algorithms is examined first.

3.1.1 TCP Congestion Control

TCP is a connection-oriented unicast protocol that provides a reliable data transfer service. In TCP Tahoe, a TCP sender uses a congestion window (CWND) to control the amount of outstanding packets it can send into the network before receiving an acknowledgement (ACK). There are two congestion algorithms used in TCP: slow start and congestion avoidance. A variable ssthresh is used to determine whether slow start or congestion avoidance should be used to control the data transmission.

• Slow start: When there is no packet loss, TCP uses the slow start algorithm and doubles the congestion window every round trip time. Slow start is used at the beginning of a data transfer in order to slowly probe the available bandwidth of the network.

• Congestion avoidance: When the congestion window (CWND) exceeds ssthresh, the congestion avoidance algorithm is used, which increases the CWND by one full-sized segment per round trip time. In case of packet loss, the congestion window is reduced to one segment and TCP switches back to the slow start algorithm [52].
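The window evolution described by these two rules can be sketched as follows. This is a simplified, segment-counted model with no fast retransmit or timeout handling, and the halving of ssthresh on loss follows the usual Tahoe convention, which the description above does not spell out.

```python
def updated_cwnd(cwnd: float, ssthresh: float, loss: bool) -> tuple[float, float]:
    """One round-trip-time step of the simplified Tahoe-style rules above
    (window counted in segments)."""
    if loss:
        # On loss: halve the threshold (usual convention), collapse the window,
        # and re-enter slow start.
        return 1.0, max(cwnd / 2.0, 2.0)
    if cwnd < ssthresh:
        return cwnd * 2.0, ssthresh      # slow start: double per RTT
    return cwnd + 1.0, ssthresh          # congestion avoidance: +1 segment per RTT

cwnd, ssthresh = 1.0, 16.0
for _ in range(8):                       # eight loss-free round trips
    cwnd, ssthresh = updated_cwnd(cwnd, ssthresh, loss=False)
print(cwnd)   # -> 20.0 (window grows 1, 2, 4, 8, 16, then 17, 18, 19, 20)
```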
3.1.1.1 TCP Throughput

The long-term throughput of TCP depends mainly on the round-trip time t_RTT, the segment size s, and the packet loss rate p. Using these parameters, the long-term throughput of a TCP connection can be calculated as [43]

r_{fair} = \frac{s}{t_{RTT}\left(\sqrt{2p/3} + 12\sqrt{3p/8}\; p\,(1 + 32p^2)\right)}    (3.1)

This model is a simplification in that it does not take TCP timeouts into account.

3.1.1.2 TCP-friendliness

Non-TCP flows are defined as TCP-friendly when "their long-term throughput does not exceed the throughput of a conformant TCP connection under the same conditions" [63].

• TCP-friendliness for unicast: A unicast flow is considered TCP-friendly when "it does not reduce the long-term throughput of any co-existent TCP flow more than another TCP flow on the same path would do under the same network conditions" [63].

• TCP-friendliness for multicast: There is an ongoing debate on the definition of TCP-friendliness for multicast. A simple definition is that a multicast flow is TCP-friendly when, for each sender-receiver pair, the multicast flow is unicast TCP-friendly. Some argue that multicast flows should be allowed to use a greater portion of the bandwidth than unicast flows, since they serve multiple receivers [63]. In this research, the simple definition of TCP-friendliness for multicast is used.

3.1.2 Design Components of Multicast Congestion Control

Designing good multicast congestion control protocols is more difficult than designing unicast protocols. Multicast congestion control schemes should scale to large numbers of receivers and be able to handle heterogeneous network conditions at the receivers. Successful congestion control requires good cooperation between the sender and the receivers. The execution of rate adjustment in multicast congestion control can be divided into a set of small tasks. In order to make the congestion control algorithm efficient and scalable, one should carefully design approaches for these tasks and distribute them among the senders, receivers, and other network elements (for example, active routers inside the network) involved in congestion control. The main tasks of multicast rate regulation are as follows:

• Probing for extra bandwidth: To fully utilize the link bandwidth, it is necessary to determine whether extra bandwidth is available. This can be done by automatically increasing the sending rate, by bandwidth estimation through the TCP throughput equation (e.g., eqn. (3.1)), or by sending special probing packets.

• Detection of network congestion: In order to adapt the transmission rate to the network condition and avoid congestion collapse, occurrences of network congestion must be detected. Network congestion can be inferred by the end systems through longer delays, packet loss, or sender timeouts.

• Calculation of the adaptation algorithm: There are two types of adaptation parameters: window or rate.
This task determines the adaptation parameter, either by calculating an appropriate transmission rate or by simulating a TCP congestion window.

• Execution of the adaptation: This is where the actual rate adaptation is executed. Generally, the sender controls the sending rate. In receiver-driven multicast algorithms, the receiver can also control the arrival rate by subscribing to different multicast layers.

3.1.3 Classification Criteria of Congestion Control Schemes

Multicast congestion control schemes can be classified in different ways, depending on their design goals, architectures, applications and other issues. A brief overview of the criteria used to classify congestion control schemes is given below [63].

3.1.3.1 Window-Based versus Rate-Based

Based on the rate adaptation parameter, multicast congestion control algorithms can be classified as window-based or rate-based. Window-based methods adapt their transmission rate based on a congestion window, while rate-based approaches use a TCP model (e.g., eqn. (3.1)) to calculate the appropriate rate. By adapting the sending rate to the average long-term throughput of TCP, rate-based congestion control can produce much smoother throughput at the receiver.
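As a concrete example of the rate-based calculation, equation (3.1) can be evaluated directly from a measured loss rate and round trip time. The sample numbers below are arbitrary.

```python
from math import sqrt

def tcp_fair_rate(segment_size: float, rtt: float, loss_rate: float) -> float:
    """TCP-friendly rate from equation (3.1); returns bytes per second when
    segment_size is in bytes and rtt is in seconds."""
    p = loss_rate
    denominator = rtt * (sqrt(2.0 * p / 3.0)
                         + 12.0 * sqrt(3.0 * p / 8.0) * p * (1.0 + 32.0 * p * p))
    return segment_size / denominator

# Example: 1000-byte segments, 100 ms round trip time, 1% packet loss.
print(tcp_fair_rate(1000, 0.1, 0.01))   # ~1.1e5 bytes/s (about 0.9 Mbit/s)
```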
3.1.3.2 Single-rate versus Multi-rate

Multicast congestion control algorithms can also be classified according to the possible sending rates as single-rate or multi-rate. In single-rate schemes, all receivers receive packets at the same rate, which limits the scalability of the mechanism. In multi-rate congestion control protocols, different clients can receive at different rates; such schemes scale better to large numbers of receivers with heterogeneous network conditions. A typical multi-rate congestion control approach is layered multicast: a sender divides the data into several layers and transmits them to different multicast groups. Each receiver can subscribe to as many groups as the bandwidth between that receiver and the sender permits.

3.1.3.3 Sender-initiated versus Receiver-initiated

In sender-initiated algorithms, only the sender is responsible for adjusting the rate or window; the receivers provide only feedback information about the network conditions (e.g., round trip time or packet loss). Receiver-driven congestion control is usually used together with layered multicast approaches: the receivers decide whether to subscribe to or unsubscribe from additional layers based on the network conditions.

3.1.3.4 End-to-end versus Router-assisted

Many congestion control schemes are called end-to-end schemes since they are designed without any support from the routers. In an end-to-end congestion control scheme, only the end systems (i.e., the senders and receivers) are responsible for collecting information about the network condition and adjusting the sending rate. The performance of multicast congestion control can be greatly improved if it can obtain network information from nodes inside the network. Congestion control schemes that rely on additional functionality of the routers inside the network are called router-assisted schemes.

3.1.4 Rate-based Multicast Congestion Control Algorithms

This section gives an overview of some typical rate-based multicast congestion control algorithms. Multi-rate multicast congestion control protocols usually take the form of layered multicast, which was proposed in [37]. Since then, many layered multicast schemes have been proposed. This section mainly discusses some of the layered multicast congestion control algorithms.

• RLM: The pioneering work on layered multicast is the Receiver-driven Layered Multicast (RLM) developed by McCanne, Jacobson and Vetterli [27]. In RLM, the sender compresses the video into a cumulative number of layers. The basic idea of layered multicast is to combine a layered compression algorithm with a layered transmission scheme: a signal is compressed into a number of layers, where the lowest layer contains the basic information and the higher layers incrementally refine the quality of the video stream. The receiver adapts its rate via so-called "join experiments". A receiver starts by subscribing to the first layer. When the receiver does not experience packet loss for a certain period of time, it subscribes to the next layer. When a receiver experiences packet loss, it unsubscribes from the highest layer it is currently receiving. In order to address the problems of scalability and interference between join experiments, the distributed receivers can learn from each other about failed join experiments [27] (a small sketch of this join-experiment loop appears near the end of this subsection). The congestion control approach of RLM has some disadvantages, such as the difficulty of synchronizing the join and leave actions of receivers behind the same bottleneck, throughput oscillation due to join and leave actions, and unfairness to TCP flows due to the lack of consideration of the round trip time. Consequently, several new protocols have been proposed to improve on the congestion control performance of RLM.

• RLC: In order to solve the problems of the congestion control algorithm of RLM, Vicisano, Crowcroft and Rizzo presented Receiver-driven Layered Congestion Control (RLC) in [39]. In RLC, the bandwidth of the layers increases exponentially. A user drops the highest layer in case of packet loss. If packet loss is not experienced for a period of time, the receiver increases its subscription level. The subscription delay also increases exponentially. RLC uses synchronization points (SPs) to improve synchronization between receivers. SPs are specially flagged packets in the data stream; receivers may join a layer only after receiving an SP, and the distance between SPs on each layer is related to the subscription delay [39]. RLC improves on the congestion control mechanism of RLM. However, it has some disadvantages. Firstly, the rate adaptation granularity is very coarse because of the exponential distribution of the layer bandwidths. Secondly, RLC does not take the round-trip time into account when determining the sending rate, which can lead to unfairness towards TCP flows with a high round-trip time. Furthermore, the algorithm is complicated compared with RLM and hard to implement.

• FLID-DL (Fair Layered Increase/Decrease with Dynamic Layering): Byers et al. [5] proposed FLID-DL in order to reduce the join and leave latency of RLM. FLID-DL introduces the concept of dynamic layering, in which the bandwidth of a layer decreases over time. The receiver reduces its rate simply by not joining additional layers, and it has to periodically join additional layers just to maintain its receiving rate.
In order to increase the rate, the receiver must join additional layers beyond those needed to maintain a constant rate. In FLID-DL, the receiver partitions time into slots of fixed duration and drops a layer in case of packet loss within a time slot. The sender adds an increase signal to the data packets; a receiver adds a layer at the beginning of a slot if the increase signal indicates a layer number larger than the highest layer it has subscribed to. The increase signal is carefully designed so that the receiver achieves a rate compatible with TCP flows. The advantage of FLID-DL is that it does not suffer from long leave latencies. However, FLID-DL still behaves unfairly towards TCP, and it has high communication overhead due to frequent join and leave actions.

• MLDA: The Multicast Loss-Delay based Adaptation Algorithm (MLDA) [35] is a congestion control protocol that uses layered multicast. In MLDA, the sender periodically sends a report containing information about the rates of the sent layers to all the receivers. After receiving a sender report, each receiver measures the packet loss rate and round trip time and estimates a TCP-friendly transmission rate. Based on this rate, the receiver decides whether to join an additional layer or leave the highest layer. In addition, the receiver sends a report to the sender indicating its estimated fair rate, and the sender adjusts the sizes of the different layers according to the reports from the receivers. MLDA uses RTCP reports for the signaling between the sender and the receivers. MLDA can react to congestion faster than other layered schemes by reducing the rate of the layer that causes congestion rather than waiting for all receivers behind a bottleneck to leave the corresponding multicast group. A major disadvantage of MLDA is the complexity of the protocol compared with RLM.

• PLM: The packet-pair receiver-driven layered multicast (PLM) [23] is based on layered transmission and the use of the packet-pair approach to infer the bandwidth available at the bottleneck. With PLM, the sender periodically sends a pair of data packets as a burst to probe the bandwidth share of the flow. The receivers use the gaps between the specially marked packet pairs to estimate their bandwidth share, and based on the estimated share they determine the number of data layers they can join. The rate adaptation method of PLM is simple and easy to implement. However, PLM assumes that all routers in the network deploy some kind of fair queuing mechanism that allocates each flow a fair bandwidth share. This assumption is unrealistic for the current network.

Of the layered multicast schemes mentioned above, RLM, RLC and FLID-DL are not TCP-friendly. MLDA flows are fair to TCP traffic, but MLDA is not scalable since all the receivers have to periodically send reports to the sender indicating their estimated fair rates; furthermore, the MLDA sender adjusts the sizes of the different layers, which adds extra complexity to the congestion control algorithm. The rate adjustment algorithm of PLM is simple and easy to implement, but it requires that all routers in the network deploy some type of fair queuing mechanism. Besides, none of the above layered multicast schemes provides an error control mechanism.
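To ground the receiver-driven adaptation that these schemes build on, the following is a minimal caricature of an RLM-style join-experiment loop. The loss threshold, the single-interval decision rule, and the helper function are placeholders; real RLM uses join timers and shared learning among receivers, which are omitted here.

```python
import random  # stands in for real packet-loss measurements

LOSS_THRESHOLD = 0.05     # assumed threshold, not RLM's actual parameter
MAX_LAYERS = 5

def measured_loss_rate() -> float:
    """Placeholder for the loss rate observed over the last measurement interval."""
    return random.uniform(0.0, 0.1)

def join_experiment_step(level: int) -> int:
    """One decision step of an RLM-style receiver: drop the newest layer when
    loss exceeds the threshold, otherwise try subscribing to the next layer."""
    loss = measured_loss_rate()
    if loss > LOSS_THRESHOLD and level > 1:
        return level - 1          # failed experiment: back off one layer
    if loss <= LOSS_THRESHOLD and level < MAX_LAYERS:
        return level + 1          # clean interval: try the next layer
    return level

level = 1                         # every receiver starts with the base layer
for _ in range(10):
    level = join_experiment_step(level)
print(level)
```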
In addition to the data dissemination scheme, we intend to design a TCP-friendly congestion control mechanism. The focus is to make most of the computation ability to design the congestion control scalable. Beside, we also intend to provide an error control mechanism for the proposed layered multicast scheme to improve the data transmission reliability. 3.1.5 Evaluation Criteria for Congestion Control Mechanisms The purpose of deploying the congestion control mechanisms for multimedia communication is to improve the network performance compared to the case when no congestion control mechanism is deployed. In this section, some of the evaluation criteria of the performance of the congestion control mechanisms are described. 3.1.5.1 Scalability Based on the multicast architecture of the Internet, a sender can send a media stream to nearly unlimited number of receivers in an effective way. The performance of an efficient congestion control scheme should therefore not degrade substantially with increased number of receivers in a multicast session. Additionally, the complexity of the schemes used should still be accepted. 3.1.5.2 Response Time Response time is generally measured by the speed or time with which a system approaches some stable state from any starting state. Efficient adaptation algorithms need to have short convergence period and lead to a stable state. 3.1.5.3 Throughput Smoothness Throughput smoothness is measured by the variance of the arrival throughput at a relative steady state. It is not very important for application such as file download or www. However, it is a major factor that affects the quality of many multimedia applications such as: video on demand, video conference, etc. Chapter 3 Multicast Congestion Control and Error Control 25 3.1.5.4 Fairness In the Internet, there are many connections or users competing for some shared resources such as bandwidth. Therefore, the fair resource distribution should be considered. In multimedia communication, there are two types of fairness definition: max-min fairness and TCP-friendly fairness. Max-min fairness criterion is usually used when the number of competing flows is known. However, for a packet-switched network, it is hard to know the number of competing flows. Therefore, the adaptation should be based on an estimation of the performance of a system. For example: adapting the transmission behavior or of a video stream based on the measured loss and the delay values in the network. A s around 95% of the Internet traffic is T C P traffic [15], congestion control approaches should be T C P -friendly. 3.2 Internet Error Control Error control is the mechanism of detecting packet error or loss and correcting it. Data transferred from one place to the other has to be transferred reliably. Unfortunately, in many cases the physical link cannot guarantee that all bits are transferred correctly. It is the responsibility of the error control algorithm to detect those errors and correct them. With the development of reliable transmission links, packet loss or error due to link error has been greatly reduced. However, packet loss can occur due to network congestion and other reasons like route changes. Packet loss will cause deterioration of video quality, which is annoying to users' eyes. Therefore, it is desirable to incorporate error control mechanisms into the multicast transmission schemes. 
3.2.1 Existing Error Control Algorithms in Multimedia Communications The difference between traditional text data communication compared to multimedia communication is that some visual quality degradation is often acceptable while delay must be bounded. This feature introduces many new error control mechanisms applicable to video application, but not applicable to traditional data. Generally used error control schemes include: Forward Error Correction (FEC), Retransmission, Error-resilient Encoding, Error Concealment. A brief introduction to each of the error control methods is given below. Chapter 3 Multicast Congestion Control and Error Control 26 3.2.1.1 FEC The FEC strategy is mainly used in links where retransmission is impossible or impractical. The principle of FEC is to add redundant information so that original information can be reconstructed in the presence of packet loss. It is very effective when the average number of consecutive packet loss is small. FEC can be divided into three categories: channel coding-based approach, source coding-based approach, and joint channel and source coding based approach [64]. • Channel coding-based approach In channel-coding-based approach, the encoder groups N source packets into a block and then produces additional K redundant packets so that the total number of packets in the block becomes N + K. The user will be able to reconstruct all the original source packets as long as it receives N packets. The advantage of channel coding is that it can scale to arbitrary number of receivers in a large multicast group. The problem of channel coding based approach is that the transmission rate and end-to-end delay increases [58]. • Source coding-based approach In the source coding-based FEC (SFEC) approach, the n'h packet contains both n'h group of blocks (GOB) and redundant information about the n-Vh GOB. With the redundant information in the n'h packet, n-l"' GOB can be reconstructed with coarser quality. As a result, when there is packet loss, channel coding could achieve perfect recovery while SFEC recovers the video with reduced quality. One advantage of SFEC over channel coding is lower delay (no need to wait for N packets) [4]. • Joint source/channel coding approach Based on Shannon's separation principle, both source coding and channel coding can be done independently. But it is only valid on the assumption of allowing infinitely long code-words. If more is spent on the bits of the source coding, this means that there is not enough channel protection leading to errors. The received video quality becomes poor. Spending more on the bits of the channel coding means enough protection and no transmission errors, but the source material is over-compressed and the received video quality is again poor. Chapter 3 Multicast Congestion Control and Error Control 27 Thus, there is a trade-off and balance point where the channel capacity is optimally allocated between source and channel to achieve the best received video quality. Joint source/channel coding is to optimize rate allocation between source and channel coding. The task of joint channel and source coding can be divided into three steps: (1) finding an optimal rate allocation between source coding and channel coding for a given channel loss characteristic, (2) designing a source coding scheme to achieve its target rate; (3) designing channel codes to match the channel loss characteristic and achieve the required robustness [64]. 
3.2.1.2 Retransmission The A R Q (Automatic Repeat Request) strategy uses error detection methods combined with retransmission of corrupted data. The A R Q strategy is generally used mainly because the retransmission-based error control has modest bandwidth and processing costs. However, retransmission-based error recovery is considered unsuitable for multimedia multicast communication applications. The main reason is the feedback explosion problem due to the large number of clients. Meanwhile, that retransmission requires at least one additional round-trip time (RTT) to recover lost packets. The extra delay as mentioned causes the retransmitted packet arrive at the receiver after its play-out time. 3.2.1.3 Error Resilient Encoding The objective of error-resilient encoding is to enhance the robustness of compressed video to packet loss. It is executed by the source. It is different from F E C in that it prevents propagation or limits the scope of the damage (caused by packets losses). One examples of error resilient encoding is M D C (multiple description coding). It compresses the multimedia streaming into multiple streams and each streams provides acceptable visual quality. The more streams a receiver can obtain, the better video quality it will get. The advantage of error resilient encoding is its robustness to loss and the enhanced quality. But it has the disadvantage of tradeoff between compression efficiency and the reconstruction quality from one stream [64]. Chapter 3 Multicast Congestion Control and Error Control 28 3.2.1.4 Concealment Strictly speaking, concealment is not an error control scheme because it does not actually recover lost data, but rather creates an approximate reconstruction of the missing data based on received data. There are two basic approaches for error concealment, namely, spatial and temporal interpolation. In spatial interpolation, missing pixel values are reconstructed using neighboring spatial information. In temporal interpolation, the lost data is reconstructed from data in the previous frames. Concealment is generally necessary at the receiver to reduce the effect of occasional unavoidable loss. However other error control method still should be used to keep packet losses low in order to reduce the cost of concealment and to achieve graceful degradation [58]. Chapter 4 Receiver-driven Layered Multicast Using Active Networks In this chapter, the details of a layered multicast scheme using active networks ( R L M - A N ) is proposed for the real-time video streaming scenario. The focus of this research is to make the most of the computation ability and the local storage of the active networks to design an efficient and scalable multicast scheme. In addition to the group management and data dissemination mechanism, congestion control and error control are two important issues that need to be addressed for multicast transport protocols. With the active node inside the network, the multicast tree is formed into a hierarchical structure. Each of the active nodes inside the network is taken as a pseudo server and the multicast tree is divided into a set of subsystems. A receiver-driven T C P -friendly congestion algorithm is applied to each of the subsystems. In every subsystem, the receiver estimates a fair transmission rate and sends feedback information to the sender which will adjust the transmission rate accordingly. A FEC-based error control mechanism for R L M - A N is also presented is this chapter. The error control is performed on every subsystem. 
With the estimated packet loss rate of its upstream link, the receiver calculates the number of redundant packets for error recovery. It sends the information to the sender which will dynamically adjust the error protection level. The intermediate active nodes will also perform error recovery. This chapter is organized as follows. In Section 4.1, a brief introduction is given on the layered encoding and F E C encoding since they are fundamental to layered multicast and FEC-based error control. Section 4.2 discusses the overall structure of R L M - A N . Section 4.3 29 Chapter 4 Receiver-driven Layered Multicast using Active Networks 30 presents the subscription process and group management of R L M - A N . In Section 4.4, the distributed congestion control protocol is presented. Section 4.5 gives the protocol details of the error control algorithm. In Section 4.6, some discussion and analysis of the performance of R L M - A N are also presented. Section 4.7 summarizes the whole chapter. In the Chapter 5, simulations will be used to assess the performance of the congestion control and the error control algorithm. 4.1 Background and Assumptions Basically, R L M - A N is a layered multicast scheme that combines a layered encoding scheme with a layered transmission mechanism. In order to improve data transmission reliability and to preserve video quality, a FEC-based error control mechanism is also provided to recover lost packets. In this section, background knowledge and assumptions on layered encoding and F E C encoding algorithms are presented. 4.1.1 Layered Encoding For layered multicast schemes, it is generally assumed that the source video stream is encoded by a layered encoder. The input signal is compressed into a number of layers. The layers are arranged in a hierarchy that provides progressive improvement of the video quality. If only the lowest layer is received, the decoder produces a video signal with the lowest quality. If the decoder receives two layers, it combines the second layer information with the first layer to produce improved quality. The more layers that are received, the better the quality of the video will be. Many layered encoding schemes have been proposed [28]. One example of the layered encoder is the three-dimensional (3-D) SPIHT for video encoding [27]. However, this work is mainly focused on designing a layered transmission scheme. Therefore, it is simply assumed that there exists a layered encoding scheme that produces a set of streaming layers. The bandwidth for each of the streaming layers is fixed and known. Regardless of how the source layers are produced, it is also assumed that there is a dependency relationship among packets from different layers and that the dependency relationship is known at the decoder in the client and the active node. Chapter 4 Receiver-driven Layered Multicast using Active Networks 31 4.1.2 F E C Encoding The idea of FEC is to add redundancy to source packets so that lost packets could be recovered from received packets. The degree of redundancy is dependent on the packet loss rate. The higher the loss rate, the more redundancy is added. In R L M - A N , a source coding-based FEC approach is used. The data packets of each layer are grouped into blocks of size N. The block size N is constant for all layers. For each block of N source packets, K parity packets are generated using a systematic (N + K, N) Reed-Solomon style erasure correction code. The parity packets are generated byte-wise from the source packets. 
Let $s_1, \dots, s_N$ be fixed-length column vectors representing the source packets, and let $P$ be the $N \times K$ matrix that generates the parity portion of a systematic Reed-Solomon code. The parity packets $p_1, \dots, p_K$ are then calculated as

$$[s_1 \ \cdots \ s_N]\, P = [p_1 \ \cdots \ p_K] \qquad (4.1)$$

Generally, $P$ is constructed such that any $N$ columns of the generator matrix $G = [I \ \ P]$, where $I$ is the $N \times N$ identity matrix, are linearly independent. Therefore, any subset of $N$ packets (source or redundant) is sufficient to reconstruct the original $N$ source packets [62].

4.2 Architecture of the RLM-AN System

Figure 4.1(a) illustrates a receiver-driven layered multicast system using active networks (RLM-AN). It consists of three parts: a video streaming server, a set of clients, and a number of active nodes. The video streaming server generates a flow of active data packets and sends them to the clients through the intermediate active nodes. The server, the intermediate active nodes, and the clients thus form a logical multicast tree in which the server is the root and the clients are the leaves. Every node in the multicast tree receives video streaming packets from its father node and relays them to its child nodes. Figure 4.1(b) shows the logical multicast tree for the topology in Figure 4.1(a).

Figure 4.1 (a) Overall structure of the RLM-AN system; (b) Logical multicast tree of the network topology in (a)

There are two types of intermediate nodes inside the network: active nodes and traditional non-active nodes. When an active node identifies active packets, it delivers them to the active agent in the node. The active agent processes the active packets, replicating them and delivering them to its child nodes. Non-active nodes are traditional routers and simply pass the active packets on as regular IP packets. In the following, the data path from a father node to its child node in the logical multicast tree is regarded as a virtual link, so the logical multicast tree can be viewed as a set of elements connected by virtual links. For convenience, at each virtual link the father node which sends packets is called the sender, and the child node which receives packets is called the receiver. A sender can be the video streaming server or an active node; a receiver can be a client or an active node. An active node is both a sender and a receiver. For an active node, the link from the active node to any of its child nodes is called a downstream link, and the link from the father node to the active node is called the upstream link. With the existence of active nodes, the overall multicast tree is transformed into a hierarchical architecture. The goal of this research is to make the most of this hierarchical architecture and of the processing and buffering ability of the active nodes to provide a scalable and efficient multicast scheme. For a multicast scheme, four mechanisms have to be provided: session control and group management, data dissemination, congestion control, and error control. These aspects are addressed in the following sections.

4.3 Session Control and Data Dissemination

Video transmission starts after the subscription of the clients. For simplicity, RLM-AN is based on IP service and no group address is used.
4.3.1 Subscription and Unsubscription Process

Figure 4.2 illustrates the subscription process for the topology in Figure 4.1. When the session starts, there is no client or traffic flow in the network. When client E1 intends to join the video streaming session, it sends a subscribe packet directly to the server. The subscribe packet contains two parameters: the client address and the subscription level (the number of layers) of the client. For example, the subscription level of client E1 is 5. While traveling toward the server, the subscribe packet is caught by the active node R4. R4 registers the client as its child node and keeps a record of the address of the client and its subscription level. Since R4 has not yet been registered in this video streaming session, it creates an active agent for the session, replaces the client address in the subscribe packet with its own address, and forwards the subscribe packet to the server. The subscription process continues through active node R2 until it reaches the server. The server registers the active node as its child node and starts to send 5 layers of data to the active node, and the active node forwards the data packets to client E1.

Figure 4.2 Subscription process and data dissemination of RLM-AN

After the video streaming session starts, another client E2 also intends to join the video streaming session, with a request of 3 layers, and sends a subscribe packet to the server. As the subscribe packet reaches R4, the active node registers client E2 as its second child node. Since the subscription level of the active node is higher than that of client E2, the subscription request stops at R4: R4 replicates the data from the first 3 layers and forwards them to client E2. It is possible that the subscription level of client E2 is higher than that of the active node (for example, client E2 requests 6 layers). In such a case, after receiving the subscribe packet from client E2, the active node R4 sends a subscribe packet to the server to request the additional layer.

In order to unsubscribe from the video session, a client simply sends an unsubscribe packet to its father node, which is an active node (an unsubscribe packet is a subscribe packet with subscription level 0). This active node prunes the client out of the multicast tree. If all the child nodes of an active node leave the video session, the active node sends an unsubscribe packet to the upper-level active node or the server.

4.3.2 Data Dissemination

Every time an active node receives a subscribe packet from another node (a client or an active node), the active node registers this node as its child node and sets up an outgoing interface for the subscribing node (in Figure 4.2, the solid arrows inside the active nodes represent the outgoing interfaces). When the active node receives a data packet from its father node, it compares the layer number of the received packet with the subscription level of each of its child nodes. If the subscription level of a child node is higher than the layer number of the received data packet, the active node replicates the data packet and sends it to the outgoing interface associated with that child node.
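The following sketch expresses this membership and forwarding logic in Python. It is only an illustration of the behaviour described above, written under the assumption of a per-session membership table; the class and field names (Child, ActiveNodeForwarder, and so on) are invented for this sketch and are not part of the RLM-AN specification.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Child:
    address: str
    subscription_level: int          # number of layers the child has requested
    error_protection_level: int = 0  # used later for FEC-based error control
    interface: str = ""              # identifier of the outgoing interface

@dataclass
class DataPacket:
    layer: int                       # 0 is the base layer
    payload: bytes = b""

class ActiveNodeForwarder:
    """Per-session state kept by an active node: membership table plus forwarding."""

    def __init__(self) -> None:
        self.children: Dict[str, Child] = {}

    def handle_subscribe(self, address: str, level: int, interface: str) -> int:
        """Register (or update) a child; level 0 means unsubscribe.
        Returns the level this node itself must request from its father node."""
        if level == 0:
            self.children.pop(address, None)
        else:
            self.children[address] = Child(address, level, 0, interface)
        return max((c.subscription_level for c in self.children.values()), default=0)

    def forward(self, pkt: DataPacket) -> List[str]:
        """Replicate a data packet to every child whose subscription covers its layer."""
        return [c.interface for c in self.children.values()
                if c.subscription_level > pkt.layer]
```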
4.3.3 Membership Management

The session membership is dynamic: a client can join or leave the video streaming session at any time. Each active node and the server keeps a membership table. The membership table of an active node or the server contains the address, the subscription level, the error protection level, and a pointer to the outgoing interface for each of its child nodes. (The error protection level is used for FEC-based error control; it is the number of redundant packets in each data block. Section 4.5 discusses the error control mechanism in detail.) Table 4.1 illustrates the membership table in the active node R2 after clients E1, E2, E3 and E4 have all subscribed to the video streaming session. Instead of maintaining a global membership list at the server, membership in RLM-AN is managed locally: an active node or the server only maintains a list of its own direct child nodes rather than of all clients in the video multicast session.

Table 4.1 An example of the membership table of the active node R2 in Figure 4.2

  Node | Subscription level | Error protection level | Outgoing interface
  R3   | 5                  | 0                      | A2
  E3   | 4                  | 0                      | A3
  E4   | 3                  | 0                      | A4

4.4 TCP-friendly Congestion Control

In this section, a distributed congestion control protocol is presented to operate with the basic data dissemination mechanism of RLM-AN. Congestion control has two objectives: avoiding congestion collapse and achieving fairness. In RLM-AN, congestion control takes the form of rate control: the server or the active nodes inside the network adjust their transmission rates according to the available network resources and the network conditions. In designing efficient congestion control algorithms for multicast communication schemes, several aspects require careful consideration:

• TCP-friendliness: The majority of the traffic in the Internet consists of TCP flows. Therefore, the multicast flow should share the network bandwidth fairly with TCP traffic; in other words, the congestion control mechanism should be TCP-friendly.

• Scalability: Video multicast systems generally have a large number of clients, and the performance of the congestion control mechanism should not degrade greatly as the number of clients grows. Moreover, existing congestion control mechanisms for layered multicast are generally receiver-driven: the client estimates the available bandwidth and feeds the information back to the server, which adapts its transmission rate accordingly. When the number of clients increases, the feedback packets may overwhelm the server and congest the links around it. A mechanism should therefore be provided to eliminate this feedback explosion problem.

• Heterogeneity: The clients in a multicast session vary greatly in their capability and bandwidth. Besides, network conditions vary greatly from area to area in the network; congestion in one area does not imply congestion in another. Congestion control should therefore take the heterogeneity of the clients and of the network conditions into consideration.

4.4.1 Distributed Congestion Control

A distributed congestion control mechanism has been designed in order to satisfy these design goals. In most current layered multicast schemes, congestion control is conducted in an end-to-end receiver-driven way in which the client estimates its subscription level according to the bandwidth of the data path from the server.
It then sends feedback information to the server, which adjusts the transmission rate accordingly. However, pure end-to-end multicast congestion control mechanisms often suffer from feedback explosion.

Figure 4.3 Hierarchical structure of the logical multicast tree for the topology in Figure 4.1

The key idea of the distributed congestion control mechanism in RLM-AN is to divide the overall logical multicast tree into a set of subsystems with a hierarchical structure and to perform receiver-driven TCP-friendly congestion control in each of the subsystems. Figure 4.3 shows the hierarchical structure of the multicast tree for the topology in Figure 4.1. With the presence of active nodes R2 and R3, the multicast tree is divided into three subsystems. The root node of each subsystem is regarded as a pseudo server and its child nodes are regarded as pseudo clients. The subsystems are themselves organized hierarchically, in that the pseudo server of one subsystem is a pseudo client in the upper-level subsystem. In each subsystem, a pseudo client estimates a fair rate based on the packet loss rate and round trip time of its upstream link, calculates its subscription level, and feeds it back to the pseudo server. The pseudo server adjusts the transmission rate for each pseudo client accordingly. The pseudo server also aggregates the feedback from its clients and determines its own subscription level from the maximum subscription level of its child nodes and the available bandwidth of its upstream link. In this distributed scheme, the server or an active node only receives feedback from its direct child nodes; since each active node suppresses the feedback packets of its children, the feedback explosion problem is eliminated. The behavior of the distributed congestion control protocol is described below from the perspective of the client, the active node, and the server.

• Client: When the client receives data packets, it estimates the round trip time (RTT) and packet loss rate of the upstream virtual link. Every round trip time, it calculates a fair transmission rate and a subscription level (the algorithm for calculating the subscription level is explained in Section 4.4.2) and then sends a feedback packet to its father node.

• Active node: An active node determines its subscription level based on two factors: the available bandwidth of its upstream virtual link and the maximum subscription level of its child nodes. Since the active node can be viewed as a pseudo client in the upper-level subsystem, it determines the available bandwidth and the allowable subscription level of its upstream link in the same way as a client. At the same time, it keeps a record of the subscription levels of its child nodes and finds their maximum. The subscription level of the active node is then determined as

$$L = \min\big(\max_i(L_i),\ \bar{L}\big)$$

where $L_i$ is the subscription level of its $i$-th child node and $\bar{L}$ is the allowable subscription level of its upstream link (a small sketch of this rule follows the list below). Every round trip time (of the upstream link), the active node sends feedback information to its father node.

• Server: The server simply receives subscribe packets from its child nodes, produces the requested number of layers, and sends them out to its child nodes according to their subscription levels.
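As a minimal illustration of the aggregation rule above, the helper below combines the children's requests with the upstream limit. The function name and the example values are invented for this sketch.

```python
def node_subscription_level(child_levels, upstream_allowed):
    """L = min(max_i(L_i), L_bar): request no more layers than the most demanding
    child needs, and no more than the upstream virtual link can sustain."""
    if not child_levels:            # no registered children, nothing to request
        return 0
    return min(max(child_levels), upstream_allowed)

# Example: children ask for 3 and 5 layers while the upstream link allows 4,
# so the active node subscribes to 4 layers upstream.
assert node_subscription_level([3, 5], 4) == 4
```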
4.4.2 Subscription Level Calculation

This section presents the algorithm used to calculate the allowable subscription level of a virtual link. The algorithm consists of two steps: 1) calculate a fair rate, determined by the packet loss rate and round trip time of the virtual link; 2) compute the subscription level from the fair rate. The algorithm for computing the fair share of the bandwidth of the virtual link is similar to the rate-based unicast congestion control algorithm presented in [15].

A. Fair Rate Calculation

For a TCP session sharing the same virtual link with an RLM-AN flow, the average throughput of the TCP flow can be calculated as

$$R_{TCP} = \frac{s}{t_{RTT}\left(\sqrt{2p/3} + 12\sqrt{3p/8}\; p\,(1 + 32p^2)\right)} \qquad (4.2)$$

where $p$ is the packet loss rate, $t_{RTT}$ is the round trip time, and $s$ is the packet size. In order to achieve TCP-friendliness, an RLM-AN flow sharing the same virtual link should have an average throughput similar to that of a TCP flow with the same round trip time and packet loss rate. As mentioned before, a TCP-friendly flow should behave similarly to TCP traffic. TCP has two states, slow start and congestion avoidance, and accordingly two algorithms are used to determine the fair rate of RLM-AN:

• Slow start: Before any packet loss happens, the slow start algorithm is used and the fair rate is doubled every round trip time:

$$R_{actual}^{\,j+1} = 2\, R_{actual}^{\,j} \qquad (4.3)$$

where $j$ indexes the $j$-th feedback, and $R_{actual}^{\,j+1}$ and $R_{actual}^{\,j}$ are the new and previous fair rates, respectively. The first transmission rate is set to $R_{actual}^{\,0} = s / t_{RTT}$.

• Congestion avoidance: After packet loss occurs, the congestion avoidance algorithm is used instead. The new fair rate is calculated as

$$R_{actual}^{\,j+1} = \begin{cases} \min\left(R_{actual}^{\,j} + s/t_{RTT},\ R_{TCP}\right), & \text{when } R_{actual}^{\,j} \le R_{TCP} \\ \max\left(R_{actual}^{\,j} - s/t_{RTT},\ R_{TCP}\right), & \text{when } R_{actual}^{\,j} > R_{TCP} \end{cases} \qquad (4.4)$$

where $R_{TCP}$ is the TCP throughput calculated using equation (4.2).

B. Subscription Level Computation

With the fair sending rate $R_{actual}^{\,j+1}$, the subscription level $L$ is obtained as the largest $M$ satisfying

$$\sum_{k=0}^{M-1} B_k \;\le\; R_{actual}^{\,j+1} \;<\; \sum_{k=0}^{M} B_k \qquad (4.5)$$

where $B_k$ is the bandwidth of the $k$-th layer.
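The short sketch below expresses equations (4.2)-(4.5) in Python. It is only a sketch: the function names and the example values are invented here, and the loss event rate p and smoothed round trip time are assumed to be supplied by the estimators described in Section 4.4.3.

```python
import math

def tcp_friendly_rate(s: float, rtt: float, p: float) -> float:
    """Equation (4.2): TCP throughput for packet size s, round trip time rtt
    (seconds) and loss event rate p."""
    if p <= 0.0:
        return float("inf")                     # no loss observed yet
    denom = rtt * (math.sqrt(2.0 * p / 3.0)
                   + 12.0 * math.sqrt(3.0 * p / 8.0) * p * (1.0 + 32.0 * p * p))
    return s / denom

def update_fair_rate(r_prev: float, s: float, rtt: float, p: float,
                     loss_seen: bool) -> float:
    """Equations (4.3)-(4.4): double the rate during slow start, then converge
    toward the TCP-friendly rate in steps of one packet per RTT."""
    if not loss_seen:
        return 2.0 * r_prev                     # slow start
    r_tcp = tcp_friendly_rate(s, rtt, p)
    if r_prev <= r_tcp:
        return min(r_prev + s / rtt, r_tcp)     # increase, but never beyond R_TCP
    return max(r_prev - s / rtt, r_tcp)         # decrease toward R_TCP

def subscription_level(rate: float, layer_bandwidths) -> int:
    """Equation (4.5): largest M such that the first M layers fit within `rate`."""
    total, level = 0.0, 0
    for bw in layer_bandwidths:
        if total + bw > rate:
            break
        total += bw
        level += 1
    return level

# Illustrative example: a 1500-byte packet (in bits), 100 ms RTT, 1% loss,
# and eight 64 kbps layers.
r = tcp_friendly_rate(1500 * 8, 0.1, 0.01)      # bits per second
print(subscription_level(r, [64_000] * 8))
```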
4.4.3 Parameter Estimation

As can be seen from equation (4.2), the round trip time and the packet loss rate are essential for determining the fair rate in rate-based congestion control. This section gives the algorithms used to measure them.

4.4.3.1 Round Trip Time Estimation

In the distributed congestion control mechanism of RLM-AN, the round trip time is measured at the receiver. RTT estimation consists of two parts: an initial RTT estimation and periodic RTT measurements. When the receiver sends its subscribe packet toward the server, the subscribe packet carries a time stamp recording the time at which the packet was sent from the receiver. The first active node that catches the subscribe packet sends an echo packet back to the receiver. The echo packet contains two time stamps: one records the one-way trip time from the receiver to the sender, and the other records the time at which the echo packet is sent from the sender. The receiver then makes the initial RTT estimate from the echo packet. Suppose the client sends the subscribe packet at $t_1$, the active node receives it at $t_2$ and sends back the echo packet at $t_3$, and the receiver gets the echo packet at $t_4$. The round trip time can then be calculated as

$$t_{RTT} = (t_2 - t_1) + (t_4 - t_3) \qquad (4.6)$$

When the clocks of the sender and receiver are not synchronized, the synchronization error can be modeled as a time measurement error $\Delta t$ at the sender. Equation (4.6) then becomes

$$t_{RTT} = \big((t_2 + \Delta t) - t_1\big) + \big(t_4 - (t_3 + \Delta t)\big) = (t_2 - t_1) + (t_4 - t_3) \qquad (4.7)$$

From equation (4.7) one can see that, with the echo packet, the receiver measures the round trip time correctly in spite of the synchronization problem. Every round trip time, the receiver sends a feedback packet to its father node and the father node replies with an echo packet, so the receiver can measure the RTT periodically using the same method as for the initial measurement. In order to obtain a relatively smooth RTT estimate, exponential smoothing is used to reduce the noise in the RTT measurements:

$$\hat{t}_{RTT}^{\,i} = \alpha\, \hat{t}_{RTT}^{\,i-1} + (1 - \alpha)\, t_{RTT}^{\,i} \qquad (4.8)$$

where $\hat{t}_{RTT}^{\,i}$ is the $i$-th smoothed RTT estimate, $t_{RTT}^{\,i}$ is the $i$-th RTT measurement, and $\alpha$ is the exponential smoothing parameter. In our scheme, $\alpha$ is set to 0.98. A higher value of $\alpha$ gives a smoother estimate of the round trip time, while a lower value makes the estimate respond more promptly to changes in the round trip time.

4.4.3.2 Packet Loss Rate Estimation

The packet loss rate is also measured at the receivers. In order to obtain a smooth estimate of the fair rate using equations (4.2)-(4.4), a smooth measurement of the packet loss rate is important. The Average Loss Interval method first presented in [15] is used in RLM-AN. The receiver groups packet losses into loss events, where a loss event is defined as one or more packet losses during one round trip time. The number of packets between consecutive loss events is called a loss interval. The average loss interval is computed as the weighted average of the $m$ most recent loss intervals $l_k, \dots, l_{k-m+1}$, where the weights of the most recent intervals are equal and the weights of older intervals gradually decrease to zero. The larger $m$ is, the smoother the estimate of the loss interval will be. The typical value of $m$ is 8, with weights $w_0, \dots, w_{m-1}$ equal to 1, 1, 1, 1, 0.8, 0.6, 0.4, 0.2 [15]. At the beginning of the data transmission, the loss intervals are set to a default value; in our simulations the default value is usually 1000, so that the initial estimate of the packet loss rate remains small. The loss event rate $p$ is defined as the inverse of the average loss interval $l_{avg}$:

$$p = 1 / l_{avg} \qquad (4.9)$$
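A compact sketch of the Average Loss Interval estimator is given below. It is a simplified illustration (for instance, it ignores the special handling of the still-open interval used in [15]); the class and method names are invented for this sketch.

```python
from collections import deque

class AverageLossInterval:
    """Average Loss Interval estimator (Section 4.4.3.2), with m = 8 and the
    weights from [15]. A loss event is one or more losses within one RTT."""
    WEIGHTS = [1, 1, 1, 1, 0.8, 0.6, 0.4, 0.2]   # w_0 (most recent) ... w_{m-1}

    def __init__(self, initial_interval: float = 1000.0):
        # Intervals are stored most recent first, seeded with the default value
        # so that the initial loss-rate estimate stays small (about 1/1000).
        self.intervals = deque([initial_interval] * len(self.WEIGHTS),
                               maxlen=len(self.WEIGHTS))
        self.current = 0            # packets received since the last loss event

    def packet_received(self) -> None:
        self.current += 1

    def loss_event(self) -> None:
        """Close the current loss interval and start a new one."""
        self.intervals.appendleft(self.current)
        self.current = 0

    def loss_event_rate(self) -> float:
        """p = 1 / l_avg, equation (4.9)."""
        weighted = sum(w * l for w, l in zip(self.WEIGHTS, self.intervals))
        l_avg = weighted / sum(self.WEIGHTS)
        return 1.0 / max(l_avg, 1.0)
```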
In the Internet, packet loss can result from link errors or from network congestion. In the following, packet loss is used interchangeably with packet error, since corrupted packets are discarded. In real-time video streaming, packet error or loss causes deterioration of the video quality; it is therefore necessary to provide an error control mechanism to recover lost packets.

4.5 FEC-based Error Control

For video communication, a certain amount of packet loss is usually tolerable. Therefore, the objective of the error control mechanism of RLM-AN is not to guarantee 100% reliable data transmission, but to add enough redundant packets to the source packets so that the unrecoverable packet loss rate at the clients is reduced to a predefined level.

4.5.1 Design Goals

For layered multicast, new challenges arise in designing an error control mechanism because of the large number of clients, the heterogeneity of the clients, and the dependency among packets from different layers. These lead to the following issues:

1. Feedback explosion: If some form of feedback is needed for error recovery, a single packet loss will trigger a large number of clients to send feedback messages to the server, leading to congestion on the links around the server. Avoiding feedback explosion is extremely important for scalability.

2. Packet loss propagation: If a packet is lost at one active node in the multicast tree, the loss propagates to all child nodes of that active node and degrades the video quality for many clients. A mechanism should therefore be provided to recover packet losses inside the network rather than only at the clients.

3. Recovery latency: Real-time video streaming multicast systems usually require a lost packet to be recovered before its play-out time, so the latency introduced by the error recovery mechanism must be taken into consideration.

4. Adaptability: The membership of the video streaming session constantly changes, and in addition to differing round trip times, the packet loss rates differ from client to client. An efficient error control algorithm for layered multicast must therefore work well with dynamic membership and heterogeneous receivers.

5. Packet dependency: In layered multicast, packets in the upper layers rely on packets in the lower layers to decode properly. If a packet in the lowest layer of a frame is lost, all the packets in the upper layers of the same frame become useless.

The purpose of this section is to present an error control mechanism for RLM-AN. In designing such a mechanism, one should take advantage of the computation ability and storage of the active nodes, so that the error control algorithm works efficiently with heterogeneous clients and addresses the issues above.

4.5.2 Distributed FEC-based Error Control

A distributed FEC-based error control algorithm has been proposed for RLM-AN. This section describes the details of the algorithm: first its key ideas are discussed, and then the protocol details are described.

1. FEC-based Error Control
In this error control algorithm, a source coding-based FEC method is used to avoid the feedback explosion problem of ACK-based error control mechanisms. The basic mechanism of source coding-based FEC is to group data packets into blocks and add redundant packets to each block. The number of redundant packets added to each block depends on the packet loss rate and on the target unrecoverable packet loss rate at the client.

2. Distributed Receiver-driven Error Control
The key idea of the proposed error control is similar to that of the distributed congestion control: the overall logical multicast tree is divided into a set of subsystems, and end-to-end FEC-based error control is performed in each of the subsystems. Within a subsystem, the pseudo client estimates the packet loss rate of its upstream link, calculates an error protection level, and feeds the information back to the pseudo server. The pseudo server adjusts the number of redundant packets sent to that pseudo client accordingly.
3. Distributed Error Recovery
Upon detecting packet loss, the client recovers the lost packets from the received source and redundant packets. Moreover, the active nodes also perform local recovery in order to eliminate the error propagation problem: by doing error recovery at the active node, a packet loss does not propagate to the child nodes of that active node. Furthermore, the loss recovery operation is performed once at the active node instead of being repeated at every client, without increasing the error recovery latency. Meanwhile, by conducting error recovery on each virtual link, the packet loss rate after error recovery on that virtual link is decreased, so the overall packet loss rate from the server to the client is greatly reduced. Since the link capacity also depends on the packet loss rate, the effective bandwidth from the server to the client is improved as well.

4. Redundant Packet Generation
Generally, an active node receives redundant packets from its father node. However, it is possible that the maximum error protection level of its child nodes is higher than the error protection level of its upstream link. In order to make the most of the computation ability and to save bandwidth, the active node also generates redundant packets itself when necessary.

5. Priority Dropping Mechanism
One challenge for error control in layered multicast is the dependency among layers: the decoding of higher-layer packets relies on the packets of the lower layers in the same frame, and if the lower-layer packets are lost, it is meaningless to recover the higher layers. A priority dropping mechanism is therefore suggested to be applied together with the error recovery mechanism, providing additional protection for the lower layers. Priority dropping is performed at the intermediate active nodes: when an active node detects congestion in the queues of its output interface, it drops packets of the higher layers before packets of the lower layers.

Details of this distributed error control scheme are as follows:

• Client: On receiving data packets, the client detects packet losses and recovers them with the received source and redundant packets. Meanwhile, the client estimates the packet loss rate of the upstream virtual link, calculates the error protection level, and sends this information to its father node. The algorithm for calculating the error protection level from the packet loss rate is given in Section 4.5.3.

• Server: The server receives subscribe packets containing the error protection level information from its child nodes. It generates redundant packets along with the source packets and sends them to its child nodes accordingly.

• Active node: The error control functionality of an active node has two aspects. On one hand, it acts as a pseudo client toward its father node: it receives data packets from its father node and recovers lost packets, and it also estimates the packet loss rate of its upstream link, calculates the error protection level, and feeds this information back to its father node. On the other hand, it acts as a pseudo server in its own subsystem: it receives feedback packets from its child nodes and adjusts the error protection level for each child node accordingly. If the error protection level of any of its child nodes is higher than the error protection level of its upstream link, the active node generates the extra redundant packets itself (a sketch of this per-block processing follows the list below).
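The sketch below outlines the per-block bookkeeping that such an active node could use. The helpers rs_recover() and rs_parity() stand in for a systematic (N + K, N) Reed-Solomon codec and are hypothetical, as are the class and function names; this is only an illustration of the behaviour described above, not an implementation from the thesis.

```python
from typing import Callable, Dict, List, Optional

class BlockBuffer:
    """One FEC block of one layer, as buffered at an active node.
    Packet indices 0..n-1 are source packets, n..n+k-1 are parity packets."""

    def __init__(self, n: int, k: int):
        self.n, self.k = n, k
        self.packets: Dict[int, bytes] = {}

    def add(self, index: int, data: bytes) -> None:
        self.packets[index] = data

    def source_packets(self, rs_recover: Callable) -> Optional[List[bytes]]:
        """Return the N source packets, performing local recovery whenever at
        least N packets (source or parity) of the block have arrived."""
        if all(i in self.packets for i in range(self.n)):
            return [self.packets[i] for i in range(self.n)]   # nothing lost
        if len(self.packets) >= self.n:
            return rs_recover(self.packets, self.n, self.k)   # local loss recovery
        return None                                           # block not recoverable yet

def parity_for_child(source: List[bytes], upstream_k: int, child_k: int,
                     rs_parity: Callable) -> List[bytes]:
    """If a child asks for more protection than the upstream link carries,
    the active node generates the missing parity packets itself."""
    if child_k <= upstream_k:
        return []               # forward (a subset of) the parity received from upstream
    return rs_parity(source, child_k)[upstream_k:]
```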
4.5.3 Calculation of the Error Protection Level

This section presents the algorithm for calculating the error protection level. For FEC-based error recovery, the more redundant packets that are added to each block, the more source packets can be recovered and the lower the packet loss rate after error recovery. However, the redundant packets consume part of the available bandwidth. Therefore, in deciding the number of redundant packets, it is desirable to reduce the packet loss rate to a certain level while maintaining the effective bandwidth, where the effective bandwidth is the portion of the bandwidth used to transfer source packets.

For every $N$ source packets, $K$ parity packets are generated. Let $D$ be the number of packets received in one block and $M$ the number of source packets that can be recovered. Assuming independent packet losses, the probability of receiving $i$ packets in a block is

$$P(D = i) = \binom{N+K}{i} (1-p)^i\, p^{\,N+K-i} \qquad (4.10)$$

When $D$ is at least $N$, all $N$ original packets can be recovered and $M = N$. When $D$ is less than $N$, only the received source packets are useful and the redundant packets are discarded. The probability of recovering $j$ source packets after receiving $i$ packets is

$$P(M = j \mid D = i) = \binom{i}{j} (1-p)^j\, p^{\,i-j} \qquad (4.11)$$

Therefore, the expected number of source packets that can be recovered when the total number of received packets is $i$ ($i < N$) is

$$E(M \mid D = i) = \begin{cases} \sum_{j=i-K}^{i} j\, \binom{i}{j} (1-p)^j p^{\,i-j}, & \text{when } i > K \\ \sum_{j=0}^{i} j\, \binom{i}{j} (1-p)^j p^{\,i-j}, & \text{when } i \le K \end{cases} \qquad (4.12)$$

and the expected number of source packets that can be recovered overall is

$$E(M) = \sum_{i=0}^{N+K} E(M \mid D = i)\, P(D = i) \qquad (4.13)$$

with $E(M \mid D = i) = N$ for $i \ge N$. The packet loss rate after recovery is

$$\bar{p} = 1 - \frac{E(M)}{N} \qquad (4.14)$$

$\bar{p}$ is a function of $N$, $K$ and $p$. Given $N$ and $p$, the minimum $K$ that reduces the packet loss to the required level is chosen:

$$K^{*} = \min\{K : \bar{p}(p, N, K) \le p_0\} \qquad (4.15)$$

where $p_0$ is the target packet loss rate after loss recovery.

In order to keep the RLM-AN flow TCP-friendly, the fair rate has to be adjusted so that the transmission of the redundant packets does not consume extra bandwidth beyond the fair bandwidth assigned to the flow. With $K$ redundant packets per block, the fair rate is modified as

$$\tilde{R}_{actual} = R_{actual}\, \frac{N}{N+K} \qquad (4.16)$$

where $R_{actual}$ is the fair rate calculated using equation (4.3) or (4.4). The number of layers is then calculated as

$$\sum_{k=0}^{M-1} B_k \;\le\; \tilde{R}_{actual} \;<\; \sum_{k=0}^{M} B_k \qquad (4.17)$$
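A direct transcription of equations (4.10)-(4.15) into Python is shown below, under the same independence assumption. The function names are invented for this sketch; the default block size of 8 and target rate of 0.1% mirror the values used in the simulations of Chapter 5.

```python
from math import comb

def residual_loss_rate(p: float, n: int, k: int) -> float:
    """Equations (4.10)-(4.14): packet loss rate left after FEC recovery when each
    packet of an (n + k)-packet block is lost independently with probability p."""
    expected_recovered = 0.0
    for i in range(n + k + 1):
        prob_d = comb(n + k, i) * (1 - p) ** i * p ** (n + k - i)       # (4.10)
        if i >= n:
            e_m = n                                                     # whole block recoverable
        else:
            lo = max(0, i - k)
            e_m = sum(j * comb(i, j) * (1 - p) ** j * p ** (i - j)      # (4.11)-(4.12)
                      for j in range(lo, i + 1))
        expected_recovered += e_m * prob_d                              # (4.13)
    return 1.0 - expected_recovered / n                                 # (4.14)

def error_protection_level(p: float, n: int = 8, target: float = 0.001,
                           k_max: int = 16) -> int:
    """Equation (4.15): smallest K whose residual loss rate meets the target."""
    for k in range(k_max + 1):
        if residual_loss_rate(p, n, k) <= target:
            return k
    return k_max

# Illustrative example: protection level for an 8-packet block on a 3% loss link.
print(error_protection_level(0.03))
```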
4.6 Discussion and Performance Analysis

In this part, some discussion of the scalability and flexibility of the proposed scheme is presented. In Chapter 5, simulations are used to study its performance for different network topologies.

Figure 4.4 Demonstration of the flexibility of RLM-AN

4.6.1 Scalability

The proposed algorithm is scalable in that it divides the whole system into a number of small, relatively independent subsystems. If the number of child nodes in one subsystem gets too large, another active node can be invoked to further divide the subsystem into smaller subsystems. Figure 4.4 illustrates the situation. At the beginning of the multicast session, there is no active node inside the network and all clients receive video packets directly from the server. When the number of clients connected to routers R3 and R4 increases, the server can invoke R3 and R4 as active nodes, so that the multicast tree is divided into two subsystems, each with only half the clients of the original system. By invoking active nodes wherever the number of clients in the multicast tree gets large, a well-balanced multicast tree can be obtained and the performance of the whole system maintained.

4.6.2 Complexity and Flexibility

Complexity is another important factor in deciding whether a multicast protocol is deployable. At each active node, the functionality can be divided into three parts: i) data dissemination and group management, ii) congestion control, and iii) error control. Different functionality requires different computation and local storage capabilities; the storage and computation involved are summarized in Table 4.2.

Table 4.2 Storage and computation requirements of the different functional units of RLM-AN

  Functionality                                 | Storage requirement                                                              | Computation requirement
  Data dissemination and membership management  | IP address and subscription level of every child node                           | Selective forwarding of received packets
  Congestion control                            | None beyond the membership table                                                 | Estimation of the packet loss rate and round trip time; calculation of the subscription level
  Error control                                 | Error protection level of every child node; a block of packets for every layer   | Calculation of the error protection level; recovery of lost packets; generation of redundant packets

The basic dissemination function only requires the IP address and the subscription level of every child node; its task is simply to forward the packets it receives selectively to its child nodes. If the congestion control algorithm is performed, no further information storage is needed, but the active node has to estimate the packet loss rate and round trip time of the virtual link. If the error control mechanism is performed as well, the active node has to buffer a block of packets for each layer, calculate the error protection level, and perform lost packet recovery.

In order to make the whole system flexible, the congestion control and error control functionalities are designed to be optional. When an active node is first invoked, it only performs data dissemination. The congestion control functionality can be turned on when it is considered necessary, and if there is enough memory and computational power, the error control can be turned on as well.

4.7 Summary

In this chapter, a layered multicast scheme using active networks (RLM-AN) is proposed. Four aspects of the scheme have been addressed: session control and membership management, data dissemination, congestion control, and error control. Whereas most existing schemes perform congestion control in a purely end-to-end way, a distributed congestion control mechanism is designed for RLM-AN: the overall logical multicast tree is divided into a set of hierarchical subsystems and TCP-friendly congestion control is performed in each of them. As the intermediate active nodes suppress feedback packets from their child nodes, the feedback explosion problem is solved. Furthermore, a FEC-based error control mechanism is proposed in order to improve the video quality in the presence of packet loss.
The design goals are defined and the details of the protocol are given. Similar to the congestion control algorithm, the receiver-driven error control is also performed in every subsystem: the root of each subsystem adjusts the error protection level according to the feedback from its child nodes. To reduce the error propagation problem, the intermediate active nodes also perform error recovery. A priority dropping mechanism is suggested to add extra protection to the lower layers, since they are more important to the video quality. The proposed scheme is flexible: active nodes can be activated according to the network situation, and the functionality of an active node is optional and can be invoked when necessary.

Chapter 5 Simulation Results

RLM-AN has been implemented in ns-2 (Network Simulator 2). Ns-2 is a discrete event simulator aimed at networking research; it provides substantial support for the simulation of many network protocols (TCP, UDP, etc.), traffic sources, router queueing mechanisms, routing algorithms, and multicast protocols [12]. In this chapter, simulation studies are used to investigate the effectiveness of RLM-AN, assessing the performance of both the congestion control mechanism and the error control mechanism. Section 5.1 gives the simulation results for the congestion control algorithm; the main focus is to test the TCP-friendliness, the response to network changes, and the throughput smoothness of RLM-AN flows. Section 5.2 shows the simulation results for the error control algorithm, whose performance is assessed in terms of packet loss reduction, end-to-end delay, and effective throughput. Although the simulations in Section 5.2 are mainly focused on the error control algorithm, they can also be used to verify the effectiveness of the congestion control algorithm, since congestion control is also performed in those simulations.

5.1 Simulation Results for Congestion Control

In this section, simulation studies are used to investigate the effectiveness of the congestion control algorithm of RLM-AN. First, the scalability and TCP-friendliness of RLM-AN are investigated. Second, as RLM is the pioneering layered multicast scheme, RLM-AN is compared with RLM in terms of TCP-friendliness and throughput smoothness.

In the following simulations, the encoding is evenly distributed and each layer has a bandwidth of 64 kbps. The maximum number of layers is set to 256. The feedback interval is set to the round trip time. The total throughput is calculated as the total size of the packets received in one second. The round trip time smoothing parameter $\alpha$ is set to 0.98. TCP Tahoe is used in the following simulations, and RED (Random Early Detection) queuing is used; the values of the main RED parameters are listed in Table 5.1.

Table 5.1 Parameters for Random Early Detection queuing in the simulations

  Maximum threshold        | 15
  Minimum threshold        | 5
  Weight                   | 0.002
  Max dropping probability | 0.02

5.1.1 Scalability and TCP-friendliness

In this section, simulations are conducted to study the behavior of the RLM-AN congestion control mechanism when competing with TCP traffic. These simulations aim to examine whether an RLM-AN flow achieves a throughput similar to that of a TCP flow when they share a bottleneck link. Figure 5.1 shows the topology used in these simulations.
In this single-bottleneck topology, m RLM-AN flows share a bottleneck link (the link between node 2 and node 3) with the same number of TCP flows. The bandwidth and delay of the links are also shown in Figure 5.1; unless specified otherwise, the link parameters are 1 Mb/s and 10 ms, respectively. The number of TCP flows or RLM-AN flows is varied in order to test the scalability of the RLM-AN scheme. Each simulation lasts 500 seconds. In the simulation, node 3 is set to be an active node: it replicates the data packets received from the RLM-AN server and forwards them to the RLM-AN clients.

Figure 5.1 Topology used to demonstrate scalability and TCP-friendliness of RLM-AN

The mean throughput of a TCP flow or an RLM-AN flow is computed as the average throughput over the last 300 seconds of the simulation. The simulation results show that the throughputs of all the RLM-AN flows are exactly the same. Figure 5.2 shows that when the number of RLM-AN flows varies from 1 to 20, the average of the mean throughputs of the RLM-AN flows is very close to that of the TCP flows. Therefore, one can conclude that RLM-AN flows coexist well with TCP traffic.

Figure 5.2 Average mean throughput of TCP and RLM-AN flows in the topology in Figure 5.1

Although the mean throughputs of a TCP flow and an RLM-AN flow are similar, their variances can be quite different. Figure 5.3 shows the throughput of a TCP flow and an RLM-AN flow when m = 5. The throughput of the RLM-AN flow is much smoother than that of the TCP flow, which is desirable for multimedia applications; the throughput variance of the TCP flow is much higher than that of the RLM-AN flow.

Figure 5.3 Comparison of the throughput of an RLM-AN flow and a TCP flow when m = 5

5.1.2 Comparison with RLM

In this section, simulations are performed to test whether the congestion control mechanism of RLM-AN works well in a network topology consisting of links with heterogeneous bandwidths and delays, and RLM-AN is compared with RLM in the same topology. Figure 5.4 shows the network topology, which has 6 end nodes and one source node. In the first simulation, every end node has a TCP receiver and an RLM-AN client, while the TCP senders and the RLM-AN server reside in the source node. In the second simulation, the topology is the same except that the RLM-AN clients and the RLM-AN server are replaced by RLM clients and an RLM server, respectively. RLM-AN is compared with RLM in terms of TCP-friendliness, response time, and smoothness.

Figure 5.4 Network topology used in the comparison of RLM-AN with RLM

• TCP-friendliness: The upper plot of Figure 5.5 shows the mean throughput of the RLM-AN flow versus that of the TCP flow at each end node.
It shows that, for each end node, the throughput of the RLM-AN flow is fairly close to the throughput of the competing TCP flow. The lower plot of Figure 5.5 shows the mean throughput of the RLM flow versus that of the TCP flow at each end node; for end nodes 1, 2, 3 and 5, RLM occupied most of the shared link bandwidth. Therefore, RLM-AN traffic is much more TCP-friendly than RLM traffic, owing to its fair use of the shared link bandwidth with the competing TCP traffic.

• Response: Figure 5.6 plots the throughput of the RLM and RLM-AN flows at end node E6. While it takes only a few seconds for the RLM-AN flow to reach its steady-state throughput, it takes around 300 seconds for the RLM flow to reach steady state, so RLM-AN flows respond much faster than RLM flows. The main reason for the slow response of RLM is that RLM increases its throughput using so-called join experiments, with one join timer per layer. If the join experiment for a layer fails, RLM increases the join timer for that layer and waits for the timer to expire before the next experiment. The join timer of a layer is decreased only after the layer is received successfully and no packet loss occurs for a certain amount of time. However, packet loss is almost unavoidable, so the join timers keep growing and RLM responds more and more slowly.

Figure 5.5 Throughput of RLM-AN, RLM and TCP flows in the topology in Figure 5.4

• Smoothness: From Figure 5.6 one can see that the overall throughput of the RLM-AN flow is much smoother than that of the RLM flow. RLM increases its transmission rate through join experiments; the sudden jumps in the lower plot of Figure 5.6 are due to failed join experiments in RLM. Failed join experiments also increase the join timers, and as the join timers get large, the RLM client has to wait a long time before adding new layers. This delayed response of the RLM flow is also visible in the lower plot of Figure 5.6.

Overall, RLM-AN flows have smoother throughputs, better link bandwidth sharing when competing with TCP traffic, and much quicker response than RLM traffic. RLM-AN flows achieve a good balance between promptness and smoothness.

Figure 5.6 Throughput of RLM-AN and RLM at end node 6

5.2 Simulation Results for Error Control

In this section, the performance of the error control algorithm of RLM-AN is investigated under different scenarios. First, the effect of the error control mechanism on the packet loss rate and the end-to-end delay is studied. Second, the scalability of the error control algorithm is examined as the number of receivers or intermediate active nodes increases. Third, the performance of the error control mechanism is studied for network topologies with heterogeneous receiver bandwidths, link delays and packet loss rates.

Figures 5.7, 5.11, 5.14 and 5.18 show the four topologies used in the following simulations. In these simulations, the data block size is set to 8 and the targeted unrecoverable packet loss rate is 0.1%. The parameters of the layered encoding are the same as in Section 5.1.
5.2.1 A Simple Topology

In this section, simulations are conducted to study the performance of the proposed error control mechanism on a simple network topology. The focus is to examine whether the error control scheme can greatly reduce the packet loss rate without significantly increasing the end-to-end delay. Figure 5.7 shows the network topology, which consists of one source, one client, and one active node; the bandwidth, delay and loss rate of each link are specified in the figure. Table 5.3 and Figures 5.8-5.10 show the simulation results.

Figure 5.7 A simple topology used in the simulation studies for the error control algorithm

• Packet loss rate: The active node R2 divides the path from the source S to the client E into two virtual links. Without error recovery, the packet loss rate detected by the client E is calculated as 2.66%. When redundant packets are produced at the source and error recovery is performed at the active node, the packet loss rate at the client drops to 0.59%. If error recovery is also conducted at the client E, the packet loss rate further drops to 0.026%, which is well below the 0.1% target. Figure 5.8 shows the number of lost packets versus time; when FEC is applied on both virtual links, the client detects almost no packet loss during the simulation.

Figure 5.8 Packet loss number at the client E versus time for the topology in Figure 5.7

Table 5.3 Simulation results for the simple topology in Figure 5.7

                               With error recovery   With error recovery   Without error
                               at S and R2           at S                  recovery
    Average packet loss (/s)   0.01                  0.23                  1.04
    Packet loss rate (%)       0.026                 0.59                  2.66

• End-to-end delay: One disadvantage of a FEC-based error control algorithm is that it tends to increase the end-to-end delay because of the error recovery latency, i.e., the time needed to recover lost packets, and some multimedia applications have stringent end-to-end delay requirements. Table 5.4 shows that when error recovery is performed at both the active node R2 and the client E, the average delay increases from 113.1 ms to 145.1 ms, an increase of 28.3%. The maximum end-to-end delay is about 485 ms and the variance of the delay is about 50 ms. The increase in the end-to-end delay is not significant, since it is still on the order of milliseconds. Furthermore, buffering can be used at the clients to absorb the delay jitter.

Figure 5.9 (a) End-to-end delay at the client E versus time (without FEC); (b) End-to-end delay at the client E versus time (with FEC)

Table 5.4 End-to-end delay at the client in the topology in Figure 5.7

                                  With FEC   Without FEC
    Average delay (ms)            145.1      113.1
    Maximum delay (ms)            485.4      119.9
    Variance of the delay (ms)    50.4       2.5
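As noted above, client-side buffering can mask the delay jitter introduced by error recovery. The following sketch shows one simple playout-buffer policy as an illustration; the class and parameter names are hypothetical and are not part of the thesis implementation.

    import heapq

    class PlayoutBuffer:
        """Hold each packet until send_time + offset before releasing it to
        the decoder. The offset tracks the largest one-way delay seen so far
        plus a safety margin, so a growth in delay (e.g., FEC recovery
        latency) enlarges the offset applied to later packets."""

        def __init__(self, margin=0.050):
            self.margin = margin       # extra headroom in seconds
            self.offset = margin       # current playout offset in seconds
            self._queue = []           # heap of (deadline, seq, packet)
            self._seq = 0

        def on_arrival(self, packet, send_time, recv_time):
            delay = recv_time - send_time
            self.offset = max(self.offset, delay + self.margin)
            heapq.heappush(self._queue, (send_time + self.offset, self._seq, packet))
            self._seq += 1

        def release(self, now):
            """Return every buffered packet whose playout deadline has passed."""
            ready = []
            while self._queue and self._queue[0][0] <= now:
                ready.append(heapq.heappop(self._queue)[2])
            return ready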
• Throughput: Another cost of FEC is the loss of effective bandwidth due to the need to transmit redundant packets. Table 5.5 shows the throughput of the client with and without FEC. The total throughput is the total number of packets received per second. Because of the dependency between packets of different layers, if a packet of the lowest layer is not received, all the packets of the upper layers are discarded; the effective throughput counts only the packets that can be used for decoding. Table 5.5 shows that deploying the error control mechanism does not affect the total throughput of the client, which is similar with or without error control. However, the effective throughput is lower with FEC than without FEC. This difference is due to the bandwidth used to transmit redundant packets.

Table 5.5 Total throughput and effective throughput at the client in the topology in Figure 5.7

                                   With FEC   Without FEC
    Total throughput (KB/s)        38.9       39.1
    Effective throughput (KB/s)    31.0       36.5

Figure 5.10 (a) Number of packets received by the client E versus time (without FEC); (b) Number of packets received by the client E versus time (with FEC)

• Variance of the effective throughput: Video quality is another criterion used to evaluate the error control algorithm. One purpose of designing congestion control and error control schemes for a multicast system is to improve the smoothness of the throughput so that the viewer does not perceive large fluctuations in video quality. In layered multicast, a packet loss in a lower layer can cause all the packets in the upper layers to be discarded and thus greatly affect the video quality. Figure 5.10 shows the number of packets that can be used for each frame (there are 8 frames per second). With FEC, this number is quite stable: by using redundant packets, the packet loss in each layer is reduced to a negligible level, so the received effective bandwidth, and thus the video quality, remain relatively stable.
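The distinction between total and effective throughput follows directly from the cumulative layering rule described above: a frame's packets from a given layer are usable only if all lower layers of that frame were received or recovered. The accounting below is a small illustrative sketch with hypothetical data structures, not the thesis's measurement code.

    def usable_packet_count(received, expected):
        """Packets of one frame that are usable for decoding under cumulative
        layering: layer i counts only if layers 0..i are all complete, since a
        loss in a lower layer invalidates everything above it. `received[i]` and
        `expected[i]` give the packets received and sent for layer i. (Whether a
        partially received layer is itself decodable depends on the codec; here
        it is conservatively not counted.)"""
        usable = 0
        for recv, sent in zip(received, expected):
            if recv < sent:
                break
            usable += recv
        return usable

    def effective_throughput(frames, frame_rate=8):
        """Effective throughput in packets per second for a list of frames, each
        given as a (received, expected) pair of per-layer counts; the simulations
        use 8 frames per second."""
        if not frames:
            return 0.0
        total = sum(usable_packet_count(r, e) for r, e in frames)
        return total * frame_rate / len(frames)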
5.2.2 Scalability

Scalability is one of the most important factors affecting the deployment of any multicast scheme: the proposed scheme must maintain its performance when the number of clients or intermediate nodes increases. In the following simulation studies, two different topologies are used to investigate the scalability of the error control algorithm of RLM-AN. In the first topology (Figure 5.11), there is only one active node (R2) and the number of clients is varied, which changes the breadth of the multicast tree. In the second topology (Figure 5.14), there is one server and one client, and the number of intermediate active nodes on the path from the server to the client is varied, which changes the depth of the multicast tree. If the proposed scheme works well in these two topologies, it can be inferred that it will also work well when the numbers of intermediate nodes and clients both increase.

5.2.2.1 Scalability in terms of tree breadth

In the topology in Figure 5.11, there is only one session and the number of clients varies from 1 to 40. The effect of the number of clients on the error control performance is investigated. As the links are identical for all clients, the packet loss rate and end-to-end delay are similar for every client; therefore, the first client (from the top of the topology) is used as the representative for the performance comparison.

Figure 5.11 Topology used to demonstrate the scalability of the error control algorithm (the number of clients varies)

Figure 5.12 (a) Average packet loss rate of the first client versus the total number of clients; (b) Average end-to-end delay of the first client versus the total number of clients

• Packet loss rate: Figure 5.12 (upper) shows that when the number of clients increases from 1 to 40, the packet loss rate, with or without FEC, remains almost constant. Therefore, as the number of clients increases, the proposed error control algorithm recovers most of the lost packets and keeps the unrecoverable packet loss rate around the predefined value (0.1%).

• End-to-end delay: A possible cost of FEC is the error recovery latency, which can increase the end-to-end delay. Figure 5.12 (lower) shows that when the number of clients increases from 1 to 40, the average end-to-end delay remains nearly constant even with FEC. The error control algorithm maintains its end-to-end delay performance as the number of clients increases.

• Throughput: The error control mechanism is relatively independent of the congestion control mechanism, so error recovery should not affect the rate adjustment in RLM-AN. Figure 5.13 plots the total throughput and the effective throughput versus the number of clients; the number of clients has only a slight effect on either. The error control algorithm works well together with the congestion control algorithm and is scalable as the number of clients increases.

Figure 5.13 (a) Total throughput of the first client versus the total number of clients; (b) Effective throughput of the first client versus the total number of clients

Based on the above simulation studies, it can be concluded that the proposed error control algorithm is scalable in that it maintains its performance in terms of packet loss recovery, end-to-end delay and effective throughput.

5.2.2.2 Scalability in terms of tree depth

In the second topology (Figure 5.14), there is only one server and one client, and the number of intermediate active nodes is varied. All the links in the topology have the same properties. The effect of the number of active nodes on the performance of the error control algorithm is investigated.

Figure 5.14 Network topology used to demonstrate the scalability of the error control algorithm (the number of active nodes varies)
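Before looking at the results, it helps to recall why tree depth matters: if losses on successive links are roughly independent, the end-to-end loss without any recovery compounds with the number of hops, whereas hop-by-hop recovery at the active nodes resets the loss at each hop so that only small residuals accumulate. The sketch below illustrates this simple model; it is an idealization assuming independent per-link losses, and the residual value used for the recovered case is illustrative rather than taken from the thesis.

    def end_to_end_loss(per_link_loss, hops):
        """Loss rate over `hops` identical links with independent losses and no
        recovery: a packet survives only if it survives every link."""
        return 1.0 - (1.0 - per_link_loss) ** hops

    def end_to_end_loss_with_hop_recovery(residual_per_hop, hops):
        """Same model when every active node recovers losses on its upstream link
        down to a small residual rate, so only the residuals compound."""
        return 1.0 - (1.0 - residual_per_hop) ** hops

    # 0.5% raw loss per link versus an assumed 0.1% residual after per-hop recovery.
    for hops in (1, 5, 10, 20):
        print(hops,
              round(end_to_end_loss(0.005, hops), 4),
              round(end_to_end_loss_with_hop_recovery(0.001, hops), 4))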
• Packet loss rate: Figure 5.15 shows that without FEC, the packet loss rate increases as the number of intermediate active nodes increases. When the error control mechanism is applied, the packet loss rate drops below 0.2% and does not vary much as the number of active nodes changes. Therefore, the proposed algorithm maintains its packet loss recovery performance as the number of intermediate active nodes increases.

Figure 5.15 (a) Average packet loss rate at the client E versus the number of active nodes (without FEC); (b) Average packet loss rate at the client E versus the number of active nodes (with FEC)

• End-to-end delay: Figure 5.16 shows the end-to-end delay versus the number of intermediate nodes. The delay increases almost linearly with the number of intermediate nodes, with or without FEC, but the percentage increase of the end-to-end delay caused by FEC is almost constant across the simulations with different numbers of active nodes.

Figure 5.16 (a) Average end-to-end delay at the client E versus the number of active nodes; (b) Increase of the end-to-end delay at the client E versus the number of active nodes

• Throughput: Figure 5.17 plots the total throughput versus the number of intermediate active nodes. As the number of intermediate active nodes is varied, the total throughput remains nearly constant, whether or not the error recovery scheme is implemented. Figure 5.17 (lower) compares the effective throughput with and without FEC. Without FEC, the effective throughput decreases as the number of intermediate nodes increases, because the packet loss rate grows with the number of nodes and more packets are discarded due to losses in the lower layers. With FEC, the effective throughput remains almost constant in spite of the increase in the overall packet loss rate. Therefore, the proposed algorithm is not only scalable as the number of intermediate nodes increases, but also protects the lower layers well enough that the effective throughput does not decrease significantly.

Figure 5.17 (a) Average total throughput at the client E versus the number of active nodes; (b) Average effective throughput at the client E versus the number of active nodes

Based on the above simulation studies, it can be concluded that the proposed algorithm is scalable in terms of packet loss recovery, end-to-end delay, total throughput and effective throughput as the number of intermediate active nodes increases. It will work well with network topologies of large depth.

5.2.3 Heterogeneity

In this section, simulations are performed to investigate the performance of the error control mechanism on a network topology with heterogeneous link bandwidths, delays and packet loss rates. Figure 5.18 shows the topology used in the simulation. In this topology, there is only one RLM-AN server.
There are 8 RLM-AN clients, and all the internal routers are set as active nodes. The simulation lasts 300 seconds, both with and without FEC. The packet loss rate, effective throughput and end-to-end delay of each client are compared.

Figure 5.18 Testing topology with heterogeneous clients

• Packet loss rate: Figure 5.19 shows the packet loss rate for each of the clients. Without FEC, the packet loss rates of the 8 clients vary from 2% to 4%. With FEC, the packet loss rates drop below 0.1%; for clients 1, 7 and 8, the packet loss rate over the simulation is almost zero.

• Latency: Figure 5.20 shows the end-to-end delay for the different clients. The deployment of the error recovery mechanism increases the end-to-end delay, but the increase is relatively small and the percentage increase is almost constant across the clients.

• Throughput: Figure 5.21 shows the total throughput and effective throughput for the clients. When FEC is applied, both the total throughput and the effective throughput decrease, but the decrease is not significant.

Figure 5.19 (a) Average packet loss rates for the clients (without FEC); (b) Average packet loss rates for the clients (with FEC)

Figure 5.20 (a) Average end-to-end delay for the clients; (b) Increase of the end-to-end delay for the clients

Figure 5.21 (a) Total throughput for the clients; (b) Effective throughput for the clients

Based on the above simulation studies, it can be concluded that the proposed algorithm works well in network topologies with heterogeneous clients in terms of packet loss recovery, end-to-end delay, total throughput and effective throughput.

5.3 Summary

In this chapter, simulation studies were used to evaluate the performance of the RLM-AN scheme. In Section 5.1, RLM-AN traffic was shown to be compatible with current TCP traffic: RLM-AN is scalable, more TCP-friendly, and quicker to respond to network conditions than the RLM scheme. In Section 5.2, simulation studies were conducted to assess the performance of the error control algorithm. The results demonstrate that RLM-AN decreases the packet loss rate without significantly increasing the end-to-end delay, and that it remains scalable as the number of clients or intermediate active nodes increases. The last set of simulations shows that RLM-AN works well in network topologies with heterogeneous clients.

Chapter 6 Summary and Future Work

This research is aimed at designing a multicast scheme for real-time video streaming using active networks. The focus is to make use of the computation and local storage of the active nodes in order to provide a multicast transmission scheme that works efficiently in heterogeneous networks. In chapter 4, the details of the layered multicast scheme using active networks (RLM-AN) are presented.
An RLM-AN system consists of a video stream server, a set of active nodes, and a number of clients. The active nodes, the video server and the clients form a logical multicast tree. The active nodes inside the network replicate data packets and relay them to their child nodes to make the data transmission efficient.

In addition to the basic data dissemination mechanism, congestion control is also addressed. For multicast schemes, scalability is the most important goal in designing a congestion control algorithm because of the large number of clients. In RLM-AN, a distributed congestion control scheme is proposed. The key idea is that, with the active nodes, the overall logical multicast tree is divided into a set of subsystems with a hierarchical structure. In every subsystem, the sending node is regarded as a pseudo server and the receiving nodes are viewed as pseudo clients. Receiver-driven TCP-friendly congestion control is performed in each of the subsystems: the pseudo client estimates a fair rate, calculates the subscription level and sends it to the pseudo server, which adjusts its transmission rate in response to the feedback from the pseudo clients. The fair rate is calculated with the TCP throughput function in order to guarantee the TCP-friendliness of the rate adjustment. The pseudo server also aggregates the feedback information from its child nodes so that the feedback explosion problem is avoided.

A distributed FEC-based error control algorithm has also been proposed in chapter 4. In this algorithm, a source-coding based FEC method is used to avoid the feedback explosion problem, and end-to-end FEC-based error control is performed in each of the subsystems: the pseudo client estimates the packet loss rate of its upstream link, calculates an error protection level, and feeds this information back to the pseudo server, which adjusts the number of redundant packets sent to the pseudo client accordingly. Moreover, the active node performs local recovery in order to eliminate the error propagation problem. Furthermore, when the maximum error protection level of its child nodes is higher than the error protection level of its upstream link, the active node also generates extra redundant packets.

In order to give extra protection to the lower layers, a priority dropping mechanism is applied together with the error recovery mechanism. Priority dropping is performed at the intermediate active nodes: when an active node detects congestion in the queues of its output interface, it drops packets of higher layers before packets of lower layers.

The proposed scheme is rather flexible. The congestion control and error control functionality is designed to be optional. When an active node is first registered, it simply provides the data dissemination functionality; the congestion control functionality can be invoked when it is considered necessary, and if there is enough memory and the active node has enough computational power, the error control mechanism can be invoked as well.

In chapter 5, simulations are performed in NS-2 to investigate the performance of the congestion control and error control mechanisms of RLM-AN. Two main criteria are used to assess the performance of the congestion control scheme: scalability and TCP-friendliness.
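TCP-friendliness comes from the fair-rate computation summarized above: each pseudo client converts its measured loss rate and round-trip time into a TCP-equivalent rate. The thesis relies on the TCP throughput model of Padhye et al. [44] for this; the sketch below uses the widely quoted form of that model, but the exact expression and parameter choices of chapter 4 (equation 4.15's companion formulas) are not reproduced in this excerpt, and the subscription-level helper is purely illustrative.

    from math import sqrt

    def tcp_friendly_rate(packet_size, rtt, loss_rate, rto=None, b=1):
        """Fair sending rate in bytes/s from the TCP throughput model of
        Padhye et al. packet_size in bytes, rtt and rto in seconds,
        loss_rate in [0, 1], b = packets acknowledged per ACK."""
        if loss_rate <= 0:
            return float("inf")    # no observed loss: no equation-based limit
        rto = rto if rto is not None else 4 * rtt
        term1 = rtt * sqrt(2 * b * loss_rate / 3)
        term2 = (rto * min(1.0, 3 * sqrt(3 * b * loss_rate / 8))
                 * loss_rate * (1 + 32 * loss_rate ** 2))
        return packet_size / (term1 + term2)

    def subscription_level(fair_rate, cumulative_layer_rates):
        """Highest cumulative layer whose aggregate rate fits under the fair
        rate; this is the level the pseudo client would report upstream."""
        level = 0
        for i, rate in enumerate(cumulative_layer_rates, start=1):
            if rate <= fair_rate:
                level = i
        return level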
Simulations are performed on a well-known single-bottleneck topology in which a number of TCP flows share the bottleneck link with the same number of RLM-AN flows. The results show that an RLM-AN flow achieves an average throughput similar to that of a TCP flow. The RLM-AN flow is therefore compatible with TCP traffic, and the scheme is scalable since its TCP-friendliness is not affected by the number of clients. In order to show the improvement brought by the congestion control of RLM-AN, it is compared with RLM. Simulation results demonstrate that RLM-AN flows are more TCP-friendly, quicker to respond to network conditions, and smoother in throughput.

Section 5.2 is focused on the performance assessment of the error control algorithm. The criteria used in the evaluation are the packet loss rate after error recovery, the end-to-end delay, and the effective throughput. Simulations on a simple topology show that the error recovery mechanism can greatly reduce the packet loss rate without significantly increasing the end-to-end delay. In the following simulations, the number of clients or active nodes is varied, and the proposed algorithm maintains its packet loss reduction performance. Furthermore, simulations are conducted on a topology with heterogeneous clients, and the results show that the proposed error control algorithm works well in heterogeneous networks.

6.1 Future Work

In error control, the disadvantages of the FEC-based method are the bandwidth used to transmit redundant packets and the computation needed to recover lost packets. In situations where the packet loss rate is low, the FEC-based method can be inefficient, and a NACK-based method is more suitable because of its low processing overhead. Therefore, a combined FEC and NACK based error control scheme could be provided, in which the method to use depends on the packet loss rate and the round trip time: if the round trip time is small and the packet loss rate is low, the NACK-based method is used; otherwise, the FEC-based method is used. A minimal sketch of such a policy is given at the end of this section.

In this research, simulations have been performed to evaluate the congestion control and error control mechanisms in different network topologies. However, the simulations are by no means comprehensive; further research can be done on more complicated network topologies. Furthermore, the simulations contain only one RLM-AN session, so further simulations could be run with multiple coexisting RLM-AN sessions in order to investigate the interaction between RLM-AN flows. Besides, a network protocol cannot be deployed before it has been widely tested in real networks. Therefore, the proposed scheme could be implemented on active network platforms such as ANTS [60] and its performance tested under real network conditions.

This work provides a generic scheme of multicast using active networks, and the scheme can be applied to other applications with modification. For example, the functionality of the active node can be implemented in a proxy server in the server machine of an ISP. This proxy is viewed as a pseudo server by the client application of the ISP subscriber and as a pseudo client by the video streaming server. The idea of distributed congestion and error control can also be applied to other applications such as collaborative video games.

In addition to congestion control and error control, the intelligence of the active nodes can be applied to provide better QoS support for RLM-AN. New algorithms can be added to the functionality of the active node to support differentiated services or integrated services.
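To make the hybrid FEC/NACK policy proposed earlier in this section concrete, the following sketch shows one possible decision rule. The thresholds and names are hypothetical and would have to be tuned; the thesis does not specify them.

    def choose_error_control(loss_rate, rtt, playout_budget, loss_threshold=0.01):
        """Pick between NACK-based retransmission and FEC for one subsystem.
        A retransmission is only useful if an extra round trip still fits in the
        receiver's playout budget (all values here are illustrative)."""
        retransmission_fits = rtt < playout_budget / 2
        if loss_rate <= loss_threshold and retransmission_fits:
            return "NACK"   # rare losses, cheap to repair on demand
        return "FEC"        # frequent losses or tight deadline: send redundancy

    # Example: 0.2% loss and a 20 ms RTT with a 200 ms playout budget -> "NACK".
    print(choose_error_control(0.002, 0.020, 0.200))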
References

[1] H. Akamine, N. N. Wakamiya, and H. Miyahara, "Heterogeneous Video Multicast in an Active Network," IEICE Trans. Commun., vol. E84-B, no. 1, Jan. 2001.
[2] M. Allman, V. Paxson, and W. Stevens, "TCP Congestion Control," Internet RFC 2581, April 1999.
[3] S. Armstrong, A. Freier, and K. Marzullo, "Multicast Transport Protocol," Internet RFC 1301, Feb. 1992.
[4] J. C. Bolot, S. F. Parisis, and D. Towsley, "Adaptive FEC-Based Error Control for Internet Telephony," In Proc. of IEEE INFOCOM'99, New York City, NY, USA, March 1999, vol. 3, pp. 1453-1460.
[5] J. Byers, M. Luby, M. Mitzenmacher, and A. Rege, "A Digital Fountain Approach to Reliable Distribution of Bulk Data," In Proc. of ACM SIGCOMM'98, Vancouver, BC, Canada, October 1998.
[6] J. Byers, M. Luby, and M. Mitzenmacher, "Accessing Multiple Mirror Sites in Parallel: Using Tornado Codes to Speed Up Downloads," In Proc. of IEEE INFOCOM, New York City, NY, USA, March 1999, pp. 275-283.
[7] J. Byers, M. Frumin, G. Horn, M. Luby, M. Mitzenmacher, A. Roetter, and W. Shaver, "FLID-DL: Congestion Control for Layered Multicast," In Proc. of the Second Int'l Workshop on Networked Group Communication (NGC 2000), Palo Alto, CA, USA, Nov. 2000.
[8] J. Byers, M. Luby, and M. Mitzenmacher, "Fine-Grained Layered Multicast," In Proc. of IEEE INFOCOM 2001, Anchorage, AK, USA, Apr. 2001.
[9] H. J. Chao and X. Guo, Quality of Service Control in High-Speed Networks, Wiley-Interscience, 2002.
[10] Y. Chawathe, S. McCanne, and E. A. Brewer, "RMX: Reliable Multicast for Heterogeneous Networks," In Proc. of IEEE INFOCOM 2000, Tel Aviv, Israel, March 2000.
[11] L. Cheng and M. R. Ito, "Receiver-driven Layered Multicast with TCP-friendly Congestion Control using Active Networks," to appear in Proc. of the 10th International Conf. on Telecommunications (ICT 2003), Tahiti, Papeete, French Polynesia, Feb. 2003.
[12] L. Cheng and M. R. Ito, "Receiver-driven Layered Multicast using Active Networks," submitted to the 2003 IEEE International Conference on Multimedia and Expo, Baltimore, MD, USA, July 2003.
[13] P. A. Chou, A. E. Mohr, A. Wang, and S. Mehrotra, "Error Control for Receiver-Driven Layered Multicast of Audio and Video," IEEE Trans. Multimedia, vol. 3, pp. 108-122, March 2001.
[14] J. Chung and M. Claypool, "NS by Example," http://nile.wpi.edu/NS/.
[15] K. Claffy and G. Miller, "The Nature of the Beast: Recent Traffic Measurements from an Internet Backbone," In Proc. of INET'98, Geneva, Switzerland, July 1998.
[16] S. Deering, "Host Extensions for IP Multicasting," Internet RFC 1112, August 1989.
[17] T. Faber, "ACC: Using Active Networking to Enhance Feedback Congestion Control Mechanisms," IEEE Network (Special Issue on Active and Programmable Networks), pp. 61-65, May/June 1998.
[18] K. Fall and K. Varadhan, "The NS Manual," 2002.
[19] S. Floyd, V. Jacobson, S. McCanne, C. Liu, and L. Zhang, "A Reliable Multicast Framework for Light-weight Sessions and Application Level Framing," In Proc. of ACM SIGCOMM'95, New York City, NY, USA, 1995, pp. 342-356.
[20] S. Floyd, M. Handley, J. Padhye, and J. Widmer, "Equation-based Congestion Control for Unicast Applications," In Proc. of ACM SIGCOMM 2000.
[21] P. Ge and P. K. McKinley, "Comparisons of Error Control Techniques for Wireless Video Multicasting," In Proc. of the 21st IEEE International Performance, Computing, and Communications Conference, Phoenix, AZ, USA, April 2002.
[22] S. J. Golestani and K. K. Sabnani, "Fundamental Observations on Multicast Congestion Control in the Internet," In Proc. of IEEE INFOCOM'99, March 1999.
[23] R. Gopalakrishnan, J. Griffioen, G. Hjalmtysson, C. J. Sreenan, and S. Wen, "A Simple Loss Differentiation Approach to Layered Multicast," In Proc. of IEEE INFOCOM 2000.
[24] M. Greis, "Tutorial for the Network Simulator," http://www.isi.edu/nsnam/ns/tutorial/index.html.
[25] B. Hochwald and K. Zeger, "Tradeoff between Source and Channel Coding," IEEE Trans. Inform. Theory, vol. 43, pp. 1412-1424, Sept. 1997.
[26] S. Kasera, S. Bhattacharyya, M. Keaton, D. Kiwior, J. Kurose, D. Towsley, and S. Zabele, "Scalable Fair Reliable Multicast Using Active Services," IEEE Network Magazine (Special Issue on Multicast), Jan./Feb. 2000.
[27] B. J. Kim and W. A. Perlman, "An Embedded Wavelet Video Coder Using Three-dimensional Set Partitioning in Hierarchical Trees (SPIHT)," In Proc. of the Data Compression Conference, Snowbird, USA, March 1999.
[28] T. Kim and M. H. Ammar, "A Comparison of Layering and Stream Replication Video Multicast Schemes," In Proc. of the 11th International Workshop on Network and Operating Systems Support for Digital Audio and Video, Port Jefferson, New York, USA, 2001.
[29] A. Koifman and S. Zabele, "RAMP: A Reliable Adaptive Multicast Protocol," In Proc. of IEEE INFOCOM'96, San Francisco, CA, USA, Mar. 1996, pp. 1442-1451.
[30] K. W. Lee, S. Ha, and V. Bharghavan, "IRMA: A Reliable Multicast Architecture for the Internet," In Proc. of IEEE INFOCOM'99, New York, USA, Mar. 1999.
[31] A. Legout and E. W. Biersack, "PLM: Fast Convergence for Cumulative Layered Multicast Transmission Schemes," In Proc. of ACM SIGMETRICS 2000, Santa Clara, CA, USA, June 2000.
[32] K. Lidl, J. Osborne, and J. Malcolm, "Drinking from the Firehose: Multicast USENET News," In Proc. of the USENIX Winter Conference, San Francisco, CA, USA, January 1994, pp. 33-45.
[33] J. C. Lin and S. Paul, "RMTP: A Reliable Multicast Transport Protocol," In Proc. of IEEE INFOCOM'96, San Francisco, CA, USA, Mar. 1996, pp. 1414-1424.
[34] A. Lo, "Wide Area Network Video Stream Multicast using Active Network," M.A.Sc. Thesis, Department of Electrical & Computer Engineering, University of British Columbia, August 2001.
[35] J. Macker and W. Dang, "The Multicast Dissemination Protocol (MDP) Framework," Internet draft, IETF, draft-macker-mdp-framework-00.txt, Nov. 1996.
[36] A. Mankin, A. Romanow, S. Bradner, and V. Paxson, "IETF Criteria for Evaluating Reliable Multicast Transport and Application Protocols," Internet RFC 2357, June 1998.
[37] S. McCanne, V. Jacobson, and M. Vetterli, "Receiver-driven Layered Multicast," In Proc. of ACM SIGCOMM, pp. 117-130, Palo Alto, CA, USA, Aug. 1996.
[38] S. McCanne and M. Vetterli, "Low-Complexity Video Coding for Receiver-driven Layered Multicast," IEEE Journal on Selected Areas in Communications, vol. 15, no. 6, pp. 983-1001, Aug. 1997.
[39] K. Miller et al., "Starburst Multicast File Transfer Protocol (MFTP) Specification," Internet draft, IETF, draft-miller-mftp-spec-02.txt, Jan. 1997.
[40] Q. Ni, Q. Zhang, and W. Zhu, "SARLM: Sender-adaptive and Receiver-driven Layered Multicasting for Scalable Video," In Proc. of the IEEE International Conference on Multimedia and Expo (ICME 2001), Aug. 2001.
[41] K. Obraczka, "Multicast Transport Protocols: A Survey and Taxonomy," IEEE Communications Magazine, pp. 1-15, Jan. 1998.
[42] J. Padhye, J. Kurose, D. Towsley, and R. Koodli, "A Model Based TCP-Friendly Rate Control Protocol," In Proc. of the Ninth International Workshop on Network and Operating Systems Support for Digital Audio and Video (NOSSDAV'99), July 1999.
[43] J. Padhye, V. Firoiu, D. Towsley, and J. Kurose, "Modeling TCP Reno Performance: A Simple Model and Its Empirical Validation," IEEE/ACM Transactions on Networking, vol. 8, no. 2, pp. 133-145, April 2000.
[44] J. Padhye, V. Firoiu, D. Towsley, and J. Kurose, "Modeling TCP Throughput: A Simple Model and its Empirical Validation," In Proc. of ACM SIGCOMM'98, August 1998, pp. 303-314.
[45] I. Rhee, N. Balaguru, and G. Rouskas, "MTCP: Scalable TCP-like Congestion Control for Reliable Multicast," In Proc. of IEEE INFOCOM, March 1999, vol. 3, pp. 1265-1273.
[46] L. Rizzo, "pgmcc: A TCP-friendly Single-rate Multicast Congestion Control Scheme," In Proc. of ACM SIGCOMM, pp. 17-28, Stockholm, Sweden, August 2000.
[47] K. Savetz, N. Randall, and Y. Lepage, MBONE: Multicasting Tomorrow's Internet, Hungry Minds, Inc., March 1996.
[48] H. Schulzrinne, S. Casner, R. Frederick, and V. Jacobson, "RTP: A Transport Protocol for Real-Time Applications," Internet RFC 1889, Jan. 1996.
[49] C. Semeria and T. Maufer, "Introduction to IP Multicasting Routing," http://suresh_kr.tripod.com/ipmulticast.html.
[50] D. Sisalem and A. Wolisz, "MLDA: A TCP-friendly Congestion Control Framework for Heterogeneous Multicast Environments," In Proc. of the 8th International Workshop on Quality of Service (IWQoS 2000), Pittsburgh, USA, June 2000.
[51] D. Sisalem and A. Wolisz, "TCP-Friendly Congestion Control for Multimedia Communication in the Internet," PhD thesis, Technical University of Berlin, Berlin, October 2000.
[52] T. Strayer, XTP Web page, http://www.ca.sandia.gov/xtp/xtp.html.
[53] A. S. Tanenbaum, Computer Networks, Third Edition, Prentice-Hall International, 1996.
[54] D. L. Tennenhouse, J. M. Smith, W. D. Sincoskie, D. J. Wetherall, and G. J. Minden, "A Survey of Active Network Research," IEEE Communications Magazine, vol. 35, no. 1, pp. 80-86, January 1997.
[55] K. Thompson, G. Miller, and R. Wilder, "Wide-area Internet Traffic Patterns and Characteristics," IEEE Network, vol. 11, no. 6, Nov./Dec. 1997.
[56] L. Vicisano, J. Crowcroft, and L. Rizzo, "TCP-like Congestion Control for Layered Multicast Data Transfer," In Proc. of IEEE INFOCOM, vol. 3, pp. 996-1003, March 1998.
[57] H. A. Wang and M. Schwartz, "Achieving Bounded Fairness for Multicast and TCP Traffic in the Internet," In Proc. of ACM SIGCOMM, 1998.
[58] Y. Wang and Q. F. Zhu, "Error Control and Concealment for Video Communication: A Review," Proceedings of the IEEE, vol. 86, no. 5, pp. 974-997, May 1998.
[59] B. Whetten, T. Montgomery, and S. Kaplan, "A High Performance Totally Ordered Multicast Protocol," in Theory and Practice in Distributed Systems, Springer Verlag, LNCS 938.
[60] D. Wetherall, J. Guttag, and D. L. Tennenhouse, "ANTS: A Toolkit for Building and Dynamically Deploying Network Protocols," In Proc. of IEEE OPENARCH'98, San Francisco, CA, USA, Apr. 1998.
[61] D. Wetherall, "Service Introduction in an Active Network," Ph.D. Thesis, Massachusetts Institute of Technology, 1999.
[62] S. Wicker, Error Control Systems for Digital Communication and Storage, Prentice-Hall, 1995.
[63] J. Widmer, R. Denda, and M. Mauve, "A Survey on TCP-Friendly Congestion Control (extended version)," Tech. Rep. TR-2001-002, Department of Mathematics and Computer Science, University of Mannheim, Feb. 2001.
[64] J. Widmer and M. Handley, "Extending Equation-based Congestion Control to Multicast Applications," Technical Report TR 13-2001, Praktische Informatik IV, University of Mannheim, Germany, May 2001.
[65] D. Wu, Y. T. Hou, W. Zhu, Y. Q. Zhang, and J. M. Peha, "Streaming Video over the Internet: Approach and Directions," IEEE Trans. on Circuits and Systems for Video Technology, vol. 11, no. 3, pp. 282-300, March 2001.
[66] L. Yamanoto and G. Leduc, "Adaptive Applications over Active Networks: Case Study on Layered Multicast," In Proc. of the First IEEE European Conference on Universal Multiservice Networks (ECUMN 2000), Colmar, France, Oct. 2000.
[67] R. Yang and S. S. Lam, "Internet Multicast Congestion Control: A Survey," In Proc. of ICT 2000, Acapulco, Mexico, May 2000.
[68] Q. Zhang, G. Wang, W. Zhu, and Y. Q. Zhang, "Robust Scalable Video Streaming over Internet with Network Adaptive Congestion Control and Unequal Loss Protection," In Proc. of Packet Video, May 2000.
[69] Z. C. Zhang and V. O. K. Li, "Router-Assisted Layered Multicast," In Proc. of the IEEE International Conference on Communications, vol. 4, pp. 2657-2661, 2002.
[70] "Multicast Routing," Cisco, http://www.cisco.com/warp/public/614/17.html.

Glossary of Acronyms

ACK       Acknowledgement
BGP       Border Gateway Protocol
FEC       Forward Error Correction
FLID-DL   Fair Layered Increase/Decrease with Dynamic Layering
ICMP      Internet Control Message Protocol
IP        Internet Protocol
MBONE     Multicast Backbone
MDP       Multicast Dissemination Protocol
MFTP      Multicast File Transfer Protocol
MLDA      Multicast Loss-Delay Based Adaptation Algorithm
MTP       Multicast Transport Protocol
NACK      Non-acknowledgement
OSPF      Open Shortest Path First
QoS       Quality of Service
RAMP      Reliable Adaptive Multicast Protocol
RCL       Receiver-driven Layered Congestion Control
RFC       Request for Comments
RLM       Receiver-driven Layered Multicast
RLM-AN    Receiver-driven Layered Multicast using Active Networks
RMP       Reliable Multicast Protocol
RMTP      Reliable Multicast Transport Protocol
RTP       Real-time Transport Protocol
SFEC      Source-coding Based Forward Error Correction
SPIHT     Set Partitioning in Hierarchical Trees
SRM       Scalable Reliable Multicast
TCP       Transmission Control Protocol
UDP       User Datagram Protocol
XTP       Xpress Transport Protocol
