Enhancing Quality of Service for Video Streaming over IP Networks

by

Yan Bai

M.Sc., Sam Houston State University, USA, 1997

A THESIS SUBMITTED IN PARTIAL FULFILMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY in THE FACULTY OF GRADUATE STUDIES (Department of Electrical and Computer Engineering)

We accept this thesis as conforming to the required standard

THE UNIVERSITY OF BRITISH COLUMBIA
July 2003

© Yan Bai, 2003

In presenting this thesis in partial fulfilment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the head of my department or by his or her representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

Department of Electrical and Computer Engineering
The University of British Columbia
Vancouver, Canada

ABSTRACT

Streaming video over IP networks has become increasingly popular; however, compared to traditional data traffic, video streaming places different demands on Quality of Service (QoS) in a network, particularly in terms of delay, delay variation, and data loss. In response to the QoS demands of video applications, network techniques have been proposed to provide QoS within a network. Unfortunately, while efficient from a network perspective, most existing solutions have not provided end-to-end QoS that is satisfactory to users. This dissertation studies in-network QoS control, with the goal of delivering excellent user-perceived QoS for video over IP networks. To this end, the study addresses two critical questions: how should perceived QoS be quantified, and which aspects should be considered when developing in-network control to enhance the perceived QoS?

An Active QoS control framework that uses active networking technology was developed. This framework consists of a QoS description model, a set of nodal-based QoS control techniques, and a set of QoS distribution strategies. The QoS description model quantifies the perceived QoS. The QoS description for an application reflects the quality perceived by application users and can be directly converted to network-controllable parameters. On the other hand, the QoS description for a network is based on users' satisfaction with the performance of an application. This model provides efficient support for achieving perceived QoS: the application quality perceived by end users can be explicitly managed from within the network. The nodal-based QoS control is composed of buffer management and packet scheduling schemes. Both schemes take into account the relationship between video-specific characteristics, required network resources, and the resulting video quality. Buffer management and packet scheduling increase the quality of each video, provide an equitable share of service between competing videos, and improve the network efficiency locally at each node. A set of QoS distribution strategies is used for inter-node adaptation. These dynamically adjust local loss constraints in response to network conditions in order to maintain an end-to-end loss requirement.
Using QoS distribution and packet scheduling together increases the probability of meeting end-to-end QoS. Simulation results demonstrate the advantages of the buffer management and packet scheduling schemes.

TABLE OF CONTENTS

Abstract
Table of Contents
List of Tables
List of Figures
Glossary
Acknowledgements
CHAPTER I  INTRODUCTION
  1.1 Motivation
  1.2 Thesis Statement and Contributions
  1.3 Outline of This Thesis
CHAPTER II  ACTIVE NETWORKS
  2.1 Overview of Active Networks
  2.2 Active Networks and their Application
CHAPTER III  ACTIVE QOS CONTROL FRAMEWORK
  3.1 Application Requirements and Network Services
  3.2 System Overview
  3.3 Proposed QoS Description Model
    3.3.1 Background
    3.3.2 Joint Application-network QoS Description Model
CHAPTER IV  BUFFER MANAGEMENT
  4.1 Background
  4.2 Mechanism Scenarios
    4.2.1 Single Video Source - Frame-level Discard with Dynamic Thresholds (FDDT)
    4.2.2 Multiple Video Sources - Loss-based Buffer Sharing with Frame-level Packet Discard (LBS_FPD)
  4.3 Performance Evaluation of Proposed Buffer Management Schemes
    4.3.1 The Single Video Source Case
      4.3.1.1 Effectiveness of FDDT
      4.3.1.2 Sensitivity of FDDT to Parameters and Traffic Patterns
    4.3.2 The Multiple Video Sources Case
      4.3.2.1 Effectiveness of LBS_FPD
      4.3.2.2 Impact of Threshold on LBS_FPD
      4.3.2.3 Effect of Diverse Video Sources
  4.4 Chapter Summary
CHAPTER V  PACKET SCHEDULING
  5.1 Background
  5.2 Proposed Packet Scheduling Scheme
    5.2.1 Single-class - Multilevel Priority Adaptive Packet Scheduling (MPAPS)
    5.2.2 Multi-class - Dynamic Multi-priority Packet Scheduling (DMPS)
  5.3 Performance Evaluation of Packet Scheduling Schemes
    5.3.1 Performance Evaluation of MPAPS
      5.3.1.1 Effectiveness of MPAPS
      5.3.1.2 Effect of Sources on MPAPS
    5.3.2 Performance Evaluation of DMPS
      5.3.2.1 Effectiveness of DMPS
      5.3.2.2 Impact of the Choice of Classes on DMPS
  5.4 Chapter Summary
CHAPTER VI  END-TO-END LOSS DISTRIBUTION
  6.1 The Multi-hop QoS Problem: Some Insights
  6.2 Approaches for Loss Distribution Across a Multi-hop Path
    6.2.1 Equal Allocation (EA)
    6.2.2 Excess Move (Ex_Move)
    6.2.3 Excess Even (Ex_Even)
    6.2.4 Excess Adaptation (Ex_A)
    6.2.5 Excess Proportion (Ex_P)
  6.3 Performance Evaluation of Nodal Loss Allocation Strategies
    6.3.1 Effectiveness of Coordinated MPAPS Schemes
      6.3.1.1 C-MPAPS with EA
      6.3.1.2 C-MPAPS with Ex_Move
      6.3.1.3 C-MPAPS with Ex_Even
      6.3.1.4 C-MPAPS with Ex_A
      6.3.1.5 C-MPAPS with Ex_P
    6.3.2 Performance Evaluation of Coordinated DMPS Schemes
      6.3.2.1 C-DMPS with EA
      6.3.2.2 C-DMPS with Ex_Move
      6.3.2.3 C-DMPS with Ex_Even
      6.3.2.4 C-DMPS with Ex_A
      6.3.2.5 C-DMPS with Ex_P
    6.3.3 Comparison of Local Loss Allocation Schemes
  6.4 Cost of Coordinated Packet Scheduling
  6.5 Chapter Summary
CHAPTER VII  CONCLUSIONS AND FUTURE WORK
  7.1 Conclusions
  7.2 Future Research
BIBLIOGRAPHY

LIST OF TABLES

2-1 A comparison of in-network QoS control approaches
4-1 FDDT performance with different thresholds
4-2 FDDT performance with diverse videos
5-1 MPEG-4 video statistics
5-2 Comparison of S_inter between DMPS and FCFS
5-3 Impact of varying number of video distribution between classes on DMPS
5-4 Impact of varying loss distribution between classes on DMPS
5-5 Joint impact of varying number of video distribution between classes and varying number of classes

LIST OF FIGURES

1-1 A summary of QoS control for video communication
1-2 An overview of new QoS control development
2-1 Taxonomy of AN-based QoS control for networked video
3-1 An overview of the A-QoS framework
3-2 An overview of the joint application-network QoS description model
4-1 MPEG video structure
4-2 The effect of packet loss on MPEG-1 video
4-3 Simulation diagram for FDDT and LBS_FPD
4-4 Network model for simulation of FDDT and LBS_FPD
4-5 Traffic distribution of MPEG-1 video traces
4-6 Difference in FER between FDDT and DT
4-7 Comparison of PLR(I) and PLR(P) between FDDT and DT
4-8 Difference in ET between FDDT and DT
4-9 Performance comparison between LBS_FPD and DT
4-10 Impact of threshold settings on LBS_FPD with N=6
4-11 Impact of threshold settings on LBS_FPD with different numbers of videos
4-12 Difference in FER between LBS_FPD and DT with varying number of videos
4-13 Difference in FI between LBS_FPD and DT with varying number of videos
4-14 Difference in ET between LBS_FPD and DT with varying number of videos
4-15 Impact of diverse video sources on LBS_FPD
4-16 Impact of diverse QoS requirements on LBS_FPD
5-1 The MPAPS scheme
5-2 An example of how the MPAPS works
5-3 The DMPS scheme
5-4 Simulation diagram for MPAPS and DMPS
5-5 Network model for simulation of MPAPS and DMPS
5-6 Performance comparison between MPAPS and FCFS for the "Asterix" trace
5-7 Performance comparison between MPAPS and FCFS for the "Soccer" trace
5-8 Performance comparison between MPAPS and FCFS for the "Talk" trace
5-9 Difference in FER between MPAPS and FCFS with varying number of videos
5-10 Difference in ET between MPAPS and FCFS with varying number of videos
5-11 Comparison of S_inter between MPAPS and FCFS with varying number of videos
5-12 Difference in FI between MPAPS and FCFS with varying number of videos
5-13 Difference in FER between MPAPS and FCFS with diverse loss requirements
5-14 Difference in ET between MPAPS and FCFS with diverse loss requirements
5-15 Comparison of S_inter between MPAPS and FCFS with diverse number of videos
5-16 Difference in FI between MPAPS and FCFS with diverse loss requirements
5-17 Difference in FER between DMPS and FCFS
5-18 Difference in ET between DMPS and FCFS
5-19 Difference in FI between DMPS and FCFS
6-1 The simulated network for C-MPAPS and C-DMPS
6-2 Performance comparison between C-MPAPS with EA and U-FCFS
6-3 Performance comparison between C-MPAPS with Ex_Move and U-FCFS
6-4 Comparison of packet loss at each node between C-MPAPS with Ex_Move and U-FCFS
6-5 Performance comparison between C-MPAPS with Ex_Even and U-FCFS
6-6 Packet loss of C-MPAPS with Ex_Even at individual nodes
6-7 Performance comparison between C-MPAPS with Ex_A and U-FCFS
6-8 Packet loss of C-MPAPS with Ex_A at individual nodes
6-9 Performance comparison between C-MPAPS with Ex_P and U-FCFS
6-10 Packet loss of C-MPAPS with Ex_P at individual nodes
6-11 Performance comparison between C-DMPS with EA and U-FCFS
6-12 Performance comparison between C-DMPS with Ex_Move and U-FCFS
6-13 Comparison of packet loss at each node between C-DMPS with Ex_Move and U-FCFS
6-14 Performance comparison between C-DMPS with Ex_Even and U-FCFS
6-15 Packet loss of C-DMPS with Ex_Even at individual nodes
6-16 Performance comparison between C-DMPS with Ex_A and U-FCFS
6-17 Packet loss of C-DMPS with Ex_A at individual nodes
6-18 Performance comparison between C-DMPS with Ex_P and U-FCFS
6-19 Packet loss of C-DMPS with Ex_P at individual nodes
6-20 Comparison of local loss allocation schemes

GLOSSARY

AN        Active Networking
API       Adaptive Priority Index
A-QoS     Active Quality of Service
C-MPAPS   Coordinated Multilevel Priority Adaptive Packet Scheduling
C-DMPS    Coordinated Dynamic Multi-priority Packet Scheduling
DMPS      Dynamic Multi-priority Packet Scheduling
DS        Differentiated Services
DT        Drop Tail
EA        Equal Allocation
ET        Effective Throughput
Ex_Move   Excess Move
Ex_Even   Excess Even
Ex_A      Excess Adaptation
Ex_P      Excess Proportion
FCFS      First-come-first-serve
FDDT      Frame-level Discard with Dynamic Thresholds
FEC       Forward Error Correction
FER       Frame Error Rate
FI        Fairness Index
GOP       Group of Pictures
GOV       Group of VOPs
InType    Incoming Packet Type
IP        Internet Protocol
IS        Integrated Services
LBS_FPD   Loss-based Buffer Sharing with Frame-level Packet Discard
MPAPS     Multilevel Priority Adaptive Packet Scheduling
MPEG      Moving Picture Experts Group
PLR       Packet Loss Rate
QoS       Quality of Service
RED       Random Early Drop
RTP       Real-time Transport Protocol
SNR       Signal-to-Noise Ratio
SL        Service Level
TCP       Transmission Control Protocol
U-FCFS    Uncoordinated First-come-first-serve Packet Scheduling
UDP       User Datagram Protocol
VOD       Video-on-Demand
VOP       Video Object Plane

ACKNOWLEDGEMENTS

I owe a hearty thank you to my supervisor, Professor Mabo Robert Ito, for his invaluable guidance and inspiration throughout my thesis work. In addition to his support, I would like to thank him for his patience during my periods of slow progress. I am grateful that I had the opportunity to learn from him as I conducted research and wrote research papers. I would also like to thank the members of my thesis committee, Professor Hussein Alnuweiri, Professor Takis Mathiopoulos and Professor Steven Wilton. Their comments have enhanced my research in innumerable ways. I am also very grateful to my university examiners, Professor Son Vuong and Professor Norman Hutchinson, and my external examiner, Professor Bharat Bhargava, for reading and supporting my work. I feel deep gratitude towards my parents, who raised me, educated me, and motivated me to pursue excellence in my career. It was their unconditional love and support that accompanied me as I walked through the difficult times in my life. Most of all, I am deeply indebted to my husband James for his love and understanding during my graduate years. He was always behind me and gave me his unconditional support. Last but not least, thank you to our lovely son Bill for the joy and happiness that he brings to both of us.

Chapter 1
Introduction

1.1 Motivation

Technological advances in video coding, computers, and computer communications have made networked video increasingly popular.
Examples of such applications include Video Mail, Video Conferencing, and Video-on-Demand (VOD). Of these applications, VOD is immensely attractive due to its great market value. By its nature, VOD places different demands on Quality of Service (QoS) in a network.

Firstly, VOD requires bounded delay and delay variation. Video must be presented at a certain frame rate, and video data that does not arrive in time to be displayed at the receiver (a late loss) causes the playback to pause and lose continuity; the screen may appear blank or the picture may freeze [31]. The variation of end-to-end delay, also called jitter, is another critical video quality issue. The predominant problem caused by jitter is colour distortion such as blurring [1].

Secondly, VOD imposes stringent restrictions on the amount and distribution of network loss that can occur, because data loss can severely damage the visual quality of video. Typically, compressed video used in a VOD system employs predictive coding techniques. Errors in a picture due to data loss tend to propagate, and the effects last for several video frames [16]. For example, if an error occurs in an I-frame in MPEG video, over ten video frames can be corrupted, leading to a drastic degradation in the presentation quality of the video. Since VOD is an application that transports stored video rather than interactive video, timely information exchange is less crucial, and a certain amount of delay can be tolerated. However, data loss requirements are more stringent for VOD applications.

The IP-based Internet has very limited ability to respond to the QoS needs of VOD applications. The Internet was initially designed for the transport of data without any knowledge of data content: it does not differentiate between data and therefore provides no guarantee of delivery time or reliability. In response to the QoS demands of video applications, two approaches have been developed: the end-system-based and the network-based approach (Figure 1-1). Generally, end-system-based approaches control visual errors in applications. They aim at minimizing the visual impact of network loss and delay on video quality, and they are performed on the sender's or receiver's side. On the other hand, network-based approaches control network congestion; they emphasize reducing data loss and delay in the transmission path. End-system-based mechanisms include Forward Error Correction (FEC) [9-12], Layer Coding [14], Bandwidth Adaptation [29], Rate Control [28], Packetization [29], and Error Concealment [27, 30-31]. The end-system-based approaches employ various methods of data management to meet QoS demands:

• FEC adds redundancy to transmitted data to allow reconstruction of the lost data at the receiving end. Two types of FEC exist: one is independent of the contents of the stream, while the other uses knowledge of the stream and adds redundancy to the more important portions of the bit stream. Usually, FEC does not adequately handle packet losses that occur in bursts [13].
• Layer Coding separates video data into different layers. A base layer provides a lower quality image, and additional layers improve image quality. Since the transmission of layered video requires support from the underlying network, it is not expected to be available in the current Internet.
• Bandwidth Adaptation adapts the output flow rate of a video stream to the available reception rate at the client and is often carried out dynamically on a real-time encoder. Bandwidth adaptation can be used for the transport of live video, but it is not appropriate for stored video.
• Rate Control matches the sending or receiving rate of video streams to the available network bandwidth. This technique is associated with TCP-based transmission rather than the UDP-based transmission typically used for video; therefore, rate control is often unsuitable for video transport.
• Packetization reduces the effect of loss by altering the packet size or the transmission order of originally adjacent data; however, issues such as delay and network efficiency have not yet been completely solved.
• Error Concealment conceals lost data to make the presentation as appealing to the human eye as possible. While it is suitable in an environment where packet loss is relatively rare, error concealment is inadequate in an environment with frequent and bursty packet losses [2].

In general, end-system-based approaches do not offer an adequate solution to the QoS problems of video applications.

[Figure 1-1. A summary of QoS control for video communication. End-system-based approaches: sender's side (FEC, layer coding, bandwidth adaptation, rate control, packetization) and client's side (error concealment). Network-based approaches: link-layer (IP switching) and network-layer (Integrated Services, Differentiated Services, Active Networking).]

Network-based approaches can be divided into those dependent on link-layer infrastructure equipment and those based on new IP architectures. Presently, IP switching technology is used in the link-layer approach. This technology takes advantage of the high bandwidth and low latency of switching to transport a packet as quickly as possible through a network. Techniques include Ipsilon's IP Switching, Cisco's Tag Switching, IBM's ARIS (Aggregate Route-based IP Switching), Toshiba's CSR (Cell Switch Router) and MPLS (Multiprotocol Label Switching). These techniques provide link-level QoS at the expense of complexity. The architecture category provides network-level QoS assurances. As shown in Figure 1-1, one proposal in this category is Integrated Services (IS) [32-33]. IS supports rate-based service guarantees and throughput fairness based on per-flow, end-to-end resource reservation. However, IS has several drawbacks for video traffic. Since there is no straightforward relationship between perceived video quality and bit rate, IS cannot guarantee that quality meets users' expectations. Due to inefficient resource reservation for bandwidth-intensive video, IS also decreases video quality and network utilization, making it difficult to stream video in a reliable, scalable, and fair manner to a potentially large number of concurrent clients. Although IS-based techniques ensure that delay constraints are compatible with the requirements of some real-time video, such as video conferencing, they are not capable of accommodating diverse QoS requirements, such as loss guarantees for loss-sensitive video delivered to heterogeneous clients.
Finally, IS cannot be expected to directly benefit video applications in the near future, because under the current Internet paradigm, router vendors are responsible for the deployment of IS-related schemes. The standardization of the IP packet header format and of the algorithm itself is required to ensure compliance between vendors. There would be a very long and complex standardization process between the invention of an IS scheme and its implementation in a real network.

Another proposal is Differentiated Services (DS) [24-25, 35]. DS-based techniques provide qualitative differentiation of the loss and delay experienced by different classes of traffic, and they do so only on a per-hop basis. Unfortunately, the distribution of end-to-end QoS to each hop has not been effectively addressed. Providing absolute per-flow, end-to-end QoS guarantees, as would be required in video applications such as VOD, remains an unresolved issue in DS networks. As with IS, a long standardization process prevents DS from being applicable in the near future. Overall, IS and DS cannot adequately support the QoS requirements of video applications.

Another recently developed technology in the architecture category, Active Networking (AN) [34, 36, 47], allows end users or applications to program network elements such as IP routers. This new architectural approach has three implications. First, applications or users can instruct the network about how to treat their packets by injecting user-specific programs into routers. Network elements such as routers can also perform computations on individual packets up to the application layer and deal with application-specific states within the network according to the injected control logic. Therefore, the network can provide the service guarantees that match the quality requirements of the application. Second, in AN, application-specific or user-aware control is performed in a dynamic manner, allowing greater effectiveness and flexibility in assigning network resources at different times and at different network nodes. Finally, network services can be deployed end-to-end without the need for a lengthy and complex standardization process. Deployment would simply involve downloading the appropriate service logic into a network and allowing the network to adapt to the specific application.

In the Active Networks research area, studies have been done on prototype implementations of AN and on critical issues of AN such as security, performance, and scalability. However, to the best of our knowledge, little work has been conducted on exploiting the benefits of AN technology, such as the ability to enhance perceived QoS for streaming video over an IP-based network. In this thesis, the QoS problem is approached with an AN paradigm, after a close examination of the key functionality and relative strengths and weaknesses of video-specific QoS control techniques as related to the IS, DS, and AN approaches [17]. The goal of this research is to provide application users with satisfactory network services for streaming video over an IP-based network, while efficiently utilizing network resources.

1.2 Thesis Statement and Contributions

This dissertation outlines a framework for network service control for streaming video over IP networks.
While conventional in-network QoS control approaches attempt to realize QoS from a network perspective and are independent of an application viewpoint, the aim of this research is to deliver satisfactory QoS to application users while efficiently utilizing network resources. The framework proposed in this research adopts the ideas of end-system-based QoS control approaches, minimizing or eliminating the effects of network loss on the visual quality of video from inside the network (Figure 1-2). There are three objectives for this design of in-network QoS control:

• Application Performance - User expectations for the presentation quality of video must be met, rather than simply meeting QoS requirements based on the network itself. The application quality perceived by users defines the scope of applicability of a network and the final acceptance of network services; thus, the ultimate goal of in-network QoS control is the provision of high-performance video to end users.
• Service Fairness - The improvement in network service seen by some end users should not be achieved at the expense of decreased service quality for other end users. The amount and quality of service provided to competing end users must be fair. Although there are mechanisms providing a fair share of network resources [37-38], these mechanisms have not addressed and cannot guarantee a fair share of network services.
• Network Efficiency - New QoS architectures and mechanisms that improve service quality for end users should not be accompanied by a decrease in the utilization of network resources. The assessment of network efficiency often relies entirely on network measures such as throughput. A more suitable assessment should be linked to the application quality perceived by end users, since network efficiency should reflect the level of network services provided to end users. Moreover, the perceived quality of network services is not always correlated with high throughput.

[Figure 1-2. An overview of new QoS control development. Existing loss control approaches: error control (FEC, error concealment) relies on pre- or post-processing and improves user-perceived application quality, but causes more network congestion and is ineffective for large errors; congestion control (buffer management) is based on in-network processing and improves network-perceived service quality, but is inappropriate for video applications. The A-QoS approach: user-perceived network service improvement through in-network processing.]

With these objectives in mind, an Active QoS Control (A-QoS) framework was designed and implemented for the distribution of stored video; it improves end-to-end QoS and provides more effective utilization of network resources. The A-QoS framework synthesizes a QoS description model, buffer management techniques, packet scheduling techniques, and QoS distribution methodologies. The focus is on supporting loss requirements as we study the QoS problem that arises in the distribution of stored video. Since this application has a non-real-time nature as compared to interactive video, loss performance is the most important QoS requirement. The major contributions of this study are as follows:

(1) An investigation of the factors affecting the QoS of video transport, and an analysis of the QoS management problem.

(2) The identification of the desirable traits of a new network service, and the design of a new QoS control framework for distributing stored video over IP networks.
This framework allows application users to acquire satisfactory network services and optimize the utilization of network resources. Existing in-network QoS control does not handle user-defined QoS while supporting high network efficiency.

(3) The formulation of a joint application-network QoS description model. In this model, the QoS description for an application reflects the quality perceived by application users and can be directly mapped to network-controllable parameters. On the other hand, the QoS description for a network is based on users' satisfaction with the application performance. These two aspects establish a basis for network management whereby the application quality perceived by end users can be explicitly managed from within the network. This contrasts with the existing QoS paradigm, which pays little attention to the relationship between user-perceived application quality and the parameters characterizing network performance. In the existing QoS paradigms, the improved quality of network services only satisfies the network itself and does not match users' expectations well.

(4) The development of two buffer management schemes within a router that maximize the visual quality of each video, utilize network resources efficiently under given network conditions, and generate a fair service distribution between videos. This creates a new design direction for buffer management: the provision of an application quality that satisfies the end users instead of the network itself.

(5) The design of two application-aware packet scheduling schemes, which are the first known scheduling schemes in the literature with explicit objectives concerning loss guarantees. Existing scheduling schemes only address delay guarantees. Experiments demonstrate that the proposed schemes have superior performance compared to conventional packet scheduling schemes that do not have application-level QoS management. Users perceive QoS satisfaction for the application and receive an equitable share of service, and the network is highly efficient.

(6) The proposal of a set of end-to-end QoS distribution schemes, which are crucial for providing end-to-end QoS guarantees and are rarely investigated in existing research. The proposed strategies, along with the proposed nodal-based packet scheduling schemes, improve the stringency of the QoS guarantee and network efficiency.

1.3 Outline of This Thesis

Chapter 2 presents recent progress in the Active Networks research area and describes related issues. Chapter 3 presents an overview of an A-QoS framework [53] for distributing stored video and describes how the different components of the framework work together to improve end-to-end QoS. In Chapter 3, a new QoS description model shows that QoS description must be considered for explicit control purposes, and metrics based on this model are identified. In Chapter 4, two buffer management schemes at a router are introduced, and performance evaluation results for these schemes are given. In Chapter 5, two nodal-based packet scheduling techniques are presented together with performance evaluation results. Chapters 4 and 5 both highlight new ideas regarding the design of buffer management and scheduling schemes. Chapter 6 describes QoS distribution strategies and demonstrates that these strategies, in conjunction with the proposed scheduling schemes, improve end-to-end QoS. The last chapter summarizes key results and outlines some directions for future work.
Chapter 2
Active Networks

This thesis studies the problem of improving perceived QoS for IP video using Active Networks. Chapter 2 surveys and discusses Active Networking technology and related applications. It is evident that current research suggests active solutions for many applications; however, Active Networks do not yet effectively address perceived QoS issues for video over IP-based networks.

2.1 Overview of Active Networks

Active Networks move beyond traditional networks, in which routers only perform data forwarding. Active Networks have a programmable infrastructure to support customized communications and computations [36]. These networks have two key components: they can transport runnable code between nodes via packets, in the same manner as data are exchanged, and they can build an execution environment above an active node to host and run the code. There are three implications of this new architectural approach. Firstly, an application can instruct the network how to treat its packets by injecting application-specific programs into routers. Routers can also perform computation up to the application layer on individual packets and deal with application-specific states within the network according to the injected code. Therefore, the network provides the service guarantees that match the application's quality requirements. Secondly, Active Networks allow applications to load software programs into routers in a dynamic manner, making the system more effective and flexible at assigning network resources at different times and at different network nodes. Finally, network services can be deployed end-to-end simply by downloading the appropriate service logic into a network and allowing the network to adapt to specific service demands, eliminating the need for a lengthy and complex standardization process. Active Networks therefore allow for the easy introduction of new network services.

In recent years, many companies have been very active in the research and development of routers with QoS capability that conform to proposed AN architectures. AN-capable routers require very high performance because AN routers need to support dynamic changes in their processing and must perform computations on, and modify, packet contents. Moreover, this processing can be customized on a per-user or per-application basis. Commercial products exploring the use of AN include programmable engines (PE) that can augment traditional passive routers and make them active [55], in addition to active routers designed from scratch. These products all support the essential requirements of AN as stated above, yet specific AN router architectures differ between manufacturers. Of these architectures, the cluster-based architecture proposed by NEC is the most widely accepted. In the academic research community, studies on AN have focused on the viability of the AN concept, which includes prototype implementations and issues of security, performance, and scalability [34], and on the suitability and usefulness of Active Networks for different applications.

2.2 Active Networks and their Application

The network programmability of Active Networks makes it possible to deploy fast and dynamic user-defined services at a desired time and location. This new, powerful communication paradigm provides substantial benefits to many applications.
The main applications whose performance can be increased via active network support are Reliable Multicast, Network Management, Traffic Control, Mobile IP Services, and Multimedia Applications. The following examples illustrate how Active Networks can benefit these applications.

Reliable Multicast: The issues in reliable multicast are the caching of data, NACK implosions, and answering individual repair requests. Active Reliable Multicast (ARM) is an example of an active solution to these problems [56]. In ARM, caches can be placed at the most appropriate locations in the multicast trees, and repair requests can be handled as close as possible to the lossy links. ARM also keeps track of frequently requested repair data using soft local state in routers and suppresses duplicate NACKs from multiple receivers to control the implosion problem. ARM introduces only a small increase in wide-area end-to-end latency, and the active architecture solves deployment issues.

Network Management: The major problems in current polling-based network management are implosion at the management center and response delay. One representative active solution for network management is Smart Packets [57]. Unlike traditional data packets, Smart Packets carry programs that are executed at nodes on the path to one or more target hosts. Smart Packets target specific aspects of the node and move management decision points closer to the node being managed. Therefore, the bandwidth and latency required for network management are reduced and the management of large and complex networks is improved.

Traffic Control: One example of an active traffic control mechanism is Active Congestion Control (ACC) [58]. ACC applies active networking techniques to feedback congestion control. It allows internal network nodes to quickly detect congestion and take action to reduce it, thus avoiding the feedback delay that occurs in endpoint congestion control systems. Therefore, ACC is very effective in high-bandwidth networks with high delay.

Mobile IP Services: There is an increased demand for a reliable mobile IP protocol that supports a variety of services, such as access to an intranet, a fixed IP address, and the ability to receive multicast datagrams. Active networks can exploit the available network resources in a more efficient way and can be used to upgrade network services quickly. The application of Active Networks in the mobile networking domain is studied in [59].

Multimedia Applications: The QoS problems of multimedia communication can be tackled by means of active network-based congestion control and error recovery. Multimedia streams consist of audio and video parts. For audio streams, as far as the author is aware, there are only two related projects. In the first project, an optimal amount of redundant data is added on a per-link basis to protect an audio stream against packet loss using an active technique [60]. In the second project, loss concealment is
Taxonomy of AN-based QoS control for networked video Congestion control includes Traffic Filtering and Traffic Dispersal. Traffic Filtering techniques can be further divided into two types: intelligent packet discarding and transcoding. Intelligent packet discarding drops the less essential parts of video data ahead of more important ones when congestion occurs. Examples include discards of B -packets ahead of I and P-packets in M P E G video [62], and packets from an enhancement layer ahead of those from the base layer in layered video [63]. These schemes reduce the degradation of video quality and improve the effective throughput of the network when packet loss is unavoidable, and data of unequal importance exists in the coded video. 16 Transcoding is used to transform user data within the network to conform to network conditions, either by changing its video coding format, altering coding parameters, or transforming prioritized data streams to non-prioritized ones [64, 66]. The use of transcoding can effectively deliver video to heterogeneous clients in a multicasting environment, but it involves some complex computation. Transcoding at the transport layer is proposed to reduce computational complexity [67]. The Traffic Dispersal technique deals with network congestion by balancing network traffic loads. This is achieved by dividing a flow into several parts, according to network conditions. Each part is then transmitted through different paths. This scheme is applied to control video conferencing traffic and has been proven to be effective in reducing data loss [65]. Error control mechanisms minimize the visual impact of loss at the receiver by retransmission, or by re-encoding lost data within the network. The Retransmission-based method is a reliable way to recover lost data, but it is not typically employed in video transportation due to the excessive end-to-end delay incurred. However, active networking technology makes retransmission potentially feasible because of the lower latency and greater bandwidth efficiency that is possible compared to end-to-end retransmission. Two AN-based retransmission schemes performed in a layered multimedia multicast [56] and on a point-to-point basis [68] demonstrate this. The Re-encoding scheme employs a video code installed at an active network node to perform error correction for real-time video stream [6]. Due to the shift of error recovery from end-points to an intermediate node, this method uses less, bandwidth, achieves faster error correction, and produces better video quality compared to end-to-end-based approaches but raises the issue of relative complex computation. 17 In general, A N has many advantages over traditional networks that support more advanced QoS features. Firstly, A N can support many classes of services because of its adaptability. Secondly, due to the application and user-driven computations performed, A N provides a more powerful way to satisfy particular QoS requirements. Although A N adds complexity to the network, the complexity of AN-based schemes does not exceed that of other schemes that have already received wide attention, such as R E D [16]. 18 Chapter 3 Active QoS Control Framework This chapter introduces a framework to enhance perceived QoS for video streaming over IP networks. The first section discusses the need for a new approach to improve perceived QoS, and the second section proposes a framework. Finally, a QoS description model is outlined. 
3.1 Application Requirements and Network Services

Networked video requires the network to support QoS and to ensure that the delay and losses experienced by video packets are within the limits needed to maintain acceptable visual quality. However, existing best-effort IP networks provide no QoS guarantee. Methods such as IS and DS that work to improve QoS from within a network cannot adequately support video applications.

IS reserves network resources such as link bandwidth, which are required by each individual flow, along the traversed paths (Table 3-1). Since the network load can change significantly during the duration of a video session, resource reservation is not well suited for video applications. A video application generally has session holding times much longer than the typical data transfer. If a user's session request arrives during a busy period, the network gives the client low quality video for the duration of the session, even though the network may become idle during the later parts of the user's video session. Alternatively, if the network is overly conservative in its estimate of the reservation needed by the user for the video session, it may have to reject other users at a later time. Resource reservation for bandwidth-intensive video applications is also inefficient, because the multiplicity of high-bandwidth video flows may result in insufficient availability of peak resources. Moreover, even though IS ensures throughput and delay through bandwidth reservation, it cannot guarantee a quality that meets users' expectations because there is no straightforward relationship between perceived video quality and bit rate. In general, IS cannot provide a reliable and scalable service to video applications.

Within the DS architecture, the major problem is that while the desired behaviour of a video application is specified on a per-flow, end-to-end basis, DS mechanisms are defined on a per-class, per-hop basis for the purpose of scalability. There is no mechanism to distribute the required end-to-end QoS into local QoS targets at the network elements along the route. Thus, the end-to-end QoS provided by DS is unpredictable and in many cases does not match application users' requirements; DS is therefore not well suited for video applications.
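The difficulty of composing per-hop behaviour into an end-to-end guarantee can be made concrete with a simple calculation. The derivation below is not taken from the thesis; it assumes, for illustration, that losses at successive hops are independent.

```latex
% End-to-end loss over a multi-hop path, assuming independent per-hop losses.
% For a path of h hops with per-hop packet loss rates p_1, ..., p_h:
\[
  P_{\mathrm{e2e}} \;=\; 1 \;-\; \prod_{i=1}^{h} \bigl(1 - p_i\bigr).
\]
% Meeting an end-to-end loss target P* therefore requires
%   \prod_i (1 - p_i) >= 1 - P*.
% For example, five hops each allowed p_i = 1% give
%   P_e2e = 1 - 0.99^5 = 4.9% (approximately),
% which may already violate a stringent end-to-end requirement. This is the
% allocation problem that the loss distribution strategies of Chapter 6 address.
```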
Thus, the packet header format and the algorithm itself must be standardized to ensure compliance between vendors. This results in a very long standardization process that extends from the invention of a scheme to its implementation in a real network. QoS of video application cannot be solved by IS or DS in the near future. 21 To address the QoS issues for video transportation over LP networks, an Active QoS control (A-QoS) framework is proposed. Within this framework, application users and networks work collaboratively rather than separately as in IS and D S models. A user's demand is characterized as being as close to the network-controllable parameters as possible, while the networks perform QoS management that is aware of the application user. User-aware QoS management is based on the concept of Active Networks and is assumed to be realized using available technologies. A N adds intelligence to network elements, making them as programmable as possible. It steps beyond traditional networks, in which network elements only perform data forwarding. A N has two parts: the active packet and the active router. The active packet carries an executable program that specifies the processing that must be performed on its behalf by the active node through which the active packet passes. The active router replaces the routers in a traditional network. It executes active programs and maintains corresponding state information. The application-aware QoS control mechanisms in an A-QoS framework can be carried by active packets and can be run over active nodes. Using application-specific network processing, application users can obtain satisfactory network performance while efficiently utilizing network resources. The goal of an A-QoS framework is to f i l l the gap between the QoS requirements of application users and those provided by network services. A n A-QoS framework provides many advantages: it provides an end-to-end QoS that closely matches the user's requirements, yields a high level of service fairness between application users, and maintains an efficient network. 22 3.2 System Overview The main functional components of the A-QoS framework are a joint application-network QoS description model, nodal-based QoS management techniques, and QoS distribution strategies (Figure 3-1). When the three components are coordinated and integrated, good application performance, fair service, and network efficiency can be achieved simultaneously. d Packet Scheduling t Joint Application-network QoS Description Model Figure 3-1. A n overview of A-QoS framework The joint application-network QoS description model considers the relationship between the application and the network. The model consists of two parts. From an application perspective, the QoS description not only represents how well users' expectations are satisfied, but how it can also be transferred to network-controllable parameters. A network QoS description specifies a quality level of a network based on 23 user perception. Thus, the joint application-network QoS description model ensures that users' QoS expectations are considered in the design of in-network QoS control functions. As a result, the desired level of quality of a given application can be delivered with assurance by the network, while the network operates effectively and efficiently through management of user-perceived QoS. The nodal-based QoS management is a- core component of the A-QoS framework. 
Here, two types of QoS management schemes are presented: the buffer management and packet scheduling schemes. The buffer management schemes control buffer allocation between streams and between different parts of a stream. The packet scheduling schemes arrange the servicing order and frequency of streams and between different parts of a stream. Although they operate in different forms, both buffer management and packet scheduling schemes improve QoS and network efficiency within a single node system. The QoS distribution strategies assign the QoS target at individual nodes across a multi-hop LP network based on end-to-end QoS requirements. B y combining the QoS distribution strategies and nodal-based packet scheduling schemes, the end-to-end QoS requirements can be effectively monitored and achieved. 3.3 Proposed QoS Description Model 3.3.1 Background The definition of QoS differs, depending on whether it is considered from an application or a network perspective. Network QoS is taken from a network viewpoint and is independent of application and user perception. It usually relies on standardization by the International Standardization Organization (ISO). Examples of network QoS 24 include packet loss rate, available bandwidth, and throughput. However, from an application standpoint, the specified parameters differ. Application QoS can be described by both objective and subjective parameters. The objective QoS description is limited to system-level technical parameters, such as the Peak Signal to Noise Ratio (PSNR), that are not related to network-level QoS parameters in a simple way. Subjective measures of application-level QoS include the score of the video in the Human Viewer Trial , whose grading standards are set by the International Telecommunication Union (ITU). A n excellent image quality is associated with a score of 5, while a "bad" image quality is associated with a 1. This grading process also does not provide results that correlate with network-controllable parameters. Application QoS and network QoS are not closely related to each other, and none of the existing QoS descriptions simultaneously account for application characteristics, user preferences, and relationships with network parameters. Following the existing QoS paradigms, networks are not able to deliver satisfactory network services to their application users; instead, the services provide quality as perceived by the networks. However, it is perceptual quality, rather than network-perceived service quality, that determines the success or failure of a network. Using the current QoS paradigm, network operation is unsatisfactory. To bridge the gap between user requirements and network services, users' requirements for a specific application must be integrated with the parameters that characterize the underlying network performance. The following section describes a QoS description model that integrates user requirements and network performance: this is called the joint application-network QoS description model. A set of corresponding QoS metrics are then identified. 25 3.3.2 Joint Application-network QoS Description Model In the joint application-network QoS description model (Figure 3-2), the application QoS can be easily converted to network layer QoS parameters, and can therefore be supported through network-level QoS control techniques. On the other hand, network QoS is also linked to the QoS requirements of applications and users, rather than those of the network itself. 
Therefore, a network can manage itself to match the expectations of application users, increasing users' satisfaction with network services, while effectively utilizing network resources. Existing QoS Description Model Application QoS No explicit relationship with network performance !No explicit relationship with perceptual application quality Proposed QoS Descript ion M o d e l Transferable to network parameters Application QoS Network QoS — J ) Related to perceived application quality Figure 3-2. A n overview of the joint application-network QoS description model 26 In accordance with the joint application-network QoS description model, three parameters have been developed to increase user satisfaction and effectively utilize network resources: Frame Error Rate, Fairness Index, and Effective Throughput. The Frame Error Rate (FER) is the fraction of frames in error in each video. If one packet in a frame is lost, the whole frame and its dependent frames (P and or B) are considered to be in error. The F E R directly impacts video viewing quality [41] and is more closely correlated with the human perception of video quality than the widely used S N R [46]. With a lower F E R , the perceived video quality is higher. When using S N R as a guide, perceived video quality cannot be reliably predicted from network performance, since there is no straightforward relationship between S N R and network-level QoS parameters. Conversely, F E R is easily converted to related network-level QoS parameters such as the packet loss rate and can thus be supported through in-network QoS control. The Fairness Index (FI) is a measure of service fairness. Service fairness means that service levels are allocated fairly to different video streams, such that each individual video receives service at a level commensurate with individual expectations. Fairness in the allocation of link bandwidth or router buffer space has been explored [37, 38], but neither of these approaches accurately reflects the fairness based on users satisfaction because there is no clear relationship between network resources and application performance. Clearly, fairness of service is a more rational objective since it reflects the user perceived service quality that a network attempts to offer, while fairness in resource allocation is simply a means of achieving a network service goal. B y definition, FI is the ratio of actual packet loss rate over the acceptable packet loss rate for each video. A satisfactory loss performance yields a value of one or less than one, while an 27 unsatisfactory loss performance results in a value of greater than one. When the percentage of videos with a satisfactory loss performance is high, the fairness of service is also high. As a result, more video users can be charged for the satisfactory service, leading to more successful network operation. Effective Throughput (ET) measures network efficiency. E T differs from throughput, the conventional measure of network efficiency, because throughput does not show what percentage of delivered traffic is useful to application users and does not truly reflect network efficiency. E T reflects the percentage of traffic that is useful to end users. E T is the fraction of usable data over all video streams. "Usable data" is the video data that belongs to a successfully delivered frame, i.e., a video frame in which all of the packets and reference frames (I and or P) are completely transmitted. A high E T means truly high network efficiency. 
F E R , F l , and E T are measures of the design objectives of the A-QoS framework: application performance, service fairness, and network efficiency. Network-level schemes perform QoS control to satisfy this QoS triplet, and thereby the design goals of the A-QoS framework. 28 Chapter 4 Buffer Management This chapter describes the problems of existing buffer management techniques, presents proposed schemes, and evaluates the performance of the proposed schemes. 4.1 Background Buffer Management has been widely recognized as a critical component in all networking technologies. It performs congestion control within the network. Typical examples of buffer management schemes include the Drop Tai l (DT) [39] and the Random Early Drop (RED) schemes [40]. In the D T , i f a buffer is full, incoming packets are discarded. This method is the simplest and the most common buffer management technique built into existing commercial network elements such as routers. It is not applicable for QoS required traffic because it is not selective. R E D is the most recent proposed method of buffer management which aims at minimizing the number of future packets that are likely to be dropped. In R E D , two queue thresholds are defined, and incoming packets are marked or discarded if the buffer occupancy is above those 29 thresholds. However, two issues arise when R E D is applied in the transmission of video over the Internet. R E D works in an environment in which the source data behaves in a TCP-friendly manner. In practice, the most popular protocol for Internet video is the Real Time Streaming Protocol (RTSP) that employs the I P / U D P / R T P packetization method. Secondly, R E D focuses on achieving a lower packet loss rate, and is most applicable to general-purpose loss control for bulk data transmission, which is based on reducing the packet loss rate and ignores data content. Unfortunately, R E D cannot always minimize the quality loss of networked video because there is no clear relationship between packet loss and the presentation quality of video [26]. Low packet loss ratios do not necessarily translate into high visual video quality, mainly because an inter-frame video compression algorithm is often used in encoding video. Inter-frame coding exploits the temporal correlation between frames to achieve high levels of compression, independently coding reference frames and representing the majority of the frames as the difference from each frame and one or more reference frames. Error propagation between a reference frame and all the dependent frames renders that video application very sensitive to packet loss, and even a small amount of packet loss can cause a very unpleasant viewing experience for users. The widely used M P E G video provides a typical example of the effect of packet loss on video quality. There are M P E G - 1 [3], M P E G - 2 [4], and M P E G - 4 videos [5]. The basic idea of the M P E G - 1 and M P E G - 2 compression technique is to predict motion from frame to frame in the temporal direction and also to use a Discrete Cosine Transform to organize redundancy in the spatial domain. Specifically, M P E G codes video data in a layered structure that includes: Group of Pictures (GOP), picture (or frame), slice, Macroblock, and block (see Figure 4-1(a)). Of these, the most important part of M P E G 30 video traffic is the frame, since the frame is the basic unit of display. Three frame types exist in M P E G video: I-, P- and B-frames (see Figure 4-l(b)). 
The I-frame is coded independently, while the P-frame and B-frame are coded using the closest past I- or P-frame and the closest past and future I- or P-frames, respectively. This means that M P E G video is frame-dependent. (a) (b) Figure 4 -1 . M P E G - 1 video structure When M P E G video is transmitted over an IP network, its characteristics cause problems in the quality of networked video. Each video frame is very large, and is segmented into a sequence of IP packets when delivered through an IP network. When a network overloads, or there is traffic contention, the network drops incoming packets until the overflow disappears. Such random losses cause video frames to be corrupted. IP networks also assume that the information carried by all video frames is equally important; however, the importance of video data in different frames varies, and the loss 31 of a particular frame may cause other related frames to be discarded because of the dependent nature of M P E G video frames. This process is referred to as error propagation (Figure 4-2). For example, the loss of an I-packet distorts the whole G O P , equivalent to affecting half a second of video. Conversely, errors in B-frames have no adverse effects on other frames [26]. The extent of error propagation depends on what type of frame is dropped. Example 1: Error in I-frame Subsequent P-frame Example 2: Error in I-frame Subsequent B-frame Figure 4 -2 . The effect of packet loss on M P E G - 1 video [26] M P E G - 4 has evolved from M P E G - 1 . A n M P E G - 4 stream consists of one or more visual objects (VO) , and each V O sequence consists of several layer streams. A layer video stream is composed of a sequence of Video Object Planes (VOP) , and a V O P corresponds to a whole rectangle frame, a specific region of a frame, or an object such as a human, an animal, or a building. A l l V O P s are first divided into macroblocks of 16x16 32 pixels, and a coding operation is performed on a macroblock basis. The V O P , like the frame of M P E G - 1 and M P E G - 2 , is the basic unit of image data. There are three types of V O P s : the I -VOP, which is coded independently, the P - V O P , which is coded with reference to leading I or P V O P s , and B - V O P , which is coded with reference to leading or trailing I or P V O P s . When an error occurs at an I or P - V O P , it propagates to the other V O P s , causing errors in consecutive VOPs . A n error in an I - V O P corrupts a whole Group of V O P ( G O V ) , which is typically equivalent to about half a second of video. A small packet loss can translate into a large number of VOPs-in-error, and consequently, the visual quality of the video decreases significantly. Even with a small packet loss, random loss and error propagation can cause large numbers of video frames/VOPs to be in error [16, 26] and can degrade video quality even more than an objective packet loss metric might suggest. In a compressed video data stream, packet loss exaggerates the effects of error propagation and lack of redundancy. This problem cannot be solved through the use of a general purpose control, and packet discard schemes must designed from a visual quality control viewpoint rather than from a network perspective. A quality control perspective necessitates a video-oriented design. Video-oriented design needs to be practical, and although a video-specific procedure is currently required within a network, existing network technology cannot support it. Active Networks can resolve this issue. 
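As a rough, hedged illustration of this error-propagation behaviour, the short C sketch below marks which frames of a single closed GOP become unusable once one frame is hit by packet loss. The model is deliberately simplified (no error recovery or concealment is assumed, and the GOP pattern and function names are illustrative), but it captures the asymmetry between I-, P- and B-frame losses discussed above.

#include <stdio.h>
#include <string.h>

/*
 * Simplified error-propagation model for one closed GOP in display order.
 * lost: index of the frame hit by packet loss.
 * Sets in_error[i] = 1 for every frame that becomes undecodable.
 */
static void propagate(const char *gop, int lost, int in_error[])
{
    int n = (int)strlen(gop);
    memset(in_error, 0, n * sizeof(int));

    if (gop[lost] == 'I') {                 /* the whole GOP depends on the I-frame  */
        for (int i = 0; i < n; i++) in_error[i] = 1;
    } else if (gop[lost] == 'P') {          /* later frames reference it ...          */
        for (int i = lost; i < n; i++) in_error[i] = 1;
        for (int i = lost - 1; i >= 0 && gop[i] == 'B'; i--)
            in_error[i] = 1;                /* ... as do the B-frames just before it   */
    } else {                                /* a lost B-frame affects only itself      */
        in_error[lost] = 1;
    }
}

int main(void)
{
    const char *gop = "IBBPBBPBBPBB";       /* GOP pattern used by the test traces */
    int err[16];

    propagate(gop, 3, err);                 /* damage the first P-frame (index 3)  */
    int count = 0;
    for (int i = 0; gop[i]; i++) count += err[i];
    printf("Frames in error after losing frame 3 (%c): %d of %d\n",
           gop[3], count, (int)strlen(gop));
    return 0;
}

Running the example with the first P-frame damaged marks 11 of the 12 frames as unusable, which is why even a small packet loss rate can translate into a large frame error rate.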
As described in Chapter 1, Active Networks allow users and applications to inject custom programs into routers and allow routers to perform application-specific computations on individual packets. Therefore, Active Networks can deploy video-specific network services simply by injecting the appropriate service logic into the network. Using this technique, video applications can 33 be recognized and the way that the router forwards video traffic can be modified according to the specific needs of each video packet. Two router-based buffer management schemes are proposed: the Frame-level Discard with Dynamic Thresholds (FDDT) [16] and Loss-based Buffer Sharing with Frame-level Packet Discard ( L B S _ F P D ) schemes [19]. These schemes use knowledge of the characteristics of compressed video to deliver broadcast video data that maximizes video quality under given network conditions, while making the most efficient use of network resources. 4.2 Mechanism Scenarios 4.2.1 Single Video Source - Frame Level Discard With Dynamic Thresholds (FDDT) The F D D T scheme uses three control thresholds in a router output buffer to ensure that there is enough free space to queue more important video packets and the entire set of packets of each accepted video frame. This prevents error propagation and minimizes the number of frames in error, reducing the degradation of perceived video quality during congestion episodes and improving network efficiency. Assuming that no error recovery and/or concealment are supported, F D D T works as follows: 1) The first packet of a B - or P-frame is discarded when the buffer occupancy thresholds (TB and T P ) are reached, respectively. The remaining packets of the frames are discarded, regardless of the instantaneous buffer occupancy. 2) A n I-frame packet is only discarded when there is no space in the buffer. 34 3) A l l subsequent packets of a frame are discarded once an early packet of the frame is dropped. 4) A packet from competing traffic is discarded when the buffer occupancy threshold (To) is reached ( T 0 is initially set to T B ) ; T 0 is decreased or increased by one when a P or B packet is discarded or queued. Buffer space saved by dropping less important video packets can therefore be granted to more important video packets. 5) A l l new packets are dropped when the buffer is full. 4.2.2 Multiple Video Sources - Loss-based Buffer Sharing with Frame-level Packet Discard (LBS_FPD) The L B S _ F P D scheme provides high QoS performance in a multi-stream environment. L B S _ F P D tries to maximize user-perceived video quality in the presence of packet loss, improve network efficiency, and also provide fairness of service among all competing video streams. As a result, the percentage of streams with an expected loss performance can be increased and network resources can be utilized efficiently. Fairness of service means the fair allocation of service levels to different traffic, so that all traffic receives service at a level commensurate with individual expectations. In other words, fairness of service tries to guarantee that all traffic with the same quality requirements perceives a similar QoS; traffic with higher QoS requirements receives more quality, while traffic with lower QoS requirements receives lower quality. Clearly, fairness of service is a more rational objective, compared to fairness in resource allocation, since it reflects the user perceived service quality that a network attempts to offer, rather than simply achieving a network service goal. 
35 In order to achieve the objectives of fair service, high presentation quality, and high network efficiency, during congestion periods the L B S _ F P D scheme determines which data of which streams should be discarded and how much data from the selected streams should be discarded. i) Determining streams to be discarded To achieve fair service, a fair distribution of loss must be provided among contending video streams so that the actual loss performance of each video stream matches its loss constraints. In such a way, network resources can also be used effectively, resulting in high network efficiency. For this purpose, a Loss-based Buffer Sharing (LBS) method was designed for use among multiple videos in the presence of high data loss. When the buffer occupancy exceeds the buffer length threshold and i f a stream uses up its weighted buffer share, the L B S _ F P D method discards the incoming packet(s) of that stream. The weight is set as inversely proportional to the packet loss constraints of each video stream. Network service providers can specify the loss constraints, making loss consistent with the price that users pay for the video stream. In other words, each quality level requirement paid for by the user is associated with a packet loss value, which is an important metric for calculating the fair service share for a video stream. The packet loss value reflects the relationship between service charges and the quality provided to the user from a network viewpoint. There are two main advantages to the L B S method. Firstly, it allocates buffer space depending on loss requirements, providing a loss performance exactly matching the loss tolerance. Therefore, packet loss among all videos is equitably distributed and the buffer is used to serve as many streams as possible. Secondly, L B S is used only when the network is 36 highly congested; otherwise, the buffer is completely shared by all streams, leading to high utilization of the buffer and to high network throughput, ii) Determining the discard amount To achieve the best quality in the presence of loss, enough relevant data must be transmitted. Hence, video features must be recognized. One important feature of M P E G coded video is its hierarchical format, which goes from G O P at the top, to picture (or frame), slice, Macroblock, and finally, to block. Of these components, the most important component is the frame, since it is the basic unit of display. Frame error directly influences the viewing quality of video. Another key feature of M P E G video is its inter-frame dependency: the I-, P-, and B-packets have a descending order of importance. Data belonging to one complete frame must be successfully transmitted as a single entity, and priority of transmission should be given in the order of I-, P- and B-packets. Frame-Level Packet Discard (FPD), a method for managing buffer allocation between different parts of data within a video stream, was designed for this purpose. The F P D method uses two thresholds of buffer length in the output buffer of a router to control discards of packet(s), ensuring that there is enough free space to queue important video packets and to accommodate the entire set of packets of each accepted video frame. The F P D technique for a single video stream was presented in Section 4.2.1. In the current section, F P D is extended to support the transport of multiple videos by adding L B S , which considers the relationship between the multiple videos. 
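Before the formal listing that follows, a minimal C sketch may help to show how the two ideas fit together: buffer space is shared among streams in inverse proportion to their acceptable loss ratios (the LBS part), and arriving packets are then accepted or discarded at frame granularity according to frame type (the FPD part). All thresholds, field names and helper functions are illustrative assumptions, and the dynamic adjustment of the LOW threshold and the per-frame damage bookkeeping of the full scheme are omitted for brevity.

#include <stdio.h>

#define NSTREAMS 3
#define BUFSIZE  100            /* router buffer size, in packets (illustrative) */

typedef enum { FRAME_I, FRAME_P, FRAME_B } frame_t;

typedef struct {
    double plr_acceptable;      /* contracted packet loss ratio of the stream   */
    double weight;              /* LBS buffer share, as in Eq. (4-2)            */
    int    occupancy;           /* packets of this stream currently queued (Qi) */
    int    frame_damaged;       /* 1 if the current frame already lost a packet */
} stream_t;

/* Weights inversely proportional to the acceptable loss ratios (cf. Eq. (4-2)). */
static void lbs_weights(stream_t *s, int n)
{
    double sum = 0.0;
    for (int i = 0; i < n; i++) sum += 1.0 / s[i].plr_acceptable;
    for (int i = 0; i < n; i++) s[i].weight = (1.0 / s[i].plr_acceptable) / sum;
}

/* Returns 1 to accept the arriving packet, 0 to drop it. */
static int lbs_fpd_accept(const stream_t *s, frame_t type, int first_of_frame,
                          int buf_len, double low, double high)
{
    double ri = s->occupancy / (s->weight * BUFSIZE);        /* cf. Eq. (4-1) */

    if (buf_len >= BUFSIZE)                    return 0;     /* buffer full        */
    if (s->frame_damaged)                      return 0;     /* keep frames whole  */
    if (buf_len > low * BUFSIZE && ri > 1.0)   return 0;     /* over its LBS share */
    if (buf_len > low * BUFSIZE && type == FRAME_B && first_of_frame)  return 0;
    if (buf_len > high * BUFSIZE && type != FRAME_I && first_of_frame) return 0;
    return 1;
}

int main(void)
{
    stream_t s[NSTREAMS] = { { 0.03 }, { 0.05 }, { 0.05 } };
    lbs_weights(s, NSTREAMS);
    s[1].occupancy = 40;                                     /* stream 1 is greedy */

    int ok = lbs_fpd_accept(&s[1], FRAME_B, 1, 85, 0.8, 0.9);
    printf("weights: %.2f %.2f %.2f; accept greedy B-packet? %s\n",
           s[0].weight, s[1].weight, s[2].weight, ok ? "yes" : "no");
    return 0;
}

The formal test, including the threshold adaptation and the parameter updates, is given next.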
The LBS_FPD scheme formalizes the test of whether to discard an arriving packet based on i) and ii) above. The LBS_FPD scheme works as described below.

For each arriving packet from stream i:

/* perform LBS */

1) If (Buffer Length > LOW) and (Ri > 1), drop the packet. Here, LOW is the initial buffer length threshold at which B-packets begin to be dropped, and Ri is the ratio between the actual buffer occupancy (Qi) of stream i and the allocated buffer share, which is given by the product of the weight Wi and the buffer size B. Thus,

   Ri = Qi / (Wi x B)                                   (4-1)

For stream i, Wi is determined from the buffer allocation in inverse proportion to its normalized packet loss ratio, and is calculated as follows:

   Wi = (1 / PLRi) / (Σj 1 / PLRj)                      (4-2)

where PLRi is the acceptable PLR of stream i and the sum is taken over all competing streams j.

Next, do the following:

/* perform FPD */

2) If Buffer Length > LOW, and the packet is the first packet of a B-frame, drop the packet.

3) If Buffer Length > HIGH, and the packet is the first P-packet or B-packet, drop the packet. Here, HIGH is the buffer length threshold at which P-packets start being dropped. If the dropped packet is a P-packet, LOW decreases by one. Let the difference between the initial and updated values of LOW be A; LOW increases by one when an I- or P-packet is accepted, subject to A remaining greater than zero.

4) If the arriving packet belongs to a partially discarded frame, drop the packet.

5) If the buffer is full, drop the packet.

Next, perform:

/* update related parameters */

6) If the packet is accepted, increase Qi and Buffer Length by one each.

7) For each departing packet from stream i, decrease Qi and Buffer Length by one each.

4.3 Performance Evaluation of Proposed Buffer Management Schemes

Extensive simulations were conducted to study the effectiveness and robustness of the FDDT and LBS_FPD schemes. This section provides details of the simulation data and the simulation results. Figures 4-3 and 4-4 illustrate the system and network models used in the simulation.

[Figure 4-3 shows the simulation components: an MPEG video file feeds a Packetization module; its output, together with that of a Background Traffic Generator, enters the Network Simulation module (a packet buffer governed by the packet discarding policies under test, e.g., DT, FDDT, LBS_FPD); the result is passed to an Analysis module.]

Figure 4-3. Simulation diagram for FDDT and LBS_FPD

[Figure 4-4 shows the corresponding network model: the video sources and other traffic share a single router output link.]

Figure 4-4. Network model for simulation of FDDT and LBS_FPD

In Figure 4-3, the Network Simulation module is implemented in a C program and simulates the behaviour of an output packet buffer of a router under the FDDT, LBS_FPD and DT schemes, respectively. The DT scheme is the conventional packet discard scheme, in which packets are dropped only when the buffer is full. The Packetization module first encapsulates actual MPEG-1 video traffic into RTP packets, then UDP packets, and finally IP packets, with 1500 bytes as the maximum packet size. The IP packets are then sent to the Network Simulation module. The video traffic is derived from video traces in which the number of bits per frame produced by the MPEG-1 coder is recorded [52]. The Background Traffic Generator module produces and transmits randomly distributed background traffic to the Network Simulation module in order to emulate different levels of congestion. Finally, the Analysis module takes the original video file and the output of the Network Simulation module as input to examine video loss and network throughput.
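As a rough illustration of the Packetization step just described, the following sketch estimates how many IP packets one video frame occupies, assuming standard RTP, UDP and IPv4 header sizes of 12, 8 and 20 bytes; the exact payload split used by the simulator may differ, so this is an assumption for illustration only.

#include <stdio.h>

#define MTU       1500           /* maximum IP packet size used in the simulation */
#define HDR_RTP   12
#define HDR_UDP   8
#define HDR_IPV4  20
#define PAYLOAD   (MTU - HDR_RTP - HDR_UDP - HDR_IPV4)     /* 1460 bytes per packet */

/* Number of IP packets needed to carry one video frame of frame_bytes bytes. */
static int packets_per_frame(long frame_bytes)
{
    return (int)((frame_bytes + PAYLOAD - 1) / PAYLOAD);   /* ceiling division */
}

int main(void)
{
    long sizes[] = { 2500, 12000, 24000 };   /* e.g. a small B-, P- and I-frame, in bytes */
    for (int i = 0; i < 3; i++)
        printf("frame of %ld bytes -> %d packets\n", sizes[i], packets_per_frame(sizes[i]));
    return 0;
}

This simple relationship is what links the frame-size distribution of a trace to the packets-per-frame distributions discussed next.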
The test MPEG-1 video traces are called "Soccer" (sports), "Asterix" (cartoon), and "Talk" (news), and represent high, medium, and low activity, respectively. They all follow the same GOP pattern, IBBPBBPBBPBB, and each video consists of 40000 frames. The frame rate is 25 frames per second, and the traffic distribution for each video is shown in Figure 4-5.

[Figure 4-5 plots, for each trace, the number of frames against the number of packets per frame (1 to 17), with separate curves for the I-, P- and B-frames: (a) the high-activity "Soccer" trace, (b) the medium-activity "Asterix" trace, and (c) the low-activity "Talk" trace.]

Figure 4-5. Traffic distribution of MPEG-1 video traces

For all three traces, the packets-per-frame values are normally higher for I-frames than for P-frames, and these values in turn are higher than for B-frames (Figure 4-5). These large frames consist of multiple packets and can result in an increase in frame error rates, because the loss of even one packet in a video frame can cause a whole frame to be in error. In particular, the burst of video data introduced by the large peak values of packets-per-frame in the I- and P-frames is most likely to cause buffer overflow and is the main cause of I- and P-packet loss, which leads to error propagation and to severe quality loss in MPEG video.

An examination of the differences between these traces reveals that the largest area covered by the P- and B-packet curves appears in the "Soccer" trace, a smaller area appears in the "Asterix" trace, and the smallest one appears in the "Talk" trace. This indicates that the "Soccer" trace has the largest number of P- and B-packets, followed by the "Asterix" trace, and then the "Talk" trace. This is because the "Soccer" trace has the highest level of movement, the "Asterix" trace has an intermediate level, and the "Talk" trace has the least movement. With an increase in the level of activity, the size of B-frames increases and the spread of P- and B-packets-per-frame enlarges. Therefore, the total number of P- and B-packets in the "Soccer" trace is greater than in the "Asterix" trace, which in turn has a larger number of P- and B-packets than the "Talk" trace.

In the simulation, each selected video trace ran for about half an hour. Each simulation lasted 40 minutes. Background traffic also consists of trace-driven MPEG-1 sources [52]. These sources show burstiness over small and large time scales. The videos were started randomly over a 60-second interval. The capacity of the output link of the router was 100 Mbps. The buffer size was 150 KB.
This results in a queue delay of 12 ms (the 150 KB buffer holds about 1.2 Mb, which drains in roughly 12 ms at 100 Mbps), close to the latency specified for interactive video [54]. The congestion level is varied by changing the number of background sources. The results presented in the following sections show the final values averaged over different runs, where the starting sequence of a video stream was randomly selected.

4.3.1 The Single Video Source Case

This section shows the effectiveness and robustness of the FDDT scheme. The performance of FDDT in terms of the FER and ET is compared with that of the conventional tail-drop scheme, DT. In the first set of experiments, the "Asterix" trace is examined. The second set of experiments examines the effects of varying threshold values and changing traffic patterns on FDDT performance. The results are presented in Sections 4.3.1.1 and 4.3.1.2, respectively.

4.3.1.1 Effectiveness of FDDT

A. Video Quality

Figure 4-6 shows the FER as a function of the video packet loss rate (PLR), with TB = 0.90 and TP = 0.93, for the "Asterix" trace. Here, a threshold value of 0.90 means that the buffer length threshold is 90% of the buffer size (in packets).

[Figure 4-6 plots the FER against the PLR (1% to 5%) for the DT and FDDT schemes.]

Figure 4-6. Difference in FER between FDDT and DT

In Figure 4-6, there is a large difference in the FER between FDDT and DT. PLRs of 1%, 2%, 3%, 4%, and 5% correspond to FERs of 1.0%, 5.6%, 9.2%, 12.1% and 15.2% in FDDT, and to 10.8%, 18.8%, 26.2%, 41.8% and 51.2% in DT. Normally, FDDT admits packets belonging to completely correct video frames and discards packets from partially corrupted ones. Conversely, DT discards packets arbitrarily, spreading packet loss and yielding a large FER because each lost packet may belong to a different frame. A lost packet could belong to an I- or P-frame, and consequently a small packet loss would affect a large number of consecutive frames due to error propagation. FER correlates with users' perception of the video, and therefore DT results in greater video quality degradation than FDDT under an equal PLR. On the other hand, upon examining the PLR at an equal FER (e.g. 10%), it can be seen that the PLR in FDDT rises to approximately 3.0%, in comparison to only 1% in the DT scheme. This result shows that the FDDT scheme increases packet loss tolerance.

In addition to frame-level discarding, preventative priority packet dropping also contributes to a decreased FER in FDDT. Figure 4-7 compares the I- and P-packet loss rates (PLR(I) and PLR(P)) for FDDT and DT, respectively. In Figure 4-7, PLR(I) and PLR(P) are significantly reduced in FDDT. Also, I-packets are not dropped until the PLR reaches about 5%. These results can be interpreted as follows: FDDT uses a preventative strategy to discard less important packets before the buffer is full, in order to protect more important packets (I- and P-packets) from being discarded. Since the loss of I- and P-packets is reduced, there is less error propagation between the frames, leading to a decreased FER compared to DT.

[Figure 4-7 plots PLR(I) and PLR(P) against the overall PLR for the two schemes, in panels (a) and (b).]

Figure 4-7. Comparison of PLR(I) and PLR(P) between FDDT and DT

B. Network Efficiency

Figure 4-8 shows the ET as a function of PLR, with TB = 0.90 and TP = 0.93, for the "Asterix" trace.

Figure 4-8. Difference in ET between FDDT and DT
In Figure 4-8, for a PLR of 1% to 5%, FDDT delivers a substantially larger ET than DT. For instance, at PLR = 1%, FDDT delivers 11% more useful video data than DT. FDDT tries to deliver only correct video frames and thereby achieves much better network efficiency. Experiments using the "Talk" and "Soccer" traces exhibit similar trends to those presented above; the results are omitted here. Overall, the above simulation experiments demonstrate that the FDDT scheme yields better video quality and network efficiency than the DT scheme.

4.3.1.2 Sensitivity of FDDT to Parameters and Traffic Patterns

A. Impact of Threshold

Table 4-1 shows the performance of FDDT and the network utilization for the "Asterix" trace. Network Utilization (Ut) is measured by the ratio of the average buffer occupancy to the size of the buffer.

Table 4-1. FDDT performance with different thresholds

  Case   Thresholds (TB, TP)   Ut (%)   FER (%)   ET (%)   PLR(I) (%)   PLR(P) (%)
  1      0.90, 0.90            73       14.9      89.2     0.1          2.3
         0.95, 0.95            81       19.2      77.5     2.3          1.4
  2      0.90, 0.93            75       15.2      87.9     0.3          1.8
         0.90, 0.97            85       18.9      77.4     1.1          1.3
  3      0.90, 1.00            80       19.3      75.8     1.5          1.0
         0.95, 1.00            87       41.1      43.1     3.9          3.8
  DT     1.00, 1.00            92       51.2      25.2     4.9          4.9

In Table 4-1, for case 1, TB and TP are both changed. The results show that as the thresholds are increased, Ut and FER increase but ET decreases. Also, when the threshold values increase, PLR(I) increases greatly, while PLR(P) decreases only slightly. This occurs because the probability of buffer overflow increases considerably for larger threshold values, especially when large bursts of I-frames arrive. Consequently, more packets from larger frames (i.e., I-frames) are discarded than from smaller ones, since FDDT typically only accepts packets from non-corrupted frames. On the other hand, TP dominates PLR(P): a larger TP reduces the number of dropped P-packets. Since error propagation caused by I-packet loss is more severe than that caused by P-packet loss, the FER increases with the threshold values, while the ET decreases. In case 2, TB is fixed but TP is changed, leading to similar results. For case 3, TP is fixed at 1.0 but TB is changed. As the threshold TB is increased, Ut and FER increase but ET decreases. Furthermore, the FER in case 3 is typically higher than in cases 1 and 2, and the ET is lower. In this scenario, simply discarding the B-packets does not generate enough available buffer space to queue I- and P-packets, which causes an increase in PLR(I) and PLR(P) and consequently a larger FER and a lower ET.

FDDT produces a maximum FER of approximately 41% and a minimum ET of about 43% in the above three cases, whereas DT produces a maximum FER of about 51% and a minimum ET of approximately 25%. FDDT can also maintain a reasonably high Ut, normally more than 80%, except when TB, TP (or both) are set very low. For example, for TB = 0.90 and TP = 0.90, FDDT utilizes 73% of the buffer. In order to improve network utilization, the threshold values need to be increased; however, when threshold values are set high, the system operates with high buffer occupancy most of the time. As a result, the system behaves in a manner that is closer and closer to DT, and it loses the advantages of FDDT.
For instance, in case 3, as TB increases from 0.90 to 0.95, Ut improves from 80% to 87%, but the FER increases from 19.3% to 41.1% and approaches 51.2%, the FER in DT. When FDDT is deployed in real network scenarios, these observations imply that there must be a trade-off between network utilization and video quality. Most likely, FDDT would discard only B-packets when the router is slightly congested, as in case 3, and would discard B- and P-packets for moderately and heavily congested routers, as in cases 1 and 2, respectively.

B. Impact of Traffic Distribution

Table 4-2 shows the performance of FDDT with different input videos for TB = 0.90 and TP = 0.93.

Table 4-2. FDDT performance with diverse videos

  Scheme   Video Trace   FER (%)   ET (%)   PLR(I) (%)   PLR(P) (%)
  FDDT     Talk          18.1      84.2     0.7          2.1
           Asterix       15.2      87.9     0.3          1.8
           Soccer        14.1      89.1     0.1          1.7
  DT       Talk          45.9      38.3     4.9          4.8
           Asterix       51.2      25.2     4.9          4.9
           Soccer        52.8      10.9     5.0          4.7

Table 4-2 shows that for FDDT, when the activity level increases from "Talk" to "Asterix" and finally to "Soccer", the FER decreases and the ET increases. This is because the higher the video activity, the larger the number of P- and B-packets in the video (see Figure 4-5). The ability to prevent congestion by discarding less important video packets early is more significant for high-motion video than for low-motion video. Therefore, a lower PLR(I) and PLR(P) are exhibited in high-motion video, which correspondingly results in a lower FER and a higher ET. For all of the traces, FDDT also produces a lower FER and a higher ET than DT, although there is a slight relative difference in FDDT performance among the three traces. The results suggest that small threshold values should be set for delivering low-motion video, so that FDDT discards more B-packets and allows a larger number of P- and I-packets to get through. This decreases the FER and increases the ET, which leads to improved FDDT performance. When these specific findings are extrapolated, it is clear that FDDT works well with videos consisting of large numbers of P- and B-packets, and that videos with smaller numbers of P- and B-packets should receive smaller threshold values. The number of P- and B-packets depends not only on the activity level but also on the GOP pattern and the packetization method of the videos. GOP patterns determine the frequency of occurrence of B-frames: for example, in a video with a GOP pattern of IPBBPBBPBB, the frequency of B-frame occurrence is 60%, whereas in a video with a GOP pattern of IPBB it is 50%. Packetization methods, on the other hand, determine the number of packets per video frame.

4.3.2 The Multiple Video Sources Case

This section shows that the LBS_FPD scheme is effective and robust, and compares the performance of LBS_FPD with that of the conventional tail-drop scheme (DT) in terms of the FER, FI, and ET. The first set of experiments uses six videos, each an "Asterix" trace with a packet loss constraint of 5%. The second set of experiments studies the impact of the thresholds, while the third examines the effect of the video sources. The results are presented in Sections 4.3.2.1 to 4.3.2.3.

4.3.2.1 Effectiveness of LBS_FPD

Figure 4-9 compares the performance of the LBS_FPD scheme with the performance of the DT scheme. HIGH is set at 0.9, and LOW is initially set at 0.8.
The threshold value of 0.90 indicates that the buffer length threshold is 90% of the buffer size (in packets).

[Figure 4-9 plots, for the six videos, (a) the FER and (b) the FI relative to the fairness line, together with (c) the overall ET of the two schemes (approximately 0.86 and 0.39), for DT and for LBS_FPD with HIGH = 0.9 and LOW = 0.8.]

Figure 4-9. Performance comparison between LBS_FPD and DT

As seen in Figure 4-9, the values of FER observed with the LBS_FPD scheme are much lower than those observed for the DT scheme. The LBS_FPD scheme limits packet losses to the less significant data and eliminates the transmission of partially correct frames. The random packet drop in the DT scheme spreads packet losses over many frames, increasing the FER. The decreased FER in the LBS_FPD scheme significantly improves perceived video quality.

The FIs of five videos are below or close to the fairness line in the LBS_FPD scheme, meaning that a large percentage of streams (five out of six) receive service with acceptable packet loss. Each point on the fairness line stands for a value of FI equal to one, meaning that exactly the desired loss ratio is achieved. Thus, due to the loss-based buffer sharing mechanism built into LBS_FPD, the scheme almost achieves perfectly fair service. In this scheme, the buffer usage among video streams is fairly distributed and is inversely proportional to the loss constraints; losses among the streams are equitably distributed and match their individual loss tolerances. DT lacks a buffer allocation mechanism for the contending streams, so in the DT scheme higher and more variable FI values appear, resulting in service that is less fair. 'Greedy' streams may use more buffer space than the weighted average, forcing other streams to use less buffer space. As a result, these 'greedy' streams exhibit lower loss ratios than they should, while other streams exhibit higher losses. Therefore, the use of DT leads to a random loss distribution among video streams.

The values of ET in the LBS_FPD scheme are considerably higher than those in the DT scheme because the LBS_FPD scheme prevents the residual part of damaged frames from being transmitted, avoiding a waste of network resources. By reducing the transmission of useless data, the LBS_FPD scheme improves network efficiency.

4.3.2.2 Impact of Threshold on LBS_FPD

Figures 4-10 and 4-11 show the performance results using different thresholds, given a number of video sources from 2 to 10. Each video source is an "Asterix" trace with a packet loss constraint of 5%. At the three threshold settings tested, the FER values in the LBS_FPD scheme are considerably lower than those in the DT scheme, while the ET values are considerably higher (Figure 4-10 (a) and (c)). Figure 4-10 (a) also shows that when HIGH is set to 0.9, the FERs are lower at the threshold LOW=0.7 than at LOW=0.8, and when LOW is set to 0.7, the FERs are lower at the threshold HIGH=0.8 than at HIGH=0.9. This indicates that the LBS_FPD scheme works most effectively at small threshold values (Figure 4-10 (c)). With a lower FER and a higher ET, the LBS_FPD scheme significantly improves perceived video quality and network efficiency. The FI curves depicted in Figure 4-10 show that the FI is rather insensitive to the threshold values tested.
[Figure 4-10 plots, for the six videos, (a) the FER and (b) the FI for DT and for LBS_FPD at three threshold settings (HIGH = 0.9/LOW = 0.8, HIGH = 0.9/LOW = 0.7, and HIGH = 0.8/LOW = 0.7), together with (c) the overall ET of each case.]

Figure 4-10. Impact of threshold settings on LBS_FPD with N=6

It is also important to note that as the number of video streams increases from 2 to 10, the difference in the ET values between the two schemes narrows at LOW = 0.8 but widens at LOW = 0.7 when HIGH = 0.9 (Figure 4-11). Using DT, the buffer becomes more congested as the number of video streams increases. Losses spread over a large number of frames decrease, while lengthier contiguous losses increase, reducing the total number of discarded frames. This increases ET and consequently decreases the difference in ET between the DT and LBS_FPD schemes. When using the LBS_FPD scheme, the number of B- or P-frames that can be discarded increases when the threshold LOW decreases from 0.8 to 0.7, and consequently few P- and I-frame packets are dropped. The FER and ET advantages of the LBS_FPD scheme become more pronounced, and when the number of video streams increases, the difference in ET between LBS_FPD with LOW = 0.7 and DT increases. Similarly, as the number of video streams increases, the difference in the ET values between the two schemes narrows at HIGH = 0.9 but widens at HIGH = 0.8, all under LOW = 0.7, as indicated in Figure 4-11.

[Figure 4-11 plots the overall ET against the number of videos (2 to 10) for DT and for the three LBS_FPD threshold settings.]

Figure 4-11. Impact of threshold settings on LBS_FPD with different numbers of videos

4.3.2.3 Effect of Diverse Video Sources

This section outlines the effect of the number of video sources, the traffic patterns, and the loss requirements of video sources on the performance of LBS_FPD. HIGH is set at 0.9, and LOW is initially set at 0.8.

A. Number of Video Sources

Figures 4-12 to 4-14 show the effects when the number of video sources (N) varies. Each video is an "Asterix" trace with a packet loss constraint of 5%.

[Figure 4-12 plots the per-video FER for DT and LBS_FPD (HIGH = 0.9, LOW = 0.8) in five panels, for N = 2, 4, 6, 8 and 10 videos.]

Figure 4-12. Difference in FER between LBS_FPD and DT, with varying number of videos

[Figure 4-13 plots the per-video FI for the two schemes against the fairness line, in five panels for N = 2 to 10.]

Figure 4-13. Difference in FI between LBS_FPD and DT, with varying number of videos
[Figure 4-14 plots the overall ET against the number of videos (2 to 10) for DT and LBS_FPD (HIGH = 0.9, LOW = 0.8).]

Figure 4-14. Difference in ET between LBS_FPD and DT, with varying number of videos

These figures show that the LBS_FPD scheme performs better than the DT scheme, irrespective of the number of video sources. Compared to the DT scheme, the LBS_FPD scheme provides a lower FER, a higher ET, and an FI curve that is flatter and closer to the fairness line. When the number of videos increases, the difference in the FER between the two schemes slightly decreases, as does the difference in the ET. This is because as the number of videos increases, DT decreases the losses that are spread over a large number of frames and increases lengthier contiguous losses. This reduces the total number of discarded frames, decreasing the FER and increasing the ET. Using the LBS_FPD scheme, the packet loss ratio increases as the number of videos increases, and the I- and P-packets experience more loss. This increases the FER and decreases the ET; consequently, the difference in FER and ET between DT and LBS_FPD is reduced. Using LBS_FPD, the FI values are close to the fairness line, but in the DT scenario, when the number of videos increases, there is a large fluctuation and deviation from the fairness line. This suggests that LBS_FPD adjusts the loss rate between videos proficiently even when the router is highly congested, while the arbitrary discard feature of DT becomes more pronounced when the congestion level increases.

B. Diversified Video Sources

Figure 4-15 shows the results when different types of videos are used. Videos 1-3 are sports sources called "Soccer", and videos 4-6 are news sources called "Talk", all with the same packet loss constraint of 5%. The figure shows that the LBS_FPD scheme outperforms DT when videos have diverse levels of motion, because LBS_FPD has a low FER, a lower FI, and a high ET.

[Figure 4-15 plots, for the six videos, (a) the FER, (b) the FI, and (c) the overall ET of the two schemes (approximately 0.88 and 0.37), for DT and LBS_FPD (HIGH = 0.9, LOW = 0.8).]

Figure 4-15. Impact of diverse video sources on LBS_FPD

C. Diverse QoS Requirements

Figure 4-16 shows the results when videos with diverse loss requirements are used. Videos 1-3 are sports sources called "Soccer", and videos 4-6 are news sources called "Talk", with packet loss constraints of 3% and 6%, respectively.

[Figure 4-16 plots, for the six videos, (a) the FER, (b) the FI, and (c) the overall ET of the two schemes (approximately 0.79 and 0.37), for DT and LBS_FPD (HIGH = 0.9, LOW = 0.8).]

Figure 4-16. Impact of diverse QoS requirements on LBS_FPD

In Figure 4-16 (a) and (c), the LBS_FPD scheme yields a lower FER and a higher ET than the DT scheme. In Figure 4-16 (b), the level of service fairness provided by the LBS_FPD scheme is also much higher than that provided by the DT scheme. In most cases, the FI in the LBS_FPD scheme is closer to one, because the LBS_FPD scheme makes the weight in the buffer sharing mechanism inversely proportional to the acceptable packet loss ratio.
Therefore, when the L B S _ F P D scheme is applied to non-homogeneous videos, buffer sharing for videos with a higher acceptable packet loss ratio (videos 4-6) is half of that for streams with a lower acceptable packet loss ratio (videos 1-3). In turn, 63 every stream achieves loss performance that is commensurate with expectations. This results in an equitable loss distribution among video streams with different loss tolerances. Conversely, the D T scheme does not have a mechanism to adjust buffer allocation in equilibrium with different loss tolerances, and losses are arbitrarily distributed among the video streams. Thus, the streams with the same packet loss tolerance, streams 1, 2 and 3, exhibit significantly different values of FI, which means that unfair service is provided to both streams. Most values of FI in the D T scheme are greater than one. In general, the D T scheme only provides satisfactory loss performance service for only a small percentage of the video streams. A large fraction of streams cannot meet their loss constraints, and thus, the D T scheme does not provide fairness of service. 4.4 Chapter Summary The information presented in this chapter demonstrates that buffer management techniques can deliver a high quality of video, provide a fair level of service to competing videos, and improve network efficiency. The chapter provides an attractive alternative in the design of buffer management for video communication. This alternative focuses on achieving the user-expected quality of video rather than only reducing the video packet loss ratio. It increases maximum loss tolerance for a desired level of video quality, providing better quality at equal loss ratios. The results also suggest that in a 64 situation where the activity level of a video and the requirements are known, the optimal threshold values can be chosen so that minimum user discontent can be achieved. The practical use of F D D T and L B S _ F P D raises several issues: i) Implementation The F D D T and L B S _ F P D schemes exhibit clear advantages in video streaming applications at the small price of additional processing. Comparing the performance-complexity trade-off between the F D D T and L B S _ F P D schemes and two typical network-level packet-dropping methods in an IP-based network, D T and R E D , the F D D T and L B S _ F P D schemes exhibit superior performance in video transportation. The D T scheme is easy to implement, but it may seriously degrade video quality. The R E D scheme reduces packet loss and is a good scheme for general-purpose loss control; however, it is not effective in networked video because it does not have a mechanism to protect important video packets. Also, R E D requires the assistance of the sender, and thus it is not able to support the UDP-based transmissions that are most appropriate for the delivery of real-time video. In contrast, the F D D T and L B S _ F P D schemes perform selective discard to different frames of video packets, providing much better video quality than D T or R E D . The complexity of the F D D T and L B S _ F P D schemes is greater than the complexity of the D T scheme, but less than the complexity of the R E D scheme, because F D D T and L B S _ F P D only check frame and packet type, without other complex computations and without the requirement of a direct intervention by the sender. ii) Deployment F D D T and L B S _ F P D rely on video-specific functions inside the network. 
With the help of A N , it is feasible to implement these techniques in real networks, because 65 AN-based techniques make networks recognize application and process application-specific procedures, including video-specific processing. It seems clear that application-specific control within the network is almost inevitable, and the F D D T and L B S _ F P D schemes are good examples of this approach, iii) Video quality behaviour Previous research on video watchability experiments provides important results. If 3 to 7 consecutive frames are in error, this causes a shift in motion, and i f 8 or more consecutive frames are in error, there may be a large discontinuity or even a picture that is completely different from the original [41, 45]. The result is unacceptable video quality. Also, a frame rate of 12 frames per second or more flows smoothly for most audiences and is widely accepted [42-45]. It is more visually pleasing to watch a video with no frames in error at a lower frame rate than to watch a video with a large number of consecutive frames in error at a higher frame rate. This is why frame-level packet drop provides better video quality than occasional packet drop. Frame-level drop reduces the F E R but may also reduce the frame rate at the destination end, while random packet loss produces a large number of consecutive frames in error. A video frame often consists of several packets, and losing one packet renders the whole frame useless [7]. In such a case, random packet loss causes each individual frame to be in error. If the loss occurs in an I- or P-frame, severe error propagation wi l l occur, leading to a large number of corrupted consecutive frames. Error reconstruction techniques [48] further prove that frame-level drop is a better approach. One error reconstruction technique replaces a lost frame with its previous frame: a simple, yet efficient technique to recover a reduced frame rate. In contrast, previous error concealment techniques focused on reconstructing small 66 corrupted areas in a frame. These techniques were more difficult and produced a less pleasing visual effect due to complex computations [27]. The post-processing for recovering lost data is simpler for frame-level drop than for random packet drop; hence, F D D T and L B S _ F P D perform frame-level drop, leading to a better video presentation quality. 67 Chapter 5 Packet Scheduling This chapter reviews existing packet scheduling techniques, describes our proposed schemes, and evaluates their performance before outlining conclusions. 5.1 Background In addition to buffer management, packet scheduling is another important network-level QoS control technique. This approach is typically used to provide delay guarantees by controlling the servicing time and the servicing order of the packets [49-50]. A number of packet scheduling schemes have been proposed in the literature [8]. Among them are Priority Queuing, Weighted Round-Robin, Weighted Fair Queuing and its variants, Virtual Clock, and Earliest-Due-Date and its variants. A l l of these techniques provide an upper bound on end-to-end delay, while Jitter Earliest-Due-Date provides an upper bound on delay jitter. A comprehensive overview of these schemes is given in 68 [49]. However, in the open literature, loss assurance has not been addressed in the design of these scheduling schemes. The goal of this study is to show that packet scheduling is a viable and attractive option for providing loss guarantees. 
The research departs from existing approaches by suggesting that if queued packets can be appropriately scheduled to exit the router, making buffer space available before new packets arrive, more upcoming packets will be admitted into the router, thereby reducing packet loss due to buffer overflow. Two MPEG-video oriented packet scheduling schemes, called Multilevel Priority Adaptive Packet Scheduling (MPAPS) [18] and Dynamic Multi-priority Packet Scheduling (DMPS) [20], are proposed in this chapter.

5.2 Proposed Packet Scheduling Schemes

5.2.1 Single Class - Multilevel Priority Adaptive Packet Scheduling (MPAPS)

The design of the MPAPS scheme is driven by two factors: inter- and intra-stream loss differences. In the inter-stream case, loss demands differ among different video streams; these demands depend on users' choices in the trade-off between quality and fees and can be set by the network service provider. In the intra-stream case, loss constraints vary widely between different portions of a video stream. For example, as described in Section 4.1, an MPEG video contains three types of frames or VOPs: I-frames or I-VOPs, coded independently; P-frames or P-VOPs, coded with reference to leading I or P frames or VOPs; and B-frames or B-VOPs, coded with reference to leading or trailing I or P frames or VOPs. When an error occurs in an I or P frame or VOP, it propagates to the other frames or VOPs and causes errors in consecutive frames or VOPs. An error in an I-frame or I-VOP can corrupt a whole GOP or GOV, equivalent to the corruption of about half a second of video. The loss constraints of I-packets are therefore more stringent than those of P-packets, which in turn are more stringent than those of B-packets. A scheduling scheme must consider both inter- and intra-stream loss differences in order to provide video streams with fair and efficient service and to maximize the percentage of video streams with satisfactory loss performance. Such a scheduling scheme is the focus of this study.

The MPAPS scheme depicted in Figure 5-1 uses two-level scheduling: the Group Level and the Stream Level. Group-level scheduling maps the video streams into groups with different transmission priorities according to a tuple {InType, SL}. InType represents the type of upcoming packets, and SL is a measure of the service level that a video receives from the network. The SL is defined as follows:

   SL = PLR(r) / PLR(s)                                 (5-1)

where PLR(r) is the actual packet loss ratio and PLR(s) is the maximum allowable packet loss ratio, both defined for each individual video. Transmission precedence is given to the groups holding stream queue(s) with upcoming I-packets, followed by those with upcoming P-packets, and finally by those with upcoming B-packets. If the groups hold stream queues with the same type of upcoming packets, they are ordered with SL>1 first, followed by SL=1, and then SL<1. The drop probability of I-packets thus becomes less than that of P-packets, which in turn becomes less than that of B-packets. Moreover, the videos that have a poorer loss performance than expected receive expedited and more service, while videos that have
When the turn of a particular group comes, Stream Level scheduling is performed within a group. The scheduling system directs the packets for transmission in accordance with the instantaneous needs of a specific stream, which is determined by the Adaptive Priority Index (API). The A P I is defined as follows: API(i) = SL(i)xQlen(i) (5-2) where Qlen (i) is the length of an input buffer holding stream i . The header packets of the stream with the highest A P I get transmitted (de-queued) first, because a large value of A P I indicates that the stream queue has experienced a large loss, is facing an emerging loss, or both. In order to pull an unexpectedly increased packet loss ratio back to its specified value and avoid emerging packet losses, queued packets from that stream must be transmitted. Every time a stream in a group is chosen, a frame of packets at the front of the queue is transmitted. Figure 5-1. The M P A P S scheme 71 Figure 5-2 further illustrates the operations of the M P A P S scheme. Consider three streams sharing an output link. Part (1) shows a snapshot of the scheduling at the beginning of a round. Part (2) illustrates the state of the M P A P S scheduler at the end of a round. During the round the following events take place: i) the scheduler places streams Si and S2 into the first servicing group and places stream S3 into the second servicing group, and ii) the calculation of A P I indicates that Si needs to be transmitted out of the router the most urgently, and the scheduler transmits a frame of packets from S i . The type of upcoming packets is known several frames before they arrive. U p c o m i n g P a c k e t s i S2 I'M" | S3 v [ BB | PP] U p c o m i n g P a c k e t s fj : f r a m e Q u e u e d P a c k e t s Q u e u e d P a c k e t s (1) B e g i n n i n g of r o u n d k (2) E n d of r o u n d k Figure 5-2. A n example of how the M P A P S works 5.2.2 Multi-class - Dynamic Multi-priority Packet Scheduling (DMPS) The D M P S scheme is an extension of M P A P S that is applicable in a multi-class environment. D M P S also follows two-level scheduling, as performed in a single-class-based M P A P S , but in D M P S , group level scheduling maps video streams into groups 72 with different transmission priorities according to a triplet {InType, S L , C L } rather than a tuple {InType, SL} as occurs in M P A P S . The C L stands for different classes to which different videos belong. Videos with similar service requirements are assigned into a class with the same value of C L , and different classes of video have different values of C L . This parameter considers the class of service. In D M P S , the servicing order and the servicing time between classes is neither a strict priority nor weighted fair-like setting. Instead, it changes dynamically because it is associated with the other two parameters in the triplet. The D M P S scheme dynamically maps video streams into groups with different transmission priorities according to the QoS triplet {InType, S L , CL}(Figure 5-3). InType and S L are the same as in M P A P S . C L denotes a level of service priority for a group of traffic that has similar packet loss constraints. Transmission precedence is given to the groups holding stream queue(s) with upcoming I packets, followed by those holding stream queues with upcoming P packets, and finally, followed by those holding stream queues with upcoming B packets. 
If the groups hold stream queues with the same type of upcoming packets, they are ordered with SL>1 first, followed by SL=1, and then SL<1. Among a group of videos with the same type of upcoming packets and similar S L values such as SL>1, transmission priority is assigned to the high-priority class of video traffic. In this way, the drop probability of I packets wi l l be lower than that of P packets, which in turn wi l l be less than that of B packets. Moreover, the videos that have an unsatisfactory loss performance receive expedited and more service, whereas videos that have satisfactory or better loss performance receive slow and less service. Therefore, there is less over-servicing or under-servicing of videos, and all of the videos are transmitted with acceptable loss targets. Moreover, more important classes of videos wi l l 73 receive faster transmission and thus gain more protection against loss than less important videos. The D M P S scheme then directs the packets for transmission in accordance with the instantaneous needs of a specific stream with a group whose turn is up. This is determined by the Adaptive Priority Index (API), the product of SL(i) and the length of an input buffer holding stream i , Qlen(i). The header packets of the stream with the highest A P I get transmitted (de-queued) first, because a large A P I value indicates that the stream queue has experienced a larger loss, is facing an emerging loss, or both. Transmission of queued packets from that stream becomes urgent so that an unexpectedly increased packet loss ratio can be pulled back to its specified value and so that emerging packet losses can be avoided. Every time a stream in a group is chosen, a frame of packets at the front of the queue is transmitted. Video 1 Video 2 Video k Scheduling among groups Scheduling Among streams Figure 5-3. The D M P S scheme 74 5.3 Performance Evaluation of Packet Scheduling Simulation experiments were conducted using a discrete-event simulator implemented in a C program to study the effectiveness of the M P A P S and D M P S schemes. In the simulation, the real video data from several M P E G video traces were used to test M P A P S and D M P S . These traces represent a sequence of the frame sizes of videos, where the size of each frame is presented in bits and bytes, respectively. Each video trace runs for about half an hour. The simulation lasts for 40 minutes. The frame rate is 25 frames per second. Sources are randomly started over a 60-second interval and the measurements on Qlen and packet loss ratios for each video stream were taken continuously during the activation period of each video stream. The results show the final values of the average of different runs with a randomly selected starting sequence of each video stream. Figures 5-4 and 5-5 illustrate the system and the network models used in evaluation. In Figure 5-4, the Network Simulation module simulates the behaviour of the M P A P S scheduler, the D M P S scheduler, and the F C F S scheduler at a router, respectively. Here, a F C F S scheduling scheme is a conventional scheduling scheme: packets are stored in an input queue and are forwarded in the order of arrival. In all of the schedulers, a packet is discarded when the input buffer is full upon arrival. The Packetization module first encapsulates a M P E G video stream into the R T P packets, then U D P packets, and finally IP packets, with 1500 bytes as the maximum packet size. The IP packets are then sent to the Network Simulation module. 
The Analysis module takes the original video file and the output of the Network Simulation module as input. The 75 Analysis module then examines FERs for each individual video and to examine E T for all transmitted video. In the simulation, the capacity of output link of a router was 100Mbps. The buffer size was 150KB. The congestion level is varied by changing the number of background sources. Background traffic also consists of trace-driven M P E G - 1 sources [52]. Figure 5-5 shows that all tested videos are sent to the same destination over a single-hop system. The simulation results are presented in Sections 5.3.1 and 5.3.2. Figure 5-4. Simulation diagram for M P A P S and D M P S i Video Traffic Sink 5 Video ( Video Other traffic Figure 5-5. Network model for simulation of M P A P S and D M P S 76 5.3.1 Performance Evaluation of MPAPS In this section, the M P A P S performance is evaluated using M P E G - 1 video [52]. The effectiveness of M P A P S is tested in a scenario where six copies of a cartoon trace called "Asterix", with a packet loss constraint of 5%, were used as video sources. Then, the effect of video sources on performance is studied. The results are presented in Sections 5.3.1.1 and 5.3.1.2, respectively. 5.3.1.1 Effectiveness of MPAPS Figure 5-6 shows the relative performance of the M P A P S and F C F S schemes for the "Asterix" trace. As seen from Figure 5-6(a) and (c), M P A P S produces a lower F E R for most videos and a higher E T than F C F S . This means that compared to F C F S , M P A P S achieves a significant improvement in video quality and network efficiency. F C F S treats all the packets in the same way, regardless of the packet type, and performs a random drop, while M P A P S transmits packets from queues with upcoming I or P- packets earlier and more frequently. Therefore, occupied buffer spaces are released to accept upcoming I and P packets, and consequently, these packets experience less loss. This reduces error propagation, resulting in a lower F E R and high E T . Figure 5-6(b) shows that there are more F l values close to one in M P A P S than in F C F S . Moreover, for M P A P S , there is little difference in F l values between the six videos, leading to a flat F l curve. For F C F S , the F l curve is less even and exhibits stretches of abruptness. This means that for M P A P S the loss deviations from the loss target between videos are not significant. Conversely, these deviations differ significantly in F C F S . Observations indicate that M P A P S equally partitions servicing opportunities to all videos as requested, while F C F S arbitrarily offers 77 servicing opportunities between videos. Thus, compared to F C F S , M P A P S provides a higher level of service fairness between videos. When M P A P S is used, the fairness level improves for several reasons. Due to group-level priority scheduling determined by the S L , a stream queue whose loss is less than requested receives slower, reduced service, whereas a stream queue that has experienced a higher loss than requested receives expedited and increased service. Secondly, amongst the streams that have a similar loss performance due to stream-level priority scheduling dominated by the A P I , quicker and more frequent service is given to the most congested stream queues. Thus, the M P A P S scheme favors the adversely affected streams, bringing the unexpected increased packet loss in those streams back to the original specified values. 
It also controls aggressive streams by preventing them from transmitting too many packets. This adaptive control decreases packet loss ratios, increases the percentage of video streams meeting their desired loss constraints, and reduces the loss deviation from the target.

Figure 5-6. Performance comparison between MPAPS and FCFS for the "Asterix" trace: (a) FER per video, (b) FI per video, (c) ET

5.3.1.2 Effect of Sources on MPAPS

This section investigates the influence of the video source characteristics, the number of video sources, and the variety of loss requirements of the videos on the performance of MPAPS.

A. Diversified Videos

Figures 5-7 and 5-8 show the results for the "Soccer" and "Talk" traces, respectively. The results refer to two cases. In Case 1, six videos are delivered and each of them is the "Soccer" trace, with a packet loss constraint of 5%. In Case 2, six videos are delivered and each of them is the "Talk" trace, with a packet loss constraint of 5%.

Figure 5-7. Performance comparison between MPAPS and FCFS for the "Soccer" trace: (a) FER per video, (b) FI per video, (c) ET

Figure 5-8. Performance comparison between MPAPS and FCFS for the "Talk" trace: (a) FER per video, (b) FI per video, (c) ET

The graphs of the FER, FI, and ET are all similar in appearance to those for the "Asterix" trace shown in Figure 5-6. For all traces, MPAPS decreases the FER and increases the ET, resulting in a dramatic improvement in video quality and network efficiency compared to FCFS. Moreover, compared to FCFS, MPAPS ensures that a larger portion of the videos have packet loss ratios close to their loss constraints, indicating that MPAPS provides a higher level of service fairness than FCFS. These results imply that MPAPS works well with all videos, even if they have different activity levels.

B. Number of Videos

Figures 5-9 to 5-12 show the effect of varying the number of video sources (N) in the range 10 to 30 on the performance of MPAPS. All of the videos tested are the "Asterix" trace with a packet loss constraint of 5%.

Figure 5-9. Difference in FER between MPAPS and FCFS with a varying number of videos: (a) N=10, (b) N=20, (c) N=30

Figure 5-10. Difference in ET between MPAPS and FCFS with a varying number of videos

Figures 5-9 and 5-10 show that, compared to FCFS, MPAPS produces a much lower FER for most videos and a higher ET. A low FER represents high visual quality of video, and a high ET means high network efficiency. The results demonstrate that for all of the N values tested, the MPAPS scheme offers better video quality and higher network efficiency than the FCFS scheme.
This better quality and efficiency occurs because MPAPS decreases the I- and P-packet loss rates, thus reducing error propagation; consequently, lower FER and higher ET values appear in MPAPS. For a small number of videos, the difference in FER and ET between the schemes is large. On average, when the number of videos increases, the difference is slightly reduced. This may be because the packet loss with a large number of videos is already large enough to induce a large FER even in MPAPS. Also, for MPAPS, the performance degradation with an increasing number of videos is roughly evenly distributed between videos, whereas for FCFS a largely uneven spread of performance degradation is visible, with a maximum FER of over 90% and a minimum FER of less than 2% (Figure 5-9(c)).

Figure 5-11. Comparison of Sinter between MPAPS and FCFS with a varying number of videos

In Figure 5-11, the percentage of videos whose SL is equal to or less than one (denoted by Sinter) is approximately 10%, 20%, and 26% higher in MPAPS than in FCFS, for N=10, 20, and 30, respectively. Also, FCFS drops to 30% when thirty videos are delivered, while MPAPS maintains a level of over 50%. This is because both the API and the SL prevent overshooting by a particular stream. When a malfunctioning stream continuously increases its arrival rate, Qlen increases and the API increases; as a result, the stream queue may receive more service than expected. However, the SL decreases as Qlen increases, and thus the API decreases, reducing the probability of service and achieving a balance of servicing among videos. In addition, when an aggressive stream excessively pulls up its Qlen, its group precedence is reduced due to the decrease of the SL. Therefore, no streams are over-serviced or under-serviced, and MPAPS can provide the desired loss ratios quite exactly. In MPAPS, the percentage of videos that meet the loss constraint is very large.

Figure 5-12. Difference in FI between MPAPS and FCFS with a varying number of videos: (a) N=10, (b) N=20, (c) N=30

In Figure 5-12, the FI curves give more insight into what happens to each video when the total number of transmitted videos (N) varies. For most videos in MPAPS, the FI values deviate only slightly from the fairness line across all the N values tested, but in FCFS the FI values fluctuate arbitrarily and are very large for many videos, especially for a very large N. Each point on the fairness line stands for an FI value of one, meaning that exactly the desired loss ratio is achieved. As the router is heavily congested, the network resources are insufficient to support all the videos.
The MPAPS scheme can still adaptively change the servicing priority according to the SL and the API, and it distributes the degradation of the servicing level almost evenly over all the streams. Accordingly, packet loss deviations from the loss target are very small, resulting in a set of more uniform FI values. The Sinter and FI characteristics indicate that the MPAPS scheme is fair to all the videos, while FCFS is not.

C. Videos with Diverse Loss Requirements

Figures 5-13 to 5-16 depict the effect of varying loss requirements, with the number of videos (N) set to 10, 20, and 30. All of the videos tested are the "Asterix" trace. The first half of the videos has a packet loss constraint of 3%, while the others have a packet loss constraint of 6%.

Figure 5-13. Difference in FER between MPAPS and FCFS with diverse loss requirements: (a) N=10, (b) N=20, (c) N=30

Figure 5-14. Difference in ET between MPAPS and FCFS with diverse loss requirements

Figures 5-13 and 5-14 show that, compared to FCFS, MPAPS produces a significantly lower FER for most videos and a higher ET, because MPAPS selects specific stream queues to transmit packets, thereby freeing buffer space for upcoming I- and P-packets and improving the output of perceptually important information. FCFS, on the other hand, arbitrarily discards upcoming packets without protecting I- and P-packets. The lower FER and higher ET of MPAPS indicate higher video quality and better network efficiency.

Figure 5-15. Comparison of Sinter between MPAPS and FCFS with diverse loss requirements

Figure 5-16. Difference in FI between MPAPS and FCFS with diverse loss requirements: (a) N=10, (b) N=20, (c) N=30

In Figure 5-15, compared to FCFS, MPAPS delivers a much larger percentage of videos with satisfactory loss performance. In Figure 5-16, the FIs for most videos are small in MPAPS, but in FCFS the FIs fluctuate arbitrarily and are very large for many videos. Once again, the small FIs are the result of the adaptive mechanism in MPAPS.
The adaptation ensures that stream queues that are in danger of violating their loss requirements receive servicing sooner and more frequently, and they are therefore able to recover quickly from temporary congestion. Conversely, stream queues whose loss is lower than expected receive servicing that is delayed and less frequent. The actual loss of each stream is therefore controlled to nearly match the specified allowable values, leading to an increase in the percentage of streams that meet their loss constraints; MPAPS provides fair service to different video streams. These observations demonstrate that MPAPS performs better than FCFS when delivering videos with different loss constraints.

5.3.2 Performance Evaluation of DMPS

In this section, the performance results of the DMPS scheme are presented. Real video data from three MPEG-4 video traces were used: "Robin Hood", "Soccer", and "ARD Talk" [51]. The reason for using MPEG-4 video instead of MPEG-1 video is that the MPEG-4 video traces were available when the DMPS scheme was designed, whereas only MPEG-1 video traces could be found when the other schemes were developed. The emerging MPEG-4 standard is increasingly accepted and appears to be a promising open standard for Internet video, and the frame-dependency structure of MPEG-4 is the same as that of MPEG-1; it is this characteristic that is considered in the design of DMPS. The characteristics of the traces are tabulated in Table 5-1. The first set of experiments considers two classes of videos, high priority and low priority. The second set of experiments studies the effect of the choice of classes on performance. The results of the experiments are shown in Sections 5.3.2.1 and 5.3.2.2, respectively.

Table 5-1. MPEG-4 video statistics

  Trace        Description   Mean Frame Size (byte)   Frame Size Variance   GOP Pattern
  Robin Hood   Movie         4.6e+03                  5.3e+06               IPBBPBBPBBBB
  Soccer       Sports        5.5e+03                  5.3e+06               IPBBPBBPBBBB
  ARD Talk     News          2.7e+03                  3.0e+06               IPBBPBBPBBBB

5.3.2.1 Effectiveness of DMPS

Figures 5-17 to 5-19 and Table 5-2 show the relative performance of DMPS and FCFS when different numbers of videos were delivered. Half of the videos are the MPEG-4 cartoon trace "Robin Hood", with a packet loss constraint of 3%, and the others are the MPEG-4 sports trace "Soccer", with a packet loss constraint of 6%.

Figure 5-17. Difference in FER between DMPS and FCFS: (a) N=4, (b) N=8, (c) N=16, (d) N=24

Figure 5-18. Difference in ET between DMPS and FCFS (high- and low-priority classes, N = 4, 8, 16, 24)

As seen from Figures 5-17 and 5-18, compared to FCFS, DMPS produces a lower FER for most videos and a higher ET. The DMPS scheme achieves a significant improvement in video quality and network efficiency. This is because FCFS treats all packets in the same way, regardless of packet type, and thus performs a random drop.
On the other hand, DMPS protects the I and P packets from loss, thus decreasing the FER. This minimizes the reduction in video quality in the presence of loss and consequently improves the effective network throughput. In addition, the FER for the high-priority videos is always lower than that of the low-priority videos, and the same holds for the ET. The high-priority videos are the first half of the video sources, and the low-priority videos are the remaining sources. The difference grows significantly as the number of videos increases, because the DMPS scheme is designed to offer better performance to the high-priority class by giving it servicing priority. During a congestion period, the higher-priority class gets most of the servicing while the lower-priority class takes the remaining portion. Consequently, with an increase in the number of videos, a greater difference in FER appears between the high- and low-priority videos.

The FER values within the same class are almost equal, because DMPS partitions servicing opportunities equally among all videos in the same class, as requested. Specifically, for videos in the same class in DMPS, when a malfunctioning stream continuously increases its arrival rate, Qlen increases and the API increases; as a result, the stream queue might receive more servicing than expected. However, the SL decreases as Qlen increases, and thus the API decreases, reducing the probability of servicing, so a balance of servicing among videos in the same class is achieved. In addition, when an aggressive stream excessively pulls up its Qlen, its group precedence is reduced due to the decrease in the SL. Therefore, no streams are over-serviced or under-serviced, leading to a fair share of service between videos in the same class.

Figure 5-19. Difference in FI between DMPS and FCFS: (a) N=4, (b) N=8, (c) N=16, (d) N=24

In Figure 5-19, two important results are obtained. The figure shows that the FI values of most videos are small in DMPS, whereas in FCFS they fluctuate randomly and are very large for many videos. The FI values for the high-priority class are also lower than for the low-priority class, and a great difference appears when a large number of videos is delivered. Also, for all of the tested cases, the FI deviations from the fairness line are equally distributed within the same class, so DMPS provides a fair share of service between videos.

Table 5-2 presents two important results. Firstly, when DMPS is used, a higher percentage of video streams meet their individual loss constraints than when FCFS is used. This result is due to the adaptive mechanism in DMPS, which ensures that stream queues in danger of violating their loss requirements receive service sooner and more frequently than other stream queues and recover from temporary overloading more quickly. Conversely, stream queues whose loss is lower than expected receive delayed and less frequent service.
Therefore, the actual loss of each stream is controlled to nearly match the specified allowable values, leading to an increase in the percentage of streams that meet their individual loss expectations and providing a fair level of service among different video streams. Secondly, compared to the low-priority class, a higher percentage of videos in the high-priority class have satisfactory loss performance when DMPS is used. Quicker and more frequent servicing is given to the high-priority class of video traffic because DMPS includes a rule that assigns servicing priority to this class. Conversely, FCFS has no differentiated service mechanism and cannot provide a better service quality guarantee to more important classes of traffic.

Table 5-2. Comparison of Sinter between DMPS and FCFS

  Number of Videos   Sinter (%), DMPS   Sinter (%), FCFS
  4                  100.0              75.0
  8                  87.5               62.5
  12                 66.7               50.0
  16                 50.0               50.0
  20                 40.0               35.0
  24                 29.2               29.2

  Number of Videos   Sinter (%), DMPS              Sinter (%), FCFS
                     High-priority   Low-priority  High-priority   Low-priority
  4                  100.0           100.0         100.0           50.0
  8                  100.0           75.0          50.0            75.0
  12                 83.3            50.0          33.3            66.7
  16                 75.0            25.0          37.5            62.5
  20                 70.0            10.0          30.0            40.0
  24                 58.3            0.0           33.3            25.0

5.3.2.2 Impact of the Choice of Classes on DMPS

Tables 5-3 to 5-5 show the effect of interclass video allocation, interclass loss distribution, and the number of classes on performance, respectively. In Table 5-3, two classes of videos were considered. The high-priority videos are "Robin Hood" traces with a packet loss constraint of 3%, and the low-priority videos are "Soccer" traces with a packet loss constraint of 6%. The number of high-priority videos varies from 3 to 12, and the total number of videos is 24. As shown in this table, DMPS achieves a higher Sinter than FCFS, except when the number of high-priority videos is equal to 12. Also, the difference in Sinter between the high- and low-priority classes becomes more pronounced as the number of videos in the high-priority class increases from 3 to 9. When the number of videos in the high-priority class increases to 12, equal to 50% of the total number of videos, the Sinter of the low-priority class drops drastically to about zero. A similar observation holds for the Sinter difference between the schemes. This suggests that with an appropriate distribution of videos between classes, a large percentage of videos can achieve high QoS satisfaction. The cutoff point varies with the network conditions, but the behaviour is consistent when the total number of videos ranges from 8 to 16 (those results are omitted here). Beyond the cutoff, the total network resources are not sufficient to support an overly large amount of high-priority class traffic; as a result, no spare resources are available for the low-priority class, and the high-priority class also receives a corresponding service reduction.

Table 5-3. Impact of the distribution of videos between classes on DMPS

  Number of Videos (High, Low)   Sinter (%), DMPS   Sinter (%), FCFS
  3, 21                          58.3               29.2
  6, 18                          58.3               29.2
  9, 15                          37.5               29.2
  12, 12                         29.2               29.2

  Number of Videos (High, Low)   Sinter (%), DMPS              Sinter (%), FCFS
                                 High-priority   Low-priority  High-priority   Low-priority
  3, 21                          100.0           52.4          33.3            25.0
  6, 18                          100.0           44.4          33.3            25.0
  9, 15                          77.7            13.3          33.3            25.0
  12, 12                         58.3            0.0           33.3            25.0

In Table 5-4, two classes with varying loss constraints are investigated.
The high-priority videos are again the "Robin Hood" traces, now with a packet loss constraint ranging from 3% to 4.5%, and the low-priority videos are again the "Soccer" traces with a packet loss constraint of 6%. The number of videos in the high- and low-priority classes is fixed at 12 and 12, respectively. As seen in this table, in DMPS the Sinter of the high-priority class is higher than that of the low-priority class. Also, in DMPS, the overall Sinter and the Sinter of each individual class become significantly larger as the loss tolerance of the high-priority class increases. This is expected, since the high-priority class reduces its network resource usage due to its increased loss tolerance. As a result, both classes can use more resources, which leads to higher satisfaction with the loss performance. Consequently, Sinter in DMPS increases as the loss requirement of the high-priority class is relaxed.

Table 5-4. Impact of varying loss differentiation between classes on DMPS

  Packet Loss Constraint (High, Low)   Sinter (%), DMPS   Sinter (%), FCFS
  3.0%, 6.0%                           29.2               29.2
  3.5%, 6.0%                           41.7               29.2
  4.0%, 6.0%                           45.8               29.2
  4.5%, 6.0%                           58.3               29.2

  Packet Loss Constraint (High, Low)   Sinter (%), DMPS              Sinter (%), FCFS
                                       High-priority   Low-priority  High-priority   Low-priority
  3.0%, 6.0%                           58.3            0.0           33.3            25.0
  3.5%, 6.0%                           75.0            0.1           33.3            25.0
  4.0%, 6.0%                           75.0            16.7          33.3            25.0
  4.5%, 6.0%                           83.3            33.3          33.3            25.0

In Table 5-5, a varying number of classes is studied. Here, twenty-four cartoon traces of "Robin Hood" were used. As seen from Table 5-5, increasing the number of classes from 2 to 3 or 4 increases the number of videos whose loss expectations are met. This indicates that a proper selection of the number of service classes can increase users' satisfaction through efficient utilization of network resources. It can be inferred that if there are too few service classes and the loss requirements of users are diverse, a QoS violation may arise despite sufficient resources, or network underutilization can occur if flows with stringent and non-stringent loss requirements are assigned to the same high or low service class.

Table 5-5. Joint impact of the distribution of videos between classes and the number of classes on DMPS

                           Two classes       Three classes            Four classes
  Class                    High    Low       High   Medium   Low      High   Medium   Medium'   Low
  Number of videos         12      12        10     2        12       8      2        2         12
  Packet loss (%)          3.0     6.0       3.0    4.5      6.0      3.0    4.0      5.0       6.0
  Sinter per class (%)     66.7    0.0       80.0   100.0    0.1      100.0  100.0    100.0     16.7
  Overall Sinter (%)       29.2              45.8                     58.3

In summary, the distribution of videos between classes, the loss tolerance of each class, and the number of classes all affect resource utilization and, consequently, the level of service satisfaction.

5.4 Chapter Summary

The MPAPS and DMPS schemes perform better than the FCFS scheme in terms of the visual quality of each video, the provision of a fair level of service, and network efficiency, both for a single class and for multiple classes of video traffic under different traffic conditions. MPAPS first maps videos into groups with different transmission priorities based on the upcoming packet type and the current loss performance, while DMPS maps videos into groups with different transmission priorities based on the upcoming packet type, the current loss performance, and the class of each video. In both schemes, the specific transmission schedule for a stream is then set adaptively to respond instantaneously to its servicing needs.
The MPAPS scheme provides a fair level of service between videos, while the DMPS scheme prioritizes traffic, ensuring that critical needs are met while high-priority traffic does not completely starve low-priority traffic. Furthermore, DMPS provides service fairness within a class. Both schemes can support a variety of compression schemes (e.g., MPEG-1 and MPEG-4).

This work develops new roles for packet scheduling, which was originally created to resolve delay issues. A large number of traffic scheduling mechanisms that try to provide delay guarantees have been proposed and analyzed in the recent literature; however, to the best of our knowledge, little effort has been directed toward loss-aware scheduling design. This research explores the effectiveness of using a scheduling scheme in switches (or routers) to provide loss guarantees. Several implementation and deployment issues are related to the use of the schemes.

i) Delay

The MPAPS and DMPS schemes are loss-aware scheduling designs created to support loss requirements. A video streaming application can tolerate a certain delay, but it needs relatively good loss performance. Delay-aware scheduling schemes already exist widely [49-50]. When the MPAPS and DMPS schemes are applied in situations where loss and delay are equally important, such as interactive video transmission, one of the many existing conventional delay-aware scheduling techniques can be combined with them to improve both loss and delay performance. For example, WEDD (Weighted Earliest Due Date) can be used jointly with MPAPS and DMPS: the delay problem can be tackled by WEDD, while the loss problem is dealt with by MPAPS and DMPS. Moreover, DMPS itself can be extended to provide improved loss and delay performance if the class differentiation is set according to delay requirements.

ii) Trade-offs

The MPAPS and DMPS schemes are quite effective at improving the loss performance of videos. Furthermore, they adaptively adjust the probability of transmission that individual streams receive, preventing an aggressive stream from taking all transmission opportunities; thus, they provide fair service to all videos. Finally, the MPAPS and DMPS schemes achieve high network utilization, since they admit as many packets as the buffer allows. However, because they require stream-specific state information, the MPAPS and DMPS schemes are more applicable to situations where quality is the greatest concern and the scalability constraint is less of an issue.

The MPAPS and DMPS schemes are nevertheless less complex than most scheduling schemes and many buffer management techniques. It is well known that a scheduling scheme generally requires individual flow information. The MPAPS and DMPS schemes are simpler than most existing scheduling schemes because they make a per-frame transmission decision, and the related information remains unchanged during each frame interval; a minimal sketch of this per-frame selection is given below. Most existing scheduling schemes, such as Weighted Fair Queuing and Earliest Deadline First, make per-packet transmission decisions and simultaneously involve operations related to all other streams. Threshold-based or push-out buffer management techniques can either lead to low network utilization or to even more complex computations, and may greatly impact scalability.
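The sketch below illustrates one way the per-frame selection could be realized, assuming per-stream state (upcoming packet type, SL, queue length, and, for DMPS, traffic class) is already maintained. The data layout, the function names, and the exact precedence among the grouping criteria are a reading of the description given earlier in this chapter rather than the original implementation; for MPAPS, the class field would simply be zero for all streams.

#include <stddef.h>

/* Per-stream state assumed to be maintained by the scheduler (hypothetical layout). */
typedef enum { PKT_I = 0, PKT_P = 1, PKT_B = 2 } pkt_type;

typedef struct {
    pkt_type next_type;   /* type of the packet at the head of this stream's queue      */
    double   sl;          /* Satisfaction Level: > 1 means loss worse than the target   */
    int      qlen;        /* current input-queue length, Qlen(i), for this stream       */
    int      class_prio;  /* 0 = high-priority class, 1 = low (DMPS only; 0 for MPAPS)  */
} stream_state;

/* Group rank: smaller values are served first. The ordering follows the text:
 * upcoming packet type (I before P before B), then SL band (>1, =1, <1), and,
 * within a band, the higher-priority traffic class.                           */
static int group_rank(const stream_state *s)
{
    int sl_band = (s->sl > 1.0) ? 0 : (s->sl < 1.0 ? 2 : 1);
    return ((int)s->next_type * 3 + sl_band) * 2 + s->class_prio;
}

/* Pick the stream whose head-of-line frame is transmitted next: the best
 * (lowest) group rank, and within that group the largest Adaptive Priority
 * Index API = SL * Qlen. The caller then de-queues one whole frame of
 * packets from the chosen stream.                                           */
int pick_next_stream(const stream_state *s, int n)
{
    int best = -1, best_rank = 0;
    double best_api = -1.0;

    for (int i = 0; i < n; i++) {
        if (s[i].qlen == 0)
            continue;                               /* nothing queued for this stream */
        int    rank = group_rank(&s[i]);
        double api  = s[i].sl * (double)s[i].qlen;
        if (best < 0 || rank < best_rank ||
            (rank == best_rank && api > best_api)) {
            best = i;
            best_rank = rank;
            best_api = api;
        }
    }
    return best;   /* index of the chosen stream, or -1 if all queues are empty */
}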
In summary, when comparing achievements and trade-offs, the MPAPS and DMPS schemes are the best choice to support loss requirements for video communication, because they possess superior performance and fairness properties with a reasonable scalability trade-off. The MPAPS and DMPS schemes could also be extended to improve loss performance for classes of streams, reducing the complexity of the scheme and thus improving its scalability. This indicates that both schemes are of practical use.

Chapter 6  End-To-End Loss Distribution

This chapter introduces a set of loss allocation schemes. The chapter is organized as follows. Section 6.1 describes the problem of end-to-end QoS control. Section 6.2 presents the proposed loss allocation schemes. Section 6.3 provides an extensive performance evaluation of the proposed schemes. Section 6.4 discusses the cost of all the proposed schemes. Conclusions are given at the end of the chapter.

6.1 The Multi-hop QoS Problems: Some Insights

An important focus of the work presented in Chapters 4 and 5 is the investigation of how each node is able to meet its local loss constraints. In practice, loss requirements for applications are typically known on an end-to-end basis. One key issue in loss control is therefore translating an end-to-end loss requirement into a set of local loss constraints, so that the end-to-end loss requirement of the application can be satisfied through the use of nodal-based loss control techniques. To date, most loss control techniques focus on loss management at individual nodes and ignore the problem of distributing end-to-end loss requirements to local routing nodes. To the best of our knowledge, very few studies have examined loss distribution between the nodes along a source-destination path according to per-traffic end-to-end loss requirements. This gap led to the design of the per-hop loss allocation strategies outlined in this research. In this chapter, five strategies are proposed for the allocation of nodal loss targets across a multi-hop IP network based on per-traffic end-to-end loss requirements.

6.2 Approaches for Loss Distribution Across a Multi-hop Path

Nodal loss allocation strategies focus on translating the end-to-end loss requirement of a video into nodal loss constraints, so that a video that meets every local loss constraint also meets the corresponding end-to-end loss requirement. Five nodal loss allocation strategies are proposed and described below.

6.2.1 Equal Allocation (EA)

The Equal Allocation (EA) strategy is a static approach that allocates equal shares of the end-to-end loss requirement of a video to all the nodes along the source-destination path. Consider a video transmitted through n nodes from source to destination. Let PLRj(i) be the packet loss ratio target of video j at node i, and let PLRj be its end-to-end packet loss constraint. Then

    PLRj(1) = PLRj(2) = ... = PLRj(n)                (6-1)

subject to

    Σ_{i=1..n} PLRj(i) = PLRj                        (6-2)

6.2.2 Excess Move (Ex_Move)

The Excess Move (Ex_Move) strategy is a dynamic approach consisting of two stages: "initial allocation" and "adjustment". In the initial allocation stage, EA is performed, while in the adjustment stage Ex_Move modifies the downstream loss constraints of each individual video based on its upstream loss performance. In this scheme, Δj is the difference between the actual PLRj(i) and the original PLRj(i) allocated by EA, and Δj(i-k) is the accumulated Δj from node i to node k.
One can easily see that Δj(i-k) can take both positive and negative values. Positive values correspond to worse loss performance, where video j has experienced excess loss while passing through nodes i to k. Negative values indicate better loss performance, where the packet loss of video j is less than expected. Ex_Move adjusts the initial local loss constraints of a video up and down, node by node: when Δj(i-k) is positive, node k+1 tightens PLRj(k+1) by Δj(i-k), so that the remaining loss budget reflects the excess loss already incurred, and when Δj(i-k) is negative, PLRj(k+1) is relaxed by the same amount.

6.2.3 Excess Even (Ex_Even)

The Excess Even (Ex_Even) strategy is another dynamic approach. Like Ex_Move, Ex_Even performs EA in the initial allocation stage; unlike Ex_Move, in the adjustment stage it equipartitions Δj(i-k) (defined in Section 6.2.2) over the subsequent nodes along the path, starting from node k+1.

6.2.4 Excess Adaptation (Ex_A)

The Excess Adaptation (Ex_A) strategy is also a dynamic approach. Like Ex_Move, Ex_A performs EA in the initial allocation stage; unlike Ex_Move, in the adjustment stage it evenly distributes a positive Δj(i-k) (defined in Section 6.2.2) over the remaining lightly loaded nodes, and distributes a negative Δj(i-k) over the remaining heavily loaded nodes.

6.2.5 Excess Proportion (Ex_P)

The Excess Proportion (Ex_P) strategy is another dynamic approach. Like Ex_Move, Ex_P performs EA in the initial allocation stage; unlike Ex_Move, in the adjustment stage it allocates a positive Δj(i-k) (defined in Section 6.2.2) to the remaining lightly loaded nodes and a negative Δj(i-k) to the remaining heavily loaded nodes in proportion to weight parameters that indicate the load level of each node. The idea is that, among the lightly loaded nodes, the more lightly loaded ones should be allocated more of a positive Δj(i-k), whereas among the heavily loaded nodes, the more heavily loaded ones should take on a greater share of a negative Δj(i-k). If two load levels are defined in decreasing order, with weight parameters ωr and ωs and with nr and ns nodes at each level, the weights satisfy

    ωr nr + ωs ns = 1                                (6-3)

In addition, ωr < ωs when the allocation is over lightly loaded nodes, and ωr > ωs when it is over heavily loaded nodes.
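To make these allocation rules concrete, the sketch below shows the EA step and one possible form of the Ex_Move and Ex_Even adjustments for a single video crossing an n-node path; Ex_A and Ex_P would differ only in how the accumulated deviation is spread over the remaining lightly or heavily loaded nodes (evenly, or in proportion to the weights of Equation 6-3). The data layout and function names are assumptions, and the sign convention follows the compensating behaviour described in Section 6.3, where downstream targets are tightened after excess upstream loss.

#define MAX_HOPS 32

/* Per-video loss bookkeeping along an n-hop path (hypothetical layout). */
typedef struct {
    int    n;                  /* number of nodes on the path                 */
    double e2e_plr;            /* end-to-end packet loss constraint PLRj      */
    double target[MAX_HOPS];   /* local loss target PLRj(i) at each node      */
    double actual[MAX_HOPS];   /* loss actually measured at each node         */
} video_path;

/* Equal Allocation (EA): every node receives the same share of the
 * end-to-end constraint, so the local targets sum to PLRj (Eq. 6-1, 6-2). */
void ea_allocate(video_path *v)
{
    for (int i = 0; i < v->n; i++)
        v->target[i] = v->e2e_plr / v->n;
}

/* Accumulated deviation Delta_j over nodes 0..k: positive if the video has
 * lost more than its targets so far, negative if it has lost less.         */
static double accumulated_delta(const video_path *v, int k)
{
    double d = 0.0;
    for (int i = 0; i <= k; i++)
        d += v->actual[i] - v->target[i];
    return d;
}

/* Ex_Move: the whole accumulated deviation up to node k is pushed onto the
 * next node, tightening (or relaxing) its local target. A real implementation
 * would also clamp the adjusted target so it stays non-negative.             */
void ex_move_adjust(video_path *v, int k)
{
    if (k + 1 < v->n)
        v->target[k + 1] -= accumulated_delta(v, k);
}

/* Ex_Even: the accumulated deviation is spread evenly over all the remaining
 * downstream nodes instead of only the next one.                             */
void ex_even_adjust(video_path *v, int k)
{
    int remaining = v->n - (k + 1);
    if (remaining <= 0)
        return;
    double share = accumulated_delta(v, k) / remaining;
    for (int i = k + 1; i < v->n; i++)
        v->target[i] -= share;
}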
6.3 Performance Evaluation of Nodal Loss Allocation Strategies

An event-driven simulator was developed in C to evaluate the performance of the five nodal loss allocation strategies proposed in Section 6.2. Three sets of experiments were conducted as part of the performance evaluation. In the first set of experiments, each individual node employed the MPAPS scheme, with cooperation between nodes achieved through the use of the five local loss allocation schemes; these combinations are collectively referred to as Coordinated MPAPS schemes (C-MPAPS) [22]. They include C-MPAPS with EA, C-MPAPS with Ex_Move, C-MPAPS with Ex_Even, C-MPAPS with Ex_A, and C-MPAPS with Ex_P. In the second set of experiments, the five nodal loss allocation strategies are combined with the DMPS scheme; these combinations are collectively referred to as Coordinated DMPS schemes (C-DMPS) [21]. They include C-DMPS with EA, C-DMPS with Ex_Move, C-DMPS with Ex_Even, C-DMPS with Ex_A, and C-DMPS with Ex_P. In both experiments, the C-MPAPS and C-DMPS schemes are compared with the Uncoordinated Packet Scheduling scheme (U-FCFS), in which each individual node performs FCFS scheduling independently, without inter-node cooperation. In the third set of experiments, the performance of the five nodal loss allocation schemes is compared against each other.

The simulation topology is shown in Figure 6-1. All videos (S1, S2, ..., Sn) are transmitted over a k-hop system to the same destination (R). The route is assumed to be fixed, because route changes typically occur only when the network topology or a major part of the load changes in large, highly dynamic networks. In fact, if the route changes, the local loss targets can be recalculated by the loss distribution scheme and this information can be propagated to each node. To demonstrate the effectiveness of the technique, a stable network with load levels of 90% (High), 80% (Medium), 70% (Low), and 65% (Low") was studied. The load level at each node, L, is given by L = (N + k) × ρ, where N is the number of background flows, k is the number of test video sources, and ρ is the load contributed by each source. ρ is calculated as ρ = t_on / t_off, where t_on and t_off are the mean durations of the on and off periods, respectively. In an on period, a source generates packets at a variable rate specified by the packet inter-arrival time within a video frame interval. Congestion increases as the load level increases. The source traffic is composed of MPEG videos that use UDP/IP as the transport protocol. Cross traffic is inserted into each routing node independently, so that the load of each node can be controlled effectively and precisely. The results are presented in Sections 6.3.1 and 6.3.2, respectively.

Figure 6-1. The simulated network for C-MPAPS and C-DMPS (video sources S1 to Sn and per-node cross traffic delivered over a k-hop path to the sink R)

6.3.1 Effectiveness of Coordinated MPAPS Schemes

In this section, the results for the five C-MPAPS schemes, C-MPAPS with EA, C-MPAPS with Ex_Move, C-MPAPS with Ex_Even, C-MPAPS with Ex_A, and C-MPAPS with Ex_P, are presented. Here, MPEG-1 video traces [52] are used as the video sources.

6.3.1.1 C-MPAPS with EA

The first experiment considered for the evaluation of C-MPAPS with EA is a baseline scenario in which each node is evenly loaded. Next, an uneven load distribution among nodes, where one node (node 2) slightly increases its load level, is studied. Figure 6-2 depicts the results when five videos were delivered over a 3-hop path. Three are the cartoon trace "Asterix", one is the sports trace "Soccer", and the other is the news trace "Talk", all with the same packet loss constraint of 6%. When all of the nodes are evenly loaded, C-MPAPS with EA outperforms U-FCFS, exhibiting lower FER values, higher ET values, and more FI values close to 1. This indicates that C-MPAPS with EA achieves higher video quality, better network efficiency, and a higher level of service fairness than U-FCFS. This is a consequence of the cooperation between the MPAPS and EA schemes: EA sets a local loss target whose satisfaction leads to meeting the end-to-end loss requirement of a video, and the MPAPS scheme within each node takes the responsibility of ensuring the local loss requirement of each video. On the other hand, U-FCFS has no mechanism to control either the end-to-end loss or a local loss target. Also, when different loads are applied to different nodes, the C-MPAPS with EA scheme still outperforms the U-FCFS scheme, but performs more poorly than it does in the even-load situation.
This is due to the fact that the C -M P A P S with E A scheme targets satisfaction of the end-to-end loss target of a video stream, assuming a uniform load distribution between nodes along the path. The 'violated' or 'redundant' loss incurred in imbalanced load distribution cannot be adjusted downwards or upwards by E A . In this experiment, the local target requirement set by E A cannot be realized at node 2, an overly utilized hop. This results in an increase in overall end-to-end loss, compared to the even load case. However, C - M P A P S with E A still performs better than U - F C F S , since there is only a slightly increase of load level in node 2, and the M P A P S scheme evenly distributes the increased data loss to all videos. MPAPSJJniform load FCFSJJniform load M PAPS_@Nonuniform load FCFS@Nonuniform load (a) MPAPS@Uniform load FCFS@Uniform load X MPAPS_@Nonuniform load - FCFS ©Nonuniform load 0 I H 1 1 1 1 2 3 4 5 V ideo ID (b) 113 100.00% r B M P A P S e U n i l o r m l o a d r .FCFSOUni lormload n MPAPS_@NonuniIormload • FCFS@Nonuniformload 83.10% 82.12% 80.00% h 60.00% h 53.87% 46.90% LLI 40.00% h 20.00% h 0.00% L (c) Figure 6-2. Performance comparison between C - M P A P S with E A and U - F C F S The evaluation of C - M P A P S with Ex_Move considers the case that non-uniform load distribution exists between nodes. More specifically, the three nodes along the path are configured with different patterns of loads: High-Medium-Low ( H M L ) , Low-Medium-High ( L M H ) , and Medium-Medium-High ( M M H ) , respectively. Figures 6-3 and 6-4 describe the results where five videos are delivered. Three are cartoon traces called "Asterix", one is a sports trace called "Soccer", and the other is a news trace called "Talk", all with the same packet loss constraint of 6%. To verify the above results, the number of videos and hops are varied. 6.3.1.2 C-MPAPS with Ex_Move 114 rx LU 60 40 9 20 0 C-M PAPS@HM L U-FCFSOHML - X C-MPAPSOLMH - - o-- U-FCFS@LMH -o C - M P A P S S M M H - - + - - U-FCFS@MMH 5 8 ; 1 3 Video ID (a) — H - C - M P A P S 0 H M L -C-MPAPS@LMH -C-MPAPS®MMH U-FCFS@HML o - - U-FCFSOLMH + - - U-FCFS@MMH 3 Video ID (b) LU 100.0% 80.0% 60.0% 40.0% 20.0% 0.0% C-MPAPSOHML r .U-FCFS@HML Q C - M P A P S 8 L M H ,"> U-FCFSOLM H C - M P A P S 0 M M H D U - F C F S @ M M H 80.71% 81.43% 64.80%| 82.80% 60.70% 53.87% (c) ure 6-3. Performance comparison between C - M P A P S with E x _ M o v e and U - F C F S 115 As seen in Figure 6-3, compared to the U - F C F S scheme, the FERs in C - M P A P S with E x _ M o v e are lower for most videos when using the three tested load patterns. For example, the F E R in video 2 in U - F C F S can reach up to approximately 60%, but C -M P A P S with Ex_Move to around 15% under a load pattern of M M H . This demonstrates that C - M P A P S with Ex_Move achieves a higher video quality and a higher network efficiency than U - F C F S . M P A P S ensures that at every node, I packets receive better service than P packets which, in turn, receive better service than B packets. Consequently, I packet loss is less than that of the P packets, and P packet loss is less than that of the B packets both on an end-to-end basis. Therefore, C - M P A P S with Ex_Move produces more videos with pleasant viewing quality than U - F C F S produces, because the contribution to M P E G video quality of the M P E G video packets is in descending order of I, P and B . As a result, the ETs in C - M P A P S are higher than U -F C F S , as indicated in Figure 6-4. 
It also evident from Figure 6-3 that for the different load patterns presented here, more values of the FI in C - M P A P S with Ex_Move are close to one than in U - F C F S scenario. In other words, a larger number of videos meet their individual end-to-end packet loss constraints in C - M P A P S with Ex_Move than in U - F C F S . This suggests that both overservicing and underservicing occur less in C - M P A P S with Ex_Move than in F C F S . Therefore, the C - M P A P S with Ex_Move scheme provides fair service among videos. This is a consequence of the mutual support of local loss allocation and M P A P S schemes. The local loss allocation scheme, Ex_Move , effectively controls the nodal loss target of each video by increasing or decreasing the loss constraints of a video at the next subsequent node according to its loss performance in preceding nodes. In this way, E x _ M o v e attempts to achieve the individual end-to-end loss requirements. Meanwhile, 116 the M P A P S scheme classifies the videos into groups with similar loss performances under the premise that low loss performance groups should receive quicker and more frequent service, and vice versa. This classification method also prevents an unnecessary increase in packet loss by giving a video with a larger than expected loss a higher transmission priority. Thus, the packet loss experienced by each video at individual nodes tends to conform to expectations. B y comparison, U - F C F S randomly drops packets without the assurance of local loss constraints or end-to-end loss performance. Another important result can be inferred from Figure 6-3. As shown in this Figure, the maximum F l deviations from one using C - M P A P S with E x _ M o v e are much lower than those achieved through the use of U - F C F S . There are two reasons for this. Firstly, when a router is heavily loaded, network resources become insufficient to support all the video traffic. The M P A P S scheme can adaptively change servicing priority according to the S L and the A P I and can almost evenly distributes the degradation of servicing levels to all videos at each individual node. Also, both the S L and the A P I also prevent overservicing of a particular video. Secondly, the end-to-end loss requirements are maintained by the local loss allocation scheme, Ex_Move . With Ex_Move , the loss constraints of a video at individual nodes can be adjusted based on the video's performance in preceding nodes. As a result, downstream nodes can help the video "catch up" if there are excessive data losses in upstream nodes, and videos receiving extra servicing can have their loss constraints reduced to allow more urgent videos to pass through. U - F C F S does not have this attribute. Figure 6-4 gives more insight into what happens in each node by depicting the loss achieved at each node at a load pattern of FfML. This figure shows that the losses experienced by each video at the same node in C - M P A P S with E x _ M o v e are almost the 117 same, but they are quite different from each other in U - F C F S . Secondly, the local loss deviation from the local loss target for each video in C - M P A P S with E A is much smaller than that of U - F C F S . Thirdly, the packet loss ratios for all videos at heavily loaded nodes are higher than those at lightly loaded nodes. Fourthly, the positive and negative loss deviations that occur in upstream nodes tend to be adjusted by downstream nodes. 
For example, the loss performance delivered is overly bad for video 4 (V4) at node 1, and thereafter, the loss is adjusted downwards by increasing its servicing opportunities. cc CL Node ID (a) C - M P A P S with Ex_Move cc _i CL V1 V2 X - - V 3 (. V4 — - O V5 N o d e ID (b) U - F C F S Figure 6-4. Comparison of packet loss at each node between C - M P A P S with Ex Move and U - F C F S 118 Experiments using a number of videos N= 10 to 30 and a number of hops up to 17 exhibit similar trends as those presented in Figures 6-3 and 6-4. The U - F C F S is inferior to the C - M P A P S with Ex_Move in all the cases and only the absolute values of the F E R , F l and E T are different. Typically, with increases in the number of videos and the number of hops there are corresponding increases in F E R and F l and a decrease in ET . The changes distribute equally between videos. The results are omitted here. 6.3.1.3 C-MPAPS with Ex_Even The performance of C - M P A P S with Ex_Even is first evaluated where five videos are routed through a 3-hop path under three different load distributions: High-Medium-L o w ( H M L ) , Low-Medium-High ( L M H ) , and Medium-Medium-High ( M M H ) , respectively. Three are cartoon traces called "Asterix", one is a sports trace called "Soccer", and the other is a news trace called "Talk", all with the same packet loss constraint of 6%. The results are plotted in Figures 6-5 and 6-6. This is further validated using different combinations of the number of videos and the number of hops. The following results are observed from Figures 6-5 and 6-6: > Compared to the U - F C F S , C - M P A P S with Ex_Even achieves a lower F E R for most videos under three different patterns of load, indicating that in the load patterns tested, C - M P A P S with Ex_Even produces a higher video quality than the U - F C F S . This is because M P A P S is performed at every node traversed, and video packets are protected according to their importance. I-packet loss is the smallest, followed by P-packet loss, then B-packet loss. Consequently, F E R incurred by error propagation is largely reduced. > The C - M P A P S with Ex_Even F l curves are flatter and closer to the fairness line than the U - F C F S , when both schemes are applied to three different load patterns. 119 This indicates that C - M P A P S with Ex_Even shares services well among all videos. This is because Ex_Even uniformly spreads the accumulated loss difference from the local loss target at preceding nodes to subsequent nodes. The corresponding loss responsibility is shifted to the subsequent nodes, and the end-to-end loss requirement of a video is met. This is verified by Figure 6-6. On node 1, video 4 (V4) experiences loss that is higher than its expected local target. The subsequent nodes 2 and 3 adjust their local loss targets for video 4 down in order to satisfy the end-to-end loss constraint. Also, at every node, M P A P S is able to provide the desired local loss target for each video very exactly, thereby maximizing network resource utilization and decreasing the loss deviation for each video. > Compared to U - F C F S , C - M P A P S with Ex_Even delivers E T by more than 20%. The reason is that C - M P A P S with Ex_Even protects important information from loss, thus increasing the delivery of useful information. Experiments with varying numbers of videos from 10 to 30 and hops from 3 to 17 confirm the above observations and are omitted here. 
GC LU LL 60 40 5 20 C-MPAPS@HML -X C-MPAPS@LMH -o C-MPAPSOMMH U-FCFS0HML U-FCFS@LMH U-FCFS0MMH ^ X Video ID (a) 120 0.5 - A C - M P A P S @ H M L - X C - M P A P S @ L M H -o C - M P A P S 8 M M H F a i r n e s s L i n e i—i U - F C F S @ H M L o- - U - F C F S @ L M H + - - U - F C F S 6 M M H LU 100.0% 80.0% 60.0% 40.0% 20.0% 0.0% 84.00% V i d e o ID (b) C - M P A P S @ H M L r . U - F C F S @ H M L n C - M P A P S a L M H ,i U - F C F S @ L M H I C - M P A P S @ M M H • U - F C F S S M M H 85.33% 64.80% 81.80% 60.70% 53.87% (c) Figure 6-5. Performance comparison between C - M P A P S with Ex_Even and U _ F C F S 0.0300 0.0250 -V1 V4 V2 V5 -X V3 N o d e ID Figure 6-6. Packet loss of C - M P A P S with Ex_Even at individual node 121 6.3.1.4 C-MPAPS with Ex_A Three sets of experiments examine the behaviour of C - M P A P S with E x _ A . In the experiments, a sequence of nodes employs a M P A P S scheme and E x _ A sets the local loss target. The results for the delivery of five video through a 3-hop path are shown in Figures 6-7 and 6-8. Three of the videos are cartoon traces called "Asterix", one is a sports trace called "Soccer", and the other is a news trace called "Talk": all have the same packet loss constraint of 6%. These figures show three patterns of load considered: the first one consists of High-Medium-Low ( H M L ) , the second consists of L o w -Medium-High ( L M H ) , and the third consists of Medium-Medium-High ( M M H ) . The results show that most videos have a lower F E R and a higher E T in C - M P A P S with E x _ A than in U - F C F S . For example, C - M P A P S with E x _ A reduces about 45% F E R for video 2 from U - F C F S and reduces 28% E T from U - F C F S , all under a load pattern of M M H . C - M P A P S with E x _ A achieves packet loss ratios that closely approach the end to-end loss constraints of individual videos, resulting in FI values close to 1. For U -F C F S , the FI values are very different from the target. The results in C - M P A P S with E x _ A are due to the cooperation between M P A P S and E x _ A schemes. M P A P S within each node takes the responsibility to ensure the local loss requirement of each video. E x _ A performs a balanced division of nodal loss deviation to achieve an end-to-end loss constraint. The 'violated' or 'redundant' loss from upstream nodes is shifted to underutilized or hot spot (bottleneck) nodes in a path. Therefore, network resources are well utilized when M P A P S and E x _ A is used, resulting in a better satisfaction with loss performance. 122 60 Co 40 oe UJ 20 1.5 1 0.5 0 LU 100.0% 80.0% 60.0% 40.0% 20.0% 0.0% C-MPAPSOHML - C - M P A P S a L M H - C - M P A P S B M M H U-FCFS@HML o U-FCFS@LMH + - - U-FCFS@MMH /A 3 Video ID (a) — ° r^-JK^ C-MPAPSOHML U-FCFS@HML -X C-MPAPS@LMH - - « - - U-FCFSOLMH -o C -MPAPS@MMH - - + - - U-FCFS®MMH 3 Video ID (b) ", C-MPAPS@HML : .U -FCFS@HML B C - M P A P S e L M H r U-FCFS® LMH C-MPAPS@MMH • U-FCFS@MMH 86.12% 86.00% 64.80% 53.87% I J (c) ure 6-7. Performance comparison between C - M P A P S with E x _ A and U F C F S 123 Figure 6-8 further demonstrates the effective adaptation of local loss targets to meet the end-to-end loss constraint by E x _ A . Nodes along the path are configured with a load pattern of High-Medium-Low ( H M L ) . In node 1, the loss for video 4 (V4) is larger than its loss target at that node 1, and the local loss targets are thus decreased in nodes 2 and 3. As a result, there is minor deviation from the desired end-to-end loss. 
0.0300 r 0.0250 -• a 0.0200 9-_ i °- 0.0150 -0.0100 -0.0050 -1 Figure 6-8. Packet loss of C - M P A P S with E x _ A at individual nodes The above observations hold when the number of videos N=10 to 30 and the number of hops is up to 17. 6.3.1.5 C-MPAPS with Ex_P To study the performance of C - M P A P S with E x _ P , first, the transport of five videos over a 4-hop path is considered. Three are cartoon traces called "Asterix", one is a sports trace called "Soccer", and the other is a news trace called "Talk": all have a packet loss constraint of 5%. Three sets of patterns of load were configured: High-Medium-Low-Low" ( H M L L " ) , Low"-Low-Medium-High ( L " L M H ) , and Low-Low-Medium-High 124 ( L L M H ) . A varying number of videos and hops are then performed. The results presented in Figures 6-9 and 6-10 refer to the first set of experiments. Below is a summary of the experimental results. C - M P A P S with E x J P achieves a lower F E R for most videos and a higher E T than the U - F C F S . Moreover, for C - M P A P S with E x _ P , all of the F l values are equal to or close to one, but a great fluctuation of the F l can be observed in U - F C F S over all the tested load patterns. Therefore, C - M P A P S with E x _ P outperforms U - F C F S . The E x _ P performs a more finely balanced division of local loss deviation with respect to achieving an end-to-end loss constraint, shifting the 'violated' or 'redundant' loss from upstream nodes to underutilized or hot spot (bottleneck) nodes in a path. Moreover, the assignment of 'violated' loss between underutilized nodes is proportional to its load level, rather than evenly distributed, as occurs in E x - A . A large portion of 'violated' loss is allocated to less congested nodes, and a smaller portion of loss is allocated to more congested nodes. The allocation of 'redundant' loss is similar. Clearly, end-to-end loss requirements are more likely to be guaranteed by E x _ P than E x _ A , since E x _ P nearly fully utilizes network resources. E x _ P is more suitable to a case in which load conditions are diverse. The accurate distribution and adjustment of local loss targets is clearly visible in Figure 6-10 where a load pattern is FLMLL". The loss in node 1 is the largest, followed by that in node 2, 3 and 4. 125 60 ~ 40 5 DC LU E 20 -C-MPAPS@HMLL--C-MPAPS@L-LMH -C-MPAPS@LLMH U-FCFSSHMLL-a - - U-FCFS® L-LMH + - - U-FCFS@LLM H LU 100.0% 80.0% 60.0% 40.0% 20.0% 0.0% -5—r 3 Video ID (a) SlC-MPAPS@HMLL- '.' U-FCFS@HM LL-• C-MPAPS@L-L.MH rU-FCFS@L-LMH C-MPAPS@LLMH • U-FCFS@LLMH 64.80% 82.47% it* 60.70% (C) ure 6-9. Performance comparison between C - M P A P S with E x _ P and U _ F C F S 126 0.0050 I 1 1 1 1 2 3 4 N o d e ID Figure 6-10. Packet loss of C - M P A P S with E x _ P at individual node A great similar performance for the number of videos N=10 to 30, and the number of hops up to 17 is achieved, confirming the above conclusions. The results are omitted here. 6.3.2 Effectiveness of Coordinated DMPS Schemes In this section, the results for the five C - D M P S schemes: C - D M P S with E A , C-D M P S with Ex_Move , C - D M P S with Ex_Even, C - D M P S with E x _ A , and C - D M P S with E x _ P , are presented. M P E G - 4 video traces [51] are used as video sources. The reason for using M P E G - 4 video instead of M P E G - 1 video is that the M P E G - 4 video traces were available when the C - D M P S scheme is designed. 
The emerging M P E G - 4 standard is increasingly accepted and appears to be a promising open standard for Internet video, and the frame-dependent nature of M P E G - 4 is the same as that in M P E G -1. It is this characteristic that is considered in the design of C - D M P S . 127 6.3.2.1 C-DMPS with EA The first experiment considered for evaluation of C - D M P S with E A is a baseline scenario, with each node evenly loaded. The next experiment is a study of uneven load distribution among nodes, in which one node (node 2) slightly increases its load level. Figure 6-11 depicts the results when four videos were delivered through a 3-hop path. Two are cartoon traces called "Robin Hood" with the same packet loss constraint of 6%, one is a sport trace called "Soccer" with a packet loss constraint of 9%, and the other is a news trace " A R D Talk" with a packet loss constraint of 9%. When all of the nodes are evenly loaded, C - D M P S with E A outperforms the U - F C F S , exhibiting lower F E R values, higher E T values, and more F l values close to 1. This indicates that C - D M P S with E A achieves a higher video quality, better network efficiency, and increased level of service fairness compared to U - F C F S . This is a consequence of the cooperation of the D M P S and E A schemes. The E A sets a local target loss responsibility that meets the end-to-end loss requirement of a video, and the D M P S scheme within each node takes the responsibility to ensure the local loss requirement of each video. On the other hand, U -F C F S does not have any mechanism to control end-to-end loss or the local loss target. Also, when different loads are applied in different nodes, C - D M P S with E A still outperforms U - F C F S , but performs slightly worse compared to its performance in the even-load situation. This is due to the fact the C - D M P S with E A scheme tries to satisfy the end-to-end loss target of a video stream, assuming a uniform load distribution between nodes along the path. The 'violated' or 'redundant' loss incurred in imbalanced load distribution cannot be adjusted downwards or upwards by E A . In this experiment, the local target requirement set by E A cannot be realized at node 2, an overly utilized 128 hop. This results in an increase in overall end-to-end loss, compared to the even load case. However, C - D M P S with E A still performs better than U - F C F S , since there is only a slightly increase of load level in node 2, and the D M P S scheme evenly distributes the increased data loss to all videos. 6^  bT LU 100 80 60 40 20 - C - D M P S O U n i f o r m load U -FCFS@Un i fo rm load - C - D M P S @ N o n U n i f o r m load - U F C F S O N o n U n i f o r m load Video ID (a) E 1 -a C-DM PSOUniform load x-- U-FCFS@Uniformload C-DM PSONonUniform load UFCFSSNonUniformload 2 3 V ideo ID (b) 129 LU 100.00% 80.00% 60.00% 40.00% 20.00% 0.00% L 72.00% SC-DMPS@Uniform load '.•U-FCFS@Uniform load o C-DM PSONonUniform load • UFCFS@NonUniform load 60.00% 57.20% (c) Figure 6-11. Performance comparison between C - D M P S with E A and U - F C F S Moreover, as can be seen from this figure, both the F E R s and the FIs for high priority videos (videos 1 and 2) are considerably lower than for the low priority videos (videos 3 and 4), indicating that C - D M P S with E A can provide better video quality and a higher level of service to the high priority class than it can to the low priority class. 
The DMPS in the C-DMPS with EA scheme includes a rule that assigns servicing privileges to high priority video traffic so that quicker and more frequent servicing is given to high priority videos. In this way, a higher level of loss satisfaction is achieved for high priority videos. Conversely, FCFS cannot differentiate its service and therefore does not provide a better service quality guarantee to more important classes of traffic.

6.3.2.2 C-DMPS with Ex_Move

The evaluation of C-DMPS with Ex_Move considers the case in which a non-uniform load distribution exists between nodes. The three nodes along the path are configured with different load patterns: High-Medium-Low (HML), Low-Medium-High (LMH), and Medium-Medium-High (MMH), respectively. Figures 6-12 and 6-13 describe the results where four videos are delivered. Two are cartoon traces called "Robin Hood" with the same packet loss constraint of 6%, one is a sports trace called "Soccer" with a packet loss constraint of 9%, and the other is a news trace called "ARD Talk" with a packet loss constraint of 9%. To verify the above results, the number of videos and hops are varied.

Figure 6-12. Performance comparison between C-DMPS with Ex_Move and U-FCFS (FER, FI, and ET under the HML, LMH, and MMH load patterns)

For the three tested load patterns, the FERs in C-DMPS with Ex_Move are lower for most videos than in U-FCFS (Figure 6-12). For example, the FER of video 3 (V3) in U-FCFS can reach approximately 60%, but the FER in C-DMPS with Ex_Move stays around 30% under a load pattern of MMH. Thus, C-DMPS with Ex_Move achieves higher video quality and higher network efficiency than U-FCFS. The DMPS scheme ensures that at every traversed node, I packets receive better service than P packets which, in turn, receive better service than B packets. Consequently, I packet loss is less than that of the P packets, and P packet loss is less than that of the B packets, all on an end-to-end basis. Therefore, C-DMPS with Ex_Move produces more videos with pleasing viewing quality than U-FCFS does, because the contribution of MPEG video packets to MPEG video quality is in descending order of I, P and B. As a result, the ETs in the C-DMPS with Ex_Move scheme are higher than in U-FCFS, as indicated in Figure 6-12.

It is also evident from Figure 6-12 that, for the different load patterns presented here, more values of the FI in C-DMPS with Ex_Move are close to one than in U-FCFS. Compared to U-FCFS, a larger number of videos meet their individual end-to-end packet loss constraints in C-DMPS with Ex_Move. Both overservicing and underservicing occur less frequently in C-DMPS with Ex_Move than in U-FCFS. The C-DMPS schemes provide fairness among videos, a consequence of the mutual support of the local loss allocation and DMPS schemes. The local loss allocation scheme, Ex_Move, controls the nodal loss target of each video by increasing or decreasing the loss constraints of a video at the next subsequent node according to its loss performance in preceding nodes. In this way, Ex_Move attempts to achieve the individual end-to-end loss requirements.
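To make the adjustment rule described above concrete, the following minimal Python sketch shows one way an Ex_Move-style update of the next node's loss target could be computed. The even per-hop baseline split, the variable names, and the clamping to non-negative targets are illustrative assumptions for this sketch, not the exact formulation used in the thesis.

```python
def ex_move_next_target(e2e_target, hops, measured_losses):
    """Illustrative Ex_Move-style update of the loss target for the next node.

    e2e_target      -- end-to-end packet loss constraint of the video (e.g., 0.06)
    hops            -- total number of hops on the path
    measured_losses -- loss ratios observed at the nodes already traversed
    """
    baseline = e2e_target / hops              # even per-hop split as the baseline
    traversed = len(measured_losses)
    # Positive deviation: upstream nodes lost more than their share ('violated' loss);
    # negative deviation: they lost less ('redundant' loss).
    deviation = sum(measured_losses) - baseline * traversed
    # Ex_Move hands the entire accumulated deviation to the very next node,
    # regardless of that node's load level.
    return max(0.0, baseline - deviation)

# Example: a 6% end-to-end constraint over 3 hops; node 1 lost 3.5% instead of
# its 2% share, so node 2 is asked to keep its loss near 0.5%.
print(ex_move_next_target(0.06, 3, [0.035]))
```

Under this reading, a video that was over-served upstream (a negative deviation) receives a looser target at the next node, which matches the "catch up" behaviour discussed next.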
Meanwhile, the DMPS scheme classifies the videos into groups with similar loss performances, under the premise that groups with poor loss performance should receive quicker and more frequent service, and vice versa. This also prevents an unnecessary increase in packet loss by giving a video with a larger than expected loss a higher transmission priority. Thus, the packet loss experienced by each video at individual nodes tends to conform to expectations. By comparison, U-FCFS randomly drops packets without any assurance of either local loss constraints or end-to-end loss performance.

Another important result can be inferred from Figure 6-12. As shown in this figure, the maximum FI deviations using C-DMPS with Ex_Move are lower than those achieved through the use of U-FCFS. There are two reasons for this difference in the FI deviations. Firstly, when a router is heavily loaded, network resources become insufficient to support all of the video traffic. The DMPS scheme adaptively changes service priority according to the SL and the API and can almost evenly distribute the degradation of service levels to all videos at each individual node. Both the SL and the API also prevent overservicing of a particular video. Secondly, the end-to-end loss requirements are maintained by the local loss allocation scheme, Ex_Move. With Ex_Move, the loss constraints of a video at individual nodes can be adjusted based on its performance in preceding nodes. As a result, downstream nodes can help a video "catch up" if there are excessive data losses in upstream nodes, and videos receiving extra service can have their loss constraints reduced to allow more urgent videos to pass through. The U-FCFS scheme does not have this attribute.

Figure 6-12 also reveals that both the FERs and the FIs for high priority videos (videos 1 and 2) are lower than for low priority videos (videos 3 and 4). Compared to U-FCFS, which does not exhibit this difference, C-DMPS with Ex_Move can provide better video quality and a higher level of service to the high priority class. The DMPS scheme in C-DMPS with Ex_Move includes a rule that assigns service privileges to high priority video traffic so that quicker and more frequent servicing is given to the high priority videos, giving a higher level of loss satisfaction. Conversely, U-FCFS does not have any differentiable service mechanisms and cannot provide a better service quality guarantee to more important classes of traffic.

Figure 6-13 gives more insight into what happens in each node by depicting the loss achieved at each node with a load pattern of HML. This figure shows that the losses experienced by videos of the same class at the same node are almost the same for C-DMPS with Ex_Move, but are quite different from each other for U-FCFS. As expected, the packet loss ratios for high priority videos (videos 1 and 2) are also lower than for low priority videos (videos 3 and 4). For each video in the C-DMPS with Ex_Move scheme, the local loss deviation from the local loss target is also smaller than the deviation in the U-FCFS scheme. The packet loss ratios for all videos at heavily loaded nodes are also higher than those at lightly loaded nodes.
Finally, the positive and negative loss deviations that occur in upstream nodes tend to be adjusted by downstream nodes; for example, the loss performance delivered at node 1 is overly bad for video 4 (V4), and the loss is thereafter adjusted downwards by increasing that video's service opportunities.

Figure 6-13. Comparison of packet loss at each node between C-DMPS with Ex_Move and U-FCFS: (a) C-DMPS with Ex_Move, (b) U-FCFS

Experiments using a number of videos N=10 to 30 and a number of hops up to 17 exhibit similar trends to those presented in Figures 6-12 and 6-13. In all of the cases, the U-FCFS scheme is still inferior to the C-DMPS with Ex_Move scheme, and only the absolute values of the FER, FI, and ET are different. Typically, with increases in the number of videos and the number of hops there are corresponding increments in FER and FI and a decrement in ET. The changes are distributed equally between videos. The results are omitted here.

6.3.2.3 C-DMPS with Ex_Even

The performance of C-DMPS with Ex_Even is evaluated when four videos are routed through a 3-hop path under three different load distributions among the traversed nodes: High-Medium-Low (HML), Low-Medium-High (LMH), and Medium-Medium-High (MMH), respectively. Two of the videos are cartoon traces called "Robin Hood" with the same packet loss constraint of 6%, one is a sports trace called "Soccer" with a packet loss constraint of 9%, and the other is a news trace called "ARD Talk" with a packet loss constraint of 9%. The results are plotted in Figures 6-14 and 6-15. The performance of C-DMPS with Ex_Even is further validated using different combinations of the number of videos and the number of hops.

The following results are observed from Figures 6-14 and 6-15:

> The C-DMPS with Ex_Even scheme achieves a lower FER for most videos than the U-FCFS scheme under the three different load patterns. This indicates that, in the load patterns tested, C-DMPS with Ex_Even produces a high video quality compared to U-FCFS, because DMPS is performed at every traversed node and video packets are protected according to their importance. I-packet loss is the smallest, followed by P-packet loss, then B-packet loss; consequently, the FER incurred by error propagation is largely reduced.

> Under all load patterns tested, the FI curve for C-DMPS with Ex_Even is flatter and closer to the fairness line than that of U-FCFS, indicating that C-DMPS with Ex_Even reaches a good share of the services among all videos (Figure 6-14). This is because Ex_Even uniformly spreads the accumulated loss difference from the local loss target at preceding nodes to subsequent nodes. The corresponding loss responsibility is shifted to the subsequent nodes, thereby meeting the end-to-end loss target of the video. At node 1, video 4 (V4) experiences a loss that is higher than its expected local target. The subsequent nodes 2 and 3 adjust their local loss targets downwards in order to satisfy the end-to-end loss constraint for video 4 (Figure 6-15). Also, at every node, DMPS is able to provide the desired local loss target for each video quite exactly, thereby maximizing network resource utilization and decreasing the loss deviation for each video.

> Compared to U-FCFS, C-DMPS with Ex_Even improves the ET by more than 10%. The C-DMPS with Ex_Even scheme protects important information from loss, thus increasing the delivery of useful information.
Figure 6-14 also reveals that both the FERs and the FIs for high priority videos (videos 1 and 2) are lower than the FERs and the FIs for low priority videos (videos 3 and 4). The U-FCFS scheme does not have these attributes, indicating that C-DMPS with Ex_Even can provide better video quality and a higher level of service to the high priority class. The DMPS method in C-DMPS with Ex_Even includes a rule that assigns servicing privileges to the high priority video traffic so that quicker and more frequent servicing is given to the high priority videos. In this way, a higher level of loss satisfaction is achieved for high priority videos. Conversely, FCFS does not have any differentiable service mechanisms and cannot provide a better service quality guarantee to more important classes of traffic.

Figure 6-14. Performance comparison between C-DMPS with Ex_Even and U-FCFS (FER, FI, and ET under the HML, LMH, and MMH load patterns)

Figure 6-15. Packet loss of C-DMPS with Ex_Even at individual nodes

Experiments with varying numbers of videos between 10 and 30 and hops between 3 and 17 confirm the above observations and are omitted here.

6.3.2.4 C-DMPS with Ex_A

Three sets of experiments examine the behaviour of C-DMPS with Ex_A. In the experiments, a sequence of nodes employs the DMPS scheme, and Ex_A sets the local loss target. The results for the delivery of four videos through a 3-hop path are shown in Figures 6-16 and 6-17. Two of the videos are cartoon traces called "Robin Hood" with the same packet loss constraint of 6%, one is a sports trace called "Soccer" with a packet loss constraint of 9%, and the other is a news trace called "ARD Talk" with a packet loss constraint of 9%. In these figures, three load patterns are considered: the first consists of High-Medium-Low (HML), the second of Low-Medium-High (LMH), and the third of Medium-Medium-High (MMH).

The study yields several results. The C-DMPS with Ex_A scheme has a lower FER and a higher ET than the U-FCFS scheme. For example, under a load pattern of HML, C-DMPS with Ex_A reduces the FER for video 3 (V3) by 30% relative to U-FCFS and improves the ET by 16% over U-FCFS. For C-DMPS with Ex_A, the packet loss ratios also closely approach the end-to-end loss constraints, resulting in FI values close to 1; for U-FCFS, however, the values are very different from the target. The results for C-DMPS with Ex_A are due to the cooperation between the DMPS and Ex_A schemes. Ex_A divides nodal loss deviation in a balanced way to achieve an end-to-end loss constraint, shifting the 'violated' or 'redundant' loss from upstream nodes to underutilized or hot spot (bottleneck) nodes in a path. Ex_A utilizes network resources well, resulting in better satisfaction of the loss requirements.
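As a rough illustration of this redistribution, the sketch below spreads an accumulated loss deviation evenly over the remaining nodes that can absorb it. The binary light/heavy load classification, the function name, and the fallback behaviour are assumptions made for the example rather than the thesis' exact rules; Ex_P would instead weight the shares by each node's load level.

```python
def ex_a_adjust_targets(baseline_targets, deviation, load_levels):
    """Illustrative Ex_A-style redistribution of an accumulated loss deviation
    over the remaining downstream nodes.

    baseline_targets -- per-node loss targets for the remaining nodes
    deviation        -- accumulated deviation carried from upstream nodes
                        (> 0: 'violated' loss, < 0: 'redundant' loss)
    load_levels      -- 'light' or 'heavy' for each remaining node
    """
    # 'Violated' loss is recovered at lightly loaded nodes, which can afford to
    # lose less; 'redundant' loss is handed to heavily loaded (bottleneck) nodes,
    # which are then allowed to lose more.
    absorbers = [i for i, level in enumerate(load_levels)
                 if (level == 'light') == (deviation > 0)]
    if not absorbers:                       # fall back to all remaining nodes
        absorbers = list(range(len(baseline_targets)))
    share = deviation / len(absorbers)      # Ex_A: an even split among absorbers
    adjusted = list(baseline_targets)
    for i in absorbers:
        adjusted[i] = max(0.0, adjusted[i] - share)
    return adjusted

# Example: three remaining nodes with a 2% baseline each, 1% of 'violated' loss
# carried over, and nodes 1 and 3 lightly loaded.
print(ex_a_adjust_targets([0.02, 0.02, 0.02], 0.01, ['light', 'heavy', 'light']))
# -> [0.015, 0.02, 0.015]
```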
Figures 6-16 and 6-17 also reveal that both the FERs and the FIs of the high priority videos (videos 1 and 2) are lower than those of the low priority videos (videos 3 and 4). The U-FCFS scheme does not have these attributes. Compared to U-FCFS, C-DMPS with Ex_A can provide better video quality and a higher level of service to the high priority class. The DMPS method in C-DMPS with Ex_A includes a rule that assigns servicing privileges to the high priority video traffic so that quicker and more frequent servicing is given to the high priority videos. Thus, a higher level of loss satisfaction is achieved for high priority videos. Conversely, FCFS does not have service mechanisms to differentiate between different classes of traffic and cannot provide a better service quality guarantee to more important classes of traffic.

Figure 6-16. Performance comparison between C-DMPS with Ex_A and U-FCFS (FER, FI, and ET under the HML, LMH, and MMH load patterns)

The adaptation of local loss targets to meet the end-to-end loss constraint by Ex_A is further demonstrated in Figure 6-17, where the nodes along the path are configured with a load pattern of High-Medium-Low (HML). As seen in Figure 6-17, the loss for video 4 (V4) at node 1 is larger than the loss target at that node, so its local loss targets decrease at nodes 2 and 3, leading to only a minor deviation from the desired end-to-end loss.

Figure 6-17. Packet loss of C-DMPS with Ex_A at individual nodes

The above observations hold when the number of videos N=10 to 30 and the number of hops is up to 17. The rest of the results are omitted here.

6.3.2.5 C-DMPS with Ex_P

In order to study the performance of C-DMPS with Ex_P, consider the delivery of the two "Robin Hood" videos, the "Soccer" video, and the "ARD Talk" video over a 4-hop path. The packet loss constraints of the "Robin Hood", "Soccer", and "ARD Talk" traces are 6%, 9%, and 9%, respectively. Three load patterns were configured: High-Medium-Low-Low- (HMLL-), Low--Low-Medium-High (L-LMH), and Low-Low-Medium-High (LLMH). Experiments with varying numbers of videos and hops were then performed. The results presented in Figures 6-18 and 6-19 refer to the first set of experiments.

A summary of the experimental results is provided below. The C-DMPS with Ex_P scheme achieves a lower FER for most videos and a higher ET than the U-FCFS scheme. Moreover, for the C-DMPS with Ex_P scheme, all of the FI values are equal to or close to one, whereas a great fluctuation of the FIs can be observed in U-FCFS over all the tested load patterns. Therefore, C-DMPS with Ex_P outperforms U-FCFS. This is because Ex_P performs a more finely balanced division of local loss deviation with respect to achieving an end-to-end loss constraint, shifting the 'violated' or 'redundant' loss from upstream nodes to underutilized or hot spot (bottleneck) nodes in a path.
Moreover, the assignment of 'violated' loss between underutilized nodes is proportional to each node's load level, rather than evenly distributed as in Ex_A. A large portion of 'violated' loss is allocated to less congested nodes, while a smaller portion is allocated to more congested nodes. The allocation of 'redundant' loss is similar. Clearly, end-to-end loss requirements are more likely to be guaranteed by Ex_P than by Ex_A, since the network resources are nearly fully utilized. The Ex_P technique is more suitable to a case in which load conditions are diverse. The accurate distribution and adjustment of local loss targets is clearly visible in Figure 6-19, where the load pattern is HMLL-: the loss at node 1 is the largest, followed by that at nodes 2, 3 and 4.

Moreover, Figure 6-18 reveals that both the FERs and the FIs for high priority videos (videos 1 and 2) are lower than for low priority videos (videos 3 and 4). Unlike the U-FCFS scheme, the C-DMPS with Ex_P scheme can provide better video quality and a higher level of service to the high priority class than to the low priority class. FCFS does not differentiate its service and does not provide a better service quality guarantee to more important classes of traffic. However, C-DMPS with Ex_P includes a rule that assigns servicing privileges to the high priority video traffic, so that quicker and more frequent servicing is given to the high priority videos. A higher level of loss satisfaction is therefore achieved for high priority videos.

Figure 6-18. Performance comparison between C-DMPS with Ex_P and U-FCFS (FER, FI, and ET under the HMLL-, L-LMH, and LLMH load patterns)

Figure 6-19. Packet loss of C-DMPS with Ex_P at individual nodes

Very similar performance is achieved when the number of videos N=10 to 30 and the number of hops is up to 17, confirming the above conclusions. Those results are omitted here.

6.3.3 Comparison of Local Loss Allocation Schemes

Previous experiments have demonstrated the effectiveness of the five local loss allocation schemes. This section further compares their performance using the five different C-DMPS schemes. In the five C-DMPS schemes, the burden of providing an end-to-end loss guarantee is delegated equally or dynamically to all of the nodes traversed by the videos, and the traffic at individual nodes determines the local loss. In an even load case, the results of the five schemes would be close to one another. For an uneven load case, the relative performance differs, and this is of practical interest. Intuitively, it is clear that the influences of the load pattern on the schemes are different and complicated. In order to illustrate the difference, one exemplary scenario is considered. In this scenario, the load pattern between nodes follows high, high, high, medium, medium-, and low. Figure 6-20 depicts the S_inter of the five C-DMPS schemes when thirty videos are delivered over a 6-hop network. Half of them are cartoon traces called "Robin Hood" with the same packet loss constraint of 3%, and the others are news traces called "ARD Talk" with the same packet loss constraint of 9%.
As seen in this figure, in the even load case, the five C-DMPS schemes achieve nearly the same S_inter. The results conform to the prediction. In an uneven load situation, Ex_Even, Ex_Move, Ex_A and Ex_P are better than the static approach, EA. The dynamic adjustment approaches increase or decrease the loss constraints of each video at a node according to the loss performance of that video in preceding nodes. For every video, the deviation from the end-to-end loss requirement decreases, and a larger number of videos meet their end-to-end loss requirements. EA performs no adjustment. In EA, the loss deviation from preceding nodes is added to that of the succeeding hops, and any extra loss occurring in preceding nodes cannot be compensated for at subsequent nodes. As a result, compared to the other C-DMPS schemes, EA has difficulty in achieving the end-to-end loss target.

Moreover, S_inter in Ex_Even and Ex_Move is lower than in Ex_A and Ex_P. In addition, S_inter in Ex_Move is slightly lower than in Ex_Even, while S_inter in Ex_A is slightly lower than in Ex_P. This is due to the fact that Ex_Move increases or decreases the loss constraints of a video at the next subsequent node according to its loss performance in preceding nodes, regardless of the nodal load conditions. When excess loss is added to a heavily loaded node, a bottleneck can occur, further decreasing loss performance. As a result, the end-to-end loss performance achieved deviates widely from expectations. Conversely, Ex_Even uniformly spreads loss differences to subsequent nodes, alleviating the problem of bottlenecks and increasing the opportunities for videos to meet their end-to-end loss requirements. This leads to a larger S_inter compared to Ex_Move. However, since Ex_Even still neglects the load distribution among nodes, the problem of bottlenecks is not completely solved. In contrast, both Ex_A and Ex_P balance the load distribution among nodes. Both schemes compensate for a positive Δj(i-k) by adjusting the local loss targets in the subsequent lightly-loaded nodes, while a negative Δj(i-k) can be used to advantage to decrease the data loss in heavily-loaded nodes. Ex_A provides a much better end-to-end loss performance than EA, Ex_Move or Ex_Even. In Ex_P, each class of nodes, lightly-loaded or heavily-loaded, is further divided into subclasses, and both class and subclass information are used to determine the local loss adjustment. Thus, end-to-end loss requirements are more likely to be guaranteed by Ex_P than by Ex_A.

Figure 6-20. Comparison of local loss allocation schemes: (a) even load, (b) uneven load

6.4 Cost of Coordinated Packet Scheduling

The cost of Coordinated Packet Scheduling is composed of two parts: the cost of packet scheduling at each node and the cost of inter-node communication incurred by the loss distribution schemes. The cost of packet scheduling consists of a computational cost and a storage cost. The storage cost is related to maintaining flow state information. The proposed scheduling uses only two per-flow variables, the number of discarded packets and the queue length, and does not need much storage space. The scheduling involves calculating the packet loss ratio and the urgency index and sorting flows accordingly.
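To illustrate the shape of this computation, the sketch below keeps the two per-flow variables mentioned above and re-sorts flows once per interval. The urgency formula used here (the measured loss ratio scaled by the reciprocal of the flow's loss constraint) is only a stand-in, since the actual urgency index is defined elsewhere in the thesis; the class and function names are likewise illustrative.

```python
class FlowState:
    """Per-flow scheduler state: the discarded-packet count and queue length
    noted above, plus static configuration needed for this sketch."""
    def __init__(self, flow_id, loss_constraint):
        self.flow_id = flow_id
        self.inv_constraint = 1.0 / loss_constraint  # precomputed at flow setup
        self.arrived = 0        # packets that have arrived at this node
        self.discarded = 0      # packets dropped at this node
        self.queue_len = 0      # current queue length (unused in this stand-in)

def periodic_reorder(flows):
    """Recompute each flow's urgency once per interval and return the service
    order for the next interval, most urgent flow first. Per flow this costs
    one division (the loss ratio) and one multiplication (the scaling)."""
    ranked = []
    for f in flows:
        plr = f.discarded / f.arrived if f.arrived else 0.0
        urgency = plr * f.inv_constraint
        ranked.append((urgency, f.flow_id))
    ranked.sort(reverse=True)   # flow-based sort, done once per interval
    return [flow_id for _, flow_id in ranked]

# Example: flow "B" is further over its loss budget, so it is served first.
a, b = FlowState("A", 0.06), FlowState("B", 0.09)
a.arrived, a.discarded = 100, 3      # 3% loss against a 6% budget
b.arrived, b.discarded = 100, 12     # 12% loss against a 9% budget
print(periodic_reorder([a, b]))      # -> ['B', 'A']
```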
The calculation is associated with one multiplication and one division operation. These operations are only performed at certain periodic intervals because the change in service order is made periodically. Sorting can be very expensive at a small time scale; however, the proposed scheduling uses flow-based periodic sorting instead of the traditional packet-based continuous sorting, thereby reducing the sorting cost. Furthermore, AN has more computation capacity, and the sorting can be implemented in an AN environment.

The cost of inter-node communication is caused by the exchange of information between nodes as required by the loss distribution schemes. For example, in a centralized system structure, a control center has to collect state information (PLR) from every node and calculate and propagate the local loss target to each node. Since a small packet can carry this information, the per-send and per-receive overhead is trivial, while the more important overhead is how often the information exchange occurs. If information exchange occurs every time the value of the PLR deviates from its target, it will consume network bandwidth and the CPU cycles of a router, especially if there is a large number of flows and nodes. Another related problem is the maintenance of the collected information: the size of the storage space grows linearly with the number of flows and nodes. One possible solution is to set a threshold that distinguishes significant deviations from slight ones, allowing information exchange to occur only when deviations are significant. Correspondingly, the control center only keeps information for the flow in which significant deviation occurs and computes the new local loss target for that flow. This method results in less frequent computation and less use of storage space, and therefore in lower communication costs compared to the continuous and periodic adjustment method. The computational complexity and the amount of information that needs to be maintained and exchanged will not increase linearly. The update interval is a parameter that trades performance for scalability. Further experiments need to be conducted to study this trade-off.

6.5 Chapter Summary

This chapter developed promising solutions to the problem of improving end-to-end loss performance for streaming video in a multi-hop IP network. These solutions are collectively referred to as Coordinated Packet Scheduling Schemes. The experiments demonstrated that the five nodal loss allocation schemes can provide superior video quality and fair service among videos by modifying downstream loss constraints based on the upstream loss performance of a video, thereby maintaining end-to-end loss requirements.

It should be noted that the performance of the QoS control schemes is reported when real MPEG-1 traces are used as the background traffic. A study of the characteristics of Poisson and MPEG-1 video traces [52] was conducted to determine which type of traffic should be used for the simulations. The Poisson model is typically used in network simulation, while MPEG-1 traces represent realistic traffic sources. The study shows that Poisson sources exhibit some burstiness over small time scales and little or no burstiness over large time scales, while MPEG-1 sources exhibit burstiness over both small and large time scales. Therefore, MPEG-1 video traces were chosen as background traffic.
Moreover, it is important to understand that the proposed QoS control schemes can produce a lower FER and FI for most videos and a higher ET at a given congestion level in terms of packet loss ratio. In different runs, the loss performance experienced by a single video may vary, but similar trends can be observed. Due to space limitations, the results for different runs are not shown.

This work uses the concept of inter-node coordination to improve the end-to-end QoS of a flow or a class. However, with inter-node coordination, the delay incurred by the information exchange between nodes can make the adjustment of local loss targets too slow, which impacts the performance of the Coordinated Packet Scheduling schemes. This delay depends on the link speed and the traffic level. Since the packet carrying state information is small and link speeds are steadily increasing, the delay is mainly related to the traffic level. Such delay can be reduced by giving priority to information exchange traffic when there is competition for available network resources.

Although the five loss distribution schemes perform differently in terms of loss performance, they each have advantages in the trade-off between the ability to support end-to-end loss performance and complexity. Of the loss distribution schemes, EA is the simplest, but it cannot achieve the end-to-end loss target very well. Ex_Move performs better than EA and does not need any information regarding the hops in the network; however, it can result in a "bottleneck" problem, which leads to unsatisfactory end-to-end loss performance. Ex_Even further improves end-to-end loss performance but requires information about the number of remaining hops to the destination. Ex_A provides better end-to-end loss performance than the three preceding schemes; however, Ex_A requires information about the load level of subsequent nodes. In terms of end-to-end loss performance, Ex_P is the most effective, but it is also the most sophisticated, relying on detailed information about the nodal load distribution and the choice of weights.

Chapter 7 Conclusions and Future Work

This chapter summarizes our research in QoS control for video transportation in IP networks. Section 7.1 presents the conclusions of this research, and Section 7.2 discusses related future work.

7.1 Conclusions

Current IP networks only provide 'best-effort' service, and such service cannot support the QoS demanded by video applications. Much effort has been made to enhance the quality of IP network services; however, the means to achieve reliable and fair services and high network efficiency is still an open issue. Difficulties arise from the 'embedded' and 'passive' characteristics of current IP networks, where the inner nodes, such as routers and switches, only store and forward data. This kind of network model does not completely understand the requests of application users. Therefore, the network provides an improved service quality based only on the network perspective, which is often quite different from user expectations. Additionally, resource utilization is determined on the basis of how much buffer space and bandwidth are used and how much data has been sent. This differs from real network efficiency, which depends on the amount of network resources used to deliver data that accommodates users' expectations for application performance.
To solve this problem, an Active QoS control framework and a set of control techniques have been developed to achieve end-to-end, reliable, and fair QoS performance as perceived by the end-users and to maintain high network efficiency. The A-QoS framework merges the advantages of known key network-based and end-system-based techniques: it adopts the concept of Active Networks to make node processing specific to users and applications, filling the gap between the demands of users and the services provided by the network. This framework also develops the idea of error control, moving from end-system-based approaches to a system that works inside the network to mitigate the QoS constraints of applications, thereby delivering a high level of QoS satisfaction to application users under resource-constrained network situations. To support the A-QoS framework, several key elements have been designed and tested, including the QoS description model, buffer management schemes, packet scheduling schemes, and QoS distribution strategies.

The QoS description model is an important part of the A-QoS framework: without the correct QoS description model, the network cannot yield the expected service quality. An integrated description model called the joint application-network QoS description model is proposed, in which the application QoS reflects the QoS requirements posed by the users and can be explicitly converted to network-controllable parameters. The network QoS description is also strictly related to user perception. This model helps the network to behave exactly as the application user expects.

With respect to buffer management, two schemes have been proposed to perform two-tier control, at the intra- and inter-video levels. At the intra-video level, newly arriving lower priority data is dropped earlier in favour of upcoming higher priority data, and completely correct video frames are transmitted rather than partial frames. Limited network resources are used to deliver perceptually important information, yielding high video quality when the network is congested and data are lost. At the inter-video level, a loss-based buffer sharing policy helps reach a loss performance that matches each video's individual loss tolerance, resulting in a fair loss distribution between videos. Moreover, the delivery of useless data is reduced and network resources are utilized to the maximum, leading to high network efficiency. The contribution to buffer management is two-fold. First, the barriers between application requirements and buffer management policies are explored. Second, application-aware buffer management techniques, which are a pressing need for the near future, are developed. The proposed design also provides a guide to the design and assessment of the performance of buffer management techniques.

The research on packet scheduling is the first study to focus on improving loss performance; current packet scheduling methods are devoted to delay guarantees. Two loss-aware packet scheduling schemes are proposed in this research. The videos are mapped into groups with different transmission priorities based on the tuple {upcoming packet type, current loss performance} and the triplet {upcoming packet type, current loss performance, classes of videos}, respectively. Then, a specific transmission schedule for a stream is set to adapt quickly to its servicing needs. These schemes obtain the desired video quality, service fairness, and network efficiency in a simple and efficient manner. In addition, the scheme whose grouping is based on the triplet provides loss protection for high priority traffic.
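The following minimal sketch illustrates the kind of mapping the tuple-based scheme performs. The two-way split of loss performance (over or under the loss constraint), the resulting priority ordering, and the names used are assumptions made for illustration only, not the thesis' actual grouping rules.

```python
# Illustrative grouping of video streams into transmission-priority groups
# based on the tuple {upcoming packet type, current loss performance}.
PACKET_TYPE_RANK = {"I": 0, "P": 1, "B": 2}   # descending contribution to quality

def priority_group(upcoming_packet_type, current_loss_ratio, loss_constraint):
    """Return a sortable group key: lower keys are served sooner. The upcoming
    packet type dominates (I before P before B); within a type, streams that
    are over their loss budget are served before streams that are under it."""
    over_budget = 0 if current_loss_ratio > loss_constraint else 1
    return (PACKET_TYPE_RANK[upcoming_packet_type], over_budget)

# Example: two streams both about to send P packets; the one over its budget
# is grouped ahead of the one under its budget.
print(priority_group("P", 0.08, 0.06) < priority_group("P", 0.02, 0.06))  # True
```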
The success in the use of packet scheduling for loss control creates a new path for supporting loss requirements, which once could only be achieved through the use of buffer management.

Finally, a set of QoS distribution strategies is proposed for the division of end-to-end QoS requirements into local targets at the nodes along the source-to-destination path. Although the problem of end-to-end QoS is closely related to both nodal-based QoS control and the QoS allocation methodology, the QoS allocation methodology has rarely been studied. Another major contribution of this work is the development of a solution that integrates QoS distribution strategies with nodal-based scheduling schemes, increasing the probability that the end-to-end QoS requirement can be met.

7.2 Future Research

This thesis aims to merge the key advantages of end-system and network-based loss control techniques and fill the gap between application requirements and network services. Results show that the proposed A-QoS framework has considerable advantages for streaming video over IP networks. Several related aspects deserve further investigation.

• Nodal-based network management techniques have been developed based on MPEG-coded video. Techniques with the ability to support a wider range of video applications need to be studied. To this end, a generalized form applicable to other video formats with highly structured and prioritized characteristics, such as layered coded video [14] or Fine Granularity Scalability video [15], needs to be established. The use of loss-aware network control together with the simultaneous control of delay in a network, in order to satisfy end-to-end loss and delay requirements, is also an important issue for further work.

• In end-to-end QoS distribution, static routing is assumed. There needs to be an investigation of improvements in conjunction with dynamic routing. The availability of information held by intermediate nodes in active networks makes it possible to evaluate the path for the data flow of the application according to the users' QoS requirements. Therefore, cooperation between Active Network-enabled QoS-based dynamic routing and modified loss allocation can provide better end-to-end QoS for applications.

• In the architecture developed in this thesis, the exchange of QoS information is required between the application and the network as well as between network components. This information exchange should be managed by a communication protocol. The joint study of QoS monitoring and the communication protocol must also be examined. A-QoS is built on top of an Active Network architecture, and Active Networks can easily retrieve the desired information at intermediate nodes and send such information between nodes as required by spreading active packets. This potential can be applied to the design of a QoS monitoring and communication protocol.

• For this study, the proposed QoS control schemes are demonstrated through the use of an active processor module in the network simulator. The active processor module can execute the active program and perform computations on the user data flowing through it. Currently, the active programs are cached in every active node.
When an active program is implemented, the content of a packet is examined and the desired operations are performed on the video streams flowing through the network to satisfy user QoS requirements. Another option is to dynamically upload the active program when needed, allowing the deployment of full active network programmability. In this way, users can make decisions about changing the mechanisms that manage the resources and behaviour of nodes according to their requirements. Therefore, this option would offer better network resource utilization and improved video quality. Future work will include the implementation of the proposed schemes using the second option and the study of the flexibility of swapping different buffer management and packet scheduling schemes by users.

• The proposed A-QoS framework provides application-aware services that effectively support the QoS requirements of video applications. The results suggest that superior application performance can be achieved using explicit application-aware control, in which each application can set up its own control policy to exploit its required services. For this reason, the other component of multimedia, voice applications, should also benefit from this approach. The style of best-effort packet delivery available in today's IP networks is inappropriate for voice applications, and existing proposals for improving voice performance focus on results from the network perspective; the level of performance achieved does not match the application performance that users expect. There could be an advantage in performing explicit control of voice packets within the A-QoS framework, since this framework allows performance improvement from an application perspective. This control could be conducted using the Active-FEC approaches to error recovery within the framework proposed in this research [23]. Other interesting examples, such as in-network re-encoding, voice transcoding, voice-oriented packet discard, and in-network retransmission, can also be found in this paper.

• The design and implementation of the QoS control schemes are based on active networks. Future networks would most likely be heterogeneous, in which IS, DS and AN would co-exist [17]. The extension of the active QoS control schemes to IS and DS networks is another subject of future investigation. Specifically, in an IS environment the active schemes could help a network achieve high network utilization and improved scalability without sacrificing deterministic QoS guarantees for video packets. In order to apply the proposed schemes in a DS environment, the schemes should be extended to accommodate class-based buffer management, packet scheduling, or both. For example, in active LBS_FPD the buffer usage among video streams is inversely proportional to the loss constraints of each video stream. When LBS_FPD is applied to a DS-based network, the loss-based buffer sharing mechanism would be applied to a class of videos. Although DiffServ LBS_FPD is a less fine-grained control, it still preserves the main idea of the original LBS_FPD, providing an equitable loss distribution among video streams with different loss tolerances. In this way, a large percentage of videos with high QoS satisfaction can still be achieved.

BIBLIOGRAPHY

[1] A. 
Mehaoua and R.Boutaba, "The Impacts of Errors and Delays on the Performance of M P E G 2 Video Communications", IEEE International Conference On Acoustics, Speech, and Signal Processing, Phoenix, U S A , March 1999. [2] C. Perkins, O. Hodson, and V.Hardman, " A Survey of Packet Loss Recovery Techniques for Streaming Audio", IEEE Network, V o l . 12, No. 5, pp. 40-49, September/October 1998. [3] ISO/IEC 11172. [4] ISO/IEC 13818-1. [5] F . Pereiva, " M P E G - 4 : Why, What, How and When", Image Communication, Vol .15, January 2000. [6] J. Meggers and M . Wallbaum, "Application Level Error Recovery Using Active Network Nodes", The 5th IEEE Symposium on Computers and Communications, Antibes, France, July 2000. [7] R. Gurin and H . Schulzrine, "Network Quality of Service", In I. Foster and C . Kesselma, Editors, The Grid: Blueprint for a Network Computing Infrastructure, Morgan Kaufmann Publishers, San Francisco, U S A , 1998. [8] S. Keshav, " A n Engineering Approach to Computer Networking: A T M Networks, the Internet, and the Telephone Network", Addis on-Wesley, 1997. [9] A . Albanese and M . Luby, "PET-Priority Encoding Transmission", High Speed Networking for Multimedia Applications, Kluwer Academic Publishers, Boston, U S A , March 1996. 160 [10] W . Heinzelman, "Application-specific Protocol Architecture for Wireless Networks", PHD thesis, M I T , June 2000. [11] M . Normura, T.Fujii , and N . Ohta, "Layered Packet Loss Protection for Variable Rate Video Coding Using D C T ", International Workshop on Packet Video, Torino, Italy, September 1988. [12] K.Shinmanura, Y . Hayashi and F. Kishino, "Variable Bitrate Coding of Compensating for Packet Loss", Proceedings of Visual Communications and Image Processing, pp. 91-98, November 1988. [13] N . Feamster and H . Balakrishnan, "Packet Loss Recovery for Streaming Video", The 12th International Packet Video Workshop, Pittsburgh, U S A , Apr i l 2002. [14] X . L i , M . H . Ammar, and S. Paul, "Layered Video Multicast with Retransmission ( L V M R ) : Evaluation of Error Recovery", The 7th International Workshop on Network and Operating System Support for Digital Audio and Video, St. Louis, M O , U S A , M a y 1997. [15] S. L i , F. Wu , Y . Zhang, "Study of a New Approach to Improve F G S Video Coding Efficiency", 1SO/IEC JTCl/SC29/WGll/m5583, December 1999, Maui . [16] M . R . Ito and Y . Ba i , " A Packet Discard Scheme for Loss Control in IP Networks with M P E G Video Traffic", The 8 t h IEEE International Conference on Communication Systems, Singapore, November 2002. [17] M . R . Ito and Y . Ba i , "Development in QoS Control for Video Communication", The 8 t h IEEE International Conference on Communication Systems, Singapore, November 2002. 161 [18] Y . Bai and M . R . Ito, "Packet Scheduling to Support Loss Guarantee for Video Traffic", The 10th IEEE International Conference on Telecommunications, Papeete, Tahiti, French Polynesia, February 2003. [19] Y . Bai and M . R . Ito, "Users-oriented Fair Buffer Management for M P E G Video Streams", International Conference on Advanced Information Networking and Applications, Xi 'an, China, March 2003. [20] Y . Bai and M . R . Ito, " A Packet Scheduling Scheme for Satisfying Intra and Inter-Stream Loss Requirements in Video Communication", International Conference on Advanced Information Networking and Applications, Xi 'an, China, March 2003. [21] Y . Bai and M . R . 
Ito, "On Loss-Aware Packet Scheduling For Video Transport Over a Mult i -Hop IB Network", The 5th International Workshop on Multimedia Network Systems and Applications in conjunction with the 23rd International Conference on Distributed Computing Systems , Providence, U . S . A , May/June, 2003. [22] Y . Bai and M . R . Ito, "Towards End-To-End Loss Guarantees for Streaming Video in a Mult i -Hop IP Network", International Conference on Information Technology: Coding and Computing, Las Vegas, U S A , Apr i l 2003. [23] Y . Bai and M . R . Ito, "QoS Control for Multimedia Communication: Approaches and Directions", submitted to IEEE Communications Surveys. [24] K . Nichols, S. Blake, F. Baker, and D . Black, "Definition of the Differentiated Services Field (DS Field) in the IPv4 and IPv6 Headers", RFC 2474, December 1998. [25] S. Blake, D . Black, M . Carlson, E . Davies, Z . Wang, and W . Weiss, " A n Architecture for Differentiated Services", RFC 2475, December 1998. 162 [26] M . J . Riley and I .E.G. Richardson, "Quality of Service and the A T M Adaptation Layers", IEE Colloquium on ATM in Professional and Consumer Applications, May 1997. [27] Y . Wang and Q-F. Zhu, "Error Control and Concealment for Video Communication: A Review", in Proceedings of the IEEE, 86(5), pp.994-997, M a y 1998. [28] J-C, Bolot, T. Turletti, " A Rate Control Mechanism Packet Video in the Internet", The IEEE INFO COM '94, Toronto, Canada, June 1994. [29] C. Perkins and O. Hudson, "Option for Repair of Streaming Media", RFC 2354, June 1998. [30] G . Carle and E . W . Biersack, "Survey of Error Recovery Techniques for IP-Based Audio-Visual Multicast Applications", IEEE Networks, 11(6): 24-36, November/ December 1997. [31] B . W . Wah, " A Survey of Error-Concealment for Real-time Audio and Video Transmission over the Internet", IEEE International Symposium on Multimedia Software Engineering, Taipei, Taiwan, December 2000. [32] R. Braden, L . Zhang, S. Berson, S. Herzog, and S. Jamin, "Resource Reservation Protocol (RSVP) - Version 1 Functional Specification", RFC 2205, September 1997. [33] R. Braden, D . Clark and S. Shenker, "Integrated Services in the Internet Architecture: A n Overview", RFC 1633, June 1994. [34] K . Psounis, "Active Networks: Applications, Security, Safety, and Architectures", IEEE Communications Surveys, 2(1), Q l 1999. 163 [35] N . Christin and J. Liebeherr, "QoS Architecture for Quantitative Service Differentiation", IEEE Communications Magazine, Special Issue on Scalability in IP-Oriented Networks, V o l . 41, No. 6, pp. 38-45, June 2003. [36] D . L . Tennenhouse, J . M . Smith, W . D . Sincoskie and D . J. Wetherall, " A Survey of Active Network Research", IEEE Communication Magazine, V o l . 35, No. 1, pp.80-86, January 1997. [37] L . Massoulie and J. Roberts, "Bandwidth Sharing: Objectives and Algorithms", IEEE INFOCOM'99, New York, U S A , March 1999. [38] J.W. Causey, H . S. K i m , "Comparison of Buffer Allocation Schemes in A T M Switches: Complete Sharing, Partial Sharing and Dedicated Allocation", IEEE International Conference on Communications, New Orleans, U S A , M a y 1994. [39] M . A . Labrador and S. Banerjee, "Packet Dropping Policies for A T M and IP Networks", IEEE Communications Surveys, 2(3), Q3 1999. [40] S. Floyd and V . Jacobson, "Random Early Detection Gateways for Congestion Avoidance", IEEE/ACM Transactions on Networking, pp. 397-413, August 1993. [41] I. 
Joe, "Packet Loss and Jitter Control for Real Time M P E G Video Communications", Computer Communications, pp.901-904, 19 (1996). [42] M . Claypool and J. Riedl, "End-to-End Quality in Multimedia Applications", Computer Science Technical Reports, Worcester Polytechnic Institute, WPI-CS-TR-98-18, 1998. [43] G . Ghinea and J.P. Thomas, "QoS Impact on User Perception and Understanding of Multimedia Video Clips", ACM Multimedia '98, Bristol, U K , September 1998. 164 [44] Y . Wang, M . Claypool, Z. Zuo, " A n Empirical Study of Real Video Performance across the Internet", ACM SIGCOMM Internet Measurement Workshop, San Francisco, U S A , November 2001. [45] D . Wijesekera, J. Srivastava, A . Nerode and M . Foresti, "Experimental Evaluation of Loss Perception in Continuous Media", ACM Multimedia Journal, July 1999. [46] H . Kanakia, P. P. Misha and A . Reibman, " A n Adaptive Congestion Control Scheme for Real-time Packet Video Transport", ACM/IEEE SIGCOMM Symposium on Communications Architectures and Protocols, San Francisco, California, September 1993. [47] Proceedings of DARPA Active Networks Conference and Exposition, I E E E Computer Society, M a y 2002. [48] D . Wu , T. Hou, J. Yao and H . J. Chao, "Transmission of Real-time Video over the Internet: A B i g Picture", IEEE NetWorld+Interop, Las Vegas, U S A , May 2000. [49] H . Zhang and S.Keshav, "Comparison of Rate-Based Service Disciplines", ACM SIGCOMM'91, Zurich, Switzerland, September 1991. [50] C. Han, K . G . Shin, "Scheduling MPEG-Compressed Video Streams With Fi rm Deadline Constraints", The 3rd ACM International Conference on Multimedia, San Francisco, U S A , November 1995. [51] http://www-tkn.ee.tu-berlin.de/~fitzek/TRACE/trace.html [52] http://nero.informatik.uni-wuerzburg.de/MPEG/ 165 [53] Y . Ba i and M . R . Ito, " A n Active Quality of Service Control Framework for Video Streaming over IP Networks", submitted to IEEE/ACM Transactions on Networking. [54] M . Fester, " Performance issues for High-end Video over A T M " , August 1995: http://www.cisco.com/warp/public/cc/sol/mkt/ent/atm/vidat_wp.html [55] T Wol f and J.S. Turner, "Design Issues for High Performance Active Routers", IEEE Journal on Selected Areas in Communications, Vol .19, No.4, March 2001. [56] L . Wei , H . Lehman, S.J.Garland, and D . L . Tennenhouse, "Active Reliable Multicast ", IEEE INFOCOM'98, San Francisco, U S A , March/Apri l 1998. [57] B.Schwartz, A . W Jackson, W.T. Strayer, Wenyi Zhou, R . D . Rockwell , and C. Partridge, "Smart Packets for Active Networks", The 2nd IEEE Conference on Open Architectures and Network Programming, New York, U S A , March 1999. [58] T. Faber, " A C C : Using Active Networking to Enhance Feedback Congestion Control Mechanisms", IEEE Network, Vol .12, Issue 3, pp.61 -65, May/June 1998. [59] G.T. Karetsos, S.A. Kyriazakos, G . Karagiannopoulos, "Supporting mobile IP in an Active Networking environment", IEEE Wireless Communications and Networking Conference , Chicago, U S A , September 2000. [60] A . Banchs, W.Effelsberg, C. Tschudin, and V . Turau, "Multicasting Multimedia Streams with Active Networks", The 23rd Annual Conference on Local Computer Networks, Boston, U S A , October 1998. [61] L . Le , H . Sanneck, G . Carle and T. Hoshi: "Active Concealment for Internet Speech Transmission" The 2nd International Working Conference on Active Networks, Tokyo, October 2000. 166 [62] R. Sivakumar, S. Han and V . 
Bharghavan, " A Scalable Architecture for Active Network Control", OPENARCH 2000, Tel A v i v , Israel, March 2000. [63] S. Kang, H . Y . Youn, Y . Lee, D . Lee and M . K i m , "The Active Traffic Control Mechanism for Layered Multimedia Multicast in Active Network", The 8th International Symposium on Modeling, Analysis and Simulation of Computer and Telecommunication Systems, San Francisco, U S A , August 2000. [64] E . Amir , S. McCanne and R . H . Katz, " A n Active Service Framework and Its Application to Real-Time Multimedia Transcoding", ACM SIGCOMM'98, Vancouver, Canada, August 1998. [65] S. L i , and B . Bhargava, "Active Gateway: A Facility for Video Conferencing Traffic Control", The 21st International Computer Software and Applications Conference, Washington, D C , U S A , August 1997. [66] E . Amir , " A n Application Level Video Gateway", ACM Multimedia'95, San Francisco, U S A , November 1995. [67] K . Najafi and A . Leon-Garcia, "Active Video-A Novel Approach to Video Distribution", The 2nd IFIP/IEEE International Conference on Management of Multimedia Networks, Versailles, France, November 1998. [68] J. Brasssil, "Active Network Error Recovery for Continuous Media Applications", SPIE Symposium on the Performance and Control of Network Systems, V o l . 3231, pp. 196-207, November 1997. 167 
