@prefix vivo: <http://vivoweb.org/ontology/core#> .
@prefix edm: <http://www.europeana.eu/schemas/edm/> .
@prefix ns0: .
@prefix dcterms: <http://purl.org/dc/terms/> .
@prefix dc: <http://purl.org/dc/elements/1.1/> .
@prefix skos: <http://www.w3.org/2004/02/skos/core#> .

vivo:departmentOrSchool "Applied Science, Faculty of"@en, "Electrical and Computer Engineering, Department of"@en ;
edm:dataProvider "DSpace"@en ;
ns0:degreeCampus "UBCV"@en ;
dcterms:creator "Er, Wee Liat"@en ;
dcterms:issued "2009-03-11T00:00:00"@en, "1996"@en ;
vivo:relatedDegree "Master of Applied Science - MASc"@en ;
ns0:degreeGrantor "University of British Columbia"@en ;
dcterms:description """Personal Communications Systems (PCS) is a rapidly growing and important segment of the telecommunications industry. The ultimate goal of PCS is to provide a wide range of integrated wireless and wireline services, such as voice, data, video and imaging, in one universal Personal Communications Network (PCN). We consider using the Cable Television (CATV) network as a communications backbone to provide transport for ATM-based wireless PCS traffic, as one of several network architecture alternatives. The objective is to achieve medium sharing of existing networks and hence conserve capital while providing minimum interruption to ongoing services. A suitable signaling architecture for the proposed CATV/PSTN overlay is presented. A distributed communications architecture (the DCP/ASEs approach) is discussed and compared to the traditional centralized network and control architecture (the CP/B-ISUP approach). Call setup and handoff delays encountered in the proposed CATV/PCN overlay under these two schemes are compared. The network performance of the design model, including ATM end-to-end delay, AAL end-to-end delay, buffer requirements, cell access delay and cell loss, is also investigated in this research work. Simulation models, consisting of 1 and 3 distribution centers (DC), each with 15 distribution points (DP), have been constructed for study.
Network delay, throughput, capacity and coverage of the CATV/PCN overlay under various traffic loading situations are estimated. Finally, in order to support the ever-increasing demand for mobile communications, distributed antennas are considered to provide coverage enhancement in a simulcast arrangement."""@en ;
edm:aggregatedCHO "https://circle.library.ubc.ca/rest/handle/2429/5867?expand=metadata"@en ;
dcterms:extent "4905291 bytes"@en ;
dc:format "application/pdf"@en ;
skos:note "ATM-BASED WIRELESS PERSONAL COMMUNICATIONS SERVICES (PCS) OVER CABLE TELEVISION (CATV) NETWORK

BY WEE LIAT ER
B.Sc.(EE), University of Alberta, 1994

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF APPLIED SCIENCE IN THE FACULTY OF GRADUATE STUDIES, DEPARTMENT OF ELECTRICAL ENGINEERING

We accept this thesis as conforming to the required standard.

THE UNIVERSITY OF BRITISH COLUMBIA
DECEMBER 1996
© WEE L. ER, 1996

In presenting this thesis in partial fulfilment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the head of my department or by his or her representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission. Department of Electrical Engineering, The University of British Columbia, Vancouver, Canada.

Abstract

Personal Communications Systems (PCS) is a rapidly growing and important segment of the telecommunications industry.
The ultimate goal of PCS is to provide a wide range of integrated wireless and wireline services, such as voice, data, video and imaging, in one universal Personal Communications Network (PCN). We consider using the Cable Television (CATV) network as a communications backbone to provide transport for ATM-based wireless PCS traffic, as one of several network architecture alternatives. The objective is to achieve medium sharing of existing networks and hence conserve capital while providing minimum interruption to ongoing services. A suitable signaling architecture for the proposed CATV/PSTN overlay is presented. A distributed communications architecture (the DCP/ASEs approach) is discussed and compared to the traditional centralized network and control architecture (the CP/B-ISUP approach). Call setup and handoff delays encountered in the proposed CATV/PCN overlay under these two schemes are compared. The network performance of the design model, including ATM end-to-end delay, AAL end-to-end delay, buffer requirements, cell access delay and cell loss, is also investigated in this research work. Simulation models, consisting of 1 and 3 distribution centers (DC), each with 15 distribution points (DP), have been constructed for study. Network delay, throughput, capacity and coverage of the CATV/PCN overlay under various traffic loading situations are estimated. Finally, in order to support the ever-increasing demand for mobile communications, distributed antennas are considered to provide coverage enhancement in a simulcast arrangement.

Table of Contents

Abstract ii
List of Figures vii
List of Tables x
Acknowledgments xi
Chapter 1 Introduction 1
1.1 A Summary of ATM Concepts 2
1.1.1 ATM Cell Header Fields 2
1.1.2 ATM Protocol Reference Model 5
1.2 Motivation and Objectives 6
1.3 Thesis Outline 8
Chapter 2 CATV Network and the PCN Overlay Architecture 9
2.1 Rationale for using CATV Networks for PCS 9
2.2 CATV Architecture in Canada 10
2.2.1 The Backbone Network 14
2.2.2 The Access Network 15
2.3 Bandwidth or Spectrum Allocation of CATV 17
2.4 HFC Upgrade Requirements and Integration Phases 18
2.4.1 Phase 1 19
2.4.2 Phase 2 19
2.4.3 Phase 3 20
2.5 Personal Communications Network (PCN) Overlay 20
2.5.1 PCN architecture 21
2.5.2 The ATM based PCN overlay 22
Chapter 3 ATM Control Signaling Issues 25
3.1 ATM Signaling Transport Network for the PCN Overlay 25
3.1.1 Quasi-associated Signaling Transport Architecture 26
3.1.2 Associated Signaling Transport Architecture 28
3.1.3 Hybrid Signaling Transport Architecture 29
3.1.4 Interoperation of Current SS7 Network and Broadband Network 29
3.1.5 ATM Signaling Transport Architecture Suitable for the PCN Overlay 31
3.2 Distributed Control Architecture (DCA) 32
3.2.1 Distributed Call Processing Architecture 33
3.2.2 Elements of the Distributed Call Processing Architecture 34
3.3 Distributed Call Processing Approach vs. Current Approach 35
3.4 System and Network Modeling of the PCN Overlay 38
3.4.1 System Model and Parameters 38
3.4.2 Physical Network Model 40
3.5 Call Setup and Handoff Delay Analysis 41
3.5.1 Intra-cluster Intra-DC Call Connection Setup Delay 41
3.5.2 Inter-cluster Intra-DC Call Connection Setup Delay 42
3.5.3 Inter-DC Call Connection Setup Delay 44
3.5.4 Intra-cluster Intra-DC Call Connection Handoff Delay 45
3.5.5 Inter-cluster Intra-DC Call Connection Handoff Delay 46
3.5.6 Inter-DC Call Connection Handoff Delay 46
3.6 Discussion and Conclusion 47
Chapter 4 Performance Analyses for Integrated Voice and Data Traffic in the CATV/PCN Overlay 52
4.1 Physical Single DC Network Model 52
4.2 Simulation Model 53
4.3 Simulation Results - Single DC Network with Data Traffic 54
4.4 Simulation Results - Single DC Network, Mixed Voice and Data Traffic 56
4.4.1 Voice and Data Traffic with Variable Data Traffic Loading 56
4.4.2 Voice and Data Traffic with Variable Voice Traffic Loading 61
4.4.3 Voice and Data Traffic with Different Voice Coding Rates 63
4.4.4 Voice and Data Traffic with Different Inter-cluster Traffic Levels 66
4.5 Three-DC Network Model and Simulation Results 68
Chapter 5 Network Capacity and Coverage of the CATV/PCN Overlay 73
5.1 Network Capacity of the CATV/PCN Overlay 73
5.2 Coverage Estimate In Terms of City Blocks 75
5.3 Optimum Number of Base Stations 77
5.4 Capacity Enhancement in the CATV/PCN Overlay 79
5.5 Distributed Antenna 80
5.5.1 Distributed Antennas Concepts 80
5.5.2 Merits of Distributed Antennas 81
5.5.3 Number of Radio Ports (RADs/MEXs) 82
5.5.4 PCS Air Interface With CDMA 82
Chapter 6 Summary and Conclusions 85
6.1 Summary of Findings 85
6.2 Suggestions for Future Work 88
References 89
Appendix A. List of Abbreviations and Acronyms 93
Appendix B. Simulation Model Selections 96
Appendix C. Confidence Interval Calculations 99

List of Figures

FIGURE 1.1. ATM cell format for UNI interface (left) and NNI interface (right) 3
FIGURE 1.2. ATM protocol stack 5
FIGURE 1.3. AAL type 5 CS-PDU format 6
FIGURE 2.1. Evolution of CATV networks 10
FIGURE 2.2. Current CATV architecture 11
FIGURE 2.3. HFC access network architecture options 14
FIGURE 2.4. CATV backbone network architecture 15
FIGURE 2.5. Interconnection between DC and DPs 16
FIGURE 2.6. Spectrum allocation of a modern CATV network 17
FIGURE 2.7. Logical architecture of PCN based on SS7 21
FIGURE 2.8. Conceptual configuration of the PCN overlay 23
FIGURE 3.1. ATM service and control transport network and the UNI and NNI signaling protocol 26
FIGURE 3.2. ATM-based quasi-associated signaling transport 27
FIGURE 3.3. ATM-based associated signaling transport 28
FIGURE 3.4. Interworking between the ATM signaling transport network and SS7 network 30
FIGURE 3.5. Cluster-based Distributed Call Processing Architecture 33
FIGURE 3.6. Conventional (CP/B-ISUP) approach for call setup (associated mode) 36
FIGURE 3.7. DCP/ASEs approach for call setup (associated mode) 37
FIGURE 3.8. Physical network layout for call connection setup analysis 40
FIGURE 3.9. Mean end-to-end intra-cluster intra-DC connection setup delay 42
FIGURE 3.10. Mean end-to-end inter-cluster intra-DC connection setup delay 43
FIGURE 3.11. Mean end-to-end inter-DC connection setup delay 44
FIGURE 3.12. Mean minimum and maximum intra-cluster intra-DC handoff delay 45
FIGURE 3.13. Mean inter-cluster intra-DC handoff delay 46
FIGURE 3.14. Mean inter-DC handoff delay 47
FIGURE 3.15. Mean end-to-end connection setup time with the CP/B-ISUP approach 48
FIGURE 3.16. Mean end-to-end connection setup time with the DCP/ASEs approach 49
FIGURE 3.17. Mean connection handoff delay with the CP/B-ISUP approach 50
FIGURE 3.18. Mean connection handoff delay with the DCP/ASEs approach 50
FIGURE 4.1. Single DC network model 52
FIGURE 4.2. Worst case data packet end-to-end delay 55
FIGURE 4.3. Average data packet AAL end-to-end delay in each DP 56
FIGURE 4.4. Average inter-cluster data packet AAL end-to-end delay 57
FIGURE 4.5. Average intra-cluster data packet AAL end-to-end delay 58
FIGURE 4.6. Data packet AAL end-to-end transmission delay in each DP location 59
FIGURE 4.7. AAL end-to-end delay variation of voice and data traffic 60
FIGURE 4.8. Average inter-cluster data packet AAL end-to-end delay (fixed data throughput) 61
FIGURE 4.9. Average intra-cluster data packet AAL end-to-end delay (fixed data throughput) 62
FIGURE 4.10. Average inter-cluster AAL end-to-end delay (diff. voice coding rate) 63
FIGURE 4.11. Average intra-cluster AAL end-to-end delay (diff. voice coding rate) 64
FIGURE 4.12. Inter-cluster AAL and ATM end-to-end delays (fixed data throughput) 65
FIGURE 4.13. Intra-cluster AAL and ATM end-to-end delays (fixed data throughput) 66
FIGURE 4.14. Worst case inter-cluster AAL and ATM end-to-end delay 67
FIGURE 4.15. Worst case intra-cluster AAL and ATM end-to-end delay 67
FIGURE 4.16. 3-DC network model 68
FIGURE 4.17. Average AAL end-to-end delay for inter-DC traffic 70
FIGURE 4.18. DC switch traffic loss ratio 70
FIGURE 4.19. Average AAL end-to-end delay variation for inter-DC traffic 71
FIGURE 4.20. Average inter-DC AAL end-to-end delay with different voice coding rates 72
FIGURE 5.1. DP coverage area representation for wireless PCS 75
FIGURE 5.2. DP coverage estimate in terms of city blocks and associated microcells 76
FIGURE 5.3. DP coverage estimate with shorter holding time 77
FIGURE 5.4. Optimum number of base stations vs. offered traffic per BS 78
FIGURE 5.5. Distributed antennas enlarge service coverage areas 81
FIGURE 5.6. Transmission delay due to different distances pulses traversed 83
FIGURE B.1. Single DC network model 96
FIGURE B.2. Data/Voice source model 96
FIGURE B.3. Data/Voice destination model 97
FIGURE B.4. DP model 97
FIGURE B.5. DP_sw model 98
FIGURE B.6. DC_sw model 98

List of Tables

TABLE 1. Payload Type (PT) Indicators 4
TABLE 2. B-ISDN service classes 6
TABLE 3. 550-MHz CATV network performance allocations 12
TABLE 4. Summary of Results based on an NTT study 32
TABLE 5. Processing times for protocol layer subsystems 39
TABLE 6. Application processing times 39
TABLE 7. Connection handoff delays of the CP/B-ISUP and the DCP/ASEs approaches 49
TABLE 8. Delay and Delay Variation for Two-way Session Audio and Video Services 61
TABLE 9. Network capacity with blocking probability of 0.001 and 50% voice / 50% data traffic 74
TABLE 10. Network capacity with blocking probability of 0.001 and 60% voice / 40% data traffic 74
TABLE 11. Call arrival rate and holding time assumptions 76
TABLE 12. 99% confidence interval of sample simulation results 100

Acknowledgments

I would like to express my gratitude to my research supervisor, Dr. R. W. Donaldson, for his direction, guidance and support throughout my studies at the University of British Columbia. This work was funded by the Canadian Institute of Telecommunications Research (CITR) through a Research Assistantship provided by Dr. Donaldson. I would also like to thank Dr. Victor C. M. Leung and MIL 3, Inc. for providing OPNET, the simulation software, and continuous technical support. Further, special thanks to my colleagues and staff in the department for their help. Finally, I would like to thank my family and relatives for their encouragement and support for me to pursue my personal goal.

Chapter 1 Introduction

Personal Communications Systems (PCS) is a rapidly growing and important segment of the telecommunications industry.
The ultimate goal of PCS is to provide truly personal, cost-effective and ubiquitous communication coverage to enable people to communicate with other people, databases or other services at any place and time, regardless of their geographic location, through any type of device, using a unique and lifelong Personal Identification Number (PIN). The demand for PCS (personal communications services) has increased significantly since the introduction of the first and second generation systems, as the public has become aware of the benefits. Personal communications services are expected to see continuing growth via B-ISDN (Broadband Integrated Services Digital Network) in the coming decades. The main feature of the B-ISDN is the provision of an integrated network capable of supporting a wide range of multimedia applications, including voice, data, video and image communications, over interconnections of local area networks (LANs). The B-ISDN is conceived as an all-purpose digital network. As summarized in CCITT Recommendation I.121, the B-ISDN is able to support switched, semipermanent, permanent, point-to-point and point-to-multipoint connections, and is to provide on-demand, reserved and permanent services. Connections in the B-ISDN support both circuit mode and packet mode services, either connectionless or connection-oriented, in bidirectional and unidirectional configurations. As well, the B-ISDN contains intelligent capabilities for the purpose of providing network control, operation and management (OAM). ATM (Asynchronous Transfer Mode) has been defined by the CCITT as the target transport mode for B-ISDN.

1.1 A Summary of ATM Concepts

ATM is a fixed-cell-size, virtual-circuit-oriented (i.e., connection-oriented) packet-switching and multiplexing technique.
With ATM, applications with different bandwidth requirements are easily supported because bandwidth is assigned on demand, as long as there are sufficient resources in the network to support the application. Hence, ATM provides an efficient means to guarantee the quality of service (QOS) requirements of applications. Although ATM is connection-oriented in nature, connectionless services can also be supported relatively easily and efficiently by incorporating connectionless servers. On the other hand, buffer management and switching fabric designs are simplified by the fixed and small ATM cell size. Furthermore, buffer sizes at intermediate nodes are expected to be small, implying a relatively small and bounded cell delay.

1.1.1 ATM Cell Header Fields

An ATM cell, shown in Figure 1.1, is composed of 53 bytes, where the first 5 bytes constitute the header and the remaining 48 bytes contain user information (payload). The GFC is a 4-bit field which provides flow control at the UNI (user-network interface) for traffic originating at user equipment and directed to the network. The GFC defines two operation modes: controlled access and uncontrolled access. The controlled access mode regulates the flow rate of cells generated by users at the UNI, while the uncontrolled access mode does not. The HEC field is employed mainly to detect and discard cells with corrupted headers, and for cell delineation¹; the CRC generator polynomial used is x^8 + x^2 + x + 1. The 3-bit PT field (refer to Table 1 for PT values) is used to distinguish between user cells and control cells, so that control and signaling data can be transmitted on a different sub-channel from user data. The 1-bit CLP field determines cell-loss priority: a cell with CLP set to 1 may be discarded by the network during congestion, whereas a cell with CLP set to 0 (higher priority) shall not be discarded if at all possible.
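As a concrete illustration of the field layout just described, the following sketch (Python; not from the thesis) unpacks a 5-byte UNI header and computes the header CRC with the generator x^8 + x^2 + x + 1. The final XOR with 0x55 follows the HEC coset specified in ITU-T I.432; the byte values used below are invented for illustration.

```python
def crc8_hec(header4: bytes) -> int:
    """CRC-8 over the first 4 header bytes, generator x^8 + x^2 + x + 1
    (0x07), XORed with the 0x55 coset as ITU-T I.432 specifies."""
    crc = 0
    for byte in header4:
        crc ^= byte
        for _ in range(8):
            crc = ((crc << 1) ^ 0x07) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
    return crc ^ 0x55

def parse_uni_header(h: bytes) -> dict:
    """Unpack GFC (4 bits), VPI (8), VCI (16), PT (3), CLP (1), HEC (8)."""
    return {
        "gfc": h[0] >> 4,
        "vpi": ((h[0] & 0x0F) << 4) | (h[1] >> 4),
        "vci": ((h[1] & 0x0F) << 12) | (h[2] << 4) | (h[3] >> 4),
        "pt":  (h[3] >> 1) & 0x07,
        "clp": h[3] & 0x01,
        "hec": h[4],
    }
```

For an NNI header the same layout applies, except that the GFC field is absorbed into a 12-bit VPI.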
[FIGURE 1.1. ATM cell format for UNI interface (left) and NNI interface (right). GFC: generic flow control; VPI: virtual path identifier; VCI: virtual circuit identifier; PT: payload type; CLP: cell loss priority; HEC: header error control.]

1. Cell delineation is used to determine the cell boundaries from the stream received from the physical media layer. CCITT Recommendation I.432 defines how the HEC field of the cell header can be used for this purpose. The HEC cell delineation process becomes more robust when a scrambling mechanism is applied, which improves transmission performance and reduces the possibility of false emulation of the bit pattern with respect to the HEC coding [34].

The virtual channel identifier (VCI) and virtual path identifier (VPI) define a cell's channel and transmission path. Many individual application streams, distinguished by their VCIs, can be multiplexed onto a virtual path. Control signaling and network management information are also carried in virtual circuit connections (VCCs). VCCs between the same two points along a network are grouped into virtual path connections (VPCs), and VPCs are multiplexed along a physical medium. Bandwidth for the VCCs can be varied dynamically, so variable bit rate services are accommodated.

TABLE 1. Payload Type (PT) Indicators
PTI coding  Meaning
000         User data cell, congestion not experienced, SDU type = 0
001         User data cell, congestion not experienced, SDU type = 1
010         User data cell, congestion experienced, SDU type = 0
011         User data cell, congestion experienced, SDU type = 1
100         Segment OAM flow-related cell
101         End-to-end OAM flow-related cell
110         Resource management cell
111         Reserved

Cell switching in ATM networks is performed with routing tables, based on the address in the cell header.
When a virtual path is established, the VPI of a cell is used to select an entry from the switch's routing table, which determines the ATM switch output port to which the cell should be forwarded. The VPI value of the cell may then be overwritten with a new one that the next switch along the path will recognize as an entry in its own routing table. The cell is then sent to the next switch.

1.1.2 ATM Protocol Reference Model

The ATM protocol model, as shown in Figure 1.2, is divided into three layers: the physical layer, the ATM layer and the ATM adaptation layer (AAL). The AAL is responsible for defining a set of service classes to fit the needs of different user requests, and converts incoming user requests for services into ATM cells for transport. The ATM layer, on the other hand, mainly performs switching and multiplexing functions. It is also responsible for flow control (to ensure quality of service, QOS) and cell sequence integrity; however, no retransmission of lost or erroneous cells is performed. Finally, the physical layer defines a transport method for ATM cells between two ATM entities.

[FIGURE 1.2. ATM protocol stack: management, control and user planes over the higher layer protocols, the ATM adaptation layer (AAL), the ATM layer and the physical layer.]

The traffic requirements of a diverse set of applications can be arranged into 4 classes: Classes A, B, C and D. Different ATM adaptation layer protocols (AAL1, AAL2, AAL3/4 and AAL5) are used to support the service classes listed below:
• AAL1 supports continuous constant bit rate (CBR) services (Class A) such as voice.
• AAL2 supports continuous variable bit rate (VBR) services (Class B) such as video conferencing.
• AAL3/4 and AAL5 support packet-based information transfer. AAL5 is preferred because AAL3/4 is highly complicated.
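The VPI translation step described above amounts to a table lookup followed by a label rewrite. A minimal sketch (Python; the port numbers and VPI values are invented for illustration, not taken from the thesis):

```python
# Hypothetical per-switch VPI translation table.
# Key: (input port, incoming VPI) -> value: (output port, outgoing VPI)
routing_table = {
    (1, 42): (3, 7),
    (2, 42): (3, 9),
}

def switch_cell(in_port: int, vpi: int) -> tuple[int, int]:
    """Select the output port and overwrite the VPI before forwarding."""
    out_port, new_vpi = routing_table[(in_port, vpi)]
    return out_port, new_vpi
```

Note that the same incoming VPI on different ports can map to different outgoing labels, which is why VPI values only need to be unique per link rather than network-wide.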
For AAL5, multiplexing is performed at higher layers, error detection is simplified, and only one type of convergence sublayer protocol data unit (CS-PDU) is allowed. The CS-PDU contains the user data field, a pad field to align the resulting PDU to fill an integral number of ATM cells, a control field, a length field and a CRC field, as illustrated in Figure 1.3. The SDU type included in the PT field of the cell header, when set to 1, indicates that this is the last cell of a CS-PDU; all other cells are transmitted with the SDU type set to 0.

[FIGURE 1.3. AAL type 5 CS-PDU format: user data | pad (0-47 bytes) | control field (2 bytes) | length field (2 bytes) | CRC (4 bytes).]

The requirements in each of the four service classes are summarized in Table 2.

TABLE 2. B-ISDN service classes
Attribute                              Class A              Class B              Class C              Class D
Timing between source and destination  Required             Required             Not required         Not required
Bit rate                               Constant             Variable             Variable             Variable
Connection mode                        Connection-oriented  Connection-oriented  Connection-oriented  Connectionless

1.2 Motivation and Objectives

The many distinctive features and potential demands of PCS suggest that highly centralized communications architectures may not be suitable for implementation. Proposed wireless networks are expected to use microcells to facilitate frequency reuse and thereby improve spectral efficiency. A significant increase in signaling traffic relating to location updates, caused by frequent cell boundary crossings, is expected due to the reduction in cell size. Apart from that, signaling loads will increase because of an increase in the number of users. The current centralized network and control architecture imposes a limit on meeting increased processing loads and call setup and handoff requirements. As a result, QOS requirements may be violated, and undesired call dropping and/or transmission errors may occur. It is therefore useful to consider a distributed communications architecture.
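Returning to the AAL5 CS-PDU of Figure 1.3: the pad length follows directly from the rule that user data, pad and the 8-byte trailer must fill an integral number of 48-byte cell payloads. A minimal sketch (Python; not part of the thesis):

```python
def aal5_pad_len(user_len: int) -> int:
    """Pad bytes needed so that user data + pad + 8-byte trailer
    (2-byte control + 2-byte length + 4-byte CRC, per Figure 1.3)
    fills an integral number of 48-byte cell payloads."""
    TRAILER = 8
    return (-(user_len + TRAILER)) % 48
```

For example, a 40-byte payload needs no pad (40 + 8 = 48, exactly one cell), while a 41-byte payload needs the maximum 47 bytes of pad; the result always falls in the 0-47 byte range shown in Figure 1.3.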
This thesis investigates the network performance of an ATM-based wireless PCS network overlaid on a CATV network. Similar work has been conducted for networks with architectures based on the IEEE 802.6 MAN [15][31][32][33]. The objectives of the thesis work are:
a) to propose an ATM-based wireless PCN over CATV networks;
b) to discuss the ATM signaling transport architectures suitable for the PCN/CATV overlay;
c) to compare the proposed distributed call control architecture with the current architecture, and to compute the call connection setup and handoff delays of these architectures;
d) to study network performance through network attributes such as AAL end-to-end delay, ATM end-to-end delay and cell loss ratio (CLR);
e) to estimate the network capacity and coverage of the CATV/PCN overlay and to provide suggestions on ways to enhance them.

1.3 Thesis Outline

In Chapter 2, the current CATV architecture in Canada is described and the wireless ATM overlay for PCS is proposed. Chapter 3 discusses the ATM signaling transport architectures as well as the distributed control architecture; the call setup and handoff performance of the distributed and current approaches is compared over the proposed PCN. The simulation results of the performance analysis for integrated voice and data traffic in the CATV/PCN overlay are presented in Chapter 4. Investigations for a single-DC network and a three-DC network are carried out at various levels of inter-cluster traffic, different user traffic levels and different speech encoding/decoding rates. In Chapter 5, we estimate the network capacity of the CATV/PCN overlay and discuss issues regarding the use of distributed antennas to enhance network coverage. Finally, Chapter 6 summarizes the findings and provides suggestions for future work.
Chapter 2 CATV Network and the PCN Overlay Architecture

This chapter provides a brief technical introduction to the current CATV architecture in Canada and proposes an ATM-based wireless PCS overlay to support integrated services.

2.1 Rationale for using CATV Networks for PCS

Additional network capacity, as well as architectural changes, will be needed to meet the anticipated demand for Personal Communications Services (PCS). Users expect superior-quality, highly reliable and uninterrupted data transmission services for business or private-network voice communications. The technology introduced to meet the performance and reliability objectives must provide effective diagnostic aids as well as automation of transmission circuit and equipment redundancy. The start-up cost associated with a new, unbuilt network would be enormous, and it is therefore desirable to minimize costs by utilizing an existing network and upgrading it as necessary. In other words, conserving capital while providing minimum interruption to ongoing services is a key objective in the implementation and delivery of PCS. The Cable Television (CATV) industry has been incorporating optical fiber into its network. The optical backbone and distribution cables have been deployed, and many of the current coaxial drop cables are expected to be replaced by optical fibers in the near future. In addition, future expansion of the network has been well planned through the installation of spare fibers. Thus, CATV networks offer broadband coverage over a wide area. This implies a readily available asset, and avoids the need to construct new transmission facilities, which would incur a very high start-up cost. The evolution of a CATV service provider into a Multiple System Operator (MSO) is illustrated in Figure 2.1; some of the potential services to be provided are listed as well.
[FIGURE 2.1. Evolution of CATV networks. Service categories shown: Entertainment (broadcast TV, HDTV, enhanced pay-per-view, video-on-demand, interactive TV, interactive video games; 1-way broadcast, broadband, non-switched); Info/Transaction (video catalog shopping, distance learning, desktop multimedia, image networking, transaction services, high-speed Internet access; 2-way asymmetric, NB/WB/BB, switched); Communications (telecommuting, video conferencing, video telephony, PCS, ISDN/non-switched special, POTS; 2-way symmetric, NB/WB, switched). NB = narrowband, WB = wideband, BB = broadband.]

2.2 CATV Architecture in Canada

A typical CATV distribution network is composed of 3 major parts: the headend, the distribution system and the drop system. Super trunks, starting at the headend (HE), carry traffic (television signals, voice, data, etc.) to distribution centers (DC). The traffic is then passed along distribution cables which connect DCs and drop points (DP). Finally, DPs distribute the traffic to subscribers via drop cables (mainly coaxial cables). A DC normally serves approximately 15 DPs [1], according to the population of subscribers. A DC also interconnects with other carriers for access to the Public Switched Telephone Network (PSTN) via standard operator-to-operator Signaling System 7 (SS7) links. Bidirectional optical fiber links are utilized to interconnect the HE, DCs and DPs to make up the backbone network and the access network. The current backbone network employs either digital or Frequency Modulation (FM) carriage of the video signals to deliver 60 dB SNR to DCs. The distribution network architecture uses standard DLC techniques for moving telephone signals on fiber optic cables, and standard VSB-AM (Vestigial Side Band - Amplitude Modulation) transport techniques for moving video and entertainment services on fiber optic cables.
This architecture is implemented with a star/ring topology, as illustrated in Figure 2.2. Amplifiers are needed, and they are strategically located to overcome cable attenuation.

[FIGURE 2.2. Current CATV architecture, showing multitaps.]

This deployment has the merit of meeting future needs for higher channel capacity and signal quality. In addition, it enables configuration as a wide-area network (WAN) with redundant paths between hubs to support data services reliably. In order to meet the performance requirements specified in Table 3, the trunk cascades are restricted to 10 feed-forward amplifiers, made possible by establishing additional DCs and DPs.

TABLE 3. 550-MHz CATV network performance allocations
Segment                            CNR(a)   CTB(b)
Backbone Network                   60 dB    -65 dB
Access Network (fiber supertrunk)  56 dB    -65 dB
Drop cables (coax)                 55 dB    -62 dB
Distribution                       57 dB    -60 dB
System Total                       50 dB    -51 dB
a. CNR is the Carrier-to-Noise Ratio.
b. CTB is the Composite Triple Beat.

Studies show that 6 DCs are needed to serve 600,000 subscribers in Toronto with 550 MHz capacity, which supports 77 video channels [1]. Each DC in turn serves a maximum of 15 DPs, each with an area of approximately 9 km² and a population of about 10,000 subscribers. This architecture is very suitable for the interconnection of fourth-level cells in a cellular radio network, as well as for multiplexing/bridging nodes for a high-speed metropolitan-area (MA) data network. These new services could be deployed using the spare fibers that are installed along with those for CATV. An added benefit is that substantial savings can be achieved by the sharing of common facilities (powering, real estate, fiber cable installation and so on). DPs provide an efficient concentrated TR-303 interface to a local digital telephony switch.
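The "System Total" row of Table 3 can be sanity-checked by cascading the per-segment allocations. The sketch below (Python; not from the thesis) assumes the usual conventions for cascaded CATV plant: CNR contributions combine on a power (10 log) basis, while CTB products are taken to add approximately on a voltage (20 log) basis.

```python
import math

# Per-segment allocations from Table 3 (550-MHz plant).
cnr_db = [60, 56, 55, 57]       # backbone, fiber supertrunk, coax drops, distribution
ctb_db = [-65, -65, -62, -60]

# CNRs degrade by power-summing the inverse ratios, then inverting back.
cnr_total = -10 * math.log10(sum(10 ** (-c / 10) for c in cnr_db))

# CTB products assumed to add on a voltage (20 log) basis.
ctb_total = 20 * math.log10(sum(10 ** (c / 20) for c in ctb_db))

print(round(cnr_total, 1), round(ctb_total, 1))  # prints: 50.6 -50.7
```

Both figures round to the 50 dB / -51 dB "System Total" row, so the table's allocations are internally consistent under these assumptions.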
AM signals are carried by a DP to fiber nodes (FN), where the conversion of optical signals to electrical signals takes place. Each FN in turn serves 125 subscribers on each of its 4 extensions, giving a total of 500 subscribers per FN. The CATV systems transmit bidirectionally over coaxial cable tree and branch buses to Network Interface Units (NIU). NIUs are employed to encode and decode telephony signals and place them on conventional twisted-pair inside wiring to consumer terminals, such as telephones, fax machines, and modems. NIUs also provide telephone signaling (ringing) and power. Broadband signals are carried into subscribers' homes on coaxial cable for distribution. Hence, digital signals are encoded and decoded only once, at the NIU, using digital modulation techniques (QPSK, 64QAM) without any form of further transcoding or switching in the access network, and are transmitted end-to-end over proven broadband media (fiber and coaxial cables). The composition of fiber and coaxial links in the access network creates a hybrid fiber-coax (HFC) architecture. It is also referred to as a Subcarrier Modulated Fiber-Coax Bus (SMFCB) architecture. The two current architectural options are RF to the Home and RF to the Kerb, shown in Figure 2.3. In current practice, RF to the Home is often implemented, with interface boxes placed at the CATV DC location and at the customer premise location. The customer premise interface box is used to convert telephone signals, to and from the subscriber, to digital format for transport to and from the DC using QPSK techniques. This differs from RF to the Kerb, since the latter architecture incorporates a street cabinet containing the interface at the kerbside (i.e. outside the premise) to convert digital signals to analog baseband telephony signals.
Twisted-pair cable distribution then feeds in an overlay fashion from the interface point in the network to the customer's home[2][3]. The use of an HFC distribution network has the potential to connect subscribers at 10 to 100 times the speed of the current PSTN approach. Therefore, it enables and simultaneously enhances the operation of interactive and interconnectivity based services with image-based and graphics-based content. The current architecture provides the majority of the reverse spectrum and 50MHz (the 600-650MHz portion) of the digital forward spectrum (600-750MHz) for these services. The spectrum allocation will be elaborated further in a later section.

FIGURE 2.3. HFC access network architecture options (10,000 subscribers in a DP; 500 subscribers per FN. Kerbside option: conversion of the RF signal to n*64kb/s telephone signals is carried out at the 125-home node; telephones are connected via twisted pair, and FM radios and TVs are connected by coax through a set-top box. Individual home option: a conversion device, located in the consumer's premise, is used to convert RF signals to baseband telephone signals.)

2.2.1 The Backbone Network

In the backbone network shown in Figure 2.4, redundant backbone routes are used as backup to ensure successful transmission of television programs in case a failure occurs. Each DC may launch signals for delivery to any other DC, or function as a termination point for intercity links. The broadband A/B switches select between the signal feeds as required for normal or backup operation. Video signals (originally FM modulated or digitally encoded) delivered to DCs via the backbone network are demodulated to composite baseband, complete with the 4.5MHz audio subcarrier.
The composite signal is then VSB-AM modulated at the Intermediate Frequency (IF) of 45.75MHz, and an upconverter is used to transfer the signal to its assigned frequency band. A unique fiber and frequency allocation is made for each signal so that all signals reach the DCs via two separate paths without collisions.

FIGURE 2.4. CATV backbone network architecture

2.2.2 The Access Network

As mentioned before, about 15 DPs, each serving an average of 10,000 subscribers, are grouped together and served by a DC. Again, DPs are connected in a number of rings (usually 5 to 8 DPs per ring). Four dedicated fibers are utilized to deliver the 77 cable broadcast signals, and injection laser diodes operating at 2.5mW and 1310nm wavelength are used for these AM links. It has also been planned to obtain additional capacity for future interactive video services by using the 1550nm optical wavelength. The redundant ring connection of DCs is shown in Figure 2.5. Four interhub fibers are used for backup of the broadcast signals. Twelve daisy-chained fibers can be used to support the requirements of cellular telephone interconnection and high-speed business communications. Further, 1 other fiber for each DP is allocated to carry reverse residential data communications traffic and also for network monitoring purposes. Another 4 unused fibers in the daisy chain are available for future services. (Figure annotations: 4 fibers for CATV programs; the total number of fibers is 16 + 5n, where n is the number of DPs; a maximum of 15 DPs are supported by a DC.)
Daisy-chain fibers: 1 for reverse traffic, 4 spare, 12 others. Ring fibers: 4 for CATV, 4 spare, 1 for reverse traffic, 12 to support high-speed communications and cellular services.

FIGURE 2.5. Interconnection between DC and DPs

To emphasize, the DPs are located approximately every 3km, which corresponds to the spacing of fourth level cellular telephone cells. This implies that PCS can be readily implemented by placing antennas at the appropriate locations.

2.3 Bandwidth or Spectrum Allocation of CATV

The frequency spectrum of a modern CATV network is usually divided into 2 portions. Downstream or forward traffic occupies the spectrum from 54MHz to 1GHz, whereas 5MHz to 40MHz is allocated for the upstream or return traffic. The bandwidth allocation scheme, adapted from that of TeleWest, U.K., is shown in Figure 2.6[2], and is the new standard, effective 1995. The pre-1995 standard allocated 5-30MHz for upstream traffic.

FIGURE 2.6. Spectrum allocation of a modern CATV network (5-40MHz: digital return spectrum; 54-550MHz: broadcast spectrum; 600-750MHz: digital forward spectrum)

The digital return spectrum, which occupies the 5-40MHz band, is used for interactive services, with 12MHz for telephony, 12MHz for data or signalling and the remainder to be determined. The broadcast band (54-550MHz) is used for analog and digital broadcast services, with 77 analog channels. Finally, the digital forward spectrum (600-750MHz) is for narrowcast entertainment and interactive services; 50MHz is allocated for interactive services (including the forward allocation and a matching return allocation), and 100MHz for digital video downstream transmissions. These spectra comprise the broadband network whose signals are carried by 6 separate fibers (4 for the broadcast spectrum, 1 for the reverse spectrum and 1 for the forward spectrum).
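The Figure 2.6 band plan can be captured as a small lookup table; the sketch below is illustrative only (the band names and the `classify` helper are not from the thesis), and treats the 40-54MHz and 550-600MHz gaps as guard/unallocated spectrum:

```python
# Figure 2.6 band plan (frequencies in MHz)
BAND_PLAN = [
    ((5, 40), "digital return spectrum (upstream)"),
    ((54, 550), "broadcast spectrum (downstream)"),
    ((600, 750), "digital forward spectrum (downstream)"),
]

def classify(freq_mhz):
    """Return the CATV band a carrier frequency falls in, or the guard band."""
    for (lo, hi), name in BAND_PLAN:
        if lo <= freq_mhz <= hi:
            return name
    return "guard band / unallocated"

print(classify(27))     # digital return spectrum (upstream)
print(classify(45.75))  # guard band / unallocated (the VSB-AM IF sits between bands)
print(classify(650))    # digital forward spectrum (downstream)
```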
2.4 HFC Upgrade Requirements and Integration Phases

A few key elements are required for the HFC upgrade, including: (1) a powering system, (2) a spectrum management system, and (3) element management systems that integrate into a larger common platform for network management.
• Powering system: The perceived necessity to provide "lifeline" telephony services gives rise to the concept of full protection for the powering system. A few large cable companies are considering 90V AC powering because the conventional 60V AC design simply cannot support early cable telephony hardware, which typically uses 15 to 25W/unit.
• Spectrum management system: While not yet required, active spectrum management would promote efficient use of the limited spectrum and would become an element manager built around open interfaces.
• Element management system: TMN (telecommunications management network) is a suitable model for an element management system. It will become important as HFC networks are integrated with SONET and other public network elements.
The seamless integration of the HFC networks would likely follow a phased approach. We can identify 3 integration phases, as outlined below.

2.4.1 Phase 1

The incentives for phase 1 integration come from existing and near term CATV services. Achieving operational economies is the primary driver. To lower the cable operator's costs without sacrificing interactive services and high bandwidth, a double star architecture (Figure 2.2) is implemented. This double star design not only significantly reduces the fiber count from HE to DP, it also enhances the advertising business: an integrated processing and delivery system with targeted advertising blocks can be provided, i.e. 1 ad per region, 1 ad per DC, 1 ad per DP, and 1 ad per FN. The transmission around the headend ring may initially involve proprietary digital technology for NTSC video.
To provide B-ISDN services, SONET would be a necessity from the beginning. Digital entertainment video in MPEG-2 (the compressed digital entertainment video standard) would be mapped directly into the SONET STS-N frame, or converted to ATM cells if further processing were needed.

2.4.2 Phase 2

Phase 2 involves the introduction of such interactive services as Video on Demand (VOD), multimedia communications, and telephony, all delivered to the home through the HFC platform. The HFC architecture has adequate flexibility, as the effective bandwidth can be increased easily by reducing the number of homes on the shared bus. A common transport standard should be identified and utilized for all phase 2 applications. ATM cells modulated in 64QAM format downstream and QPSK format with TDMA upstream could be used. Also, a dedicated one-laser-per-node configuration could be phased in to enable dedicated information to pass to any specific node. SONET is suitable for interfacing the DC bidirectionally to the rest of the network. DPs may also house distributed file servers to provide advertisement insertion, broadcast, VOD and other potential services. Scalable ATM switches could be installed at DCs and DPs to perform routing functions.

2.4.3 Phase 3

In phase 3, all information to and from the home would be digital. To meet the service demand, fiber optics have to be deployed further into the HFC network to create more nodes, each serving a smaller area, to provide greater effective bandwidth. Utilization of the spectrum from 750MHz to 1GHz will become practical as fiber moves closer to the home. Distributed switching, routing and grooming at the DCs can be employed to satisfy service attributes. The HFC platform and associated CATV architecture have the required potential for migration to new or enhanced services and for staying ahead of the increasing bandwidth demand.
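The effective-bandwidth argument behind phases 2 and 3 (segmenting the shared bus raises the bandwidth available to each home) reduces to simple division. The numbers below are illustrative assumptions, not figures from the thesis:

```python
def per_home_bandwidth_mbps(bus_capacity_mbps, homes_on_bus):
    """Effective bandwidth per home on a shared HFC bus segment."""
    return bus_capacity_mbps / homes_on_bus

# Illustrative only: cutting the homes served by a node from 500 to 125
# quadruples the effective per-home bandwidth for the same bus capacity.
print(per_home_bandwidth_mbps(1000, 500))  # 2.0
print(per_home_bandwidth_mbps(1000, 125))  # 8.0
```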
2.5 Personal Communications Network (PCN) Overlay

A PCN architecture consists of 3 logical layers: the intelligent layer for controlling network services, the transport layer for distributing user information, and the access layer for providing subscribers with personal communication access to the network. Here, we propose an Asynchronous Transfer Mode (ATM) based transport layer architecture over the CATV network to provide wireless broadband integrated services.

2.5.1 PCN architecture

The PCN architecture can be divided functionally into three logical layers, as shown in Figure 2.7. The intelligent layer incorporates the network intelligence and processing capabilities for realizing location independent services in the PCN. Network intelligence is embodied in databases which are hierarchically distributed to minimize traffic loads[15]. The transport layer provides transmission and switching services to handle the exchange of control signaling messages and user information. Priority schemes are required to ensure the quality of service (QOS) and timing requirements of different kinds of services. Control signaling messages are assigned the highest priority because this information is critical to network performance. Voice and video are then given priority over data, which can tolerate larger delays. Finally, the access layer acts as an interface for fixed and mobile users to access the fixed network. A variety of air interface standards may be employed to implement the mobile access network. (Figure annotations: intelligent layer (signaling): call processing, database management, other IN functions; STPs may be omitted in associated mode. Transport layer: transmission, switching. Access layer: mobile user access, fixed user access.)

FIGURE 2.7.
Logical architecture of PCN based on SS7

In Figure 2.7, SSP denotes a service switching point, STP a signal transfer point and SCP a service control point. SSPs are nodes having common channel signaling (CCS) capability; they are interconnected by signaling links. STPs are nodes which function as intermediate transport switches for control signaling messages. SCPs are the SSPs that provide database access to support transaction-based services. The functions of these layers will be elaborated in the next chapter. The underlying foundation for the Intelligent Network (IN) is Common Channel Signaling (CCS). Signaling capabilities are thereby provided for networks to exchange network management information and messages for call and connection control.

2.5.2 The ATM based PCN overlay

A conceptual configuration of an ATM based PCS network over a CATV plant, shown in Figure 2.8, is proposed to provide ubiquitous service and to support the anticipated urban and suburban PCS demand. This design employs ATM distributed switching capabilities to partition PCS call control and management functions and to dynamically share the data link capacity with other services. The distribution cables and drop cables are used as backbone and cell-site links, respectively. The service area of a DP is distributed among a few small ATM switches, each controlling several wireless base stations or radio transceiver systems (RTS, in the case of antenna remoting). In areas where access to CATV services is not available or is limited (i.e. shopping malls, parks, railway stations or airports), stand-alone switches may be installed to provide services to a substantial PCS user community. (Figure annotations: primary database at the backbone network; small ATM switches, co-located with FNs, are used for wireless traffic in the access network (HFC architecture).)

FIGURE 2.8.
Conceptual configuration of the PCN overlay

A secondary database (a database partition), with which all users in the DP control zone should register themselves, is connected to each DP/ATM switch. Higher level databases, i.e. primary databases and MA (metropolitan area) databases, can be added at the DC and HE levels, respectively, to track subscribers' movements and update their locations within the DC and MA areas. In other words, the primary databases indicate the DPs where a subscriber in the DC area is situated, and the secondary databases indicate the base station controllers (ATM switches at the lowest level) serving a subscriber. In this way, a logically centralized but physically distributed database is created which enables parallel processing to facilitate fast and efficient handoffs and callee localization. Due to the localized nature of calls, the called parties can often be located by querying only the low level DP databases. The partitioning and distribution of these databases forms a hierarchical database design which is able to minimize unnecessary traffic congestion associated with subscriber registration, call setup, and handoff. This is similar to the architecture proposed in [15]. The DCs are connected to other public networks such as the PSTN and the Internet via interworking units (IWU), whereby the necessary format conversions are performed to ensure compatibility. Additionally, a distributed antenna simulcast arrangement, with all antennas mounted on lamp posts along a specific cable length of the CATV network, can be employed to provide PCS coverage in a cost effective fashion, as discussed in Chapter 5.

Chapter 3
ATM Control Signaling Issues

Chapter 2 presents a new PCN overlay architecture based on distributed network intelligence and ATM technology to fulfill the service requirements of a PCN. Currently the Digital Subscriber Signaling System No.
1 (DSS1)[23] and Signaling System No. 7[24] are used for signaling in ISDN and POTS. A number of studies regarding the impact of the penetration of new PCS and advanced intelligent network1 (AIN) services on today's signaling network have been reported in [25], [26] and [17]. The results suggest potential limitations of today's SS7 network capability in supporting new services; in particular, the use of today's 56kb/s links may cause an unacceptably slow network response time. Also, the existing SS7 network physical interfaces may not be able to support stringent end-to-end signaling time targets of 20ms-100ms. The identification of an appropriate target broadband signaling transport network architecture is therefore an important issue to ensure smooth and cost-effective signaling network evolution.

3.1 ATM Signaling Transport Network for the PCN Overlay

Figure 3.1 depicts an ATM service and control transport network designed to economically carry services and control/signaling messages in the same physical network. Service connections and control signaling connections are logically separated from each other by using different virtual channels (VC) or virtual paths (VP). There are three options to implement such a network, based on the signaling message routing principle[17]: (1) the quasi-associated mode only; (2) the associated mode only; or (3) a hybrid of the quasi-associated and associated modes.

1. Bellcore's AIN specification includes the definition of new elements such as the service node (SN), the intelligent peripheral (IP) and the adjunct.

(Figure 3.1 shows the UNI and NNI protocol stacks: user and call control applications over Q.2931/SAAL/ATM at the terminal and local exchange; B-ISUP and TCAP/SCCP over MTP-L3/SAAL/ATM on the network side; O&M, service control and traffic management applications at the intelligent nodes; all over a SONET physical medium.)

FIGURE 3.1.
ATM service and control transport network and the UNI and NNI signaling protocol

3.1.1 Quasi-associated Signaling Transport Architecture

The quasi-associated signaling transport architecture evolves from the current SS7 network concept with enhanced STP and SCP capabilities for broadband signaling requirements. The MTP-L1 and MTP-L2 functions residing within today's STP will be replaced by ATM and AAL functions in the ATM based signaling transport network. Figure 3.2 depicts the architecture of this configuration. It uses ATM transport as a one-to-one replacement of today's dedicated copper data transport. As a result, a significant capacity upgrade is needed, i.e. more B-SCPs (broadband SCPs). This implies that more links between the B-STPs and the B-SCPs are required, which could render the growth to a large-scale network difficult from both operational and economic perspectives.

FIGURE 3.2. ATM-based quasi-associated signaling transport (signaling VPs (SVP) and signaling links interconnect B-SSPs, VPXs and B-STPs; the protocol stacks run the signaling higher layer and MTP-3 over AAL/VC/VP/PL. VPX: virtual path cross-connect. VPXs are commonly used for both signaling and service messages. Access to a B-SCP from a B-SSP is via a B-STP. The ATM transport architecture for supporting SVC may also be implemented at the VC level.)
This architecture allows each ATM signaling node to access the B-SCPs directly; the B-SCPs may be configured either in a centralized or a distributed manner. The distributed B-SCP placement scheme can be more cost-effective in the ATM-based signaling transport network than in its SS7 counterpart; under ATM, fewer signaling links for B-SCP access are required and flexible signaling bandwidth allocation is available.

FIGURE 3.3. ATM-based associated signaling transport (the B-SCP, VPX and B-SSP planes are interconnected by service VPs and signaling VPs (SVP); the ATM switch at each B-SSP handles the service messages; the protocol stacks run the signaling higher layer and MTP-3 over AAL/VC/VP/PL. VPX: virtual path cross-connect. VPXs are commonly used for both signaling and service messages.)

There are two routing/rerouting options: Network Layer Routing (e.g. a simplified MTP-3 at each ATM switch, since there is no separate signaling network) and ATM Layer Routing. The second approach is particularly attractive since signaling messages, control messages and service data can all be carried on the same physical network, while being separated logically by different VPs with various QOS demands. These demands can be accommodated easily by ATM VP restoration techniques (e.g. VP self-healing). The two rerouting approaches are similar, except that MTP-3 rerouting is used for failure recovery only and is performed in a connectionless manner, whereas VP rerouting is used for both failure recovery and congestion control in a connection-oriented manner.

3.1.3 Hybrid Signaling Transport Architecture

A hybrid signaling architecture supports both the quasi-associated mode and the associated mode in the same ATM control transport network.
B-STPs may be used only when the traffic volume between two signaling points, or between a B-SSP and a B-SCP, is low; direct point-to-point links are not economical in this instance. The associated mode, on the other hand, will be utilized most of the time.

3.1.4 Interoperation of Current SS7 Network and Broadband Network

To enable today's networks to interoperate (and possibly to evolve the current SS7 network into the future broadband network) using the associated mode, POTS may be supported through the STP-like architecture, with other services supported through the new associated mode. Hence the expensive capacity upgrade of existing SS7 system components (STPs) is alleviated, and operation and maintenance costs are reduced from the perspective of POTS. For PCS and AIN, this scheme offers a flexible and efficient way to access B-SCPs and AIN service modules directly. Message volume increases and the distributed B-SCP architecture are also accommodated. This architecture is achieved by interworking the associated signaling architecture with SS7 networks, as shown in Figure 3.4. ITU-T Rec. I.580 specifies an interworking scenario using the interworking function #1 (IWF #1). The IWF #1 translates both current and future broadband signaling messages; for example, it translates between B-ISUP and N-ISUP messages, including the generation, termination, and protocol conversion of signaling messages[17]. This interworking configuration enables the signaling transport network to support broadband services initially and then to integrate voice services later. (Figure annotations: the IWF #1 sits between the ATM-based signaling network, carrying ATM user information, and the SS7 network, carrying STM user information.)

FIGURE 3.4.
Interworking between the ATM signaling transport network and SS7 network

3.1.5 ATM Signaling Transport Architecture Suitable for the PCN Overlay

Preliminary results for understanding the trade-offs between the two possible target broadband signaling architectures (quasi-associated and associated mode) are summarized in Table 4. This summary is based on a study of economic and reliability issues for POTS over ATM, conducted for the Nippon Telegraph and Telephone (NTT) long-distance network[17]. The model network incorporates 63 SSP switching nodes accommodating 84 SSPs, 20 STP nodes accommodating 34 STPs, and 20 SCPs. The two transport architectures, one using the quasi-associated mode (Type 1) and the other using the associated mode (Type 2), are examined and analyzed. The results suggest that the Type 1 architecture incurs high costs due to the expensive STPs, although the VPs of Type 1 can carry signaling information more efficiently. Type 2 is significantly cheaper even when the four-route option (each SSP pair uses four disjoint VP routes for protection purposes) is employed. This also leads to a more reliable architecture than Type 1, in which the unavailability of the STP nodes contributes significantly to a higher SSP node pair unavailability. Another important factor is the scalability of the two architectures. As mentioned before, the quasi-associated mode utilizes B-STPs to interconnect B-SSPs and B-SCPs. If distributed B-SCPs are to be implemented in order to handle the signaling demands of PCS and other new services, more links are required, which would cause difficulties for the necessary substantial network growth.

TABLE 4.
Summary of Results based on an NTT study

  Architecture   Relative cost (SCP, STP and transmission)   Unavailability (average)   Unavailability (worst case)
  Type 1         1.0                                         3.3 x 10^-8                1.5 x 10^-7
  Type 2         0.6                                         2.5 x 10^-9                3.3 x 10^-8

Note: more than 80% of Type 2 SSP node pairs have unavailability under 1 x 10'

To consider evolvability, the quasi-associated signaling network may interwork with today's SS7 networks more easily than its associated counterpart, simply by upgrading its current STPs to ATM-based B-STPs. An interworking unit is needed if the associated signaling network is to interwork with today's SS7 networks, as described in Section 3.1.4 on page 29. Based on the above criteria, we recommend the ATM-based associated signaling transport network architecture as the signaling architecture for the proposed PCN overlay. The evolvability advantage of the quasi-associated signaling network is largely offset by its potential limitations in scalability, reliability and cost. Moreover, as the technology progresses, it will become easier to overcome the evolvability problem.

3.2 Distributed Control Architecture (DCA)

With the introduction of ATM technology, a vast increase in customer traffic with various kinds of QOS and reliability requirements is anticipated. In this case, a centralized control architecture becomes incapable of fulfilling the necessary system response time constraints, and also reduces overall network adaptability and flexibility as the network complexity increases. Distributed network control using embedded algorithms within network elements can resolve these problems by exchanging small amounts of information locally for management processes in response to network requests.

3.2.1 Distributed Call Processing Architecture

Thomas F. La Porta et al.
presented a modular server-based functional control architecture, namely the Distributed Call Processing Architecture (DCPA)[27][28], to meet B-ISDN requirements in a rapid and efficient manner. In this approach a network is divided into clusters, as shown in Figure 3.5. In each cluster, there are Home Call Servers, User Signaling Servers, Connection Servers and Channel Servers. Roamer Call Servers, Home Location Servers and Visitor Location Servers are placed outside these clusters.

FIGURE 3.5. Cluster-based Distributed Call Processing Architecture

3.2.2 Elements of the Distributed Call Processing Architecture

The functions and tasks of each server kind are listed below:
• Call servers perform call control functions such as maintaining the Basic Call State Model (BCSM) of the Intelligent Network, invoking services and accessing service profile information. Home Call Servers serve mobile terminals (MT) in their home network and Visitor Call Servers serve MTs which are roaming in another network.
• Connection servers manage the establishment of communications paths, perform routing, interact with channel servers to set up connections, and compute end-to-end QOS requirements.
• Channel servers administer link resources such as VCI/VPI and/or radio channel assignments, perform Call Admission Control (CAC), set control parameters for Usage Parameter Control (UPC), and configure the switch fabric by setting up VCI/VPI translation tables.
• User signaling servers handle multiple user processes (e.g. call setup and termination requests). The user process may map simple requests into more advanced requests, for example a single connection call request into a multiconnection call request, so that the amount of signaling traffic over the air interface can be kept low and the UNI can be kept simple.
• Location servers track the locations of MTs.
A Visitor Location Server tracks the cluster locations of visiting MTs, while a Home Location Server tracks, for the MTs of its home network, either their cluster or the Visitor Location Server currently serving them.

3.3 Distributed Call Processing Approach vs. Current Approach

ATM is expected to provide multiconnection and multicasting services (point-to-multipoint connections). However, the current Broadband ISDN User Part (B-ISUP) is defined to support point-to-point ATM connections only. Therefore the hop-by-hop connection establishment algorithm (Figure 3.6)[18] used by today's B-ISUP signaling protocol and its associated call processing software is neither efficient nor robust enough to react to negative system responses, especially when dealing with a highly complex system such as the CATV/PCN overlay. Moreover, the hop-by-hop approach increases the end-to-end connection setup delay and also adds complexity to the Call Admission Control (CAC) process at each ATM switch fabric. The DCA approach described earlier can be adapted to enhance system performance. As opposed to the conventional approach, the distributed approach splits the call processing application process into the Broadband Call Control (BCC) application process in the call server, the Broadband Bearer Control (BBC) application process in the connection server, and the Broadband Channel Control (BCHC) application process in the channel server1. In this case, network requirements are implemented using modularized protocols to obtain acceptable efficiency. As an example, when IN-based features are of interest, call servers can be added to the network as needed, whereas a network employing ATM switches for connection control only (i.e. no IN-based features)

1. B-ISUP has already split call and bearer control elements into distinct Application Service Elements (ASEs) within ITU-T Q.2764.
Capability Set 2 (a combination of future requirements) will take this further by introducing an Edge-to-Edge protocol ASE (EASE) that uses TCAP and SCCP rather than B-ISUP for transport [16].

does not need the call control module at all. Interfaces would be provided for users to request connections with or without IN-based features.

FIGURE 3.6. Conventional (CP/B-ISUP) approach for call setup (associated mode). Embedded call processing and signaling software at each switch also supports the BCSM and associated IN-based services. Message flow:
1: a primitive for the call setup request is sent; B-ISUP then generates an IAM (Initial Address Message).
2: a new IAM is generated and transported to switch 3.
3: an Address Complete Message (ACM) is sent back to switch 2.
4: another ACM is sent to switch 1.
5: an Answer Message (ANM) is sent following the ACM to indicate successful connection establishment.
6: the ANM is sent to switch 1 to indicate successful connection establishment.

Improvements in the end-to-end connection setup delay can be realized by the parallel processing of specific channel control functions in those switches involved in a connection. In addition, the connection server communicates with channel servers only to update resource availability and to configure switches for a connection; resource tracking is not used. This implies a decentralized control architecture. Reliability and scalability can hence be accomplished while attaining a shorter connection setup delay by employing these modularized processes only as needed. The three Application Service Elements (ASE, defined as operations using parameters associated with each application process) are: the BCSE (Broadband Call Service Element), the BBSE (Broadband Bearer Service Element), and the BCHSE (Broadband Channel Service Element).
Effectively, these are the modularized protocols corresponding to the BCC, BBC and BCHC applications. Figure 3.7 illustrates this DCP approach [18].

FIGURE 3.7. DCP/ASEs approach for call setup (associated mode). Embedded call processing and signaling software resides in each kind of server. Message flow:
1: a primitive for the call setup request; here the Call Server is the client and the Connection Server is the server.
2: a multicast BCHSE operation; a Reserve-channels message is generated and sent to the channel server at each switch.
3,4,5: the BCHC at each switch executes CAC and selects VPI/VCIs for the links on the switch. If successful, distinct Channel-reserved responses are generated.
6: after the BBC calculates the end-to-end QOS measurements and compares these results with the user-requested values, a multicast Commit-translations message is transported to the switches to set control parameters and to configure the fabric, i.e. to set up translation table entries mapping incoming VPI/VCI to outgoing VPI/VCI.
7,8,9: a Translations-committed message is generated by the channel server at each switch.
10: a Connections-established message is generated and sent to the BCC to complete call setup.

The DCPA employs a client-server based signaling protocol, namely an extended multicast version of the Transaction Capabilities Application Part (TCAP) of SS7, which is responsible for switch-to-service-control-point communications. TCAP uses SAAL directly to transport its messages.

3.4 System and Network Modeling of the PCN Overlay

We now employ the DCPA to evaluate the performance of our proposed PCN overlay. An analytical layered performance modeling approach is used, based on queuing analysis with decomposition into subsystems as in [18].
Here, a subsystem is defined as a processor executing one or more protocol layers or application processes within a network node. The processors in the CP/B-ISUP approach (Figure 3.6) include the Call Processing module, the B-ISUP module and the MTP-L3/SAAL module. Those in the DCP/ASEs approach are the following modules: BCC, BBC, BCHC, BBSE/TCAP, BCHSE/TCAP, and SAAL.

3.4.1 System Model and Parameters

Each subsystem is modeled as a multiclass, single-server, infinite-capacity, round-robin, processor-sharing, M/D/1 queuing system to determine the mean sojourn times for each message type. Interprocessor communication delays at the interfaces between the subsystems within a network node and the delays over signaling links are assumed to be caused by propagation only; there is no queueing delay. The mean end-to-end connection setup delay is obtained as the sum of the individual mean sojourn times in the subsystems, the interprocessor communication delays and the propagation delays on the signaling links, along the message flows through the network. As noted earlier, the current 56 kb/s signaling links may not be able to handle future ATM network traffic; we therefore assume that 1.5 Mb/s signaling links are utilized, as suggested in [17], [28] and [29]. Tables 5 and 6 list the processing times for protocol layer subsystems and the application processing times, respectively. Finally, each signaling message is assumed to occupy 144 octets (i.e. 3 cells).

TABLE 5. Processing times for protocol layer subsystems

Protocol Layer | Processing Time (ms)
SAAL (DCP/ASEs) | 1.5 for outgoing multicast; 1 for all others
MTP-L3/SAAL (CP/B-ISUP) | 1.5 in the STP for incoming messages; 2 in the STP for outgoing messages; 2 in the switches for incoming and outgoing messages
B-ISUP | 4.5 for incoming and outgoing IAMs; 1.5 for all others
TCAP/ASE (BCSE, BBSE, BCHSE) | 14 for outgoing Setup-connections; 10.5 for incoming Setup-connections; 9.5 for all other outgoing messages; 6 for all other incoming messages

TABLE 6. Application processing times (ms)

Subsystem | Call Setup | Call Release | Call Handoff
Call Processing (CP) | 50 | 20 | 20
Broadband Call Control (BCC) | 15 | 6 | 6
Broadband Bearer Control (BBC) | 25 | 10 | 10
Broadband Channel Control (BCHC) | 10 | 4 | 4

In an M/G/1 system, the total sojourn time, in queue and in service, is given by:

S = \bar{X} + \frac{\lambda \overline{X^2}}{2(1 - \rho)}    (EQ 3.1)

where \bar{X} = E\{X\} = 1/\mu is the average service time, \overline{X^2} = E\{X^2\} is the second moment of the service time, \lambda is the cell arrival rate, and \rho = \lambda/\mu is the resource utilization factor. When the service times are identical for all customers (i.e. an M/D/1 system), \overline{X^2} = 1/\mu^2, in which case:

S = \bar{X}\left(1 + \frac{\rho}{2(1 - \rho)}\right)    (EQ 3.2)

3.4.2 Physical Network Model

Consider the physical CATV network layout depicted in Figure 3.8, where DPs in a cluster are separated by 3 km from each other and the distance between any two DCs is 6 km.

FIGURE 3.8. Physical network layout for call connection setup analysis

This arrangement is constructed to study the worst case connection setup and handoff delays (recall that a maximum of 15 DPs can be supported in a DC and usually up to 8 DPs are grouped in a ring). It is also assumed that in the DCPA scheme, the call server and the connection server are installed physically adjacent to the DC switch to enable fast inter-cluster connection setup. In the next section, we compare quantitatively the current connection setup approach and the DCPA connection setup approach.

3.5 Call Setup and Handoff Delay Analysis

Below we present results for call setup and handoff delays.
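The delay model of Section 3.4.1 is straightforward to evaluate numerically. The sketch below implements EQ 3.2 together with the delay composition described there (a sum of per-subsystem mean sojourn times plus fiber propagation delay at the assumed 4 µs/km); the specific arguments are illustrative placeholders, not the tabulated processing times.

```python
def md1_sojourn_ms(service_time_ms, arrival_rate_per_ms):
    """Mean sojourn time (queueing plus service) in an M/D/1 queue, EQ 3.2:
    S = X * (1 + rho / (2 * (1 - rho))), with utilization rho = lambda * X."""
    rho = arrival_rate_per_ms * service_time_ms
    if rho >= 1.0:
        raise ValueError("unstable queue: utilization must be below 1")
    return service_time_ms * (1.0 + rho / (2.0 * (1.0 - rho)))

FIBER_DELAY_US_PER_KM = 4.0  # fiber propagation delay assumed in this analysis

def end_to_end_setup_ms(subsystem_sojourns_ms, link_lengths_km):
    """Mean end-to-end setup delay: the sum of per-subsystem mean sojourn
    times plus propagation delay over the signaling links traversed."""
    propagation_ms = sum(link_lengths_km) * FIBER_DELAY_US_PER_KM / 1000.0
    return sum(subsystem_sojourns_ms) + propagation_ms

# At zero load the sojourn time equals the service time; at 50%
# utilization it is 1.5x the service time.
print(md1_sojourn_ms(1.0, 0.0))  # 1.0
print(md1_sojourn_ms(1.0, 0.5))  # 1.5
# Two subsystems (1 ms and 2 ms mean sojourn) over two 3 km links:
print(end_to_end_setup_ms([1.0, 2.0], [3, 3]))
```

As ρ approaches 1, the bracketed term in EQ 3.2 diverges, which is exactly the saturation behaviour visible in the delay curves that follow.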
3.5.1 Intra-cluster Intra-DC Call Connection Setup Delay

As shown in Figure 3.8, the furthest distance between two DPs in the same cluster is represented by route 1, assuming bidirectional traffic flow (i.e. the traffic from switch 1 can be transported directly to the right toward switch 2). In other words, a call generated at switch 1 has to go through a maximum of 3 switches to reach the destination switch 2, located 12 km from switch 1. The mean end-to-end call setup delay under the two schemes is compared in Figure 3.9. In the CP/B-ISUP (conventional hop-by-hop) approach, we assume that the time between the generation of the ACMs and ANMs is 0 in all the following analyses. It is clear that the DCP/ASEs approach yields a better mean end-to-end setup time than the conventional approach. The time saving is made possible by enforcing parallel channel control operations and by the reduced call control and connection control processing. Also, processing times are shorter in the DCP/ASEs approach because the processes are distributed. This also allows a higher cell arrival rate than the conventional approach.

FIGURE 3.9. Mean end-to-end intra-cluster intra-DC connection setup delay

3.5.2 Inter-cluster Intra-DC Call Connection Setup Delay

For the inter-cluster connection setup, we assume that the call servers and connection servers are located in close proximity to the DC switch to enable fast inter-cluster call connection setup. Also, the DCP/ASEs approach is slightly modified to incorporate an Inter-cluster-Setup-Request (ICSR) message and an Inter-cluster-Setup-Complete (ICSC) message.
After the call-setup-request message (message 1 in Figure 3.7) is received, the BBC at the source connection server issues an ICSR to the destined connection server to request connection setup, after which the multicast BCHSE operation is performed in parallel. Upon receiving the ICSR, the destined connection server executes the same procedures as illustrated in Figure 3.7 to set up the necessary channels for the connection in the destined cluster. If successful, the destined connection server sends an ICSC back to the source connection server. Finally, the source connection server sends a Connections-established message back to the source call server. In this case, the call server in the destined cluster is not involved in the call setup process.

The results for both approaches are shown in Figure 3.10, which exhibits characteristics similar to those in Figure 3.9. The mean end-to-end inter-cluster intra-DC connection setup delay with the conventional approach is higher by 0.6 seconds than its mean end-to-end intra-cluster connection setup delay counterpart. Because of the longer serial connection processing, the mean end-to-end inter-cluster intra-DC connection setup delay with the conventional approach saturates at a lower cell arrival rate. In contrast, there is only a very small increase in the delay associated with the DCP/ASEs scheme.

FIGURE 3.10. Mean end-to-end inter-cluster intra-DC connection setup delay

3.5.3 Inter-DC Call Connection Setup Delay

The mean end-to-end inter-DC connection setup delays for both schemes are depicted in Figure 3.11.
Once again, the CP/B-ISUP approach produces a much longer connection setup delay than the DCP/ASEs approach. As expected, the mean inter-DC connection setup delay in the conventional scheme is also higher than the conventional mean inter-cluster intra-DC connection setup delay. However, the DCP/ASEs connection setup delay is almost the same as its inter-cluster intra-DC counterpart. This is because the only difference in delay arises from the insignificant propagation delays (4 µs/km in fiber is assumed) when a signal traverses the longer distance between two DCs.

FIGURE 3.11. Mean end-to-end inter-DC connection setup delay

3.5.4 Intra-cluster Intra-DC Call Connection Handoff Delay

There are two situations for intra-cluster intra-DC connection handoff, as shown in Figure 3.8. The minimum handoff delay arises when a mobile crosses the cell boundary of two interconnected DPs, whereas the maximum delay is obtained when the handoff is carried out between DP2 and DP7 (i.e. the call handoff must be processed at 4 intermediate DPs). The mean handoff delays of the CP/B-ISUP approach and the DCP/ASEs approach are shown in Figure 3.12. The results show that the minimum and maximum handoff delays for the DCP/ASEs approach are similar and that it again achieves a higher tolerance to signaling traffic. The CP/B-ISUP approach, on the other hand, yields a significant difference between the minimum and maximum handoff delays and a substantially lower tolerance to signaling traffic, because of the sequential processing of handoff requests.

FIGURE 3.12. Mean minimum and maximum intra-cluster intra-DC handoff delay

3.5.5 Inter-cluster Intra-DC Call Connection Handoff Delay

Figure 3.13 shows the inter-cluster handoff delays for both the DCP/ASEs and the CP/B-ISUP approaches.
The handoff delay for the DCP/ASEs approach is slightly higher than that of the CP/B-ISUP approach, but the DCP/ASEs approach exhibits a higher traffic saturation tolerance. The delay advantages of the DCP/ASEs approach become available only when there are more than 3 intermediate switches involved in the switching of handoff requests.

FIGURE 3.13. Mean inter-cluster intra-DC handoff delay

3.5.6 Inter-DC Call Connection Handoff Delay

Finally, the inter-DC handoff delay characteristics for the CATV/PCN overlay are shown in Figure 3.14. The DCP/ASEs approach not only produces a very low delay, it again shows a relatively high signaling traffic saturation tolerance compared to the CP/B-ISUP approach.

FIGURE 3.14. Mean inter-DC handoff delay

3.6 Discussion and Conclusion

The mean end-to-end connection setup times obtained with the CP/B-ISUP approach for the inter-DC, inter-cluster intra-DC and intra-cluster intra-DC configurations are plotted for comparison in Figure 3.15. The mean end-to-end delay increases as the number of intermediate switches along the connection path increases; therefore, the inter-DC configuration is associated with the highest setup delay. The connection setup delay increases exponentially as the call arrival rate increases, and the saturation point is mainly determined by the call processing time needed.
The delay values for intra-cluster intra-DC, inter-cluster intra-DC and inter-DC connection setup when the call arrival rate is 1 call/s are 0.58 s, 1.23 s and 1.68 s, respectively.

FIGURE 3.15. Mean end-to-end connection setup time with the CP/B-ISUP approach (assumptions: one processor per subsystem, and the generation time between the ACM and the ANM is 0)

Figure 3.16 shows the mean end-to-end connection setup time required in the 3 configurations implemented with the DCP/ASEs approach. The intra-cluster intra-DC configuration has the lowest connection delay. The mean delay in the inter-cluster intra-DC arrangement increases accordingly. However, the inter-DC call setup delay is almost the same as the inter-cluster intra-DC call setup delay, as explained above. The DCP/ASEs approach also promises a higher tolerance to signaling traffic than the conventional hop-by-hop approach. This scheme exhibits superior performance over the CP/B-ISUP scheme, as all connection setup delays are significantly lower. The resulting delay values for intra-cluster intra-DC, inter-cluster intra-DC and inter-DC connection setup when the call arrival rate is 1 call/s are computed as 0.25 s, 0.35 s and 0.35 s, respectively.

FIGURE 3.16. Mean end-to-end connection setup time with the DCP/ASEs approach (assumptions: one processor per subsystem, and the delay in the connection message exchange within the same DC is negligible)

In Figure 3.17, the mean connection handoff delay with the CP/B-ISUP approach has been plotted.
It shows large delay differences between the various situations. Comparing these results to those in Figure 3.18, one sees that the problem of large delay differences encountered in the CP/B-ISUP approach is alleviated in the DCP/ASEs approach. As well, a relatively high level of signaling traffic is handled with the DCP/ASEs approach, and the mean delay resulting from the DCP/ASEs approach is significantly lower than for the CP/B-ISUP approach. A relatively fast and efficient handoff is thus achieved using the DCP/ASEs approach. The handoff delays at λ = 1 call/s are summarized below.

TABLE 7. Connection handoff delays of the CP/B-ISUP and the DCP/ASEs approaches

Approach | Intra-cluster intra-DC max (s) | Intra-cluster intra-DC min (s) | Inter-DP intra-DC (s) | Inter-DC (s)
CP/B-ISUP | 0.296 | 0.089 | 0.158 | 0.641
DCP/ASEs | 0.184 | 0.184 | 0.256 | 0.257

FIGURE 3.17. Mean connection handoff delay with the CP/B-ISUP approach

FIGURE 3.18. Mean connection handoff delay with the DCP/ASEs approach

The saturation/overload point (where the delay tends to grow without bound) is mainly determined by the call processing delay. Note that the larger the call processing delay, the larger the setup delay and the smaller the call handling capacity. Hence, distributed call processing is effective both in reducing the connection setup delay and in increasing the system call handling capacity, by exploiting modularized schemes for the call processing software and signaling protocols in broadband switching systems.
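The conclusion above, that parallelized channel control beats hop-by-hop processing, can be made concrete with a back-of-envelope sketch. The per-switch and coordination times below are hypothetical placeholders, not the tabulated processing times from this chapter.

```python
def hop_by_hop_setup_ms(per_switch_ms, n_switches):
    """CP/B-ISUP style: the IAM visits each switch in turn, so per-switch
    channel-control work such as CAC accumulates serially."""
    return per_switch_ms * n_switches

def multicast_setup_ms(per_switch_ms, coordination_ms):
    """DCP/ASEs style: a multicast Reserve-channels message lets every
    channel server run CAC concurrently, so the channel-control phase
    costs roughly one per-switch time plus a fixed coordination overhead."""
    return per_switch_ms + coordination_ms

# Hypothetical figures: 10 ms of channel control per switch, 4 switches
# on the path, 5 ms of multicast coordination overhead.
print(hop_by_hop_setup_ms(10, 4))  # 40
print(multicast_setup_ms(10, 5))   # 15
```

Note that the gap widens with every additional switch on the path, which matches the observation that the DCP/ASEs advantage grows with the number of intermediate switches involved.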
The saturation/overload tolerance to signaling traffic is nearly twice that of the CP/B-ISUP approach. In a practical situation, call arrival rates at an ATM switch are typically quite large (about 100 calls/s); multiple processors per subsystem must then be employed, as is done in existing circuit switches. The DCPA scheme has an advantage in network scalability: more switches can be deployed and more servers gradually added to satisfy signaling requirements, while maintaining a relatively short connection delay and a larger system handling capacity.

Chapter 4
Performance Analyses for Integrated Voice and Data Traffic in the CATV/PCN Overlay

In this chapter, we use simulation to examine network performance, including the AAL and ATM end-to-end delays of integrated voice and data traffic. We first analyze the network performance for a single DC model, which consists of 15 DPs in 2 rings (8 DPs in one ring and 7 in the other). We then investigate a model with 3 identical, interconnected DCs.

4.1 Physical Single DC Network Model

The physical single DC network model is shown in Figure 4.1. It is comprised of a DC and 15 DPs distributed along 2 rings. The DC serves as a gateway interconnecting the 2 rings and other DCs. It may also interconnect with the PSTN, the Internet and other public or private networks. Each DP generates traffic (including data and voice traffic) and transports it to all other DPs through 155.52 Mb/s bidirectional links. The distance between two DPs is assumed to be 3 km.

FIGURE 4.1. Single DC network model

4.2 Simulation Model

Voice (QOS A) and data traffic (QOS C) are assigned different access priority levels so that voice packets are transmitted as soon as they enter the network, regardless of the data traffic loading.
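The voice-over-data access priority just stated can be sketched as a two-queue output port. This is a simplified illustration of the queueing discipline only; the class and method names are hypothetical, not OPNET's actual module interface.

```python
from collections import deque

class OutputPort:
    """One ATM switch output port with two FIFO queues. Any waiting voice
    cell is always served before any data cell (non-preemptive priority),
    and each queue holds a bounded number of cells."""
    def __init__(self, buffer_cells=1000):
        self.voice = deque()
        self.data = deque()
        self.buffer_cells = buffer_cells

    def enqueue(self, cell, is_voice):
        queue = self.voice if is_voice else self.data
        if len(queue) >= self.buffer_cells:
            return False  # buffer overflow: the cell is lost
        queue.append(cell)
        return True

    def serve(self):
        """Called whenever link capacity becomes available."""
        if self.voice:
            return self.voice.popleft()
        if self.data:
            return self.data.popleft()
        return None

port = OutputPort()
port.enqueue("data-1", is_voice=False)
port.enqueue("voice-1", is_voice=True)
print(port.serve())  # voice-1 is sent first despite arriving later
```

A third, higher-priority queue could model the signaling traffic in the same way; it is omitted here to keep the sketch minimal.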
Signaling traffic is assigned the highest priority to effectively manage traffic flow for optimal network performance. The priority levels are established by operating 2 queues at each output port, one for voice traffic and the other for data. In other words, waiting packets gain access immediately when capacity becomes available; however, any waiting voice packet is always sent before a data packet. The model assumes that the VP and VC switches are sufficiently fast to handle the maximum rate of arriving cells, so input port buffering is not modeled. The inter-arrival time of data packets at each DP is exponentially distributed, with a burstiness of 5. The Poisson arrival rate at each DP is identical. Further, a buffer of 1000 cells is available for each queue at all ATM switch output ports. Finally, the AAL SAR (segmentation and reassembly) rate is 40500 cells/s.

All simulation models are implemented with OPNET, a sophisticated simulation package provided by MIL 3, Inc. OPNET is chosen because of its GUI (graphical user interface) and built-in communications modules. In addition, OPNET is equipped with most of the standard ATM modules and allows researchers to develop additional modules to suit various applications easily.

1. Burstiness is defined as the ratio of peak cell rate to average cell rate.
2. 40500 cells/s is the maximum SAR rate achievable, i.e. 155.52 Mb/s divided by 384 bits. It indicates the maximum number of cells which can be produced or reassembled at the AAL per unit time.

4.3 Simulation Results - Single DC Network with Data Traffic

We assume here that all traffic in the network is data traffic, and that each DP generates the same amount of traffic. Each ring is defined as a cluster.
Thus, intra-cluster traffic is the traffic within a cluster and inter-cluster traffic is the traffic between the two rings within a DC. Two statistics are collected at the AAL layer, namely the AAL end-to-end delay and its variance. The AAL end-to-end delay is computed as the difference between the time an AAL PDU (protocol data unit) is created at the source AAL and the time it is reassembled at the destination AAL. The AAL end-to-end delay variation is the variance of the end-to-end delay. The ATM end-to-end delay is measured at the ATM layer; it equals the difference between the time ATM cells are sent from the source ATM layer module and the time they arrive at the destination ATM layer module.

Figure 4.2 shows the worst case data packet delay (i.e. for data packets which originate at DP12 and arrive at DP4). We consider two configurations:

1. In configuration 1, the traffic generated at each DP is distributed equally to all other DPs.
2. In configuration 2, the intra-cluster traffic generated at each DP is twice the inter-cluster traffic.

The results show that the network can support about 900 Mb/s total throughput under configuration 1 and 1200 Mb/s under configuration 2, with an end-to-end delay of less than 1.5 ms. For total throughput under 500 Mb/s, the two configurations produce similar results. But as the total throughput increases above 500 Mb/s, the performance of configuration 1 degrades (the delay increases exponentially) because the network becomes congested, inducing a larger delay than in configuration 2. The substantial increase in network capacity achieved in configuration 2 is due to the localized nature of calls: the network is not as heavily loaded at the DPs far away from the DC as it is at the DPs near the DC.
Therefore, it is clear that more intra-cluster traffic can be supported by utilizing the unoccupied bandwidth, hence increasing the total throughput. The ATM end-to-end delay is the AAL end-to-end delay less the SAR delay, and hence it exhibits the same trend as the AAL end-to-end delay.

FIGURE 4.2. Worst case data packet end-to-end delay (configuration 1: traffic from each DP is sent equally to all other DPs; configuration 2: 75% intra-cluster traffic and 25% inter-cluster traffic)

Figure 4.3 illustrates the average AAL end-to-end delay at each DP. The values are obtained by averaging the AAL end-to-end delay values of all the packets originating from all DPs and received at each DP. It is observed that configuration 2 yields better delay performance than configuration 1 even at higher network throughput, owing to the localized nature of calls. Figure 4.3 provides general delay statistics; more accurate results would have to be taken from direct measurements of traffic generated at one DP and terminating at another.

FIGURE 4.3.
Average data packet AAL end-to-end delay in each DP

4.4 Simulation Results - Single DC Network, Mixed Voice and Data Traffic

4.4.1 Voice and Data Traffic with Variable Data Traffic Loading

Figure 4.4 shows the average inter-cluster data packet AAL end-to-end delay (with respect to DP12) versus the total network throughput. It is assumed that all DPs have identical traffic generation statistics, and that all DPs send equally to all other DPs within the network. There are 132 voice calls at 16 Kb/s in each DP, resulting in a total voice throughput of 475 Mb/s. This value is chosen so that the network accommodates 50% voice traffic and 50% data traffic when fully loaded.

Data packets are sent from DP12 to all other DPs in cluster 1 (recall that there are 8 DPs in cluster 1 and 7 in cluster 2). Figure 4.4 shows the average inter-cluster data packet AAL end-to-end delay for the pairs (DP1, DP8), (DP2, DP7), (DP3, DP6) and (DP4, DP5). Because the network is symmetric (even number of nodes) in cluster 1, the two DPs in each pair experience similar delay. The trend shows that the delay increases exponentially when the total network throughput exceeds 550 Mb/s. The total network throughput achieved is 950 Mb/s, of which 475 Mb/s is voice traffic. When the throughput is 950 Mb/s, the maximum inter-cluster delay, from DP12 to DP4/DP5, is roughly 1.6 ms and the minimum inter-cluster delay, from DP12 to DP1/DP8, is roughly 1.5 ms. Further increases in the total network throughput can cause the delay to increase without bound.

FIGURE 4.4. Average inter-cluster data packet AAL end-to-end delay

The average intra-cluster (i.e. from DP12 to all DPs in cluster 2) data packet AAL end-to-end delay is plotted in Figure 4.5. These curves represent the delay values from DP12 to DP9, DP10, DP11, DP12, DP13, DP14 and DP15.
In this case, the DP12-DP12 delay represents the segmentation and reassembly (SAR) time required plus the routing and transmission time within a DP. Since the network is not symmetric (odd number of nodes) in this cluster, there are two possible ways the inter-cluster traffic can be transported: 1) the inter-cluster traffic originating at DP12 is transported over DP13, DP14 and DP15 to the DC switch; 2) the inter-cluster traffic originating at DP12 is transported over DP11, DP10 and DP9 to the DC switch. These result in the worst case performance (when all traffic from DP12 is transported on the links along the path to the DC) and the optimal case performance (when no traffic from DP12 is transported on the links along the path to the DC).

FIGURE 4.5. Average intra-cluster data packet AAL end-to-end delay (curves, top to bottom: DP9/DP15 maximum, DP10/DP14 maximum, DP11/DP13 maximum, DP12 average, DP11/DP13 minimum, DP10/DP14 minimum, DP9/DP15 minimum)

As shown in Figure 4.4, the average inter-cluster data packet AAL end-to-end delay increases exponentially as the total network throughput increases. This is because the DC switch becomes congested as the offered inter-cluster traffic increases, which causes an increased delay. The minimum and maximum delay pairs in Figure 4.5 provide the boundaries for the intra-cluster data packet end-to-end delay estimate in cluster 2. It is intuitive that the difference between the minimum delay and the maximum delay becomes larger when the distance between the source node and the destination node increases, and when the offered traffic load increases.
The last curve (relatively constant) in Figure 4.5 represents the sum of the SAR time required to segment and reassemble an AAL PDU, and the routing and transmission time within a DP.

FIGURE 4.6. Data packet AAL end-to-end transmission delay in each DP location

Figure 4.6 depicts the data packet AAL end-to-end transmission delay at each DP location with respect to DP12. The delay increases as the total throughput increases. The values for DP9, DP10 and DP11 represent the minimum intra-cluster AAL delay, and those for DP13, DP14 and DP15 represent the maximum intra-cluster AAL delay. Therefore, we expect that when all DPs in the network generate the same amount of traffic (with 50% voice traffic and 50% data traffic) so as to fully load the network, the maximum inter-cluster delay (from DP12 to DP4/DP5) is roughly 1.6 ms and the maximum intra-cluster delay (from DP12 to DP9/DP15) is about 0.4 ms.

Since voice traffic has priority over data, the delay experienced by voice packets is much smaller than the data packet delay, and it is relatively constant. The results show that the voice packet maximum inter-cluster AAL end-to-end delay (i.e. from DP12 to DP4/DP5) is roughly 0.17 ms and the intra-cluster AAL end-to-end delay (from DP12 to DP9/DP15) is about 0.007 ms, regardless of the total throughput (results are shown later).

Delay variation, or jitter, is an undesirable transmission outcome due to the random delay which arises from buffering in the network. It can be controlled at the receiver by using larger buffers and delaying cells. Table 8 [34] gives some examples of the delay and delay variation objectives of various audio and video services.
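Receiver-side jitter control, holding cells in a buffer until a fixed playout deadline, can be sketched as follows. The deadline arithmetic is a generic illustration of the idea, not a mechanism taken from this simulation.

```python
def playout_deadline_ms(generation_time_ms, base_delay_ms, jitter_budget_ms):
    """Each cell is held until generation time + base delay + jitter budget,
    so any cell whose network delay varies by up to the budget still plays
    out on schedule; cells arriving later are treated as lost."""
    return generation_time_ms + base_delay_ms + jitter_budget_ms

def plays_on_time(arrival_time_ms, deadline_ms):
    """True if the cell arrived early enough to meet its playout deadline."""
    return arrival_time_ms <= deadline_ms

# A cell generated at t = 0 with a 5 ms base delay and a 1 ms jitter
# budget must arrive by t = 6 ms.
deadline = playout_deadline_ms(0.0, 5.0, 1.0)
print(plays_on_time(5.8, deadline))  # True
print(plays_on_time(6.3, deadline))  # False
```

A larger jitter budget absorbs more delay variation at the cost of a larger fixed end-to-end delay, which is the trade-off reflected in the per-service delay and jitter objectives of Table 8.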
Our results, as shown in Figure 4.7, indicate that these objectives have been met, as both the data traffic and voice traffic AAL end-to-end delay variation is controlled well below 0.001ms if the network is not overloaded.

FIGURE 4.7. AAL end-to-end delay variation of voice and data traffic (vs. total network throughput, 600-1000 Mb/s)

TABLE 8. Delay and Delay Variation for Two-way Session Audio and Video Services

APPLICATION                  DELAY (MS)   JITTER (MS)
64 Kb/s video conference     300          130
1.5 Mb/s MPEG NTSC video     5            6.5
20 Mb/s HDTV video           0.8          1
16 Kb/s compressed voice     30           130
256 Kb/s MPEG voice          7            9.1

4.4.2 Voice and Data Traffic with Variable Voice Traffic Loading

Now we examine the network delay performance by fixing the total data throughput at 476Mb/s and varying the number of voice calls in each DP. As before, each DP is assumed to generate the same amount of traffic and send it equally to all other DPs. Figures 4.8 and 4.9 illustrate the average inter-cluster and intra-cluster data packet AAL end-to-end delay, respectively. VC denotes the number of 16Kb/s voice calls in each DP.

FIGURE 4.8. Average inter-cluster data packet AAL end-to-end delay (fixed data throughput)

FIGURE 4.9.
Average intra-cluster data packet AAL end-to-end delay (fixed data throughput; curves labeled with VC, the number of 16Kb/s voice calls per DP, from 398 to 1982)

The average data packet AAL end-to-end delay decreases when the number of voice calls in each DP decreases, because the bandwidth not occupied by voice traffic is allocated to data traffic when needed. When the voice traffic increases, bandwidth is allocated to voice traffic immediately since it has a higher priority; data traffic is then left with a smaller bandwidth, inducing a larger delay. Based on Figures 4.4 and 4.5, it is observed that the data packet AAL and ATM end-to-end delays at a given total network throughput level will be approximately the same under various data and voice traffic distributions. In other words, by changing the amount of voice traffic and adjusting the amount of data traffic accordingly to obtain the same total throughput level, the delays encountered by data packets will be roughly the same.

4.4.3 Voice and Data Traffic with Different Voice Coding Rates

In this section, we consider the effect of the voice coding rate on voice and data packet delay in the network. In our model, each DP has the same number of voice calls with Poisson arrivals. Figure 4.10 shows the worst case average inter-cluster AAL end-to-end delay (i.e. from DP12 to DP4/DP5) for 2 different voice coding rates (16Kb/s and 8Kb/s). Figure 4.11 depicts the worst case average intra-cluster packet AAL end-to-end delay (i.e. from DP12 to DP15).

FIGURE 4.10. Average inter-cluster AAL end-to-end delay (diff. voice coding rate; delay vs. data traffic throughput, 0-700 Mb/s)

The curves in both figures show clearly that a reduction in the voice coding rate leaves more capacity for data packet transmission. Reducing the voice coding rate from 16Kb/s to 8Kb/s produces a savings of 236Mb/s which can be utilized by data traffic.
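The freed bandwidth can be reproduced with a quick calculation. A minimal sketch, assuming roughly 1982 voice calls per DP (a value taken from the curve labels of Figure 4.9, not restated in the text) across the 15 DPs:

```python
# Bandwidth released when the voice coding rate drops from 16 Kb/s to 8 Kb/s.
# Assumes 1982 voice calls per DP and 15 DPs (values read from the figures).
CALLS_PER_DP = 1982
NUM_DPS = 15

def voice_bandwidth_mbps(rate_kbps: float) -> float:
    """Aggregate voice bandwidth over all DPs, in Mb/s."""
    return CALLS_PER_DP * NUM_DPS * rate_kbps / 1000.0

savings = voice_bandwidth_mbps(16) - voice_bandwidth_mbps(8)
print(f"Bandwidth freed: {savings:.0f} Mb/s")  # ~238 Mb/s, close to the 236 Mb/s reported
```

The small gap to the reported 236Mb/s is consistent with rounding in the simulated call counts.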
The data traffic throughput thus increases from 475Mb/s to 711Mb/s. There is a one-to-one correspondence between the delay value and the total throughput, and we can expect to obtain approximately 947Mb/s data throughput with the same number of voice calls if a 4Kb/s voice coding rate is used.

FIGURE 4.11. Average intra-cluster AAL end-to-end delay (diff. voice coding rate; voice and data ATM and AAL end-to-end delays vs. data traffic throughput, for 8Kb/s and 16Kb/s coding)

It is not surprising that the delay experienced by voice packets remains constant, because voice packets have higher priority than data packets. Thus, a voice packet gains access as soon as it enters the network, while data packets have to be stored in the buffer if there is not enough bandwidth to allow immediate transmission. Also, the difference between the voice ATM and AAL end-to-end delays is small. Many speech coders used or proposed for digital cellular applications employ 20ms frame lengths. When 16Kb/s coders are used, the frame length is 320 bits. With AAL5, after the frame is padded with the 8 byte trailer and the 5 byte cell header is added, it is exactly one ATM cell (424 bits) in size. This is much smaller than a data PDU, which is typically over 10000 bits long. Therefore, a very small SAR delay is incurred, as compared to the SAR delay needed to segment or reassemble data packets. The worst case voice packet inter-cluster (from DP12 to DP4/DP5) and intra-cluster (from DP12 to DP9/DP15) AAL end-to-end delays are roughly 0.17ms and 0.007ms respectively, as before.
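The claim above that one 20ms coder frame maps to exactly one ATM cell can be checked arithmetically. A minimal sketch; the 48-byte cell payload, 8-byte AAL5 trailer and 5-byte cell header are standard ATM/AAL5 values:

```python
# One 20 ms frame from a 16 Kb/s coder, carried over AAL5 in ATM cells.
FRAME_MS = 20
CODER_KBPS = 16
AAL5_TRAILER_BITS = 8 * 8    # 8-byte AAL5 CPCS trailer
ATM_HEADER_BITS = 5 * 8      # 5-byte ATM cell header
ATM_PAYLOAD_BITS = 48 * 8    # 48-byte cell payload

frame_bits = CODER_KBPS * FRAME_MS          # 16 kb/s x 20 ms = 320 bits
pdu_bits = frame_bits + AAL5_TRAILER_BITS   # 384 bits = 48 bytes: one full payload
cells = -(-pdu_bits // ATM_PAYLOAD_BITS)    # ceiling division
cell_bits = cells * (ATM_PAYLOAD_BITS + ATM_HEADER_BITS)
print(cells, cell_bits)  # 1 424
```

The frame plus trailer fills one payload exactly, so no padding is needed and the on-wire size is the 424 bits quoted in the text.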
Figure 4.12 and Figure 4.13 show the inter-cluster and intra-cluster end-to-end delay values with fixed data throughput and variable voice throughput, for 16Kb/s and 8Kb/s voice coding rates. VC16 indicates the number of 16Kb/s voice calls in each DP and VC8 indicates the number of 8Kb/s voice calls in each DP. When the 8Kb/s coding rate is used rather than 16Kb/s, the network can support approximately twice the number of voice service users while achieving the same delay and throughput performance.

FIGURE 4.12. Inter-cluster AAL and ATM end-to-end delays (fixed data throughput; delay vs. voice traffic throughput, 0-500 Mb/s)

FIGURE 4.13. Intra-cluster AAL and ATM end-to-end delays (fixed data throughput; delay vs. voice traffic throughput, 0-500 Mb/s)

4.4.4 Voice and Data Traffic with Different Inter-cluster Traffic Levels

Different inter-cluster traffic levels have a significant impact on system throughput. The inter-cluster and intra-cluster voice and data AAL and ATM delays are plotted in Figure 4.14 and Figure 4.15. As in Section 4.3, two configurations are considered: 1) each DP generates the same amount of traffic and distributes it equally to all other DPs, and 2) the intra-cluster traffic generated at each DP is twice as much as the inter-cluster traffic. In configuration 1, with a total throughput of 951Mb/s, the inter-cluster AAL delay encountered by a data packet is 1.6ms.
In configuration 2, the delay is only 0.64ms at a 970Mb/s throughput level. For the intra-cluster data AAL end-to-end delay, the values are 0.4ms and 0.3ms for configuration 1 and configuration 2, respectively. However, the delay encountered by voice packets remains unchanged, as voice has a higher priority.

FIGURE 4.14. Worst case inter-cluster AAL and ATM end-to-end delay (Config 1: traffic from each DP is sent equally to all other DPs; Config 2: 75% intra-cluster traffic and 25% inter-cluster traffic)

FIGURE 4.15. Worst case intra-cluster AAL and ATM end-to-end delay (Config 1: traffic from each DP is sent equally to all other DPs; Config 2: 75% intra-cluster traffic and 25% inter-cluster traffic)

The delay and capacity enhancements are due to the reduction of inter-cluster traffic, which causes congestion at the DC gateway. The voice traffic delay is not reduced because voice is already assigned the higher priority over data, and thus it is transmitted as soon as it enters the network, regardless of the data traffic volume. One can expect the total network capacity to be about 1.2Gb/s in configuration 2, based on the result in Section 4.3.

4.5 Three-DC Network Model and Simulation Results

The physical 3-DC network model is plotted in Figure 4.16.
Each DC subnet consists of 15 DPs distributed along 2 rings, as shown in Figure 4.1. The ring with 8 DPs is referred to as cluster 1, and the ring with 7 DPs is referred to as cluster 2. Since the numbers of DPs in the two rings differ, the offered traffic from the two clusters also differs (cluster 1 traffic > cluster 2 traffic) when all DPs generate the same amount of traffic. The distance between DCs is presumed to be 10Km, and 155Mb/s backbone trunks are employed.

FIGURE 4.16. 3-DC network model

Network survivability and reliability are important themes in B-ISDN. With the steady increase in optical transmission capacity, a single failure in the network can seriously impact many service users. A wide range of protection and restoration techniques have been developed, such as component redundancy, self-healing rings and dynamic restoration. Therefore, we allocate 30Mb/s of bandwidth in each backbone trunk for backup purposes. When there is a problem in the network (e.g. a link failure), the necessary VP/VC restorations or other restoration techniques can be performed using the reserved bandwidth[35][36]. The average voice and data AAL end-to-end transmission delay vs. throughput is shown in Figure 4.17. A total throughput of 890Mb/s is achieved in the backbone network. This corresponds to the throughput achievable when there is 55% intra-DC traffic and 45% inter-DC traffic, with 30Mb/s reserved as backup bandwidth.
A relatively large difference between the time delays of cluster 1 and cluster 2 traffic is experienced when the network approaches congestion, because the two clusters offer different traffic volumes to the network. For throughputs of less than 700Mb/s, the traffic from both clusters experiences similarly small delays. The average data packet AAL end-to-end delay rises to roughly 10.5ms and 5.0ms for cluster 1 traffic and cluster 2 traffic, respectively. However, the AAL end-to-end delay is the same for both cluster 1 and cluster 2 voice traffic because of the higher priority assigned to voice packets. The average voice AAL end-to-end delay is found to be around 0.075ms. Note that these values represent the delay of packets going from one DC to the other DC.

FIGURE 4.17. Average AAL end-to-end delay for inter-DC traffic (vs. total network throughput)

FIGURE 4.18. DC switch traffic loss ratio (vs. total network throughput)

The impact of buffer size on network performance has been studied, and the simulation results with the 3-DC model are depicted in Figure 4.18. With a buffer size of 1000 cells at the DPs and DCs, and data traffic with a burstiness of 5, a 4% loss of the total traffic (defined as the fraction of transmitted traffic not successfully received) resulted in cluster 1. For cluster 2 traffic, no loss is encountered, because the smaller offered traffic volume could be accommodated in full. The average AAL end-to-end delay variation of data and voice traffic is shown in Figure 4.19. Again, the delay variation objectives are met (less than 0.001ms).

FIGURE 4.19.
Average AAL end-to-end delay variation for inter-DC traffic (vs. total network throughput)

Figure 4.20 shows the delay performance when two different voice coding rates (16Kb/s and 8Kb/s) are employed. With 16Kb/s voice sources, the data throughput achieved is 288Mb/s, and with 8Kb/s voice sources the data throughput is 588Mb/s. Therefore an additional 300Mb/s can be allocated for data traffic when the 8Kb/s voice coding rate is used rather than 16Kb/s. By the same reasoning, if a 4Kb/s voice coding rate were used, we can estimate that approximately 600Mb/s would be available for data traffic while supporting the same number of voice service users. Because voice has priority over data, the inter-DC voice AAL end-to-end delay remains unchanged.

FIGURE 4.20. Average inter-DC AAL end-to-end delay with different voice coding rates (cluster 1 and cluster 2 voice and data traffic vs. data traffic throughput, 100-600 Mb/s)

Chapter 5 Network Capacity and Coverage of The CATV/PCN Overlay

In this chapter, the network capacity of the CATV/PCN overlay is presented and some suggestions on ways to increase capacity are provided. Issues regarding the use of distributed antennas to enhance network coverage are also discussed.

5.1 Network Capacity of the CATV/PCN Overlay

The network capacities of the single-DC model in Chapter 4 under various circumstances are tabulated in Tables 9 and 10. In configuration 1, each DP has identical traffic generation statistics and sends traffic equally to all other DPs, producing a total network throughput of 950Mb/s. In configuration 2, each DP generates twice as much traffic to send to intra-cluster nodes as to inter-cluster nodes, and the total throughput achieved is 1.2Gb/s.
The blocking probability is assumed to be 0.001. We consider two traffic distributions: 1) 50% voice / 50% data and 2) 60% voice / 40% data. For the following calculations, a duplex voice call consumes 2 circuits and we assume simplex data communications where 1 circuit is used. Voice calls are assumed to have a call arrival rate of 2 calls in the busy hour and a holding time of 120s. Data calls are assumed to arrive at a rate of 0.6 call/hour and last for 60s. The Erlang B equation given in Equation 5.1, where p is the blocking probability, m the number of circuits and λ/μ the offered load in Erlangs, is then applied to obtain the relevant capacity values:

p = [(λ/μ)^m / m!] / [Σ_{n=0}^{m} (λ/μ)^n / n!]   (EQ 5.1)

TABLE 9. Network capacity with blocking probability of 0.001 and 50% voice / 50% data traffic

                  VOICE CHANNEL  DATA CHANNEL  VOICE USERS  DATA USERS  TOTAL USERS
CONFIGURATION     BW (KB/S)      BW (KB/S)     (10^5)       (10^6)      (10^6)
CONFIGURATION 1   8              9.6           4.42         4.92        5.37
                  8              16            4.42         2.95        3.39
                  16             32            2.12         1.47        1.69
CONFIGURATION 2   8              9.6           5.59         6.22        6.78
                  8              16            5.59         3.73        4.28
                  16             32            2.78         1.85        2.13

TABLE 10. Network capacity with blocking probability of 0.001 and 60% voice / 40% data traffic

                  VOICE CHANNEL  DATA CHANNEL  VOICE USERS  DATA USERS  TOTAL USERS
CONFIGURATION     BW (KB/S)      BW (KB/S)     (10^5)       (10^6)      (10^6)
CONFIGURATION 1   8              9.6           5.31         3.94        4.47
                  8              16            5.31         2.35        2.89
                  16             32            2.64         1.17        1.43
CONFIGURATION 2   8              9.6           6.71         4.97        5.64
                  8              16            6.71         2.98        3.65
                  16             32            3.34         1.48        1.82

5.2 Coverage Estimate In Terms of City Blocks

Based on the simulation results, the maximum inter-DP traffic that each DP is allowed to send in configuration 1 is 950Mb/s ÷ 15 = 63.33Mb/s. Assuming that 63.33Mb/s corresponds to 40% of the total traffic in a DP, the total achievable throughput of a DP is 158.33Mb/s. If 100Mb/s (i.e. roughly 63%) is allocated for voice traffic with an 8Kb/s coding rate, approximately 5660 two-way calls could be supported.
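Equation 5.1 can be evaluated with the standard numerically stable recursion B(0) = 1, B(m) = A·B(m−1) / (m + A·B(m−1)), which avoids the large factorials. A minimal sketch:

```python
def erlang_b(offered_erlangs: float, circuits: int) -> float:
    """Erlang B blocking probability (EQ 5.1), via the stable recursion."""
    b = 1.0
    for m in range(1, circuits + 1):
        b = offered_erlangs * b / (m + offered_erlangs * b)
    return b

# The 5660 two-way circuits above, loaded with about 5655 Erlangs,
# block roughly 1% of call attempts.
print(f"{erlang_b(5655.2, 5660):.4f}")
```

Large trunk groups run at nearly full utilization for a given blocking target, which is why the supportable load (5655.2 Erlangs) sits so close to the circuit count.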
For a blocking probability of 0.01, these two-way circuits can support 5655.2 Erlangs.

FIGURE 5.1. DP coverage area representation for wireless PCS

Figure 5.1 depicts a downtown metropolitan area consisting of rectangular city blocks. Each road between two city blocks contains a radio microcell. The resulting total voice traffic Tv in a coverage area of n blocks and 2n cells is given in Equation 5.2[15]:

Tv = [α + 2(1 − α)] × (A_b + 2A_c) × n ≤ 5655.2   (EQ 5.2)

where α is the fraction of intra-DP traffic, (1 − α) the fraction of inter-DP traffic, A_b the offered traffic per city block, and A_c the offered traffic per microcell. Therefore, the number of city blocks is calculated from:

n = 5655.2 / [(2 − α)(A_b + 2A_c)]   (EQ 5.3)

Assume two pavements of length 300m per microcell with pedestrians spaced at 1.5m during the busy hours, 75% of whom have portable phones (i.e. 300 users), and 1000 people per block inside buildings with 90% having phones (i.e. 900 users). Table 11 shows the average call arrival rate and holding time assumptions for each user and the corresponding A_b and A_c values. Figure 5.2 (case 2) and Figure 5.3 (case 1) illustrate the DP coverage area as a function of α, the fraction of intra-DP traffic.

TABLE 11. Call arrival rate and holding time assumptions

        CASE 1 (a)   CASE 2 (b)
A_b     30           60
A_c     10           20

a. Case 1: call arrival λ = 2 calls/hour, holding time = 1 min.
b. Case 2: call arrival λ = 2 calls/hour, holding time = 2 min.

FIGURE 5.2. DP coverage estimate in terms of city blocks and associated microcells

FIGURE 5.3. DP coverage estimate with shorter holding time

In case 2, a DP is able to support an area of approximately 56 (i.e. 5Km²) and 28 (i.e.
2.5Km²) city blocks for α = 1 or α = 0, respectively. If 40% of the traffic is inter-DP traffic, the coverage is 40 city blocks. When the holding time is halved (case 1), the number of covered city blocks increases proportionally, as shown in Figure 5.3.

5.3 Optimum Number of Base Stations

A DP is modeled as an M/M/1 queueing system to estimate the optimum number of base stations (BS) located at FNs (fiber nodes). We can represent the average utilization of a DP as:

ρ = [Σ_{i=1}^{N} γ_i] / C   (EQ 5.4)

where γ_i is the average data rate of each BS and C the total capacity of a DP. If each of the N base stations has the same average output data rate γ, then ρ = Nγ/C. The average delay of an M/M/1 queue is given by:

d = ρ / [λ(1 − ρ)] = 1 / [C(1 − ρ)]

where λ = Nγ is the total arrival rate. It is desirable to obtain a high throughput-to-delay ratio β(N) = λ/d = NγC(1 − Nγ/C), and β can be maximized (i.e. maximum throughput at minimum delay) with respect to the number of BSs N by setting dβ/dN = 0. Therefore, the optimum number of BSs is:

N_BS = C / (2γ)   (EQ 5.5)

FIGURE 5.4. Optimum number of base stations vs. offered traffic per BS (40-320 Erlangs)

Figure 5.4 presents a numerical example based on the following assumptions: (1) each BS generates the same amount of traffic, (2) the blocking probability of the air interface is 0.01, and (3) the total voice capacity of a DP switch is 100Mb/s. As an example, when the offered traffic of each BS is 260 Erlangs, the optimal number of BSs is 10. With 40% of the total traffic being inter-DP traffic, a 2 calls/hour arrival rate and a 120 second call holding time, a DP could support an area of around 3.65Km², implying that each BS would cover an area of 0.37Km².
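Equations 5.3 and 5.5 can both be checked numerically. A minimal sketch using the case 2 traffic values (A_b = 60, A_c = 20 Erlangs); the per-BS rate γ = 5Mb/s is a hypothetical value chosen so that C/(2γ) matches the quoted optimum of 10, since the text specifies only the 260-Erlang offered load:

```python
# Coverage (EQ 5.3) and optimum base-station count (EQ 5.5), numerically.
DP_ERLANG_LIMIT = 5655.2  # two-way call capacity of a DP, in Erlangs

def blocks_covered(alpha, a_block=60.0, a_cell=20.0):
    """EQ 5.3: city blocks per DP; alpha is the fraction of intra-DP traffic."""
    return DP_ERLANG_LIMIT / ((2 - alpha) * (a_block + 2 * a_cell))

C = 100.0    # total DP voice capacity, Mb/s
GAMMA = 5.0  # assumed average rate per BS, Mb/s (hypothetical value)

def beta(n):
    """Throughput-to-delay ratio of the M/M/1 DP model: N*gamma*C*(1 - rho)."""
    rho = n * GAMMA / C
    return n * GAMMA * C * (1 - rho) if rho < 1 else 0.0

print(round(blocks_covered(0.6)))                    # 40 blocks at 40% inter-DP traffic
print(max(range(1, 20), key=beta), C / (2 * GAMMA))  # numeric optimum vs. closed form
```

The brute-force search over N agrees with the closed-form C/(2γ), confirming that β(N) is a downward parabola in N with its peak at half the saturating load.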
5.4 Capacity Enhancement in the CATV/PCN Overlay

Network utilization can be enhanced by reducing the number of DPs in each ring, giving bandwidth access to fewer DPs more exclusively. The bottleneck of the network is the DC switch, because the inter-DC and inter-cluster traffic must be switched through it. Traffic volume therefore builds up as it approaches the DC switch, leaving the links further away from the DC lightly occupied (assuming that all DPs generate an equal amount of traffic). As the number of DPs is reduced, this undesirable situation is alleviated, allowing a higher utilization. Recalling that 5 to 8 DPs are typically connected in a ring, one example would be to distribute the 15 DPs in a DC along 3 rings (5 DPs per ring). In this way, the network utilization can be substantially increased. Additional capacity enhancement can be achieved by utilizing silence deletion algorithms for voice traffic. Voice sources typically have a voice activity of 0.4, i.e. they are active only 40% of the time[15][31]. Silent periods contain no information and are wasteful to transmit. Substantial capacity improvements of at least twice the original voice capacity can be expected. Moreover, stand-alone ATM switches can be installed in areas where access to CATV services is limited or unavailable, to provide services to PCS users. This will contribute an additional increase in throughput. Further improvements can be achieved by using CATV distributed antennas in a simulcast fashion to improve system coverage and spectral efficiency, and to minimize costs, as described in the following section.

5.5 Distributed Antennas

5.5.1 Distributed Antenna Concepts

To meet the ever-increasing demand for mobile communications, increased capacity may be provided by subdividing cells into smaller sizes.
With small cells, the choice of locations suitable for installing a base station (BS), including its transmitting antenna tower, is limited. Hence, specially leased locations, such as space on top of a privately owned building, are required, and this imposes high costs. Also, the conventional method, which employs a single antenna to provide coverage to a microcell, requires high radiated power which may be harmful[4]. Therefore, it is proposed to consider a simulcast arrangement, where all antennas mounted on lamp posts or other places along specified paths of the CATV network broadcast and receive on the same radio channel. This architecture could accommodate both vehicles and pedestrians carrying low-power portables. The elements of this configuration are shown in Figure 5.5, and they operate as follows[4][5]:

1. Base stations perform modulation and demodulation functions.
2. A remote antenna signal processor (RASP), located at the central site, interfaces the base station to the CATV network or to dedicated communication cables.
3. Remote antenna drivers (RADs) are responsible for relaying off-air signals back to a central site via the plant's return path and for providing the antenna's interface to the cable.
4. Microcell extenders (MEXs) deploy dedicated coaxial cable or fiber to enlarge the net coverage zone of a RAD or of a base station. The problem of areas with no in-place CATV plant is also overcome by utilizing these MEXs to fill gaps in coverage zones in a cost-effective manner.

FIGURE 5.5. Distributed antennas enlarge service coverage areas

5.5.2 Merits of Distributed Antennas

Microcell designers can use distributed antenna effects simply to increase a coverage zone and minimize the effect of line-of-sight (LOS) blockages. In areas with low traffic or low service penetration, multiple radio ports (i.e.
RADs and/or MEXs) can be grouped together to form a larger set of radio users sharing common cable spectrum and call processing hardware resources. Also, there are no call handoff issues when a user moves between two microcells within the same simulcast group. As microcell size decreases in PCS systems, the variation in traffic demand becomes larger. Without simulcasting, spectrum and hardware resources must be budgeted for the maximum traffic at each cell, even though the average traffic loading over the entire service area is low, such as in suburban areas. With simulcasting, the traffic loading can be dynamically balanced by rearranging the grouping of radio ports according to traffic load variations; for example, heavy traffic zones can be accommodated by assigning all the resources of a group server (i.e. a RASP) to a smaller number of radio ports.

5.5.3 Number of Radio Ports (RADs/MEXs)

Studies conducted by Sirikiat Ariyavistakul et al.[37] show that a maximum of 9 radio ports ensures that simulcasting in both CDMA and TDMA systems does not degrade the uplink performance. For areas with high traffic density in the proposed CATV/PCN network, fewer radio ports (i.e. smaller cell sizes) should be utilized to increase cell capacity. If 3 RADs/MEXs are used to cover an area of 0.37Km² as calculated before, each microcell has a radius of 200m.

5.5.4 PCS Air Interface With CDMA

In a multipath environment, spread spectrum signalling enables one to make use of the multiple paths, so that the energy in each path can be resolved and constructively combined to provide a performance gain over a standard one-path receiver.
The RAKE receiver originally proposed by Price and Green[12][13] is a standard method for coherent combining of spread spectrum (SS) signals. The multipath structure of the channel is first estimated. The received signal is then passed through a RAKE correlator which is matched to the channel's response to the transmitted waveforms. Hence the time structure of the multipath environment is exploited efficiently, and the signal-to-noise ratio increases accordingly. On the uplink, the base station demodulators receive independently fading signals from antenna elements placed at different locations. The use of distributed antennas helps mitigate multiple access interference (MAI) from neighboring simulcast groups. Although the major portion of uplink interference is produced by users within the same simulcast group, results show significant improvements in mitigating the impact of multiple antenna noise[37].

FIGURE 5.6. Transmission delay due to different pulse traversal distances

On the downlink, digital pulses traveling different distances between a mobile transceiver and the RASP incur a cable transmission delay, as illustrated in Figure 5.6. Two RADs, separated by L from each other and located at D ± L/2, are transceiving with a mobile. The pulse with baseband pulse width d from antenna B is delayed relative to that of antenna A by Δτ, computed as follows[5]:

Δτ = L/(0.5c) + 2u/c

where c is the velocity of light in free space, 0.5c the approximate propagation velocity in cable, and 2u the path length difference from the mobile transceiver to antennas A and B. Consider a case where L = 250m, c = 3 × 10^8 m/s, and u = 50m; the delay Δτ is calculated to be 2μs.
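The 2μs figure follows directly from the path geometry. A minimal sketch, assuming (consistent with the quoted result) that the delay is the RAD cable separation traversed at the 0.5c cable velocity plus the air path difference traversed at c:

```python
# Relative delay between pulses from antennas A and B (Figure 5.6 geometry).
C_LIGHT = 3.0e8          # free-space speed of light, m/s
V_CABLE = 0.5 * C_LIGHT  # approximate propagation velocity in cable

def simulcast_delay(L: float, u: float) -> float:
    """L: cable separation of the two RADs (m); 2u: air path difference (m)."""
    return L / V_CABLE + 2 * u / C_LIGHT

dt = simulcast_delay(250.0, 50.0)
print(f"{dt * 1e6:.1f} us")  # 2.0 us
```

At 2μs, the pulses arrive far enough apart for a RAKE receiver to resolve them as distinct paths in a typical 1.25MHz-wide CDMA system (which needs separations above 1/BW = 0.8μs).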
Since a substantial delay difference between antenna elements (greater than 1/BW_CDMA) must be introduced so that the multipaths can be resolved by the RAKE receiver, the increase in delay spread due to different cable lengths can actually improve the radio link quality of the CDMA system. For a typical CDMA system with a bandwidth of 1.25MHz, a delay greater than 0.8μs is needed. As a result, mobile stations are expected to experience path diversity gains against multipath and shadow fading by coherently combining, at the RAKE receiver, the delayed versions of a signal transmitted through the distributed antennas. A CDMA system with a distributed antenna simulcast arrangement has several advantages over a TDMA system. CDMA does not have the delay spread and guard time requirements that TDMA does. Also, frequency planning is a major consideration in a TDMA system, but is not required in a CDMA system. Finally, CDMA systems using tight power control permit less stringent dynamic range requirements on the associated hybrid fiber coaxial (HFC) links[37].

Chapter 6 Summary and Conclusions

An ATM-based wireless CATV/PCN overlay has been proposed to meet the increasing demand for Personal Communications Services (PCS). This overlay is motivated by the need for distributed network control and capital conservation while providing minimum interruption to ongoing CATV services. This distributed network not only has several potential advantages over current centralized network architectures, it also achieves higher system capacity and lower transmission delay compared to previous work on the IEEE 802.6 MAN with the distributed queue dual bus (DQDB) protocol. Analytical and simulation results show the viability of the proposed network.
6.1 Summary of Findings

The current Canadian Cable Television (CATV) architecture is reviewed, and an ATM-based wireless CATV/PCN overlay architecture is then proposed to support PCS. It is proposed that ATM switches be installed at the DC, DP and FN locations in CATV networks, and that stand-alone ATM switches be installed in areas with limited or no CATV access. A distributed database architecture is also proposed to minimize unnecessary traffic congestion associated with subscriber registration, call setup and handoff. The suitable signaling architecture for the proposed CATV/PCN overlay is investigated. Based on preliminary studies conducted by Nippon Telephone and Telegraph (NTT), the associated signaling transport architecture is cost effective, achieves higher availability and has a scalability advantage over the quasi-associated signaling architecture. Accordingly, associated signaling is proposed as the signaling architecture for the CATV/PCN overlay. The distributed call processing architecture (DCPA) is discussed and compared with the conventional CP/B-ISUP approach. The call setup and handoff delays encountered in the proposed CATV/PCN using these two schemes are compared. The DCP/ASEs approach yields better mean end-to-end connection setup and handoff performance (shorter delay). The time saving is made possible by employing parallel channel control operations and reduced call control and connection control processing. A higher saturation tolerance is also achieved with the DCP/ASEs approach. It is found that the intra-cluster connection setup time is slightly over 0.2s, and that intra-DC inter-cluster and inter-DC connections both experience a similar setup delay of slightly under 0.4s. For connection handoff, the delay values are 0.18s, 0.26s and 0.26s for intra-cluster, inter-cluster intra-DC and inter-DC connections, respectively.
Simulations are conducted to analyze the network performance of single-DC and three-DC models with integrated voice and data traffic. Control signaling is assigned the highest priority in order to manage traffic flow effectively; voice is then assigned a higher priority than data. With each DP generating equal traffic that is uniformly distributed to all DPs (configuration 1), the total network throughput is found to be 951 Mb/s. When the intra-cluster traffic generated at each DP is twice the inter-cluster traffic (configuration 2), the network throughput is 1.2 Gb/s. The AAL end-to-end delay is kept under 2 ms for inter-cluster data traffic and about 0.4 ms for intra-cluster data traffic. For voice traffic, the inter-cluster end-to-end delay is less than 0.2 ms and the intra-cluster delay is less than 0.01 ms. AAL end-to-end delay variation is well controlled, at less than 0.001 ms. The effects of fixed data throughput with variable voice traffic loading and the impact of voice coding rates have been examined. By reducing voice coding rates, additional bandwidth can be released for other traffic to support more data service users. Further, the backbone is found to be able to support a total throughput of 900 Mb/s at a 4% traffic loss ratio, with 30 Mb/s of the 155 Mb/s bandwidth reserved for backup/protection purposes. The network capacity and coverage of the CATV/PSTN are studied. In configuration 1, a single-DC network can support approximately 4.5 million voice and data users on the two rings when 60% of the traffic is voice; in configuration 2 it is capable of supporting 5.6 million users with the same traffic distribution. A DP can cover approximately 40 city blocks.
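The large user-capacity figures follow from the small per-user offered load. A rough Erlang-B sketch illustrates this: the 2 calls/busy hour and 2-minute holding time match the thesis assumptions, while the user and channel counts below are hypothetical.

```python
def erlang_b(servers, offered_erlangs):
    """Blocking probability from the standard Erlang-B recursion."""
    b = 1.0
    for k in range(1, servers + 1):
        b = offered_erlangs * b / (k + offered_erlangs * b)
    return b

# Per-user offered load: 2 calls/busy hour, each lasting 2 minutes.
per_user_erlang = 2 * (2 / 60)      # ~0.067 Erlang per user

users = 300                         # hypothetical users served by one DP
offered = users * per_user_erlang   # ~20 Erlangs of aggregate load
blocking = erlang_b(30, offered)    # blocking with 30 channels (assumed)
```

With 30 channels the blocking probability for this load stays around 1%, which is why a modest channel pool can serve hundreds of such low-activity users.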
The optimum number of base stations in a DP is found to be 10, assuming that 40% of the total traffic in a DP is inter-DP traffic, that each user has a call arrival rate of 2 calls/busy hour, and that each call lasts 2 minutes. Distributed antennas in a simulcast arrangement can be used to improve spectral efficiency, provide coverage enhancement, achieve sharing of common cable spectrum and processing hardware resources, alleviate call handoff issues when a user moves within the same simulcast group, and allow dynamic load balancing. No more than 9 radio ports (i.e. RADs/MEXs) should be used to meet uplink performance requirements. Finally, CDMA has been identified as the PCS air interface technology for the CATV/PCN overlay. The merits of using CDMA as the multiple access technique have been elaborated.

6.2 Suggestions for Future Work

Much work remains to be investigated in the context of using the CATV/PCN overlay to support PCS. Further research may include the following areas:
• Performance analysis (i.e. AAL and ATM end-to-end delays, CLR) when voice with silence deletion, image and video traffic are transported in the CATV/PCN overlay from mobile-to-mobile, mobile-to-fixed and fixed-to-mobile stations.
• Interconnection of the CATV/PCN with the PSTN and other public or private networks: the optimal gateway locations must be justified by network management and economic factors. A cost evaluation to justify network viability and performance would be useful.
• Utilization of umbrella cells for coverage enhancement, including call control signaling issues such as call setup, handoff, forwarding, termination and provision of overload capacity.

References

[1] George M. Hart, Nick F. Hamilton-Piercy, "A Broadband Urban Hybrid Coaxial/Fiber Telecommunications Network", IEEE LCS, Vol. 1, No. 1, pp. 38-45, February 1990.
[2] Chuck Carroll, "Development of Integrated Cable/Telephony in the United Kingdom", IEEE Communications Magazine, Vol. 33, No. 8, pp. 48-60, August 1995.
[3] William Pugh, Gerald Boyer, "Broadband Access: Comparing Alternatives", IEEE Communications Magazine, Vol. 33, No. 8, pp. 34-46, August 1995.
[4] T.S. Chu, M.J. Gans, "Fiber Optic Microcellular Radio", IEEE 41st Vehicular Technology Conference, St. Louis, pp. 339-344, May 1991.
[5] Robert W. Donaldson, Andrew S. Beasley, "Wireless CATV Network Access for Personal Communications Using Simulcasting", IEEE Transactions on Vehicular Technology, Vol. 43, No. 3, pp. 666-671, August 1994.
[6] S. Ogose, T. Hattori, "Spectrum Efficiency for Multitransmitter Simulcasting in Land Mobile Radio", IEE Electronics Letters, Vol. 25, Issue 9, pp. 612-613, April 27, 1989.
[7] William C.Y. Lee, "Mobile Communications Design Fundamentals", 2nd edition, New York: Wiley-Interscience, John Wiley & Sons, Inc., 1993.
[8] Per-Erik Ostling, "Handover with Simulcasting", IEEE 42nd Vehicular Technology Conference, Denver, pp. 823-826, May 1992.
[9] R.A. Valenzuela, L.J. Greenstein, "Performance Evaluations for Urban Line-of-sight Microcells at 900MHz Using A Multi-Ray Propagation Model", IEEE Global Telecommunications Conference, Phoenix, pp. 1947-1952, December 1991.
[10] Laurence B. Milstein, Donald L. Schilling, Raymond L. Pickholtz et al., "On the Feasibility of a CDMA Overlay for Personal Communications Networks", IEEE JSAC, Vol. 10, Issue 4, pp. 655-667, May 1992.
[11] Ahmad Jalali, Paul Mermelstein, "Effects of Diversity, Power Control, and Bandwidth on the Capacity of Microcellular CDMA Systems", IEEE JSAC, Vol. 12, No. 5, pp. 952-961, June 1994.
[12] Howard H. Xia, Angel Herrera, Steve Kim, Fernando Rico, Brent Bettencourt, "Measurement and Modeling of CDMA PCS In-building Systems with Distributed Antenna", IEEE 44th Vehicular Technology Conference, Stockholm, pp. 733-737, 1994.
[13] Babak H. Khalaj, Arogyaswami Paulraj, Thomas Kailath, "Antenna Arrays for CDMA Systems with Multipath", Proceedings of MILCOM 93, Boston, Vol. 2, pp. 624-628, October 1993.
[14] Norbert L.B. Chan, "Multipath Propagation Effects on a CDMA Cellular System", IEEE Transactions on Vehicular Technology, Vol. 43, No. 4, pp. 848-855, November 1994.
[15] Andrew D. Malyan, Leslie J. Ng, Victor C.M. Leung, Robert W. Donaldson, "Network Architecture and Signalling for Wireless Personal Communications", IEEE JSAC, Vol. 11, No. 6, pp. 830-841, August 1993.
[16] Mark Jeffrey, "Signalling Standards for ATM", IEE Colloquium: ATM Asynchronous Transfer Mode in Wide and Local Area Networks, London, No. 118, pp. 3/1-3/3, 1994.
[17] Tsong-Ho Wu, Noriaki Yoshikai, Hiroyuki Fujii, "ATM Signaling Transport Network Architectures and Analysis", IEEE Communications Magazine, Vol. 33, No. 12, pp. 90-99, December 1995.
[18] Malathi Veeraraghavan, Thomas F. La Porta, Wai Sum Lai, "An Alternative Approach to Call/Connection Control in Broadband Switching Systems", IEEE Communications Magazine, Vol. 33, No. 11, pp. 90-96, November 1995.
[19] ITU-T Recommendation I.311, "B-ISDN General Network Aspects", Geneva, Switzerland, January 1993.
[20] ITU-T Draft Recommendation Q.2010, "Broadband Integrated Services Digital Network Overview Signalling Capability", Geneva, Switzerland, December 1993.
[21] ITU-T, "Draft Recommendation Q.2761-Q.2764 on B-ISDN User Part", November-December 1993.
[22] ITU-T, "Draft Recommendation: Signalling ATM Adaptation Layer (SAAL), Q.SAAL0-Q.SAAL3", December 1993.
[23] CCITT Recommendations Q.931-932, "Digital Subscriber Signaling System No. 1, Fascicle VI.11", Blue Book, 1989.
[24] CCITT Study Group XI, "Specifications of Signaling System No. 7", Blue Book, 1989.
[25] Bellcore Technical Reference TR-TSY-000082, "Signaling Transfer Point (STP) Generic Requirements", Issue 4, December 1992.
[26] J. Luetchford, N. Yoshikai, T.H. Wu, "Network Common Channel Signaling Evolution", Proc. ISS 95, Vol. 2, pp. 234-238, April 1995.
[27] Thomas F. La Porta, Malathi Veeraraghavan, "Description of a Functional Signaling Architecture for Broadband Networks", Globecom 1993: IEEE Global Telecommunications Conference, Houston, pp. 1012-1016, 1993.
[28] Thomas F. La Porta, Malathi Veeraraghavan, Philip A. Treventi, Ramachandran Ramjee, "Distributed Call Processing for Personal Communications Services", IEEE Communications Magazine, Vol. 33, No. 6, pp. 66-75, 1995.
[29] I. Hawker, C.P. Botham, "Distributed Control in SDH Transport Networks", IEE Colloquium No. 235: SDH and its Management and ATM and its Services, London, pp. 2/1-2/5, 1994.
[30] Nen-Fu Huang, Ko-Shung Chen, "A Distributed Paths Migration Scheme for IEEE 802.6 Based Personal Communication Networks", IEEE JSAC, Vol. 12, No. 8, pp. 1415-1425, October 1994.
[31] William Y.L. Wong, "Delay-throughput Analysis of Inter- and Intra-MAN Voice and Data Integrated Traffic in IEEE 802.6 MAN Based PCN", M.A.Sc. Thesis, Department of Electrical Engineering, University of British Columbia, February 1995.
[32] Christopher M. Cobbold, "Voice Transport, Capacity and Signaling Improvements for Integrated Wireless Personal Communications Over Metropolitan Area Networks", M.A.Sc. Thesis, Department of Electrical Engineering, University of British Columbia, September 1995.
[33] Nanjian Qian, "Call Control Signaling for Personal Communications Over Interconnected Metropolitan Area Network", M.A.Sc. Thesis, Department of Electrical Engineering, University of British Columbia, April 1993.
[34] Raif O. Onvural, "Asynchronous Transfer Mode Networks: Performance Issues", 2nd edition, Boston: Artech House, 1995.
[35] Eiji Oki, Naoaki Yamanaka, Francis Pitcho, "Multiple-Availability-Level ATM Network Architecture", IEEE Communications Magazine, Vol. 33, No. 9, pp. 80-88, September 1995.
[36] Ryutaro Kawamura, Ikuo Tokizawa, "Self-healing Virtual Path Architecture in ATM Networks", IEEE Communications Magazine, Vol. 33, No. 9, pp. 72-79, September 1995.
[37] Sirikiat Ariyavisitakul, Thomas E. Darcie, Larry J. Greenstein, Mary R. Phillips, N.K. Shankaranarayanan, "Performance of Simulcast Wireless Techniques for Personal Communications Systems", IEEE JSAC, Vol. 14, No. 4, pp. 632-642, May 1996.

Appendix A. List of Abbreviations and Acronyms

AAL  ATM Adaptation Layer
ACM  Address Complete Message
AIN  Advanced Intelligent Network
ANM  Answer Message
ASE  Application Service Elements
ATM  Asynchronous Transfer Mode
BBC  Broadband Bearer Control
BBSE  Broadband Bearer Service Element
BCC  Broadband Call Control
BCHC  Broadband Channel Control
BCHSE  Broadband Channel Service Element
BCSE  Broadband Call Service Element
BCSM  Basic Call State Model
B-ISDN  Broadband Integrated Services Digital Network
B-ISUP  Broadband ISDN User Part
BS  Base Station
CAC  Call Admission Control
CATV  Cable Television
CBR  Constant Bit Rate
CCITT  International Consultative Committee for Telegraphy and Telephony (now the International Telecommunication Union (ITU))
CCS  Common Channel Signaling
CDMA  Code Division Multiple Access
CLP  Cell Loss Priority
CLR  Cell Loss Ratio
CNR  Carrier to Noise Ratio
CRC  Cyclic Redundancy Check
CP/B-ISUP  Conventional Approach for Call Setup and Handoff
CS-PDU  Convergence Sublayer Protocol Data Unit
CTB  Composite Triple Beat
DC  Distribution Center
DCA  Distributed Control Architecture
DCPA  Distributed Call Processing Architecture
DCP/ASEs  Distributed Call Processing Approach
DSS1  Digital Subscriber Signaling System No. 1
DLC  Data Link Control
DP  Distribution Point
FM  Frequency Modulation
FN  Fiber Node
GFC  Generic Flow Control
HE  Headend
HEC  Header Error Control
HFC  Hybrid Fiber Coaxial
IAM  Initial Address Message
ICSR  Inter-cluster-Setup-Request
ICSC  Inter-cluster-Setup-Complete
IN  Intelligent Network
IWF #1  Interworking Function #1
IWU  Interworking Unit
LAN  Local Area Network
MA  Metropolitan Area
MAN  Metropolitan Area Network
MEX  Microcell Extender
MPEG  Motion Picture Expert Group
MSO  Multiple Service Operator
MT  Mobile Terminal
MTP  Message Transfer Part
NIU  Network Interface Unit
NNI  Network-Network Interface
OAM  Operation and Management
PCN/PCS  Personal Communications Network/Services
PIN  Personal Identification Number
POTS  Plain Old Telephone System
PSTN  Public Switched Telephone Network
PT  Payload Type
QAM  Quadrature Amplitude Modulation
QOS  Quality of Service
QPSK  Quadrature Phase Shift Keying
RAD  Remote Antenna Driver
RASP  Remote Antenna Signal Processor
RTS  Radio Transceiver System
SAAL  Signaling AAL
SAR  Segmentation and Reassembly
SCP/B-SCP  Service Control Point/Broadband SCP
SDU  Service Data Unit
SMFCB  Subcarrier Modulated Fiber-Coax Bus
SNR  Signal to Noise Ratio
SONET  Synchronous Optical Network
SS7  Signaling System No. 7
SSP/B-SSP  Service Switching Point/Broadband SSP
STP/B-STP  Signaling Transfer Point/Broadband STP
TCAP  Transaction Capabilities Application Part
TDMA  Time Division Multiple Access
UNI  User-Network Interface
UPC  Usage Parameter Control
VBR  Variable Bit Rate
VCI/VPI  Virtual Circuit Identifier/Virtual Path Identifier
VCC/VPC  Virtual Circuit Connection/Virtual Path Connection
VOD  Video On Demand
VSB-AM  Vestigial Side Band - Amplitude Modulated
WAN  Wide Area Network

Appendix B. Simulation Model Selections

[Figure B.1. Single DC network model]
[Figure B.2. Data/Voice source model]
[Figure B.4. DP model]
[Figure B.5. DPsw model]
[Figure B.6. DCsw model]

Appendix C. Confidence Interval Calculations

The 99% confidence interval of results having the t-distribution can be obtained from the following equations:

1. Sample mean, $\bar{x}$:
$$\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i$$
where $x = (x_1, x_2, \ldots, x_n)$ are the samples and $n$ is the number of samples.

2. Estimate of the population standard deviation, $s$:
$$s = \sqrt{\frac{1}{n-1}\sum_{i=1}^{n} (x_i - \bar{x})^2}$$

3. Confidence interval:
$$\bar{x} \pm t_{\alpha/2,\,n-1}\,\frac{s}{\sqrt{n}}$$
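The Appendix C procedure can be carried out programmatically. In the sketch below, the delay samples are hypothetical, and the critical value t_{0.005,9} ≈ 3.250 is taken from standard t-tables.

```python
import math
import statistics

def t_confidence_interval(samples, t_crit):
    """Two-sided confidence interval for the mean via the t-distribution.

    t_crit is t_{alpha/2, n-1}; for a 99% interval with n = 10 samples
    (9 degrees of freedom) the tabulated value is about 3.250.
    """
    n = len(samples)
    mean = statistics.mean(samples)          # sample mean x-bar
    s = statistics.stdev(samples)            # (n - 1)-divisor estimate of sigma
    half_width = t_crit * s / math.sqrt(n)   # t_{alpha/2, n-1} * s / sqrt(n)
    return mean - half_width, mean + half_width

# Hypothetical end-to-end delay measurements (ms) from repeated runs
delays_ms = [0.38, 0.41, 0.40, 0.39, 0.42, 0.37, 0.40, 0.41, 0.39, 0.43]
lo, hi = t_confidence_interval(delays_ms, t_crit=3.250)
```

The interval is centered on the sample mean, and its half-width shrinks as 1/sqrt(n) with more simulation runs.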