PERFORMANCE EVALUATION AND COMPARISON OF A TOKEN RING NETWORK WITH FULL LATENCY STATIONS AND DUAL LATENCY STATIONS

by

EDWARD CHI LUP LO

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF APPLIED SCIENCE in THE FACULTY OF GRADUATE STUDIES, Electrical Engineering

We accept this thesis as conforming to the required standard

THE UNIVERSITY OF BRITISH COLUMBIA, May 1988
© Edward Chi Lup Lo, 1988

In presenting this thesis in partial fulfilment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the head of my department or by his or her representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

Department of Electrical Engineering
The University of British Columbia
Vancouver, Canada

ABSTRACT

A method of performance improvement for token ring networks is presented, based on the use of stations with two latency states. Station latency is defined as the time delay introduced in passing data through a station. Most token ring protocol standards (e.g. IEEE 802.5 or ANSI X3T9.5) require incoming data to be decoded and encoded in the station before transmission onto the ring. These encoding and decoding operations add significantly to the station latency. Bypassing the encoding and decoding steps is proposed, which improves the mean message waiting time. A detailed evaluation and comparison of the networks is based on both analytical and simulation results. The performance for identical stations and symmetric traffic is obtained analytically. A discrete event simulation model for a token ring network is written in GPSS for general traffic. Results show a significant reduction in mean waiting time for the dual latency ring, with performance approaching or exceeding that of gated and exhaustive service for certain ranges of network utilization.

TABLE OF CONTENTS

ABSTRACT
TABLE OF CONTENTS
TABLE OF FIGURES
ACKNOWLEDGMENTS
I. OBJECTIVE AND OUTLINE OF THE THESIS
II. INTRODUCTION
  2.1 Local Area Networks
  2.2 Token Ring Local Area Networks
    2.2.1 History and Developments
    2.2.2 Basic Operations and Service Disciplines
  2.3 Performance Measure
  2.4 Importance of Performance Modeling
III. IMPLEMENTATION
  3.1 The IEEE 802.5 Standard
  3.2 The Dual Latency Method
IV. ANALYTICAL MODEL
  4.1 Assumptions
  4.2 Notations
  4.3 Stability Requirements
  4.4 Analysis of a Completely Symmetric Ring
  4.5 Analysis of a General Ring
  4.6 Analysis of a Dual Latency Ring
V. SIMULATION MODEL
  5.1 Why Simulate?
  5.2 Drawbacks of Using Simulation
  5.3 Language
  5.4 Models
  5.5 Statistical Considerations
    5.5.1 Independent Replications of Runs
    5.5.2 The Regenerative Method
  5.6 Validation
  5.7 Features of the Simulation Models
VI. NUMERICAL AND SIMULATION RESULTS
  6.1 Low Speed LANs
    6.1.1 Completely Symmetric Cases
    6.1.2 Asymmetric Cases
  6.2 High Speed LANs
    6.2.1 Completely Symmetric Cases
    6.2.2 Asymmetric Cases
VII. CONCLUSIONS AND RECOMMENDATIONS
  7.1 Summary
  7.2 Conclusions
  7.3 Recommendations
VIII. SUGGESTIONS FOR FUTURE WORK
  8.1 Real-Time Applications
  8.2 End-to-End Performance Modeling
REFERENCES

TABLE OF FIGURES

Figure 2.1 - A Typical Structure of a Token Ring
Figure 3.1 - Token Format
Figure 3.2 - Frame Format
Figure 3.3 - Access Control field, Ending Delimiter field and Frame Status field
Figure 4.1 - A Model of Polling Systems
Figure 5.1 - Model Segment for Limited-to-one Service Discipline with Multiple-Token Operation and Regenerative Method
Figure 5.2 - Model Segment for Gated Service Discipline with Multiple-Token Operation and Regenerative Method
Figure 5.3 - Model Segment for Exhaustive Service Discipline with Multiple-Token Operation and Regenerative Method
Figures 5.4a-5.4c - Model Segments for Single-Token Operation with Limited-to-one Service Discipline and Regenerative Method
Figures 5.5a-5.5c - Model Segments for Single-Packet Operation with Limited-to-one Service Discipline and Regenerative Method
Figures 5.6a-5.6b - Model Segments for the Avalanche Simulation with Multiple-Token Operation, Limited-to-one Service Discipline and Regenerative Method
Figure 5.7 - Model Segment for the Timer Model of the Avalanche Simulation
Figure 5.8 - Model Segment for the Turbo Mode
Figure 6.1 - Performance Comparison of Case 1A for 20 Stations
Figure 6.2 - Performance Comparison of Case 1A for 100 Stations
Figure 6.3 - Performance Comparison of Case 1A for 200 Stations
Figure 6.4 - Performance Comparison of Case 1A for 1000 Stations
Figure 6.5 - Performance Improvement of Full Latency Ordinary Service to Dual Latency Ordinary Service for Case 1A
Figure 6.6 - Performance Improvement of Full Latency Exhaustive Service to Dual Latency Ordinary Service for Case 1A
Figure 6.7 - Performance Improvement of Full Latency Exhaustive Service to Dual Latency Ordinary Service for Case 1B
Figure 6.8 - Performance Improvement of Full Latency to Dual Latency for Case 2, Different Mean Message Lengths
Figure 6.9 - Performance Comparison of Full Latency with Different Service Disciplines to the Dual Latency for Case 2A
Figure 6.10 - Performance Comparison of Full Latency with Different Service Disciplines to the Dual Latency for Case 2B
Figure 6.11 - Performance Comparison of Full Latency with Different Service Disciplines to the Dual Latency for Case 2C
Figure 6.12 - Performance Comparison of Full Latency with Different Service Disciplines to the Dual Latency for Case 2D
Figure 6.13 - Performance Comparison of Full Latency with Different Service Disciplines to the Dual Latency for Case 2E
Figure 6.14 - Performance Improvement of Full Latency to the Dual Latency for Different Message Distributions (Case 3)
Figure 6.15 - Performance Comparison of Full Latency with Different Message Distributions for Case 3
Figure 6.16 - Performance Comparison of Dual Latency with Different Message Distributions for Case 3
Figure 6.17 - Performance Comparison of Full Latency with Different Message Lengths for Case 4, 20 Stations
Figure 6.18 - Performance Comparison of Dual Latency with Different Message Lengths for Case 4, 20 Stations
Figure 6.19 - Performance Comparison of Full Latency with Different Message Lengths for Case 4, 100 Stations
Figure 6.20 - Performance Comparison of Dual Latency with Different Message Lengths for Case 4, 100 Stations
Figure 6.21 - Performance Comparison of Full Latency with Different Message Lengths for Case 4, 1000 Stations
Figure 6.22 - Performance Comparison of Dual Latency with Different Message Lengths for Case 4, 1000 Stations
Figure 6.23 - Performance Comparison of Full Latency with Different Service Disciplines to the Dual Latency for Case 4B, 100 Stations
Figure 6.24 - Performance Comparison of Full Latency with Different Service Disciplines to the Dual Latency for Case 4D, 100 Stations
Figure 6.25 - Performance Comparison of Full Latency with Different Service Disciplines to the Dual Latency for Case 4B, 1000 Stations
Figure 6.26 - Performance Comparison of Full Latency with Different Service Disciplines to the Dual Latency for Case 4D, 1000 Stations
Figure 6.27 - Performance Improvement of Full Latency Ordinary Service to the Dual Latency Ordinary Service for Case 4A
Figure 6.28 - Performance Improvement of Full Latency Ordinary Service to the Dual Latency Ordinary Service for Case 4D
Figure 6.29 - Performance Comparison of Full Latency with Different Service Disciplines to the Dual Latency Ordinary Service for Case 5
Figure 6.30 - Performance Comparison of Full Latency with Different Service Disciplines to the Dual Latency Ordinary Service for Case 6
Figure 6.31 - Performance Improvement of Full Latency Ordinary Service to the Dual Latency Ordinary Service for Case 5 and Case 6

ACKNOWLEDGMENTS

I am always indebted to my supervisors Dr. R.H.S. Hardy and Dr. R.W. Donaldson for their consistent encouragement and support throughout the course of my research. Special thanks are given to Dr. H.W. Lee and Dr. J.K. Cavers for their helpful comments and suggestions.
My research colleague, Mr. Ian Radziejewski, has also given me a lot of help and inspiration. Finally, financial support from the Natural Sciences and Engineering Research Council of Canada is gratefully acknowledged.

I. OBJECTIVE AND OUTLINE OF THE THESIS

The purpose of this thesis is to investigate and verify the performance improvement that can be made by the use of the dual latency technique [Hard87], [Hard88a]. The performance of the MAC layer of the OSI model is described by both analytical and simulation models. The physical layer is modeled as a delay caused by the ring medium and the station interface. The analytical model is based upon queueing models for token rings, primarily in the context of polling systems [Taka86]. The existing simulation model was written in GPSS and can be used to simulate the performance of a token passing (token ring or token bus) network. This simulation model enables us to look at situations where predictions from the analytical model may not be accurate.

The organization of the thesis is as follows:

2. Introduction: This chapter describes the background of Local Area Networks (LANs) and token ring LANs. The importance of performance modeling is also briefly mentioned.
3. Implementation: A brief description of the IEEE 802.5 standard and the implementation of the dual latency method is presented.
4. Analytical Model: The purpose of this chapter is to give the assumptions and equations used for the performance analysis of a completely symmetric ring.
5. Simulation Model: Various aspects of the simulation models are described. These include: strategy, the simulation language used, block diagrams, statistical considerations and difficulties encountered.
6. Results: Numerical and simulation results are presented and the conclusions drawn are explained.
7. Conclusions and Recommendations: The conclusions and recommendations that can be drawn from this research are described.
8. Suggestions for Future Work: Real-time performance modeling becomes a natural extension of the thesis. Further, end-to-end performance modeling may be achieved by building up layer by layer.

II. INTRODUCTION

2.1 Local Area Networks

Local Area Networks (LANs) have become increasingly important in recent years. This trend is mainly due to the increasing use of small single-function systems, such as personal computers, business computers and workstations. These small systems generally provide a more accessible and responsive environment for users. They are easier to learn and more reliable than large central time-sharing systems. LANs are designed for interconnecting these systems and for sharing resources and information (data and/or voice) within a small area. There are two more driving forces behind this trend. First, progress in Very Large Scale Integration (VLSI) technology makes the implementation of a low cost Network Interface Unit (NIU), or network adapter, possible. Second, low cost, high speed transmission media, such as optical fiber, are available.
2.2 Token Ring Local Area Networks

2.2.1 History and Developments

The token ring system is probably the oldest ring LAN, first proposed by Farmer and Newhall [Farm69] in 1969 and then referred to as a Newhall ring. Companies such as IBM, Apollo and Sperry seem committed to the token ring. The IBM Zurich Research Laboratory in Ruschlikon, Switzerland, has been doing intensive research on the token ring, and many papers about the token ring system have been published by people working there ([Bux81], [Bux83a], [Bux83b], [Bux84], [Kell83], [Wong84]). The importance of the token ring is particularly evident in the fast development of the IEEE Computer Society Local Network Committee (Project 802.5) and the American National Standards Committee (ANSC) X3T9.5. The IEEE 802.5 token ring can accommodate up to 250 stations with shielded twisted pair at a 4 Mbit/s data rate, and the FDDI token ring up to 1000 stations with 200 km of fiber at a 100 Mbit/s data rate.

Texas Instruments has a joint venture with IBM to develop an NIU for the token ring local network devised by IBM. In addition to the medium access control functions listed in the IEEE 802.5 standard [IEEE802.5], the firmware supports fine tuning of the star-wired ring link, maintenance of growing networks, error logging and reporting, and multi-ring management. The NIU consists of three very large scale integrated circuits and two medium-scale integrated cable interface chips. Advanced Micro Devices Corporation recently announced the SUPERNET family chip set for networks using the ANSI X3T9.5 standard called Fiber Distributed Data Interface (FDDI) [AMD86]. The chip set includes a RAM Buffer Controller (RBC), a Data Path Controller (DPC), a Fiber Optic Ring Media Access Controller (FORMAC), an Encoder/Decoder (ENDEC), and a Transparent Asynchronous Transmitter/Receiver Interface (TAXI). The details of the implementation are not yet available. An introduction to the FDDI standard can be found in [Ross86] and a comparison between the FDDI and IEEE 802.5 standards in [Iyer87].

2.2.2 Basic Operations and Service Disciplines

There are two terms that have to be clarified before proceeding. Station latency is defined as the time delay introduced in passing data through a station, while ring latency is the sum of the delays around the ring. That is, the ring latency is equal to the sum of all the station latencies and propagation delays. Figure 2.1 depicts a typical structure of a token ring. A token is passed sequentially from station to station around the ring. The token contains a flag indicating whether the token is "free" or "busy". A station which has messages to send must wait until a free token passes by, change the status of the token from free to busy, and transmit the data. Physically, a token can be a control bit in the free token or in the header of a message. The transmitting station is responsible for generating a new free token when its transmission is finished. There are three ways a transmitting station can generate a new free token, and the method used can greatly affect the performance. They are: single-packet, single-token and multiple-token operation.
For single-packet operation, a transmitting station waits until all its transmitted bits have travelled around the ring and been removed. Then the station releases a new free token. This operation is the most conservative of the three since it ensures that there is only one transmission at any point in time. Very few existing token ring LANs use this approach because it does not utilize the token efficiently. Nevertheless, it is the simplest to implement.

Figure 2.1 - A Typical Structure of a Token Ring

For single-token operation, a transmitting station only waits until its busy token or header comes back before releasing a new free token. If, however, the message transmission time is longer than the ring latency, the station will get its busy token back before completing data transmission. In this case, a new free token is released right after data transmission.

Multiple-token operation is the most efficient of the three. A transmitting station releases a new free token right after its data transmission. If the message transmission time is shorter than the ring latency, it is possible to have several busy tokens and one free token on the ring at the same time, as the name implies. If the message transmission time is longer than the ring latency, multiple-token operation is the same as single-token operation.
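To make the difference between the three token-release methods concrete, the following sketch (in Python, purely for illustration) estimates roughly how long a station occupies the ring per transmitted message under each operation. The ring latency, header time and message times used are assumed values, and the expressions are simple approximations of the descriptions above rather than exact protocol timings.

```python
# Rough token-holding time per transmitted message (in bit times).
# All values are illustrative assumptions.
RING_LATENCY = 100.0   # sum of station latencies and propagation delays
HEADER_TIME = 24.0     # assumed time to transmit the frame header

def holding_time(transmission_time, operation):
    if operation == "single-packet":
        # wait until every transmitted bit has returned and been removed
        return transmission_time + RING_LATENCY
    if operation == "single-token":
        # wait for the busy token/header to return, or for transmission
        # to finish, whichever takes longer
        return max(transmission_time, RING_LATENCY + HEADER_TIME)
    if operation == "multiple-token":
        # release a new free token right after transmission
        return transmission_time
    raise ValueError(operation)

for t in (50.0, 500.0):
    times = {op: holding_time(t, op)
             for op in ("single-packet", "single-token", "multiple-token")}
    print(f"transmission time {t:>5}: {times}")
```

With short messages (transmission time below the ring latency) the three operations differ markedly; with long messages, single-token and multiple-token operation coincide, as noted above.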
SERVICE DISCIPLINES

There are three main service disciplines, which differ in the number of messages that may be served at a queue after a free token is captured: the exhaustive, gated and limited service disciplines. The different disciplines are treated in [Hofr86]. For exhaustive service, the server continues to serve the station until the queue is empty. For gated service, the server serves only those messages which are queued at the polling instant. For limited-to-m service, the server serves at most m messages; when m is one, it is called limited-to-one service or ordinary service.

2.3 Performance Measure

The mean message response time is the single most important performance measure in most computer communications networks [Klei76]. The message response time is defined as the time between the start of a request to transmit and the end of its transmission. The usual performance measure used is the mean message waiting time, which is the time between the start of a request to transmit and the beginning of its transmission. There is a simple relationship between the two: the mean message response time is equal to the sum of the mean message waiting time and the mean service time. Throughout the thesis, the mean message waiting time is used as the measure.

2.4 Importance of Performance Modeling

The significance of performance modeling is stressed from the management point of view. Modeling may be involved in the design and development phase of a new system or in the configuration and tuning of an existing system. Results from performance modeling should be able to provide "good" recommendations to the designers. A wrong recommendation in design and development will usually affect the organization more than one in configuration and tuning.

There are several ways to carry out performance modeling. The simplest one is to measure the performance directly under a certain system loading for each alternative. Measurement is essential in performance modeling because it deals with the real system, but there are two drawbacks: it may not be possible or feasible to build alternatives quickly, and it is usually very costly. Therefore, this research project would, eventually, use measurement to validate the model. The research team has already designed the chip set for IEEE 802.5 and some of the chips are ready to be tested. The second approach is to apply probability and queueing theory to model performance. The main advantage is that it requires less computational effort than a simulation. The model, however, has to make some mathematically tractable assumptions which may not be applicable to the system. For this research, exact analytical solutions for ordinary service with asymmetric traffic are not available, so the third approach is stressed more. The third approach is to apply discrete event simulation. Simulation is a powerful and flexible approach, and the details of the model can be controlled more readily. Nevertheless, it usually requires more computational effort and it is more difficult to prove correctness. It is the controllability of the details of the model that is important for this research, since the performance can be built up layer by layer, which is very important in the design and development phase of the proposed new system.

III. IMPLEMENTATION

In order to test the performance of a dual latency station with realistic hardware parameters, the IEEE 802.5 token ring standard [IEEE802.5] was used as the basis for an implementation of a network interface unit utilizing the dual latency method [Radz88]. The state machine description of the station was converted to a logic description, and additional logic was included to implement the low latency function. The purpose of this implementation is to verify that a dual latency station is feasible to implement while maintaining compatibility with an accepted protocol standard. Furthermore, the order of the latency improvement obtainable using standard integrated circuit technology could be determined.

The implementation is based on the IEEE 802.5 token ring standard rather than the ANSI X3T9.5 standard. The reasons for not implementing the ANSI X3T9.5 standard [ANSI87] are twofold. Firstly, the IEEE 802.5 standard is mature and well documented while the X3T9.5 standard is still evolving. Secondly, testing of the network interface units is easier and cheaper for the IEEE 802.5 standard than for the X3T9.5 standard.
The IEEE 802.5 network interface unit can be tested even with twisted-pair telephone wire, while the FDDI unit has to be tested with optical fibre. Further, the data rate of the IEEE 802.5 standard is only 4 Mbps while that of the X3T9.5 is 100 Mbps. An IEEE 802.5 network interface unit was designed to operate with a data rate of up to 10 Mbps. The logic design was done by my colleague, Mr. Ian Radziejewski, using PCAD, an IBM PC based CAD system, and the design was simulated using the logic simulation tool in PCAD. For details of the implementation, one should refer to [Radz88]. The QuickChip facility at Simon Fraser University was used to produce a gate array implementation of the design. Section 3.1 gives a general description of the IEEE 802.5 standard and section 3.2 gives an overview of the dual latency design and a description of the design blocks.

3.1 The IEEE 802.5 Standard

An outline of the IEEE 802.5 token passing ring medium access control method is described here for reference; full details can be found in [IEEE802.5]. A token ring consists of stations connected serially, and information is transferred bit by bit from one station to the next. The station whose address matches the destination address copies the information as it passes through the station. The information is finally removed from the ring by the sending station. The right to transmit is controlled by a token being passed around the ring. Any station may capture the token by changing the token into a start-of-frame sequence. The maximum amount of time that a station can use the ring before releasing a free token is controlled by a token holding timer set during ring initialization. Eight levels of priority are available and the allocation of priority is by mutual agreement. Error detection and recovery utilize a monitor function which supervises proper token operation and performs recovery in case of errors. Only one monitor is active at any time on a ring. The selection of the monitor is totally transparent to the users and is determined by the internal addresses of the stations. There are two basic formats used: tokens and frames. The following figures show their field contents, with the left-most bit transmitted first.
TOKEN FORMAT

  SD | AC | ED

  SD = Starting Delimiter (1 octet)
  AC = Access Control (1 octet)
  ED = Ending Delimiter (1 octet)

Figure 3.1 - Token Format

FRAME FORMAT

  SD | AC | FC | DA | SA | INFO | FCS | ED | FS

  SD   = Starting Delimiter (1 octet)
  AC   = Access Control (1 octet)
  FC   = Frame Control (1 octet)
  DA   = Destination Address (2 or 6 octets)
  SA   = Source Address (2 or 6 octets)
  INFO = Information (0 or more octets)
  FCS  = Frame Check Sequence (4 octets)
  ED   = Ending Delimiter (1 octet)
  FS   = Frame Status (1 octet)

Figure 3.2 - Frame Format

ACCESS CONTROL FIELD (AC)

  PPP T M RRR

  PPP = priority bits
  T   = token bit
  M   = monitor bit
  RRR = reservation bits

ENDING DELIMITER (ED)

  JK1JK1 I E

  JK1JK1 = special pattern for ending delimiter recognition
  I      = intermediate frame bit
  E      = error-detected bit

FRAME STATUS FIELD (FS)

  A C r r A C r r

  A = address-recognized bits
  C = frame-copied bits
  r = reserved bits

Figure 3.3 - Access Control field, Ending Delimiter field and Frame Status field

Both the token format and the frame format contain the Starting Delimiter, the Access Control field and the Ending Delimiter. The Access Control field has eight bits in total, with 3 bits used for priority, 1 bit for the token, 1 bit for the monitor function and 3 bits for reservation of priority. The pattern of a free token is shown in the Token Format, while the pattern of a message is shown in the Frame Format. The token can be converted into a frame by simply flipping the token bit T in the Access Control field.

3.2 The Dual Latency Method

There are several conditions that must be met before a station can enter the low-latency state. Firstly, the station must not be sending a frame; otherwise, the data source is the interface itself and not the ring. Secondly, a station which has queued message(s) to send must decode the ring input in order to flip the token bit (T) of a free token in the Access Control (AC) field in time. In the low-latency mode, a station acts simply as a repeater most of the time, i.e. it essentially repeats the data unaltered. Still, the station has to check whether the Destination Address (DA) matches its own address in order to copy data addressed to it. Moreover, the station has to calculate the CRC (cyclic redundancy check) to detect and report any transmission errors. The CRC circuit requires the Frame Check Sequence field for error detection, so the interface unit has a time period of 7 bits between the FCS and the error bit E in the Ending Delimiter (ED) field to decide whether the E bit should be set.

The complete network interface unit was implemented using logic elements with timing parameters consistent with the specifications of a commercial 3 micron CMOS gate array process. The entire circuit was simulated using a logic simulator at 10 Mbps. The results show that the full latency is 3.6 bits (360 ns), whereas the latency is reduced to 0.6 bits (60 ns) in low latency mode. In this case, a factor of 6 reduction in station latency was observed.
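As a rough illustration of what these figures mean at the ring level, the sketch below computes the total ring latency for a ring in which every station exhibits the full latency versus one in which idle stations sit in the low-latency state. Only the 3.6-bit and 0.6-bit station latencies come from the logic simulation above; the station count, data rate and per-hop propagation delay are assumed for illustration.

```python
# Sketch: ring latency with full latency vs. low latency stations.
# Assumed parameters (not from the implementation): 50 stations at 10 Mbit/s,
# 100 m of cable per hop at 5 ns/m propagation delay.
DATA_RATE = 10e6            # bits per second
BIT_TIME = 1.0 / DATA_RATE  # seconds per bit (3.6 bits -> 360 ns here)

FULL_LATENCY_BITS = 3.6     # from the gate array logic simulation
LOW_LATENCY_BITS = 0.6

N_STATIONS = 50
PROP_DELAY_PER_HOP = 100 * 5e-9   # seconds per hop (assumed)

def ring_latency(station_latency_bits):
    """Ring latency = sum of station latencies plus propagation delays."""
    station_delay = station_latency_bits * BIT_TIME
    return N_STATIONS * (station_delay + PROP_DELAY_PER_HOP)

full = ring_latency(FULL_LATENCY_BITS)
low = ring_latency(LOW_LATENCY_BITS)
print(f"ring latency, all stations at full latency: {full * 1e6:.1f} us")
print(f"ring latency, idle stations in low latency: {low * 1e6:.1f} us")
```

Because only a station with a queued message needs to run at full latency, the switchover time seen by a circulating free token on a lightly loaded dual latency ring approaches the lower of the two figures.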
IV. ANALYTICAL MODEL

A number of papers and books have been published on the analysis of polling systems, and a token ring is one type of polling system. A polling system consists of multiple queues served by a single server in a cyclic order. The most extensive literature appears to be from Takagi [Taka86], who has summarized a number of important papers and collected the results in one book. His work would, hopefully, unify the notations and terminology used for the analysis of polling systems. Other good work on this topic can also be found in [Klei76], [Schw77], [Schw87], [Hamm86], [Stal87] and [Bert87]. Papers of interest include [Wats84], [Ferg85], [Ferg87], [Ever86], [Pang87], [Boxm86], [Boxm87], [Groe87], [Taka85] and [Taka87]. Although general exact solutions for token rings with ordinary service are not available, there is one special case of token rings that is well analyzed: the case of identical stations. Figure 4.1 depicts the polling system model used.

4.1 Assumptions

The assumptions are listed as follows:

* Poisson arrivals
* Infinite buffer size
* FCFS (First Come First Served) for each message in a queue
* Multiple-token operation
* Stable system (see section 4.3)

4.2 Notations

There are N queues in the system, indexed by i in cyclic order (i = 1, 2, ..., N). A Poisson arrival of messages at rate λᵢ is assumed for queue i. The first and second moments of the service time at queue i are bᵢ and bᵢ⁽²⁾ respectively. The time it takes a bit to travel from one station to the next is called the switchover time (sometimes called the walk time or reply interval). The switchover time is equal to the station latency plus the propagation delay. The mean and variance of the switchover time for queue i are denoted by rᵢ and δᵢ² respectively. The dual latency method tries to minimize the station latency, while the propagation delay is fixed. We define the total utilization ρ₀ and the total ring latency R as follows:

ρ₀ = Σ_{i=1}^{N} ρᵢ = Σ_{i=1}^{N} λᵢ bᵢ   (4.1)

R = Σ_{i=1}^{N} rᵢ   (4.2)

Figure 4.1 - A Model of Polling Systems

4.3 Stability Requirements

The system is stable in the sense that the mean queue length at every queue is finite, which implies that the mean polling cycle time is also finite. The necessary stability conditions are:

* Exhaustive and gated:  ρ₀ < 1   (4.3)
* Ordinary (limited-to-one):  ρ₀ + λᵢ R < 1 for all i   (4.4)

In an exhaustive or gated service system, if there is one heavily loaded station, the whole system will be overloaded. In an ordinary (limited-to-one) system, however, one heavily loaded station will dramatically affect the mean waiting time of that station rather than that of the whole system. This localization effect provides fairness among stations.
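A direct way to read conditions (4.3) and (4.4) is as a small computation over the per-queue parameters. The sketch below checks both conditions for one made-up parameter set; all values are illustrative only.

```python
# Sketch of the stability conditions (4.3) and (4.4).
# Arrival rates, service times and switchover times are illustrative.
lam = [0.0002] * 20    # arrival rate lambda_i (messages per bit time)
b   = [200.0] * 20     # mean service time b_i (bit times per message)
r   = [4.0] * 20       # mean switchover time r_i (bit times)

rho0 = sum(l * s for l, s in zip(lam, b))   # total utilization, eq. (4.1)
R    = sum(r)                               # total ring latency, eq. (4.2)

stable_exhaustive_gated = rho0 < 1                           # eq. (4.3)
stable_limited_to_one = all(rho0 + l * R < 1 for l in lam)   # eq. (4.4)

print(f"rho0 = {rho0:.3f}, R = {R:.1f}")
print("exhaustive/gated stable:", stable_exhaustive_gated)
print("limited-to-one stable:  ", stable_limited_to_one)
```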
Derivations of the expressions shown below can be found i n ei t h e r j[Taka86] or SECTION 4.3 S t a b i l i t y Requirements 21 [Taka85]. There are three service d i s c i p l i n e s that w i l l be considered: limited-to-one (ordinary) service, gated service and exhaustive service. The mean waiting time for l i m i t -ed-to-one service i s given by: «52 A / [ A . 6 ( 2 ) + r ( l + A . b ) + A.6 2 ] EW\ Hmited-to-one 2[ 1 - A/A. ( T + fc> ) ] ^ ^ where E[w] i s the mean waiting time of messages queued for transmission at a stat i o n , r and 6 2 are the mean and variance of the reply i n t e r v a l (the sum of st a t i o n latency and i n t e r s t a t i o n propagation delay), N i s the number of stations connected to the network, and b and b l z l are the f i r s t and second moments of the message service time respectively. The corresponding r e s u l t for gated and exhaustive service i s : 6 2 A / [ A . 6 ( 2 ) + r ( l ±A.b)] 2 r + 2 ( 1 -Nkb) E[^3 I gated(+). exhaustive(-) o _+ o r i A / I U N ( 4 . 6 ) where the '±' term takes the value •+• for gated service and '-1 f o r exhaustive service. SECTION 4 . 4 Analysis of a Completely Symmetric Ring 22 Given equal reply i n t e r v a l and service time s t a t i s t i c s , the following inequality holds for a network of /v stations, operating under the three types of service d i s c i p l i n e s : £ " [ > ] I l i m i t e d ^ | g a t e d > E[w] | e x h a u s t i v e (4.7) 4.5 Analysis of a General Ring For a general ri n g , numerical algorithms developed by Ferguson and Aminetzah [Ferg85] and Watson [Wats84] are ava i l a b l e to enable the c a l c u l a t i o n of mean waiting times i n a gated or exhaustive token r i n g system. In an ordinary service system, the exact expression to calculate the mean waiting time i s very d i f f i c u l t and i s not yet known at t h i s moment. Only the two-queue case f o r a general ordinary service r i n g system case has been solved e x p l i c i t l y [Boxm87], requiring the soluti o n of a Riemann-Hilbert boundary value problem of mathematical physics [Groe87]. Approximation techniques [Groe87] [Boxm86] [Ever86] are avail a b l e , but they are not incorporated because t h e i r accuracy i s not guaranteed, e s p e c i a l l y f o r a comparison between f u l l latency and dual latency system. For gated and exhaustive service system, Watson's algorithm [Wats84] can be expressed as follows: SECTION 4.4 Analysis of a Completely Symmetric Ring 23 1 1 2p?(l-p,.),4l ' ' 2{\-Pj) 2 ( 1 + 2Rp2{\-Pi)fr '-' ' 2 K ( l - P > ) L J U a t e d 2p? hk(bi Ai 2 ( l - p ) AT (;) 2R\p2 i- 1 where > - l ; - l y-1 _ 2 . _ 2 A 0 : = °N n - 0 and where the y y J t n are defined by i *"1 y j,k,n I cxh P m + j - 1 y / , m , n -m - 1 m'k* 1 SECTION 4.5 Analysis of a General Ring y j,k , n I gated P m* j - 1 V , m ,n - 1 m- 1 + I Pm.j-tyj.n.n ( 4 . 1 1 ) 1 f o r fc = 1 N y y . i . - i = 1 y ^ . , , - 0 for m = 2 , . . . . 7 V yy.*,«-y> . w*.n f o r n = o , i , 2 . . . 4.6 Analysis of a Dual Latency Ring The evaluation of p o l l i n g system performance under conditions of dependent s t a t i s t i c a l d i s t r i b u t i o n s i s d i f f i c u l t i n general. Ferguson [Ferg86] has developed bounds and approxi-mations to the mean waiting time for a token r i n g with st a t i o n dependent overheads and exhaustive service. For the case of limited-to-one service, no such approximate solutions are yet ava i l a b l e . 
However, i n the case of a p o l l i n g system having a reply i n t e r v a l with two values, dependent on the state of the SECTION 4.5 Analysis of a General Ring 25 message queue, a r e l a t i v e l y straightforward approach i s possible. We could approximate the soluti o n by f i r s t assigning each s t a t i o n a fixed reply i n t e r v a l , equal to the sum of the low st a t i o n latency value and the i n t e r s t a t i o n propagation delay. We then add the addit i o n a l latency, the difference between the f u l l and low latency values, to the message service time. This approach i s v a l i d because each s t a t i o n operates with f u l l latency only when i t has a message to transmit. This approach i s based on the assumption that the latency at a s t a t i o n with a waiting message can be expressed as the sum of two independent random var i a b l e s , one with the same d i s t r i b u t i o n as the latency f o r a st a t i o n with no waiting message and one representing the addit i o n a l delay. For t h i s case, we apply Takagi's r e s u l t [Taka86] by adding the add i t i o n a l latency component to the message service time and by s e t t i n g the switchover time equal to the low latency. That i s , 6 Dual latency Full latency b ( 2 ) Dual latency = b( 2 ) + 2bz1r + (z1r) 2 Dual latency = r | Low latency w h e r e Ar = r | Full latency Low latency ( 4 . 1 2 ) SECTION 4.6 Analysis of a Dual Latency Ring 26 In order to investigate the advantage of the dual r i n g latency-further, I make use of the equation 4.5 from Takagi [Taka85] for the mean waiting time of a completely symmetric r i n g with ordinary service and constant switchover time. The equation 4.12 i s substituted into t h i s equation and y i e l d : _ A/[A .O D u a ( latency + r Dual l a t e n c y ! 1 + ^ ^ Dual latency ) ] ^ L^J I Dual , l imi ted- to-one 2 [ l A/A.(r Dual latency + t*Dual l a t e n c y ) ] N[kb(2) + r( 1 + p ) ] - Z l r ( l - p - A . r ) 2 [1 - A / ( p + kr)] ( 4 . 1 3 ) The equation above gives the mean waiting time as a function pf s t a t i o n u t i l i z a t i o n , P, f i r s t and second moment of the message length, b and t>'2', switchover time, r and number of st a t i o n N. I t assumes limited-to-one service and a completely symmetric r i n g . The r a t i o then becomes: *W] IF„ . . . A,b<2> + r ( l + p ) ^ ^[^]louai A .b ( 2 ) + r ( l + p ) - d r ( l - p - A . r ) From the equations above, there are observations that can be made. F i r s t l y the r a t i o i s always greater than or equal to one. Secondly, the r a t i o increase as the Ar increases. From SECTION 4.6 Analysis of a Dual Latency Ring 27 the problem d e f i n i t i o n , Ar<r, that leads to an upper l i m i t on the r e s u l t of the r a t i o ( i . e . , Ar = o ) . . Thirdly, the r a t i o decreases as the variance of the message length increases. That makes constant message length more a t t r a c t i v e to the dual latency method. SECTION 4.6 Analysis of a Dual Latency Ring 28 V. SIMULATION MODEL In t h i s section, the simulation aspects of the performance modeling are described. Computer simulation i s a very powerful t o o l and can be a r b i t r a r i l y d etailed. The simulation software developed can be used i n general cases. 5.1 Why Simulate ? There are two general reasons f o r using a computer simulation model. F i r s t l y , an exact mathematical analysis f o r a token r i n g with limited-to-one service i s not yet possible. 
V. SIMULATION MODEL

In this section, the simulation aspects of the performance modeling are described. Computer simulation is a very powerful tool and can be made arbitrarily detailed. The simulation software developed can be used in general cases.

5.1 Why Simulate?

There are two general reasons for using a computer simulation model. Firstly, an exact mathematical analysis for a token ring with limited-to-one service is not yet possible. Boxma and Groenendijk [Boxm87] recently solved a two-queue polling system with limited-to-one service discipline explicitly, which already involved the solution of a very difficult problem in mathematical physics; for a polling system with more than two stations, the solution would be much more difficult. Approximation techniques [Groe87] [Boxm86] [Ever86] are available for the limited-to-one service discipline, but since their accuracy is not guaranteed, it may be inappropriate to compare results with different parameters, as in the case of full latency and dual latency, when the error in one case may be much higher or lower than in the other.

Secondly, mathematical analysis sometimes rests on assumptions that are not valid for real world situations. For instance, assumptions of multiple-token operation, symmetric traffic, Poisson arrivals, etc. may not be good approximations to real world situations. In fact, the IEEE 802.5 standard uses single-token operation rather than multiple-token operation. Small changes in the operation may have a dramatic impact on a mathematical model, whereas a simulation program can be modified relatively easily. Moreover, most mathematical models deal only with steady state situations; simulation models can be used to examine transient behavior as well as steady state behavior. Good texts on computer simulation are [Shan75], [Gord78], [Saue81], [Lave83] and [Mac85].

5.2 Drawbacks of Using Simulation

There are disadvantages to using computer simulation modeling. Firstly, a simulation potentially contains programming errors, like any software program, and it is very difficult to prove the correctness of any reasonably sized program. In addition, the time and cost of simulation software development may be high, and the computational expense of simulation is usually greater than that of mathematical analysis.

5.3 Language

Two discrete event simulation languages were considered for use: GPSS (General Purpose Simulation System) and SIMSCRIPT. GPSS was chosen because of its compact structure, which allows minimal changes in source code when simulating different system parameters. The simulation program was originally developed in GPSS/PC on an IBM PC. The GPSS/PC compiler was very user friendly and had a lot of visual aids for debugging programs, but it later proved too slow to run the simulation program, so I switched to GPSS/H on the MTS operating system. The program running on MTS is a few hundred times faster than the PC version. Furthermore, the GPSS/H compiler has additional facilities and block statements that reduce the source code required, and it is upward compatible with GPSS V. Descriptions of the GPSS language can be found in [Schr74] and [Gord75]. A pre-processor was developed in PASCAL to allow convenient entry of different system parameters. The pre-processor automatically generates source code for the GPSS/H compiler.
The system parameters include the service discipline (ordinary, gated or exhaustive), the token transmission operation (multiple-token, single-token or single-packet), the latency type (full latency or dual latency), the service time distribution (exponential, constant or a mixture of both), the arrival distribution (exponential or constant), and the running method (regenerative method or independent replications of runs; refer to section 5.5.2).

5.4 Models

This section describes a few important model segments used in the simulation models. Since GPSS was used to implement the simulation, a basic understanding of the language is required. The block diagrams of the model segments are shown in Figures 5.1 to 5.8. The standard GPSS block notation and the concise meaning of each block can be found in [Gord75] and [Schr74]. The models were implemented with the GPSS/H compiler by Wolverine Software Corporation (refer to [GPSS/H]). The GPSS/H compiler is faster and easier to use than the GPSS/360 and GPSS V compilers by IBM; for the models I ran, the GPSS/H compiler was about 6 to 7 times faster than GPSS/360. Further, the GPSS/H compiler provides convenient features such as free input formats and interactive debugging.

The models described cover the limited-to-one, gated and exhaustive service disciplines; the multiple-token, single-token and single-packet operations; the regenerative method and independent replications of runs; and the avalanche situation simulation and the turbo mode. For each type of token operation, the three service disciplines are possible, and the avalanche simulation or turbo mode can be used on top of the models. For illustration purposes, the figures show the models with the regenerative method; the models with independent replications of runs are very similar. The main difference is that for independent replications of runs, the data is collected over a number of runs determined by the user, and the user then has to look at the output and calculate the statistics manually. As will be shown in section 5.5.1, the calculation is fairly simple.

The models used for the different service disciplines are explained first, followed by the models used for the different token operations, and finally the avalanche simulation model and the turbo model. Although the block diagrams show only the most important model segments, they are enough to convey the basic operations without too much emphasis on programming details.

Figures 5.1 to 5.3 depict the models used for the limited-to-one, gated and exhaustive service disciplines respectively. Each station is represented by a model segment; the model segment shown is just for one station, and the dashed rectangle represents the portion for each station. A circle in the block diagrams represents a block location. The first block statement for each station is a GENERATE block to generate its own traffic based on some distribution. The software supports two interarrival distributions, exponential and constant, and it is not difficult to enter other distributions if necessary.
After a message is generated, it goes to a QUEUE block to start gathering waiting time statistics. The message (or transaction) is then put onto the user chain of that station, which makes the message scan-inactive and saves computing time dramatically. A token circulates around the ring in cyclic order. When the token finds a message waiting at a station, it releases one message (transaction) from the user chain of that station, and the token is then destroyed by the TERMINATE block. The message released from the user chain then seizes the facility (single server) named "TOKEN". The message departs the queue and the statistics are recorded. The server is used for an amount of time that depends on the random number generator and the service time distribution; the software supports three service distributions (exponential, constant and a combination of the two), and other distributions can be entered easily. Finally the server is released in the RELEASE block. For a dual latency station, the time difference between the full latency and the low latency has to be added as a delay. For the regenerative method, the model tests whether the whole system is empty; if it is, the statistics are recorded as a regenerative cycle before the system is reset. For independent replications of runs, the statistics collected for the first M messages, where M is specified by the user, are discarded and the data collected afterwards is used as useful data.

In a gated service system, at the instant the token detects that the station has message(s) to send, it stores the number of messages in the queue in a variable named "USERQ". Every time the server serves a message, "USERQ" is decreased by one, and the server continues to serve until "USERQ" reaches zero. In an exhaustive service system, the server continues to serve until the queue of the station becomes empty. Thus, one would expect a longer running time for gated service than for exhaustive service, everything else being the same, because more block statements are involved in running the program for gated service.

Figure 5.1 - Model Segment for Limited-to-one Service Discipline with Multiple-Token Operation and Regenerative Method.
Figure 5.2 - Model Segment for Gated Service Discipline with Multiple-Token Operation and Regenerative Method.
Figure 5.3 - Model Segment for Exhaustive Service Discipline with Multiple-Token Operation and Regenerative Method.
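For readers without access to GPSS, a much simplified analogue of the limited-to-one, multiple-token model can be written in ordinary Python. The sketch below is not a translation of the GPSS model segments: it assumes Poisson arrivals, constant message lengths and constant switchover times, and it represents the dual latency method only by charging the full-minus-low latency difference to the station that transmits, as in the approximation of section 4.6. All parameter values are assumptions.

```python
import random
from collections import deque

# Simplified limited-to-one token ring with multiple-token operation.
# The token visits stations cyclically and serves at most one message
# per visit; times are in bit times and all numbers are illustrative.
N = 20                 # stations
LAM = 0.00003          # arrival rate per station (messages per bit time)
SERVICE = 1000.0       # constant message transmission time
PROP = 1.0             # propagation delay per hop
FULL_LAT, LOW_LAT = 3.6, 0.6
SIM_TIME = 2_000_000.0

def simulate(dual_latency, seed=1):
    rng = random.Random(seed)
    queues = [deque() for _ in range(N)]   # arrival times of waiting messages
    next_arrival = [rng.expovariate(LAM) for _ in range(N)]
    idle_latency = LOW_LAT if dual_latency else FULL_LAT
    clock, station, waits = 0.0, 0, []
    while clock < SIM_TIME:
        # admit every arrival that occurred before the token got here
        for i in range(N):
            while next_arrival[i] <= clock:
                queues[i].append(next_arrival[i])
                next_arrival[i] += rng.expovariate(LAM)
        if queues[station]:
            arrived = queues[station].popleft()
            waits.append(clock - arrived)           # waiting time statistic
            extra = (FULL_LAT - LOW_LAT) if dual_latency else 0.0
            clock += SERVICE + extra                # transmit, release token
        clock += idle_latency + PROP                # pass token to next station
        station = (station + 1) % N
    return sum(waits) / len(waits)

print("mean wait, full latency ring:", round(simulate(False), 1))
print("mean wait, dual latency ring:", round(simulate(True), 1))
```

Even this crude model reproduces the qualitative effect of interest: at the same offered load, the dual latency ring shows a noticeably lower mean waiting time than the full latency ring.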
NOT USED t LOW L A T E N C Y 1 i+i • v ADV r ANCE UNLINK © F u l l l a t e n c y minus low l a t e n c y I F SYSTEM NOT EMPTY >-S T A T I I T I C S ( T E R M I N A T E ) STATION 0 Figure 5.3 - Model Segment f o r Exhaustive Service D i s c i p l i n e with Multiple-Token Operation and Regenerative Method. SECTION 5.4 Models 38 The multiple-token operation with d i f f e r e n t service d i s -c i p l i n e s i s j u s t described. For the cases of the single-token and the single-packet operations, the si t u a t i o n s are more complicated and require more computing e f f o r t . Figures 5.4a, 5.4b and 5.4c show the block diagram of the single-token operation and Figures 5.5a, 5.5b and 5.5c show the block diagram of the single-packet operation. For the purpose of i l l u s t r a t i o n , ordinary service d i s c i p l i n e and the regenerative method are assumed. Using the f u l l latency method, the ring latency i s known. (The r i n g latency i s equal to the sum of a l l s t a t i o n l atencies plus propagation delays.) But the ring latency i s not known beforehand i f the dual latency method i s used. Therefore, using the dual latency method f o r the single-token and the single-packet requires s p e c i a l attention. For the single-token operation, i f the f u l l latency method i s used, the operations of the blocks become very s i m i l a r to the multiple-token. The only difference i s that the service time for the l a s t message w i l l be compared with the sum of the ring latency and the header time. I f the service time i s larger, the message w i l l proceed as usual; otherwise, the time diffe r e n c e between the two has to added as additional delay induced. For ordinary service, the l a s t message served per token i s also the only message served per token. SECTION 5.4 Models 39 I f the dual latency method i s used, the s t a t i o n w i l l generate messages as normal. But a f t e r a message seizes the token and goes through a DEPART block, another transaction has to be generated f o r the dual latency method and i t i s c a l l e d the created transaction. The created transaction i s generated by the SPLIT block. The created transaction i s necessary i n order to check the time i t takes f o r the header to.go back to the same s t a t i o n . Since f o r the dual latency method the ring latency i s not known beforehand, the created transaction i s to be c i r c u l a t e d around the r i n g . The r i n g latency experienced by the created transaction depends on the queues of other stations. A f t e r some time, the created transaction goes back to i t s sending s t a t i o n and checks i f the l o g i c switch "RING1• i s set. I f the l o g i c switch "RING1" i s not set, i t w i l l be held i n the GATE block waiting for the l o g i c switch to be set by the o r i g i n a l transaction. I t the l o g i c switch i s set, the created transaction w i l l proceed to set the l o g i c switch "RING2" before f i n a l l y being destroyed. The o r i g i n a l transaction w i l l go through the normal path to the ADVANCE block and then the RELEASE block. In order to keep the memory f o r the system manageable by the GPSS/H compiler, two l o g i c switches "RING" and "RING2" are used to control the flow of the two transactions. The o r i g i n a l transaction w i l l proceed as normal, but a f t e r the RELEASE block, i t w i l l set SECTION 5.4 Models 40 the l o g i c switch "RING1" and then t e s t i f the created transaction has returned to i t s own s t a t i o n . 
I f the created transaction has returned, the message transaction w i l l proceed; otherwise, i t w i l l wait for the created transaction to return to the LOGIC block to set the l o g i c switch "RING2". The s e t t i n g of the l o g i c switch "RING2" i s c o n t r o l l e d only by the created transaction and the s e t t i n g of the l o g i c switch "RING1" i s c o n t r o l l e d only by the o r i g i n a l transaction. The reason f o r using the l o g i c switches i s to avoid using the TEST block alone, which produces a large amount of computing overhead. The t r i c k y part i s when the st a t i o n uses service d i s c i p l i n e s other than ordinary service. For instance, i f the s t a t i o n uses exhaustive service, i t does not know how many messages that w i l l be served i n advance. That i s the main reason for using the created transaction. Only the o r i g i n a l transaction of the l a s t message fo r a token w i l l open the l o g i c switch "RINGl" and allows i t to check i f a l l the created transactions have returned to the sending s t a t i o n . The operations described above assume that a s t a t i o n can switch from low latency to high latency between messages i n the cases of exhaustive or gated service d i s c i p l i n e s . In terms of implementation, when a s t a t i o n has a message arrived and waits f o r a free token, i t w i l l not be able to switch to high latency i f i t i s repeating SECTION 5.4 Models 41 message b i t s sending by other stations. The s t a t i o n w i l l have to wait u n t i l i t finds the ending delimiter of the message and then switch to high latency. For the single-packet operation, i f the f u l l latency i s used, the operations of the blocks become very s i m i l a r to the multiple-token operation. The main difference i s that the service time of the l a s t message w i l l be increased by the amount equal to the r i n g latency. The r i n g latency i s a constant and can be calculated i n the pre-processor. The block diagram using the dual latency method i s shown i n Figures 5.6a, 5.6b and 5.6c. The message transaction w i l l proceed as i n the case of the single-token operation u n t i l i t reaches the RELEASE block. For the single-packet, the entire message has to be received by the sending s t a t i o n rather than only the header. Therefore, I make use of the t r a n s i t time to ca l c u l a t e the r i n g latency of the l a s t message (as for exhaustive s e r v i c e ) . A f t e r the l a s t message i s served, we save the absolute clock using the MARK block . The t r a n s i t time for each created transaction i s stored i n the va r i a b l e named "CHECK", so that the variable "CHECK" w i l l f i n a l l y store the t r a n s i t time of the l a s t created message a f t e r a l l created transactions are destroyed. In the mean time, the l a s t message may be waiting f o r the l a s t created transaction to f i n i s h . SECTION 5.4 Models 42 Therefore, the additional delay i s equal to the t r a n s i t time of the l a s t created transaction minus the time the l a s t message waiting for a l l created transactions to f i n i s h . SECTION 5.4 Models 43 STATION S I N G L E - T O K E N , L I M I T E D - T O - O N E S E R V I C E , R E G E N E R A T I V E M E T H O D . GENERATE QUEUE NOT USED < -LINK LOU L A T E N C Y <D <3> SEIZE 1 ADV ANCE v UNLINK OKE.M DEPART P I . C Y C L E ( ASSIGN ) SPLIT ADVANCE S E R V I C E T I M E RELEASE T O K E N / STATION * i + 1 Figure 5 . 
Figure 5.4b - Model Segment for Single-Token Operation with Limited-to-one Service Discipline and Regenerative Method.
Figure 5.4c - Model Segment for Single-Token Operation with Limited-to-one Service Discipline and Regenerative Method.
Figure 5.5a - Model Segment for Single-Packet Operation with Limited-to-one Service Discipline and Regenerative Method.
Figure 5.5b - Model Segment for Single-Packet Operation with Limited-to-one Service Discipline and Regenerative Method.
Figure 5.5c - Model Segment for Single-Packet Operation with Limited-to-one Service Discipline and Regenerative Method.

The model segments used for the avalanche simulation are shown in Figures 5.6a and 5.6b, which present the model segment for each station, while Figure 5.7 presents the timer model used in the avalanche simulation. The avalanche model tries to simulate the real-time behavior that occurs when the ring system experiences many message arrivals within a short period. Although the avalanche arrivals may be numerous, the normal arrivals may be fairly few. Hence, there are two things to investigate: when the system finishes serving all the avalanche messages, and when the system returns to its normal operating state. Two measures are proposed: relaxation time and empty time. Relaxation time is defined here as the time it takes the system to finish all the avalanche messages. Empty time is defined as the time it takes the system to empty the queues of all stations. Obviously, the empty time is larger than or equal to the relaxation time.

Two model segments are shown to illustrate the operation of the avalanche model. The first is the model segment used for each station, shown in Figures 5.6a and 5.6b. The second is the timer model, which opens or closes the gate that allows or disallows avalanche messages entering a station; further, it records the relaxation time and the empty time.
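Before turning to the GPSS/H blocks that implement this, the two measures can be illustrated with a toy single-queue sketch in Python. This is not the thesis's GPSS/H model: the FIFO single-server setting, all numerical values, and the choice of the end of the avalanche period as the time origin for both measures are assumptions made only for the illustration.

```python
import random

random.seed(1)

# Toy single-queue FIFO sketch (not the thesis's GPSS/H model) of the two
# avalanche measures.  Parameter values are illustrative assumptions.
BURST_END   = 1.0     # end of the avalanche arrival period
N_AVALANCHE = 200     # avalanche messages, spread uniformly over the burst
NORMAL_RATE = 5.0     # Poisson rate of the normal (background) arrivals
SERVICE     = 0.004   # constant service time per message
HORIZON     = 50.0    # stop generating normal arrivals after this time

arrivals = [(random.uniform(0.0, BURST_END), True) for _ in range(N_AVALANCHE)]
t = 0.0
while True:
    t += random.expovariate(NORMAL_RATE)
    if t > HORIZON:
        break
    arrivals.append((t, False))
arrivals.sort()

# FIFO service: completion time of every message, in arrival order.
done, free = [], 0.0
for arrive, _ in arrivals:
    free = max(arrive, free) + SERVICE
    done.append(free)

# Relaxation time: until the last avalanche message has been served.
last_aval_done = max(d for arr, d in zip(arrivals, done) if arr[1])
relaxation_time = last_aval_done - BURST_END

# Empty time: until the first instant, after that, at which the server goes
# idle (no message waiting or in service).
empty_time = None
for i, d in enumerate(done):
    next_arrival = arrivals[i + 1][0] if i + 1 < len(arrivals) else float("inf")
    if d >= last_aval_done and next_arrival > d:
        empty_time = d - BURST_END
        break

print(f"relaxation time = {relaxation_time:.3f}  empty time = {empty_time:.3f}")
```

The bookkeeping mirrors what the GPSS/H segments do with the variables "KEEP" and "END": remember when the last avalanche message finished transmission, and separately when the system first empties after that.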
In order to distinguish between avalanche messages and normal messages, a group entity is used. All avalanche messages belong to the group named "AVALAN". When the logic switch "LOCK" is reset, avalanche messages are allowed to go to the station; when the logic switch is set, all avalanche messages generated are destroyed. A message proceeds as usual until it seizes the token. The message is then examined to see whether it belongs to the group "AVALAN". If it does, statistics are collected in the tables; otherwise, the statistics collection part is skipped. In order to determine when the last avalanche message finishes being served, the relaxation time of every avalanche message that finishes transmission is stored in the variable "KEEP". Therefore, when the system is empty, the variable "KEEP" holds the relaxation time of the last avalanche message to have finished its transmission.

The timer model starts by storing the time at which the avalanche period ends in the variable "END". The logic switch is reset and, after the avalanche period has passed, the logic switch is set. This causes all avalanche messages generated after the setting to be destroyed immediately without going to the queues of the stations. This model segment also collects the statistics for the relaxation time and the empty time when the system becomes empty.

Figure 5.6a - Model Segment for the Avalanche Simulation with Multiple-Token Operation, Limited-to-one Service Discipline and Regenerative Method.
Figure 5.6b - Model Segment for the Avalanche Simulation with Multiple-Token Operation, Limited-to-one Service Discipline and Regenerative Method.
Figure 5.7 - Model Segment for the Timer Model of the Avalanche Simulation.

The model segment for the turbo mode is shown in Figure 5.8 and is important for saving computing time. Normally, without the turbo mode, the computing cost increases as the system utilization goes down. This is a common problem for discrete-event simulation because every time the simulated time moves forward, the computer treats it as an event, and there is some computing overhead for every event. Hence, for the token ring LAN, the transfer of a free token advances the simulated time and is thus regarded as one event, but such an event contributes nothing to the data we are interested in.
As the channel utilization becomes smaller, a larger portion of computing time is wasted in the transfer of the free token around the ring. In order to alleviate the problem, when the system becomes empty, the token is held in a TEST block. The scanning algorithm of GPSS then schedules the next event, which will be the earliest message arrival. After a message arrival, the free token is put back onto the ring at a random position, giving messages a chance to capture it. It should be noted that the first message to arrive is not guaranteed to capture the token, since another message may arrive later yet seize the token while it travels toward the station of the first message. A BUFFER block has to be added to the model segment for each station. The BUFFER block should be placed right after the QUEUE block and is used to explicitly force a restart of the current events chain scan. Otherwise, after a message is put onto the user chain (as opposed to the current events chain), the compiler will conclude that there are no more active transactions and quit running.

Figure 5.8 - Model Segment for the Turbo Mode.

5.5 Statistical Considerations

Since computer simulation is random in nature, the results may be very different when different sets of random number generator seeds are used. In order to overcome this problem, statistical analysis is frequently used in simulation. In situations where a sequence has a limiting distribution, i.e. the sequence goes through a transient phase which depends on the initial conditions and has an essentially unchanging distribution afterward, the sequence is said to have steady state behavior. The sequence could be the mean waiting time of a system, the total utilization of a system, etc. In contrast, some characteristics of a simulation sequence depend on the initial conditions and on the point at which they occur, either in the sequence or in time; such a sequence has transient state behavior. Examples are the mean waiting time of the 5th customer, the mean waiting time of the 10th customer, the mean absolute time at which the 5th customer finished service, etc. In this section, I will concentrate on the statistical considerations for steady state behavior rather than for transient behavior. Good material on this subject can be found in [Klej74], [Fish79] and [Lave83].

There are common problems associated with the simulation of steady state behavior. The first is that the duration of the transient phase is usually unknown. If the transient phase duration is overestimated, computing time is wasted; if it is underestimated, the data collected are biased because transient data are included. A general technique to determine the duration of a transient is not known.
In practice, an experimenter could estimate the duration by looking at a plot of the sequence and subjectively deciding that a certain period is long enough to terminate the transient phase. After the transient duration is estimated, there remains the problem of analyzing the output data. The difficulty lies in the fact that the output sequence is not independent. The two approaches described below both produce independent blocks of data so that classical statistics can be applied.

5.5.1 Independent Replications of Runs

The most common and easiest to understand approach is to run the simulation using the method of independent replications. In order to make the replications independent of each other, we can change the seeds of the random number generators for each replication, discard the data from the transient phase, and collect the data afterward for each replication.

Suppose we want to estimate the true mean mu and the confidence interval of the estimated mean. Let M be the number of replications, N be the sample size of each run, y_ji be the ith observation in the jth run, and the sample mean of the jth run be Y_j. Therefore, for each run:

$$\bar{Y}_j = \frac{1}{N}\sum_{i=1}^{N} y_{ji}, \qquad j = 1, \ldots, M \tag{5.1}$$

Since the $\bar{Y}_j$, $j = 1, \ldots, M$ are independent, an approximately unbiased estimate of the mean is

$$\hat{\mu} = \frac{1}{M}\sum_{j=1}^{M}\bar{Y}_j \tag{5.2}$$

Hence, traditional statistical analysis can be used to calculate the confidence intervals. The $(1-\alpha)$ confidence interval may be obtained as

$$\hat{\mu} \pm t_{M-1,\,\alpha/2}\,\frac{s}{\sqrt{M}} \tag{5.3}$$

where

$$s^2 = \frac{1}{M-1}\sum_{j=1}^{M}\left(\bar{Y}_j - \hat{\mu}\right)^2 \tag{5.4}$$

and $t_{M-1,\alpha/2}$ is the upper $\alpha/2$ percentile of the Student's t-statistic with M-1 degrees of freedom. If the number of degrees of freedom is over 30, then the standard normal statistic can be used. In general, M should be kept small and N large to minimize the data discarded during the transient state and any residual bias caused by the slow convergence of the estimate.

5.5.2 The Regenerative Method

The regenerative method is based on the idea of independent cycles in a single run. Crane and Iglehart did much of the original research on this topic [Cran74a], [Cran74b], [Cran75a] and [Cran75b], and a more application-oriented text can be found in [Cran77]. The advantages of using the regenerative method are as follows:

1. It does not need to know the duration of the transient phase.
2. It collects useful data right from the beginning; no useful data need be discarded for a transient phase.
3. It produces independent sub-sequences of data in a single run.

Limitations

The regenerative method applies to discrete-event simulations that can be modeled as regenerative processes. The basic regenerative assumption is that there exists a regeneration time, T1, such that the continuation of the process beyond T1 is a probabilistic replica of the whole process starting at time 0. This property implies that T2, T3, ... exist and have the same property as T1. Further, finite regeneration times are assumed. The typical situation which satisfies the regenerative requirement is that when the system hits T1, it can proceed without any knowledge of its past history.
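Before turning to the corresponding confidence-interval procedure for the regenerative method, the replication calculation of equations (5.1)-(5.4) can be illustrated with a short sketch. This is not part of the thesis software: the M/M/1-style single queue standing in for the simulator, the parameter values and the transient cut-off are assumptions made only for the example.

```python
import math
import random
import statistics

# Illustrative sketch of equations (5.1)-(5.4); the queue model and all
# parameter values below are assumed, not taken from the thesis.
M, N, DISCARD = 10, 2000, 500          # replications, kept samples, transient samples
LAM, MU = 0.8, 1.0                     # arrival and service rates of the toy queue

def one_replication(seed):
    rng = random.Random(seed)          # a different seed for each replication
    waits, depart, t = [], 0.0, 0.0
    while len(waits) < DISCARD + N:
        t += rng.expovariate(LAM)               # next arrival instant
        start = max(t, depart)                  # FIFO single server
        waits.append(start - t)                 # waiting time in queue
        depart = start + rng.expovariate(MU)    # departure instant
    return waits[DISCARD:]                      # drop the transient phase

rep_means = [statistics.mean(one_replication(seed)) for seed in range(M)]   # (5.1)
mu_hat = statistics.mean(rep_means)                                         # (5.2)
s = statistics.stdev(rep_means)                                             # (5.4)
t_crit = 2.262                    # upper 2.5% point of Student's t, M-1 = 9 d.o.f.
half_width = t_crit * s / math.sqrt(M)                                      # (5.3)
print(f"mean waiting time = {mu_hat:.3f} +/- {half_width:.3f} (95% CI)")
```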
The procedure for obtaining an approximate 100(1-alpha)% confidence interval for the parameter r is the following:

1. Run the simulation for n regenerative cycles.

2. For each regenerative cycle, compute the sum of the observed data in the jth cycle, $Y_j$, and the number of observations made in the jth cycle, $\nu_j$.

3. After the simulation run, compute the following sample statistics:

$$\bar{Y} = \frac{1}{n}\sum_{j=1}^{n} Y_j, \qquad \bar{\nu} = \frac{1}{n}\sum_{j=1}^{n}\nu_j, \qquad \hat{r} = \frac{\bar{Y}}{\bar{\nu}}$$

$$s^2 = s_{11}^2 - 2\,\hat{r}\,s_{12} + \hat{r}^2 s_{22}^2$$

where $s_{11}^2$ and $s_{22}^2$ are the sample variances of the $Y_j$ and the $\nu_j$ respectively, and $s_{12}$ is their sample covariance.

4. Finally, the confidence interval can be computed as

$$\hat{r} \pm z_{1-\alpha/2}\,\frac{s}{\bar{\nu}\sqrt{n}}$$

where $z_{1-\alpha/2} = \Phi^{-1}(1-\alpha/2)$ and $\Phi$ is the standard normal distribution function.

5.6 Validation

The validity of the simulation models was first inspected by using the interactive debugging facilities in GPSS/H. The program was monitored to determine whether it behaved as expected. The validation was then checked by comparing known results with the simulation results. The two examples of Ferguson's paper [Ferg85] were chosen for comparison, and the simulation results with 90% confidence intervals covered most of the data points of the known theoretical results.

When simulations without known theoretical results were run, there were four things that had to be checked. Firstly, I would observe whether the results appeared unreasonable; if they did, the system parameters would be inspected again very carefully. Secondly, when the utilization went down to 0.1, the mean message waiting time should be close to, but not smaller than, half of the ring latency for Poisson arrivals. Thirdly, the mean message waiting time of the full latency simulations should be greater than that of the dual latency simulations for the same utilization. Finally, when the utilization went up close to the stability limit, the simulation results should show a sharp increase in mean message waiting time; the simulation should then be stopped because the memory requirements and the computing cost would rise dramatically and the simulation might become too expensive to run.

5.7 Features of the Simulation Models

The simulation model consists of a pre-processor, a processor and a post-processor. The pre-processor was written in PASCAL and automatically generates the source code for the GPSS/H processor after the system parameters are entered. If the regenerative method is specified, a FORTRAN post-processor is available for the calculation and output of confidence intervals for the mean message waiting time and other statistical data such as the number of regenerative cycles, the number of messages, etc.

The pre-processor is the heart of the simulation software. It allows the user to enter the parameters for his ring system relatively easily. It also checks whether the input parameters are unrealistic; if they are, it outputs a warning message, but a GPSS/H program will still be generated. This prevents the user from wasting computing money on wrong entries and at the same time gives the user the freedom to run such a system.
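A minimal sketch of the regenerative-method calculation of Section 5.5.2 (the same arithmetic the FORTRAN post-processor carries out) is given below. It is not the thesis software: the single M/M/1-style queue, the parameter values, and the choice of an arrival that finds the system empty as the regeneration point are assumptions made only for this example.

```python
import math
import random

# Sketch of the regenerative-method confidence interval; the queue model and
# all parameter values are assumed for illustration only.
rng = random.Random(7)
LAM, MU, N_CYCLES = 0.8, 1.0, 2000

Y, V = [], []                 # per-cycle sum of waits and number of waits
y = v = 0.0
t = depart = 0.0
while len(Y) < N_CYCLES:
    t += rng.expovariate(LAM)
    if t >= depart and v > 0:         # arrival finds the system empty: new cycle
        Y.append(y); V.append(v)
        y = v = 0.0
    start = max(t, depart)            # FIFO single server
    y += start - t                    # waiting time of this message
    v += 1
    depart = start + rng.expovariate(MU)

n = len(Y)
y_bar = sum(Y) / n
v_bar = sum(V) / n
r_hat = y_bar / v_bar                 # ratio estimator of the mean waiting time
s11 = sum((a - y_bar) ** 2 for a in Y) / (n - 1)
s22 = sum((b - v_bar) ** 2 for b in V) / (n - 1)
s12 = sum((a - y_bar) * (b - v_bar) for a, b in zip(Y, V)) / (n - 1)
s2 = s11 - 2.0 * r_hat * s12 + r_hat ** 2 * s22
half_width = 1.96 * math.sqrt(s2 / n) / v_bar     # 95% CI, standard normal
print(f"mean waiting time ~ {r_hat:.3f} +/- {half_width:.3f}")
```

Note that no transient data are discarded and no transient length has to be guessed, which is precisely the advantage of the regenerative approach listed above.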
The post-processor is a relatively straightforward program for the statistical analysis of the regenerative method. The reason for using FORTRAN is that it can be directly linked with GPSS/H.

It should be noted that the simulation model can be used for either a token ring or a token bus network; of course, the input parameters will be different. For a token ring, the switchover time is just the propagation time from one station to the next plus the station latency. For a token bus, the switchover time becomes the time to transmit an explicit token frame, which is 152 bits in the IEEE 802.4 Token Bus standard [IEEE802.4], plus the propagation delay between the two stations, which may be physically quite far apart. Finally, the response time of the station has to be added.

As the utilization becomes lower, the computing cost rises considerably. The reason is that for discrete-event simulation the computing cost is directly proportional to the number of events initiated within the simulated time period, not to the simulated time itself. At low utilization the transfer of a free token therefore dominates the transfer of messages; each transfer of a free token is an event to the computer, but there is no statistically useful data associated with such an event. In order to remedy this problem, a turbo mode is available for runs at low utilization. In turbo mode, after each message is served, the released free token is suspended until the next message arrives; then the free token is put back into the ring at a random position. The time spent simulating a free token circulating around an empty ring is thus kept to a minimum. The problem with this approach is that in some cases the underlying assumption would be invalid. For instance, if stations have constant arrivals, the capture of a token may depend on the ordering of the stations rather than on the random placement of the token.

There is another type of simulation option available. It is called avalanche simulation and is used for cases in which a number of messages arrive only in an avalanche fashion, i.e., they arrive within a period of time, then disappear, then arrive again, and so on. This kind of avalanche simulation can be thought of as modeling a process control application, such as a nuclear power plant: when the alarm is on and a number of sensors send out messages at roughly the same time, the system is in an overloaded situation. This situation occurs only once in a while, not all the time. The software has already been debugged, but it still needs some modifications. The difficulty in the avalanche simulation is that the confidence intervals of the parameters of interest are too big for a reasonable computing cost.

In summary, the input parameters are listed as follows:
- Service disciplines: exhaustive, gated, or limited-to-one.
- Token transmission types: multiple-token (e.g. FDDI, IEEE 802.4), single-token (e.g. IEEE 802.5) or single-packet.
- Latency types: full latency or dual latency.
- Interarrival time: exponential, constant, or a given distribution function.
- Service time: exponential, constant, or a mix of both.
- Switchover time: constant for both latencies.
- Number of stations: 2 - 100.
- Running methods: the regenerative method or replicated runs.
- Turbo mode available for runs at low utilization.
- Avalanche simulation option.

VI. NUMERICAL AND SIMULATION RESULTS

This section presents numerical and simulation examples in order to discuss some general characteristics of the dual latency method compared with the full latency method. Since there can be so many combinations of the parameters, it was difficult to choose a so-called "typical" LAN to investigate. Thus, I used values for the low latency and the full latency based on the implementation [Radz88]. Section 6.1 describes LANs running at low speed, such as the IEEE 802.5 standard; the values used for the latencies were based on the IEEE 802.5 implementation that runs at a 10 Mbps data rate, for which the station low latency is 0.6 bits and the station full latency is 3.6 bits. Section 6.2 describes LANs running at high speed, such as the FDDI standard; the values used for the latencies were based on the AMD implementation [AMD86] of the FDDI standard that runs at 100 Mbps, for which the station full latency is 62 bits and the station low latency is 24 bits [Hard88b].

Each subsection is further divided into completely symmetric cases and asymmetric cases. For the asymmetric cases, it is assumed that the LAN has 20 stations, 2 of them heavily loaded and 18 of them lightly loaded. This asymmetric loading was studied in [Bux83a], [Bux84], [Ferg85], and [Ever86] and was chosen to represent a class of asymmetric traffic. Each of the two heavily loaded stations contributes 40% of the total utilization, while each of the eighteen lightly loaded stations shares the remaining traffic utilization evenly.

6.1 Low Speed LANs

Highlights of the implementation of the dual latency method were presented in section 3. The station latencies were estimated based on the IEEE 802.5 implementation [Radz88] operating at 10 Mbps. The values for the station low latency and the station full latency are 0.6 and 3.6 bits respectively. The propagation delay is assumed to be 5 microseconds per km of cable. Further, the propagation delay per station is assumed to be 0.9 bits, which is equivalent to about 18 m of cable; the total cable length of the ring is about 360 m. Therefore, the final low latency (the propagation delay per station + the station low latency) is equal to 1.5 bits and the final full latency (the propagation delay per station + the station full latency) is equal to 4.5 bits. Thus, the difference between the final full latency and the final low latency is equal to 3 bits.

6.1.1 Completely Symmetric Cases

The parameters used to run the completely symmetric ring are listed below. Cases 1A and 1B were chosen to demonstrate the effect of the number of stations and the effect of using either a constant or an exponential message length distribution.
The 176-bit mean message length was picked because, as will be seen in section 6.1.2, the mean message length plays an important role in determining how attractive the dual latency method is. The mean message length here is 176 bits as in Case 2C of section 6.1.2, i.e. 128 (1/8 K) exponentially distributed bits + 48 bits of overhead, rather than either completely constant or completely exponential. This value is used because it is large enough not to exaggerate the results yet small enough to show the significance of the dual latency method. For real-time applications, the messages are typically only a few bytes in length [Hutch87]; the message length chosen is well suited to these kinds of applications.

Completely Symmetric Case (Case 1)

    Number of stations:               20, 100, 200, or 1000.
    Constant switchover time (bits):  4.5 for full latency; 1.5 for low latency.
    Mean message length (bits):
        Case 1A:  176 (exponentially distributed) +   0 (constant)
        Case 1B:    0 (exponentially distributed) + 176 (constant)

Figures 6.1 to 6.4 show the performance comparison of the different service disciplines for different numbers of stations. The results were obtained using equations 4.5 and 4.6. Note that the gap between the full latency and dual latency curves becomes greater and greater as the number of stations increases. The 20-station figure also shows the intersection of the dual latency (ordinary service) curve and the full latency exhaustive service curve. This intersection exists for all cases but may occur at a much higher utilization for some of them. The reason is that the mean waiting time of the dual latency ring tends to increase faster than that of exhaustive service at high utilization. This can be explained by the stability criteria mentioned in section 4.3:

Exhaustive and gated service:
$$\rho_0 < 1 \tag{4.3}$$

Ordinary (limited-to-one) service:
$$\rho_0 + \lambda_i R < 1 \quad \text{for all } i \tag{4.4}$$

Therefore, for a completely symmetric ring,

$$\max(\rho_0) = \frac{b}{b + R/N} = \frac{b}{b + r} \tag{6.1}$$

where $\rho_0$ is the total utilization, $\lambda_i$ is the Poisson arrival rate for station i, b is the mean service time, and R and r are the ring latency and station switchover time respectively. Note that the maximum utilization is independent of N. But why does Figure 6.1 exhibit an intersection while Figures 6.2 to 6.4 do not? The explanation is that as the number of stations increases, the chance of a station going into the low latency state becomes higher for the same utilization, so there are greater time savings from using the dual latency; that makes the intersection occur at a higher utilization. Also note that the full latency exhaustive service is always stable for $\rho_0 < 1$.

Figure 6.5 presents the ratio of the mean waiting time of the full latency ring to that of the dual latency ring, both using ordinary service. The first observation is that the ratio is always higher than one. For this case, the dual latency produces a 20% improvement in mean waiting time for 20 stations even at 0.9 utilization.
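As a worked instance of equation (6.1) for this case, take b = 176 bit times and, for the switchover time, the full-latency value r = 4.5 bit times; the latter choice rests on the assumption that near saturation every station has a message queued and is therefore in the full latency state:

$$\max(\rho_0) \;=\; \frac{b}{b+r} \;=\; \frac{176}{176 + 4.5} \;\approx\; 0.975$$

This is the stability limit referred to in the discussion of Figure 6.6 below.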
In Figure 6.6, the ratio of the mean waiting time of the full latency exhaustive service to that of the dual latency ordinary service is shown. Note that the ratio falls below the break-even line at high utilization, as expected. The ratio approaches zero when the dual latency ring becomes unstable; for this case, max(rho_0) is equal to roughly 0.975, which confirms the result obtained above. Similar behavior can be found in Figure 6.7 for case 1B. The difference between case 1A and case 1B is that case 1A uses an exponential distribution for the message length while case 1B uses a constant message length. The performance improvement of case 1A is less than that of case 1B. This is supported by equation 4.14, which gives the ratio of the mean waiting time of the full latency ring to the mean waiting time of the dual latency ring.

Figure 6.1 - Performance Comparison of Case 1A for 20 Stations.
Figure 6.2 - Performance Comparison of Case 1A for 100 Stations.
Figure 6.3 - Performance Comparison of Case 1A for 200 Stations.
Figure 6.4 - Performance Comparison of Case 1A for 1000 Stations.
Figure 6.5 - Performance Improvement of Full Latency Ordinary Service to Dual Latency Ordinary Service for Case 1A.
Figure 6.6 - Performance Improvement of Full Latency Exhaustive Service to Dual Latency Ordinary Service for Case 1A.
Figure 6.7 - Performance Improvement of Full Latency Exhaustive Service to Dual Latency Ordinary Service for Case 1B.

6.1.2 Asymmetric Cases

The parameters chosen for the asymmetric cases are listed below. Cases 2A to 2E were chosen to illustrate the effect of the message length on the dual latency method, while cases 3A to 3E were chosen to illustrate the effect of the message length distribution, from totally exponential gradually to purely constant. The traffic pattern chosen was used as an illustration in many papers [Bux83a], [Bux84], [Ferg85], and [Ever86] and might be of interest to other researchers. They used 1K (exp.) bits + 48 bits overhead as the mean message length.
I ran a few points using this mean message length and concluded that it was too large to show the significance of the dual latency method. Hence, I reduced the exponential part by one half to form case 2A, by one half again to form case 2B, and so on down to case 2E, while keeping the overhead the same (48 bits).

Asymmetric Cases (Case 2 and Case 3)

    Number of stations:               20.
    Traffic pattern:                  rho_1 = rho_8 = 40% of rho_0;  rho_2 = ... = rho_7 = rho_9 = ... = rho_20 (remaining traffic shared evenly).
    Constant switchover time (bits):  4.5 for full latency; 1.5 for low latency.
    Mean message length (bits):
        Case 2A:  512 (exp.) +  48 (constant)
        Case 2B:  256 (exp.) +  48 (constant)
        Case 2C:  128 (exp.) +  48 (constant)
        Case 2D:   64 (exp.) +  48 (constant)
        Case 2E:   32 (exp.) +  48 (constant)
        Case 3A:    0 (exp.) + 176 (constant)
        Case 3B:   48 (exp.) + 128 (constant)
        Case 3C:   88 (exp.) +  88 (constant)
        Case 3D:  128 (exp.) +  48 (constant)
        Case 3E:  176 (exp.) +   0 (constant)

Figure 6.8 shows the performance improvement expected for case 2 with different mean message lengths. Note that for the asymmetric cases only the mean waiting time for station 1 is shown. Stations 1 and 8 are heavily loaded and of primary concern, and the behavior of the mean waiting time for station 1 and station 8 is very similar. From the figure, the first observation is that the improvement increases rapidly as the utilization goes up. Secondly, as the message length increases, the advantage of using the dual latency method decreases. This is due to the fact that long messages have a lower switchover time overhead than short messages.

Figures 6.9 to 6.13 show the performance comparison of full latency with different service disciplines to the dual latency for various mean message lengths. The curves for the exhaustive and gated service disciplines were calculated based on Watson's algorithms and were included to be consistent with case 1. For a mean message of 512 exp. + 48 bits (case 2A), the dual latency is better than full latency with gated service up to 0.8 utilization and better than full latency with exhaustive service up to 0.5. For a message of 128 exp. + 48 bits (case 2C), the dual latency is better than the gated service up to 0.7 and than the exhaustive service up to around 0.5 utilization. When the mean message length decreases to 64 exp. + 48 bits, the dual latency is still better than the gated service up to 0.65 and than the exhaustive service up to 0.5 utilization. Also, the gap between the dual latency and the full latency becomes greater as the mean message length becomes smaller.

Figure 6.8 - Performance Improvement of Full Latency to Dual Latency for Case 2, Different Mean Message Lengths.
Figure 6.9 - Performance Comparison of Full Latency with Different Service Disciplines to the Dual Latency for Case 2A.
Figure 6.10 - Performance Comparison of Full Latency with Different Service Disciplines to the Dual Latency for Case 2B.
Figure 6.11 - Performance Comparison of Full Latency with Different Service Disciplines to the Dual Latency for Case 2C.
Figure 6.12 - Performance Comparison of Full Latency with Different Service Disciplines to the Dual Latency for Case 2D.
Figure 6.13 - Performance Comparison of Full Latency with Different Service Disciplines to the Dual Latency for Case 2E.

Case 3 investigates what happens when the mean message length is dominated by either the constant field or the exponential field, as in practical systems. Figure 6.14 shows that the performance improvement becomes more significant as the constant field makes up more of the mean message length. Figures 6.15 and 6.16 show the performance comparison among cases 3A to 3E for full latency stations and for dual latency stations respectively. The more constant the mean message length, the smaller the mean waiting time. Comparing the two figures, they behave very similarly, except that Figure 6.16 (dual latency) looks like a shifted version of the utilization curves of Figure 6.15 (full latency). In other words, the dual latency delays the onset of the instability of the system compared with the full latency. Real systems such as the IEEE 802.5 and FDDI standards usually have a constant header field, which makes the dual latency method more appealing than purely random message lengths would. For real-time applications, the system often carries many short, constant messages that are time-dependent; that makes the dual latency even more attractive for time-critical applications.

Figure 6.14 - Performance Improvement of Full Latency to Dual Latency for Different Message Distributions (Case 3).
Figure 6.15 - Performance Comparison of Full Latency with Different Message Distributions for Case 3.
Figure 6.16 - Performance Comparison of Dual Latency with Different Message Distributions for Case 3.

6.2 High Speed LANs

High speed LANs have become more important because of the increasing demand for more services and the technological advances in communications. The latencies used in the examples were based on the AMD implementation [AMD86] of the FDDI standard. The station full latency was estimated to be between 62 bits and 45 bits and the low latency was estimated to be 24 bits [Hard88b]. If we assume that the propagation delay for optical fiber is 5 microseconds per km and that the interstation fiber length is 10 m, then the propagation delay per station becomes 50 ns, which is equivalent to 5 bit times. The variation of the full latency value can be 17 bits and well covers the 5-bit propagation delay. The original estimate of the low latency value was 8.6 bits; since the FDDI standard specifies that the minimum station delay be three octets, the 24-bit value has to be used. The 5-bit propagation delay is assumed to contribute part of the increase of the low latency value from 8.6 bits to 24 bits. Therefore, the 5-bit propagation delay is assumed to be included in both the 62-bit full latency and the 24-bit low latency.

6.2.1 Completely Symmetric Cases

The parameters chosen for the completely symmetric case are listed below. The 312-bit overhead was calculated based on the FDDI standard. The 4K-bit exponential field (case 4A) was chosen so that the probability of a message exceeding the maximum message size (35840 bits) specified in the FDDI standard is very small. The 4K value is then reduced by a factor of 4 to become 1K (case 4B) and so on, finally reaching 64 bits (case 4D).

Completely Symmetric Case (Case 4)

    Number of stations:               20, 100, 200, or 1000.
    Constant switchover time (bits):  62 for full latency; 24 for low latency.
    Mean message length (bits):
        Case 4A:   4K (exp.) + 312 (constant)
        Case 4B:   1K (exp.) + 312 (constant)
        Case 4C:  256 (exp.) + 312 (constant)
        Case 4D:   64 (exp.) + 312 (constant)

Figures 6.17 to 6.22 show the performance comparison for different mean message lengths. The performance of the full latency stations and that of the dual latency stations are very similar except that the performance curve of the dual latency station appears shifted to lower values of utilization. Further, for short messages as in case 4D, the mean message waiting time is smaller at low utilization, but as the utilization increases, the waiting time becomes larger. When the utilization is low, there is less chance that a short message will have to wait for another short message; when the utilization is high, the chance of a short message having to wait for another short message is greater because of the increased switchover time overhead for short messages. Figures 6.23 to 6.26 show the performance comparison of full latency with different service disciplines to the dual latency ordinary service.
Again, it is observed that the gap between the full latency and the dual latency is greater for smaller mean message lengths. Further, the gap increases as the number of stations increases. Figures 6.27 and 6.28 show the effect of the number of stations on the performance improvement obtained using the dual latency method. As noted before, the performance improvement is more significant for short message lengths than for long ones.

Figure 6.17 - Performance Comparison of Full Latency with Different Message Lengths for Case 4, 20 Stations.
Figure 6.18 - Performance Comparison of Dual Latency with Different Message Lengths for Case 4, 20 Stations.
Figure 6.19 - Performance Comparison of Full Latency with Different Message Lengths for Case 4, 100 Stations.
Figure 6.20 - Performance Comparison of Dual Latency with Different Message Lengths for Case 4, 100 Stations.
Figure 6.21 - Performance Comparison of Full Latency with Different Message Lengths for Case 4, 1000 Stations.
Figure 6.22 - Performance Comparison of Dual Latency with Different Message Lengths for Case 4, 1000 Stations.
Figure 6.23 - Performance Comparison of Full Latency with Different Service Disciplines to the Dual Latency for Case 4B, 100 Stations.
Figure 6.24 - Performance Comparison of Full Latency with Different Service Disciplines to the Dual Latency for Case 4D, 100 Stations.
Figure 6.25 - Performance Comparison of Full Latency with Different Service Disciplines to the Dual Latency for Case 4B, 1000 Stations.
Figure 6.26 - Performance Comparison of Full Latency with Different Service Disciplines to the Dual Latency for Case 4D, 1000 Stations.
Figure 6.27 - Performance Improvement of Full Latency Ordinary Service to the Dual Latency Ordinary Service for Case 4A.
Figure 6.28 - Performance Improvement of Full Latency Ordinary Service to the Dual Latency Ordinary Service for Case 4D.

6.2.2 Asymmetric Cases

The parameters chosen for the asymmetric cases are listed in case 5 and case 6 below. The difference between the two cases is that the full latency is 62 bits for case 5 and 45 bits for case 6, while the low latency is unchanged (24 bits). This gives the range of improvement expected as the full latency decreases. The message length was chosen to have 256 bits in the exponential field so that it is the same as case 4C and the information field size is comparable to the overhead field size.

Asymmetric Case (Case 5)

    Number of stations:               20.
    Traffic pattern:                  rho_1 = rho_8 = 40% of rho_0;  rho_2 = ... = rho_7 = rho_9 = ... = rho_20.
    Constant switchover time (bits):  62 for full latency; 24 for low latency.
    Mean message length (bits):       256 (exponentially distributed) + 312 (constant).

Asymmetric Case (Case 6)

    Number of stations:               20.
    Traffic pattern:                  rho_1 = rho_8 = 40% of rho_0;  rho_2 = ... = rho_7 = rho_9 = ... = rho_20.
    Constant switchover time (bits):  45 for full latency; 24 for low latency.
    Mean message length (bits):       256 (exponentially distributed) + 312 (constant).

Figures 6.29 and 6.30 show the performance comparison for case 5 and case 6 respectively. Note that for the asymmetric cases only the mean waiting time for station 1 is shown; since it is a heavily loaded station, it is of primary concern. Since the full latency is greater in case 5 than in case 6, a longer mean message waiting time is expected for case 5, and the figures confirm this expectation. Both cases indicate that the dual latency performs better than the gated and exhaustive services up to around 0.4-0.5 utilization. In the completely symmetric cases of the same system, by contrast, the dual latency remains better than the gated and exhaustive services up to a much higher utilization. The reason is that for an asymmetric case the traffic is much more concentrated and queues build up easily.
This makes the time savings from using the dual latency less significant, since the time savings have to be divided by the number of messages in the queues. Figure 6.31 shows the range of performance improvement expected if the full latency changes from 62 bits to 45 bits. The performance improvement for cases 5 and 6 is fairly constant at low utilization and rises rapidly as the utilization goes up. This sharp increase can be explained by the fact that the heavily loaded stations start to build up queues; when the queues build up, the time savings of using the dual latency start to produce a multiplying effect.

Figure 6.29 - Performance Comparison of Full Latency with Different Service Disciplines to the Dual Latency Ordinary Service for Case 5.
Figure 6.30 - Performance Comparison of Full Latency with Different Service Disciplines to the Dual Latency Ordinary Service for Case 6.
Figure 6.31 - Performance Improvement of Full Latency Ordinary Service to the Dual Latency Ordinary Service for Case 5 and Case 6.

VII. CONCLUSIONS AND RECOMMENDATIONS

7.1 Summary

The performance measure used for this study is the mean message waiting time. The study shows that a dual latency ring always performs better than a full latency ring. The degree of improvement depends on a number of parameters: the number of stations, the arrival statistics, the message length statistics, and the utilization of the system. The improvement is more significant when the LAN has a large number of stations, short messages or constant messages.

7.2 Conclusions

The conclusions drawn below apply to performance up to the MAC layer in a token ring LAN.

1) Performance of the dual latency ring system is always better than that of the full latency system. The following table shows the percentage reduction of the mean waiting time if the dual latency ring is used instead of the full latency ring for the cases described in section 6.

              Number of   Range of         Percentage reduction in
              stations    utilization      mean message waiting time
    CASE 1A      20       0 - 0.9          67% - 15%
                 100      0 - 0.9          67% - 40%
                 1000     0 - 0.9          67% - 63%
    CASE 1B      20       0 - 0.9          67% - 25%
                 100      0 - 0.9          67% - 50%
                 1000     0 - 0.9          67% - 65%
    CASE 2A      20       0 - 0.5 - 0.8    67% - 15% - 23%
    CASE 2E      20       0 - 0.3 - 0.6    67% - 56% - 74%
    CASE 3A      20       0 - 0.4 - 0.7    67% - 42% - 54%
    CASE 3E      20       0 - 0.4 - 0.7    67% - 32% - 46%
    CASE 4A      20       0 - 0.85         61% - 10%
                 100      0 - 0.85         61% - 29%
                 1000     0 - 0.85         61% - 55%
    CASE 4D      20       0 - 0.85         61% - 49%
                 100      0 - 0.85         61% - 58%
                 1000     0 - 0.85         61% - 61%
    CASE 5       20       0 - 0.5          61% - 88%
    CASE 6       20       0 - 0.5 - 0.6    47% - 60% - 92%

2) Performance of the dual latency system is better than that of the gated and exhaustive service systems up to a medium range of utilization.
The table below lists the percentage reduction in mean message waiting time over the range of utilization for which the dual latency ring is better than the exhaustive service ring, for the cases described in section 6.

              Number of   Range of      Percentage reduction in
              stations    utilization   mean message waiting time
    CASE 1A      20       0 - 0.8       67% - 5%
                 100      0 - 0.9       67% - 20%
                 1000     0 - 0.9       67% - 51%
    CASE 1B      20       0 - 0.8       67% - 16%
                 100      0 - 0.9       67% - 33%
                 1000     0 - 0.9       67% - 54%
    CASE 2A      20       0 - 0.4       67% - 8%
    CASE 2E      20       0 - 0.5       67% - 10%
    CASE 3A      20       0 - 0.5       67% - 6%
    CASE 3E      20       0 - 0.5       67% - 10%
    CASE 4A      20       0 - 0.7       61% - 15%
                 100      0 - 0.7       61% - 32%
                 1000     0 - 0.7       61% - 37%
    CASE 4D      20       0 - 0.85      61% - 0%
                 100      0 - 0.85      61% - 22%
                 1000     0 - 0.85      61% - 34%
    CASE 5       20       0 - 0.4       61% - 9%
    CASE 6       20       0 - 0.3       47% - 12%

3) The dual latency becomes more attractive when there are many stations. In case 1A, the percentage reduction in mean message waiting time is 15% for 20 stations, 40% for 100 stations and 63% for 1000 stations at 0.9 utilization (refer to the table in conclusion #1). In case 1B, the percentage reduction is 25% for 20 stations, 50% for 100 stations and 65% for 1000 stations at 0.9 utilization. In case 4A, the percentage reduction is 10% for 20 stations, 29% for 100 stations and 55% for 1000 stations at 0.85 utilization. In case 4D, the percentage reduction is 49% for 20 stations, 58% for 100 stations and 61% for 1000 stations at 0.85 utilization.

4) Short messages are very suitable for the dual latency method. Case 4A has the longest mean message length and case 4D the shortest. For 20 stations, the percentage reduction in mean message waiting time is 10% in case 4A and 49% in case 4D at 0.85 utilization (refer to the table in conclusion #1). For 100 stations, the percentage reduction is 29% in case 4A and 58% in case 4D at 0.85 utilization. For 1000 stations, the percentage reduction is 55% in case 4A and 61% in case 4D at 0.85 utilization.

5) Constant messages make the dual latency method more favorable than other types of messages. Case 1A has exponentially distributed messages while case 1B has constant messages. For 20 stations, the percentage reduction in mean message waiting time is 15% in case 1A and 25% in case 1B at 0.9 utilization (refer to the table in conclusion #1). For 100 stations, the percentage reduction is 40% in case 1A and 50% in case 1B at 0.9 utilization. For 1000 stations, the percentage reduction is 63% in case 1A and 65% in case 1B at 0.9 utilization.

6) The shorter the message length, the smaller the delay at low utilization. But the shorter the message length, the higher the delay at high utilization.

7) For asymmetric traffic, the mean waiting times of the heavily loaded stations become unstable as the network utilization goes up. The dual latency approach allows a higher level of traffic utilization before the onset of instability. This explains the trend of increasing percentage reduction in mean message waiting time of the dual latency ring relative to the full latency ring in some of the cases described in section 6. In case 2A, the percentage reduction is 67% at zero utilization, 15% at 0.5 utilization and 23% at 0.8 utilization (refer to the table in conclusion #1).
In case 2E, the percentage reduction is 67% at zero utilization, 56% at 0.3 utilization and 74% at 0.6 utilization. In case 3A, the percentage reduction is 67% at zero utilization, 42% at 0.4 utilization and 54% at 0.7 utilization. In case 3E, the percentage reduction is 67% at zero utilization, 32% at 0.4 utilization and 46% at 0.7 utilization. In case 5, the percentage reduction increases from 61% to 88% as the utilization grows from zero to 0.5. In case 6, the percentage reduction increases from 47% to 92% as the utilization grows from zero to 0.6.

7.3 Recommendations

1) Since the performance of the dual latency method is always better than or equal to that of the full latency method, the incorporation of the dual latency into the Network Interface Unit would be very desirable. The cost of introducing the dual latency circuitry should be investigated further.

2) The dual latency method is very attractive for real-time applications because their message lengths are often small. The other characteristics and requirements of real-time applications should be explored further to determine whether the dual latency method can fulfil them.

3) When the chip implementation of the IEEE 802.5 standard network interface unit for the dual latency method is ready, the performance improvement can be experimentally measured and compared with the simulation results.

VIII. SUGGESTIONS FOR FUTURE WORK

8.1 Real-Time Applications

Many existing LANs have been developed for the office environment. However, there is a demand for LANs in real-time situations, and it is unclear whether the currently available protocols would be suitable for handling real-time applications. Examples are defense systems, industrial process control systems and integrated telecommunications systems. Three major requirements have been identified for real-time LANs [Lann85]: timing, robustness and flexibility. The same research group also stresses the importance of modeling avalanche traffic (e.g. alarm messages) for real-time systems [Boud87]. Since the dual latency method is well suited to real-time applications, avalanche simulation may be able to reveal other important properties of the dual latency method. Hence, avalanche modeling should be investigated further.

8.2 End-to-end Performance Modeling

The methods available for performance modeling of the physical and MAC layers are quite successful in general and have proven useful in practice. Nevertheless, the performance of the first two layers may not be the performance experienced by a user. That makes end-to-end performance figures very useful; however, universal techniques for end-to-end performance modeling are not yet available [Reis86], although some proposed methods can be found in [Mitc86] and [Mura87]. End-to-end performance modeling may involve the analysis of a network of queues. Since the simulation software was developed for the first two layers, higher layer models may be developed and added on top of the software to predict the end-to-end performance.
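As a pointer to what such layered modeling might look like, the sketch below stacks a second (higher-layer) queue on top of a MAC-layer queue and records the combined delay. It is only an illustration of the network-of-queues idea, not a model taken from this thesis; the two-stage structure and all parameter values are assumptions.

```python
import random
import statistics

# A minimal two-stage tandem-queue sketch (illustrative only) of stacking a
# higher-layer queue on top of a MAC-layer queue to estimate end-to-end delay.
rng = random.Random(42)
LAM = 0.6                      # message arrival rate (assumed)
MAC_SERVICE = 1.0              # mean MAC-layer service time, exponential (assumed)
UPPER_SERVICE = 0.5            # mean higher-layer service time, exponential (assumed)

t = 0.0
mac_free = upper_free = 0.0
delays = []
for _ in range(20000):
    t += rng.expovariate(LAM)
    # Stage 1: MAC-layer queue (FIFO single server).
    mac_start = max(t, mac_free)
    mac_free = mac_start + rng.expovariate(1.0 / MAC_SERVICE)
    # Stage 2: higher-layer queue, entered when the MAC layer finishes.
    upper_start = max(mac_free, upper_free)
    upper_free = upper_start + rng.expovariate(1.0 / UPPER_SERVICE)
    delays.append(upper_free - t)  # end-to-end delay of this message

print("mean end-to-end delay:", round(statistics.mean(delays), 3))
```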
REFERENCES

[AMD86] Applications Seminar: Technology, Circuits, Solutions, Advanced Micro Devices, AMD Applications Eng., Sunnyvale, California, 1986.

[Bert87] Dimitri P. Bertsekas, Data Networks, Prentice-Hall, Inc., 1987.

[Boud87] J. Boudenant, B. Feydel and P. Rolin, "An IEEE 802.3 Compatible Deterministic Protocol", IEEE INFOCOM '87, Mar. 1987.

[Boxm86] O.J. Boxma and B. Meister, "Waiting-time approximations for cyclic-service systems with switch-over times", Performance Evaluation Review, vol. 14, no. 1, May 1986.

[Boxm87] O.J. Boxma and W.P. Groenendijk, "Two queues with alternating service and switching times", Report OS-R8712, Centre for Mathematics and Computer Science, Amsterdam, 1987.

[Bux81] W. Bux, "Local area subnetworks: a performance comparison", IEEE Trans. Commun., vol. COM-29, pp. 1465-1473, October 1981.

[Bux83a] W. Bux and H.L. Truong, "Mean-delay approximation for cyclic service queueing systems", Performance Evaluation, vol. 3, pp. 187-196, August 1983.

[Bux83b] W. Bux, F.H. Closs, K. Kuemmerle, H.J. Keller and H.R. Mueller, "Architecture and design of a reliable token-ring network", IEEE Journal on Selected Areas in Communications, vol. SAC-1, no. 5, pp. 756-765, Nov. 1983.

[Bux84] W. Bux, "Performance Issues in Local-area Networks", IBM Systems Journal, vol. 23, no. 4, 1984.

[Cran74a] M.A. Crane and D.L. Iglehart, "Simulating Stable Stochastic Systems, I: General Multiserver Queues", Journal of the ACM, vol. 21, no. 1, pp. 103-113, January 1974.

[Cran74b] M.A. Crane and D.L. Iglehart, "Simulating Stable Stochastic Systems, II: Markov Chains", Journal of the ACM, vol. 21, no. 1, pp. 114-123, January 1974.

[Cran75a] M.A. Crane and D.L. Iglehart, "Simulating Stable Stochastic Systems, III: Regenerative Processes and Discrete-Event Simulations", Operations Research, vol. 23, no. 1, pp. 33-45, January-February 1975.

[Cran75b] M.A. Crane and D.L. Iglehart, "Simulating Stable Stochastic Systems, IV: Approximation Techniques", Management Science, vol. 21, no. 11, pp. 1215-1224, July 1975.

[Cran77] M.A. Crane and A.J. Lemoine, "An Introduction to the Regenerative Method for Simulation Analysis", Lecture Notes in Control and Information Sciences, Springer-Verlag, New York, 1977.

[Ever86] D. Everitt, "Simple approximations for token rings", IEEE Trans. Commun., vol. COM-34, no. 7, pp. 719-721, July 1986.

[Farm69] W.D. Farmer and E.E. Newhall, "An experimental distributed switching system to handle bursty computer traffic", Proceedings of the ACM Symposium on Problems in the Optimization of Data Communications, ACM, New York, 1969.

[Ferg85] M.J. Ferguson and Y.J. Aminetzah, "Exact results for nonsymmetric token ring systems", IEEE Trans. Commun., vol. COM-33, pp. 223-231, 1985.

[Ferg87] M.J. Ferguson, "Mean waiting time for a token with station dependent overheads", in R.L. Pickholtz (ed.), Local Area & Multiple Access Networks, Computer Science Press, pp. 43-67, 1986.

[Fish79] G.S. Fishman, Principles of Discrete Event Digital Simulation, John Wiley & Sons, New York, 1979.

[Lann85] G. Le Lann, "Issues in Real-time Local Area Networks", Pacific Computer Communication Symposium, Seoul, Oct. 1985.

[Gord75] Geoffrey Gordon, The Application of GPSS V to Discrete System Simulation, Prentice-Hall Inc., 1975.
[Gord78] Geoffrey Gordon, System Simulation, 2nd edition, Prentice-Hall, Inc., 1978.

[GPSS/H] James O. Henriksen and Robert C. Crain, GPSS/H User's Manual, Wolverine Software Corporation, 1983.

[Groe87] W.P. Groenendijk, "Waiting-time approximations for cyclic-service systems with mixed service strategies", Centre for Mathematics and Computer Science, Amsterdam, 1987.

[Hamm86] Joseph L. Hammond and Peter J.P. O'Reilly, Performance Analysis of Local Computer Networks, Addison-Wesley Inc., 1986.

[Hard87] R.H.S. Hardy, Edward Lo and Ian Radziejewski, "Utilization of Dual Latency Stations for Performance Improvement of Token Ring Networks", Proceedings of the IEEE Pacific Rim Conference on Communications, Computers, and Signal Processing, June 4-5, 1987, Victoria, B.C., pp. 395-399.

[Hard88a] R.H.S. Hardy, Edward Lo and Ian Radziejewski, "Performance Evaluation of a Token Ring Network with Dual Latency Stations", IEEE INFOCOM '88, Mar. 27-Apr. 1, New Orleans, LA, 1988.

[Hard88b] R.H.S. Hardy, Edward Lo, Ian Radziejewski and D. Chan, "Performance Analysis of a Fibre Optic Local Area Network", Internal Report, Simon Fraser University, February 1988.

[Hofr86] M. Hofri, "Queueing Systems with a Procrastinating Server", Performance '86 and ACM SIGMETRICS 1986, Performance Evaluation Review, vol. 14, no. 1, pp. 245-253, May 1986.

[Hutch87] D. Hutchison and M. Merabti, "Ethernet for Real-time Applications", IEE Proceedings, vol. 134, Pt. E, no. 1, January 1987.

[IEEE802.5] IEEE Standard 802.5-1985, Local Area Networks: Token Ring Access Method, IEEE Computer Society Press, 1985.

[IEEE802.4] IEEE Standard 802.4-1985, Local Area Networks: Token-Passing Bus Access Method, IEEE Computer Society Press, 1985.

[Iyer85] V. Iyer and S. Joshi, "FDDI's 100M-bps protocol improves on 802.5 spec's 4M-bps limit", EDN, May 2, 1985.

[Kell83] H. Keller, H. Meyr and H.R. Muller, "Transmission design criteria for a synchronous token ring", IEEE Journal on Selected Areas in Communications, vol. SAC-1, no. 5, pp. 721-733, Nov. 1983.

[Klei76] Leonard Kleinrock, Queueing Systems, Vol. I: Theory; Vol. II: Applications, John Wiley and Sons, New York, 1975-1976.

[Klej74] J.P.C. Kleijnen, Statistical Techniques in Simulation, Part I; Part II, M. Dekker Inc., New York, 1974-1975.

[Kies83] W.M. Kiesel, "End-to-end Delay in Local Area Networks", Tenth International Teletraffic Congress, Montreal, 1983.

[Lave83] Stephen S. Lavenberg, Ed., Computer Performance Modeling Handbook, Academic Press, New York, 1983.

[Mac85] E.A. MacNair and C.H. Sauer, Elements of Practical Performance Modeling, Prentice-Hall, Inc., 1985.

[Mitc86] L.C. Mitchell and D.A. Lide, "End-to-end Performance Modeling of Local Area Networks", IEEE J. Select. Areas Commun., vol. SAC-4, no. 6, Sept. 1986.

[Mura87] M. Murata and H. Takagi, "Two-layer Modeling for Local Area Networks", IEEE INFOCOM '87, San Francisco, 1987.

[Pang86] J.W.M. Pang and R.W. Donaldson, "Approximate Delay Analysis and Results for Asymmetric Token Passing and Polling Networks", IEEE Journal on Selected Areas in Communications, vol. SAC-4, no. 6, Sept. 1986.

[Radz88] I.R. Radziejewski, "Design of a token ring network interface unit", Bachelor Thesis, Engineering Science Department, Simon Fraser University, Jan. 1988.
Reiser, "Communication-System Models Embedded i n the OSI-Reference Model, A Survey", Computer  Networking and Performance Evaluation. E l s e v i e r Science Publishers B.V., North-Holland, 1986. [Ross86] F.E. Ross, "FDDI - a t u t o r i a l " , IEEE Communications  Magazine. May 1986, Vol. 24, No.5. [Saue81] Charles H. Sauer and K. Mani Chandy, Computer Systems  Performance Modeling. Prentice-Hall Inc., 1981. [Schr74] Thomas J . Schriber, Simulation Using GPSS. John Wiley & Sons, 1974. [Schw77] M. Schwartz, Computer Communication Network Design  and Analysis. Prentice-Hall, 1977, p.275. [Schw87] M. Schwartz, Telecommunication Networks: Protocols. Modeling and Analysis. Addison-Wesley Publishing Company, 1987. [Shan75] Robert E. Shannon, Systems Simulation: the Art and  Science, Prentice-Hall Inc., 1975. REFERENCES 129 W. S t a l l i n g s , Local Networks r 2nd ed i t i o n , MacMillan Book Company, 1987. H. Takagi, "Mean message waiting times i n symmetric multi-queue systems with c y c l i c service", Proceedings  of the IEEE International Conference on Communica- ti o n s . 1985. pp. 1154-1157, Chicago, IL, June 23-26, 1985. H. Takagi, Analysis of P o l l i n g Systems. MIT Press, 1986. H. Takagi, "A survey of queueing analysis of p o l l i n g systems", Proceedings of the Third International  Conference on Data Communication Systems and Their  Performance. Rio De Janeiro, B r a z i l , June 22-25, 1987. K.S. Watson, "Performance evaluation of c y c l i c service strategies-A survey", Performance '84. pp. 521-533, North Holland, New York, 1984. J.W. Wong and W. Bux, "Analytic modeling of an adapter to l o c a l area networks", IEEE Transactions on  Communications. v o l . COM-32, no. 10, pp. 1111-1117, Oct. 1984. REFERENCES 130 
