A model for the analysis of packet switching computer networks Alemparte, Miguel D. 1974

Full Text

A MODEL FOR THE ANALYSIS OF PACKET SWITCHING COMPUTER NETWORKS

by

Miguel D. Alemparte
B.Sc., Universidad Catolica de Chile, 1970
M.Sc., State University of New York at Buffalo, 1972

A THESIS SUBMITTED IN PARTIAL FULFILMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF SCIENCE in the Department of Computer Science

We accept this thesis as conforming to the required standard

THE UNIVERSITY OF BRITISH COLUMBIA
April 1974

In presenting this thesis in partial fulfilment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the Head of my Department or by his representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

Department of Computer Science
The University of British Columbia
Vancouver 8, Canada
Date: April 26, 1974

Abstract

An overview of the computer networks currently being developed is presented. Three types of network design are distinguished: packet switching, line switching and Newhall loops. Reference is made to several models developed during the design of these networks, but only those concerned with packet switching are described in detail. A queueing model emerges as a feasible alternative to the more expensive simulation models. This model is developed to include the basic logic of the message switching centres (nodes) and is used as the backbone of a heuristic procedure that considers limited capacity buffers and blocking at the nodes.

Key words and phrases: data transmission, computer nets, packet switching, switching centre, interactive computation, resource sharing, queueing network, Markovian queue, steady state.

TABLE OF CONTENTS

1. DEVELOPMENTS IN COMPUTER NETWORKS
   1.1 Representative Networks ............................... 1
   1.2 Packet Switching, Line Switching and Newhall Loops .... 3
   1.3 Other Developments .................................... 7
2. MODELS FOR THE ANALYSIS OF PACKET SWITCHING NETWORKS
   2.1 Simulation Models ..................................... 11
   2.2 The General Markov Chain Approach ..................... 15
   2.3 Jackson's Theorem and Kleinrock's Model ............... 22
3. A QUEUEING MODEL FOR PACKET SWITCHING NETWORKS
   3.1 Transmission Bounded and Switching Bounded Networks ... 28
   3.2 A Queueing Model With Switching Processor Queues ...... 32
   3.3 Limited Buffer Capacities: A Heuristic Approach ....... 38
4. RESULTS
   4.1 The Queueing Model vs. a Simulation Model ............. 45
   4.2 Effect of Queueing Delays at the Switching Centre ..... 49
   4.3 Runs With Balanced and Unbalanced Networks ............ 51
   4.4 Limited vs. Unlimited Buffer Capacities ............... 52
   4.5 Deadlocks: A Limitation of the Model .................. 55
5. CONCLUSIONS AND SUGGESTIONS FOR FURTHER RESEARCH ......... 62
BIBLIOGRAPHY ................................................ 66
APPENDIX 1 .................................................. 70
APPENDIX 2 .................................................. 73
APPENDIX 3 .................................................. 79
APPENDIX 4 .................................................. 83
APPENDIX 5 .................................................. 84
APPENDIX 6 .................................................. 95

LIST OF TABLES

Table 4.1 "Verification Runs" ............................... 46
Table 4.2 "Effect of Queueing Delays" ....................... 50
Table 4.3 "Balanced and Unbalanced Networks" ................ 51
Table 4.4 "Errors in Balanced and Unbalanced Nets" .......... 52
Table 4.5 "Effect of Buffer Capacities" ..................... 53
Table 4.6 "Buffer Capacities in a Line-Bounded Net" ......... 54

LIST OF FIGURES

Figure 3.1 "A 4-Node Computer Net" .......................... 33
Figure 3.2 "20-Node Queueing Net" ........................... 36
Figure 4.1 "Verification Runs" .............................. 47
Figure 4.2 "Packet Size Effect" ............................. 48
Figure 4.3 "2-Node Deadlock" ................................ 56
Figure 4.4 "Deadlock Times" ................................. 58

My thanks to Dr. D. Seeley for his help and guidance, to Dr. J. Kennedy for his suggestions, and to Dhirendra Chheda for his simulation program and numerous discussions.

To Magdalena.

CHAPTER 1: DEVELOPMENTS IN COMPUTER NETWORKS

1.1 Representative Networks.

A number of computer networks are currently being developed in North America and Europe, some of which have been implemented and are operating. It should be understood that by a computer network we mean more than the mere transmission of data between two or more computers. The systems with which we are concerned are best described by the definition given in [38]: "a computer network is a set of autonomous, independent computer systems, interconnected so as to permit interactive resource sharing between any pair of systems." Most important in this definition is the word "interactive". Evidently, a network of computer systems interconnected by mail or occasional phone lines does not meet the requirements of this definition. A computer network must provide to the user a response time that meets the standards of individual time-sharing conversational computer systems.

Such networks can be classified in three main categories: line switching, packet switching and Newhall loops. A number of special purpose line switching networks are currently operating, but they do not meet the requirements of the definition (for a summary of these see [37,22,20]). This, as we shall see later in this chapter, does not imply that line switching is not a feasible solution for computer networks as defined above. New and fast line switching networks are being designed or proposed: among them, the German EDS network [19], planned by the German telecommunications administration for implementation in the near future; the United Kingdom Post Office's proposed hybrid network [9]; and the DATRAN network designed in the U.S.A. by the Data Transmission Company [12]. Fast response packet switching networks, on the other hand, have not only been designed but some, like the ARPA network in the United States [16,27,31,38] and the smaller NPL network in the United Kingdom [39], are operating experimentally. There is a growing interest in Europe, Canada and the United States in packet switching design. The proposed EURONET or European Computer Network Project [2] is to use packet switching design to interconnect computers in France, Italy, Switzerland and the United Kingdom initially, and in Yugoslavia, Norway, Portugal and Sweden later on.
In Canada, a proposal to interconnect computers at universities across the country considered mainly packet switching design, as reported in [33,34]. Finally, loop transmission networks have been proposed in Canada and the United States, as reported in [30,36,41,43], but as far as this author could determine, none have been implemented yet.

1.2 Packet Switching, Line Switching and Newhall Loops.

Data traffic in a computer network can vary from bulk transmission of a file to short bursts of interactive data. An extreme example of the latter is when echo-plexing at the destination is desired. Here, an image of each character is sent back to the origin upon arrival at the destination; thus a byte of data must make the return trip between two distant points with the minimum delay possible. Another example is the case of a typical conversational session between a terminal and a distant computer. Bursts of data or "lines of input" are transmitted with short delays and relatively long idle periods between "input lines". This bursty nature of the traffic is a major consideration in the design of a computer network.

Line switching and packet switching telecommunication designs are as old as the telephone and the telegraph networks respectively. In a line switching network, there are one or more alternatives for setting up an interconnecting path or "circuit" between two computers. As in the telephone network, a path is used by the two customers for a period of time. In fact, many computers and peripheral devices use the telephone network itself for interconnection. However, this practice is not acceptable according to the requirements of our definition of a computer network. Besides the problems of low line capacities, error rates and noise, there is the problem of the set-up time. Whether the network uses the public telephone system or leased wideband lines, it would use the switching centres of the telephone network, which consist mainly of relays and mechanical switches that are too slow to set up a path every time one is needed for the transmission of a burst of data. The solution, then, is to set up the path less often; but this means long idle periods and hence poor utilization of the network. Another, but clearly expensive, solution is to have a "fully connected" network of leased wideband lines.

New switching centres are, however, being developed which will reduce set-up times so that line switching techniques will become feasible for computer networks. Essentially, the switching centre is a mini-computer with a specialized stored program which replaces physical connection by addressing techniques (making the boundary separating line switching and packet switching thinner). This method has been tested successfully by AT&T [29] and will be used by the EDS, DATRAN and other networks (see section 1.1). A detailed description of line switching techniques can be found in [29].

Packet switching, on the other hand, offers an alternative solution to the problems of idle time and transmission delays.
As in modern line switching nets, addressing techniques are used to route packets of information through the network. The two main differences between packet switching and line switching are that in packet switching, (a) each packet is routed individually at every switching centre, and (b) packets are buffered at the switching centres. For this reason, this technique is also known as "store and forward switching". In line switching, packets are not buffered at the centres, and routes may be set for sequences of packets rather than individual packets. This is done by the first packet at "calling time" or set-up time.

In the packet switching method, there is no counterpart to the concept of "path set-up". Messages are broken down into packets at the local switching centre and fed into the network by asynchronous time division multiplexing (TDM). Each packet has a leader which contains information about its destination, its nature, and its length. At the switching centres, the leader is examined and the packet is assigned a "next destination" or "next centre". Queues of packets may occur at the centres due to the stochastic nature of the traffic. With this technique the utilization factor of the lines is considerably increased, and under appropriate conditions time delays are small. A number of networks have been designed using packet switching (see section 1.1). A detailed description of the packet switching design can be found in [4,8,32].

A third network design is the Newhall loop. This design is in fact a special case of packet switching, but due to its peculiar characteristics it is treated separately. Essentially, computers are connected to switching centres which are in turn interconnected in a loop (transmission through a line is strictly one way, and there is exactly one route between any pair of centres), or in a duplex loop (two way transmission). Messages are broken down into packets and buffered at the switching centres, but no routing is done at the centre (except for a trivial routing if the loop is duplex). A packet waits at the centre until there is a free slot in the line, where it is multiplexed synchronously or asynchronously using TDM. When a packet arrives at a switching centre, its address is examined to see whether the packet has reached its destination. If this is the case, it is removed from the loop; otherwise the packet continues on to the next centre. On the average, the packet travels a greater distance, but considerable processing time is saved at the switching centres. Software and tables at the centres are also simpler and smaller. Loops can be connected to other loops in the same way a single computer is connected to the loop. For a detailed description of an asynchronous TDM loop see [21]; a description of a synchronous TDM loop is found in [36].

1.3 Other Developments.
As already mentioned, the proposed hybrid network of the United Kingdom Post Office would combine line switching and packet switching. The network would operate in line switching mode for bulk data transfers and other continuous, high information rate messages. This type of message in a packet switching network alone would build up queues at the switching centres and overload links, deteriorating the overall performance of the net. The second mode of operation, or packet switching mode, would be used for transmission of short bursts of data with long pauses between bursts. Ideally, a hybrid network should be able to provide a line switching path between any pair of computers, one or more pairs at a time, without a significant deterioration of the packet switching traffic among the other computers.

The use of satellites as switching centres is also being considered with growing interest. A satellite as a switching centre provides two advantages. First, due to the single-channel-per-carrier mode of operation, no real switching is done at the node; hence there is no queue build-up at the node. Second, the quality of the transmission link provided by satellites is good and without severe noise problems. However, from a technical point of view, two major problems prevent the immediate adoption of the satellite in computer networks: error rates in the order of 10^-7 BER (bit error rate) and transmission delays of more than half a second return from an earth station. It is our opinion that satellites should be used only as backup switching nodes in computer networks, providing alternative routes when earth centres become highly congested, and hence solving a typical problem in packet switching nets.

A third and inevitable development is all-digital transmission. Currently, computer networks are using the analog modulation carriers of the telephone network, which are designed primarily for voice transmission. In other words, digital data is sent over communication links which were designed not for digital transmission but for analog. Computer networks need not carry human voice, so digital multiplexing (DM) can be used. DM improves the utilization of the bandwidth and reduces noise problems but, most important, it reduces the modem delays of analog modulation. The DATRAN network mentioned before uses this technique, but no results on performance are yet available. Finally, it is worth mentioning that human voice can also be transmitted over an all-digital environment using the technique of pulse code modulation (PCM).
In fact, it is probable that if the telephone network were being designed today, it would not use analog modulation.

CHAPTER 2: MODELS FOR THE ANALYSIS OF PACKET SWITCHING NETWORKS

2.1 Simulation Models.

Computer networks, like centralized computer systems, are complex entities involving a number of interrelated components interacting in ways which are difficult to predict. They also represent large investments in communications, computer hardware and software. On the other hand, there is little or no practical experience with fast computer networks. For these reasons, highly accurate models of analysis are needed. Discrete event simulations are perhaps the most appropriate models in this case, if no restrictions on the availability of computer processing time and memory are imposed. These models are generally stochastic, due to the impossibility of predicting exactly the traffic intensity and the traffic variations that the network will handle. Some studies have been made concerning the distribution functions for the arrival of messages to the network [14,24], but these functions obviously depend on the nature, the place and the expected use of the network being designed. In general, simulation models can include a high amount of detail when compared to other models of analysis. They can explicitly include different types of messages, a wide range of distribution functions for arrival of messages, a detailed logic of the operation of the switching centres, a wide range of queue disciplines, etc. In fact, there is almost no limit (except for memory and processing time) to the amount of detail that can be included in a simulation model. This is what makes these models so valuable, but at the same time so expensive and perhaps somewhat inefficient. Descriptions of simulation models can be found in [12] for line switching networks, in [27] for packet switching networks, and in [20] for a Newhall loop.

A brief description of Chheda's program (Appendix 6) for packet switching, which is the type of network with which this work is mostly concerned, is given here. The computer program uses the GPSS V language and is fairly flexible in the number of nodes, message arrival rates, switching processor rates, modem capacities, routing, packet lengths and switching centre buffer capacities. Messages are generated at the computers (called hosts), where they are buffered until there is enough space at the local switching centre. Messages are broken up into fixed size packets at the local switching centre. Each packet has attributes like type, origin and destination. Routing at the centres is done according to a current position-destination matrix; that is, if a packet is at centre i and its destination is host j, entry (i,j) in the routing matrix gives the next switching centre to which the packet should go. When a message arrives at its destination host, a special packet called an acknowledgement is generated and sent back to the host where the message originated, to indicate that it was received. If a packet is trying to gain entry to a switching centre whose buffer is full, the packet is sent again after a predetermined waiting time.
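The position-destination routing just described is easy to make concrete. The sketch below (Python, written for this edition; the matrix values and the one-centre-per-host labelling are illustrative assumptions, not taken from the thesis or from Chheda's program) follows a packet from centre to centre:

    # Hypothetical position-destination routing matrix:
    # R[i][j] = next switching centre for a packet currently at centre i
    # whose final destination is host j. One centre per host is assumed,
    # so a packet has arrived when its centre label equals the destination.
    R = {
        1: {2: 2, 3: 2, 4: 4},
        2: {1: 1, 3: 3, 4: 3},
        3: {1: 2, 2: 2, 4: 4},
        4: {1: 1, 2: 3, 3: 3},
    }

    def route(origin, destination):
        """Return the sequence of switching centres a packet visits."""
        path, here = [origin], origin
        while here != destination:
            here = R[here][destination]   # one routing decision per centre
            path.append(here)
        return path

    print(route(1, 3))   # [1, 2, 3]: via centre 2, as entry R[1][3] dictates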
Positive acknowledgement logic is used here, and there is a version of the program that simulates this process in detail. Deadlock situations are detected by the program, and the simulation is terminated shortly after they arise. Switching centres are modelled as a processing delay and modem delays, with their respective queues. These queues use a common buffer and are arranged as shown in Figure 1.

[Figure 1, "The Switching Centre": incoming packets queue for the CPU and are then distributed to per-modem output queues (modems 1-3); a "sent queue" holds copies of transmitted packets awaiting acknowledgement.]

Modem delays are directly proportional to packet lengths, and processor delays vary according to packet types. When a packet is sent, it is put into the "sent queue", where it stays until the acknowledgement for that packet is received. Finally, a third delay is included in series with the modem delay, to account for the wave propagation delay between two distant switching centres. This model contains most of the features of the ARPA packet switching network. Not included are the TIP (Terminal Interface Processor), the new technique of piggy-backing the positive acknowledgements [21], and the transmission errors.

2.2 The General Markov Chain Approach and Its Drawbacks.

Analysts and designers have long realized that models requiring fewer computational resources than discrete event simulations are necessary. Mathematical models for queueing analysis and the theory of Markov chains (M.C.) have been used in the analysis of computer systems and, lately, in analyzing computer networks. These models can indeed be less "expensive" in their demand for computational resources than discrete event simulations, and thus they allow for more analysis when these resources are scarce. A computer network can be viewed as a network of queues and, under certain conditions, is subject to analysis by means of queueing models. The level of abstraction is, however, much higher than that necessitated by a detailed event simulation. The trade-off is then abstraction vs. resources demanded, and appropriate tests have to be carried out to determine whether abstraction implies decreased accuracy or not.

The main assumption that has to be made is that the arrival and service processes are independent Markovian processes. If we consider a message to be the basic unit or transaction flowing through the network, arrivals are appropriately modelled by Poisson distribution functions (at least while there is no actual knowledge about the demands for service that will be imposed upon the network), and message lengths may be considered to be exponentially distributed. The appropriateness of these assumptions is further shown in [21,24,25]. Exponential distribution of message lengths, however, does not necessarily imply that the service processes at the switching centres are all Markovian. Recall that messages are broken into fixed length packets at the local switching centre. These packets have a constant delay at the centre's processing unit, and a delay proportional to the number of bits in the packet at the modems.
This implies that if messages are short in relation to the size of the packet (i.e. a message will generally use only one packet), processing delays at the centres tend to be constant rather than exponentially distributed, while modem delays tend to remain exponentially distributed. If, on the other hand, messages are long in relation to the packet size (i.e. several packets per message), the exponential nature of the delays is somewhat maintained. In any case, by applying the Pollaczek-Khinchine formula, it can be shown that the use of the exponential distribution for the service time at the centre's processing unit yields an upper bound to the queueing delay. In fact, it can be proven that, depending on the relative sizes of packets and messages, the expected queueing delay at the processing unit will be a number between the computed delay and one half of the computed delay. Thus appropriate post-analysis corrections can be made.

A second aspect of the main assumption (section 2.2, third paragraph) is the independence between the arrival and the service processes. In fact, if a switching centre is considered to be a service station, the question arises whether the stochastic process representing the arrival of consecutive messages is independent of the message lengths (i.e. their service times). Consider, for example, the case where two consecutive messages flow from node A to node B. Clearly, the length of these messages controls their service times at node A, thus controlling their interarrival times at node B. These same lengths, on the other hand, will also control their service times at node B. It has been shown [26], however, that if node degrees in the network are greater than 2 (i.e. messages arrive from more than 2 nodes), these processes can be assumed to be independent. In general, as the node degrees grow, this assumption becomes more realistic, since message lengths are interleaved from a number of origins.

The general M.C. approach shall now be introduced using a trivial example. Consider an exponential service process with Poisson arrivals, buffer capacity of 3, and service and arrival rates of "b" and "a" respectively. The state space S contains 4 one-component vectors, representing all possible queue lengths:

    S = { (0), (1), (2), (3) }

and the transition matrix T contains 16 entries, which may be thought of as the rates at which each state moves to any other state. Mathematically, these entries are given by:

    T(i,j) = d p(i,j)/dt, evaluated at t = 0

where p(i,j) is the probability of going to state j, given that the current state is i, as a function of time. It can be shown that:

    T(i,j) >= 0   for i not equal to j
    T(i,i) <= 0
    sum over j of T(i,j) = 0   (for all i)

In our example, if we exclude the possibility of simultaneous arrivals and/or departures, we have (see for example [6]):

    T(i,j) = 0   for |i-j| > 1
    T(i,i+1) = a                                    (2.1)
    T(i,i-1) = b
Thus the transition matrix is given by:

                (0)      (1)      (2)      (3)
        (0)  |  -a        a        0        0   |
    T = (1)  |   b     -(a+b)      a        0   |
        (2)  |   0        b     -(a+b)      a   |
        (3)  |   0        0        b       -b   |

The steady state vector V = (v(0), v(1), v(2), v(3)), whose components are the probabilities that an outside observer will find the system in each of the 4 possible states, is given by the equations:

    V . T = 0
    sum from i=0 to 3 of v(i) = 1                   (2.2)

Finally, the limiting expected queue length Q and the limiting expected waiting time W can be obtained from:

    Q = sum from i=0 to 3 of v(i) * s(i)            (2.3)
    W = Q/b   (Little's result)

where s(i) is the queue length in state i (here s(i) = i).

For a queueing network, the state vector representing the state of the system (i.e. the lengths of the queues) at a given time contains at least as many components as there are queues in the system. The limiting expected queues and waits are found by applying the last two formulas to each component in the vectors, that is, to each queue. The number of vectors in the state space, however, becomes extremely large. Consider a network of "M" hosts and "N" switching centres, where the buffer capacity of the switching centres is 3. If each switching centre is connected to "G" centres on the average, and if there is exactly one switching centre per host, the number of queues is given by M + (G+1)N. Supposing that queues at the hosts will not grow to more than "P" messages (i.e. hosts have buffer capacities of "P"), the number "s" of vectors in the state space representing all possible states (i.e. all possible combinations of queue lengths) is given by:

    s = (B+1)^((G+1)N) * (P+1)^M

where B is the buffer capacity at the switching centre queues (B = 3 above). For M = G = P = 3 this yields more than 10^7 vectors, and for M = G = P = 4 with buffer capacities of 4 it yields more than 10^11 vectors. Clearly, the determination of the steady state vector and of the expected queues and delays becomes an impossibility with the available computation methods.

Attempts have been made to reduce the storage demands of this approach. For medium size queueing networks, where the state space can be handled efficiently in terms of storage but where the number of entries in the transition matrix (equal to s^2) is excessively large, there are ways of storing and manipulating this matrix which take into account its sparse nature and the repetitiveness of its entries [42]. This, however, is not enough in our case, where the state space itself is too large. Along different lines, some closed queueing systems with stochastic routing have been proven to yield nearly completely decomposable transition matrices [7]. These systems must have servers with a wide range of service rates (i.e. some slow, some intermediate, and some fast servers). Under such conditions, their matrices can be reduced to a number of separate smaller matrices by aggregation of states, which makes the problem more manageable. Such properties (stochastic routing and different service rates), however, are not generally found in queueing networks of the type used in a computer network model, making this approach impracticable here. A third approach consists in decomposing the system not by the aggregation of states in the transition matrix, but by direct decomposition of the system into subsystems with fewer queues. This approach is the one taken in this work, and it has proven to be successful.
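To make the trivial example concrete, the following sketch (Python with numpy, added for this edition; the rates a = 2 and b = 3 are assumed values) builds the transition matrix of (2.1) and solves the steady state equations (2.2) and (2.3) numerically:

    import numpy as np

    a, b = 2.0, 3.0          # arrival and service rates (assumed values)
    n = 4                    # states (0), (1), (2), (3): queue lengths 0..3

    # Build the transition-rate matrix of (2.1): arrivals move one state up,
    # departures one state down, diagonal entries make each row sum to zero.
    T = np.zeros((n, n))
    for i in range(n - 1):
        T[i, i + 1] = a      # T(i,i+1) = a
        T[i + 1, i] = b      # T(i,i-1) = b
    np.fill_diagonal(T, -T.sum(axis=1))

    # Solve V.T = 0 subject to sum(V) = 1 by replacing one balance
    # equation with the normalization condition of (2.2).
    A = np.vstack([T.T[:-1], np.ones(n)])
    rhs = np.zeros(n)
    rhs[-1] = 1.0
    V = np.linalg.solve(A, rhs)

    Q = sum(i * V[i] for i in range(n))   # expected queue length, (2.3)
    W = Q / b                             # Little's result, as in the text
    print(V, Q, W)

The same construction works for any birth-death chain of this kind; only the state space size changes, which is exactly the scaling problem discussed above.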
In the following section, we shall see a description of the method and a brief insight into its theoretical justifications.

2.3 Jackson's Theorem and Kleinrock's Model.

Consider a single server queue with Poisson arrivals and exponential service times, with mean arrival and service rates "a" and "b" respectively. Morse suggested [3] and Burke proved [36] that the output of this queue is a Poisson process with mean "a", that is, identical to the input process. Clearly then, an arrangement of infinite capacity Markovian queues in series is easily analyzed by considering each queue as a stand-alone queue (and not as a component of a tandem), every one of them having Poisson arrivals with the same rate. This result has been generalized by Jackson [23] to a network of Markovian queues. Jackson's theorem can be stated as follows: "For a network of exponential servers, with a single FIFO unlimited capacity queue per server, and with Poisson, fixed routed arrivals, the steady state behavior of the system is exactly the same as if each server were an independent Markovian queueing system." In other words, each queue can be analyzed independently. The limiting expected value L of the length of such a queue (including the element being served) is given by:

    L = 1/(1-r)   if r < 1; infinite otherwise      (2.4)

where r is the utilization factor, given by:

    r = a/b = (arrival rate) / (service rate)       (2.5)

and the limiting expected delay W in the queue (including service time) is given by:

    W = L/b                                         (2.6)

The problem has now been reduced to the determination of the arrival rate "a", or traffic, through each node, plus the mere application of these formulas. The traffic is determined from the flow matrix and the routing matrix, where the flow matrix gives the average number of messages flowing from every host to each of the other hosts, and the routing matrix is the current position-destination matrix described in 2.1. (For a detailed description of these matrices and the algorithm to determine average delays, turn to Appendix 1.)

Kleinrock's Model [16,27,28] for computer network analysis is basically an application of Jackson's theorem. The computer network is mapped into a queueing network where transactions flowing through the servers represent messages of exponentially distributed length. Transmission channels are the "nodes" representing the servers in the queueing network, and delays include a wait for transmission (also called a modem delay) and a wave propagation delay, as shown in Figure 2.2.

[Figure 2.2, "The Channel": input packets queue at the modem (modem delay) and then traverse the line (propagation delay) to the output.]

This model can be viewed in traditional queueing theory terms as a single server in series with an infinite number of parallel servers with identical constant service time, as shown in Figure 2.3.

[Figure 2.3, "Queueing Model of the Channel": server s1 (the modem) feeds parallel servers s2, s3, ... representing the constant propagation delay; output 1 is the modem's output, output 2 the channel's.]

Clearly, the output process (output 2) is identical to the output process from the first server (output 1), but dephased by the wave propagation delay. The service times of server 1, on the other hand, are proportional to the message lengths, which are in turn exponentially distributed.
Under these conditions, using Burke's results [3], for a Poisson input process, output 1 is a Poisson process with the same mean rate, and hence so is output 2. We can thus apply Jackson's theorem and analyze each transmission channel as an independent queueing system. Processing delays at the switching centre are considered to be constant, equal to 10^-3 sec/message, and are added to the transmission channel delay. This model, with slight modifications, has been used in the analysis and design of the ARPA network. The modifications take account of the different types of messages flowing in the network and the possibility of non-exponential delays at the servers.

One of the important limitations of this model is that limited capacity buffers at the switching centres are not modelled. In other words, it can represent accurately only those situations where queues will at most grow to a fraction of the buffer capacity of the switching centre; if buffers are relatively small, it can only represent situations with very low channel utilization.

The second major objection that can be raised to this model is that processing delays at the switching centres are considered to be constant (10^-3 sec) rather than depending on the expected traffic of messages through a node and the capacity of its processing unit. In other words, no processor queues are considered at the centres.

CHAPTER 3: A QUEUEING MODEL FOR PACKET SWITCHING NETWORKS

3.1 Transmission Bounded and Switching Bounded Networks.

We shall define here the concepts of "balanced" and "unbalanced" networks and the related concepts of "transmission-bounded" and "switching-bounded" networks. As seen in Chapter 2, messages flowing in a computer network encounter two main types of delays: a processing delay at the switching centre's processing unit, which includes breaking messages into packets, routing, and reassembling the messages; and a transmission delay, which includes a modem delay and a wave propagation delay. The magnitude of these delays depends mainly on three things: the traffic of messages through the switching centres and lines, the capacities of the processors and modems, and the distances separating the switching centres. Specifically, the processing delay depends on the processing unit's capacity and the traffic through it (rather than being a constant, as in Kleinrock's model), and the modem delay depends on the capacity of the line and the expected traffic through it. The wave propagation delay, on the other hand, depends on the distance between centres. For utilization factors (traffic-capacity ratios) close to unity, delays are expected to be large and very sensitive to traffic changes, while for small utilization factors, delays tend to be constant and given by the capacity of the processor or modem, as can be seen from (3.1) below. From (2.4), (2.5) and (2.6), with

    a = arrival rate = traffic = t
    b = service rate = capacity = c

we have:

    r = t/c
    W = L/c = 1/(c - cr) = 1/c + (1/c)[r/(1-r)]     (3.1)

Clearly W tends to infinity as r -> 1, and W tends to 1/c as r -> 0. The term (1/c)[r/(1-r)] represents the expected time the message waits in queue, while the term 1/c represents the actual service delay at the centre's processing unit (or at the modem).
We shall call these two terms w(q) and w(s) respectively:

    w(s) = 1/c
    w(q) = (1/c)[r/(1-r)]                           (3.2)
    W = w(s) + w(q)

It should be clear then that w(q) is a function of the traffic, while w(s) is constant (independent of the traffic). We shall say that a "server" (processor or modem) is under-utilized if "r" is small, that is, w(q) << w(s). Conversely, we shall say that a "server" is over-utilized when w(q) >> w(s). We can now define a "balanced" network as a network whose utilization of processing units and lines is uniform, i.e. their r's are about the same magnitude. An "unbalanced" network, on the other hand, is one with a large variance among utilization factors. It is intuitively clear that the overall utilization in an unbalanced network is "bounded" by those processing units and modems with large "r", while the excess capacity of the rest of the processors and modems is being wasted. Unbalanced networks have bottlenecks with large queues, while other "servers" are idle for long periods of time. This is shown in more detail in the example of Appendix 2.

We shall also define "switching-bounded networks" as those unbalanced networks whose r's for the switching centres' processing units are significantly larger than the r's for the lines. Conversely, "transmission-bounded networks" are those whose utilization factors for the lines are significantly larger than the utilization factors of the processors.

We can see now that the queueing model studied in Chapter 2 (Kleinrock's model) may be inaccurate for balanced networks and for switching-bounded networks. In fact, that model considers queueing only at the modems and not at the centres' processing units, where it assumes constant delays and neglects the effect of the traffic. The actual delay used is 10^-3 sec, which we assume corresponds to an average capacity of 1000 messages/sec at the processors. In other words, only the constant term w(s) in (3.2) is considered, and w(q) is ignored. If we have, for example, a balanced network with utilization factors of 0.5, the error in the total delay at the switching centre's processing unit is 100%, as seen below:

    W = w(s) + w(q) = 1/c + (1/c)[r/(1-r)] = 1/c + (1/c)[0.5/0.5] = 2(1/c) = 2w(s)

For highly utilized balanced networks (say r = 0.9) this becomes:

    W = 1/c + (1/c)[0.9/0.1] = 10w(s)

We analyze the effect of this error upon the overall message delay through a network in section 4.1.
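The error calculation above is easily reproduced. A minimal sketch (Python, added here for illustration; the capacity is the 1000 messages/sec assumed for Kleinrock's constant delay):

    # Delay at an exponential server of capacity c, split as in (3.2):
    # constant service term w(s) = 1/c and traffic-dependent term
    # w(q) = (1/c) * r / (1 - r).
    def delays(c, r):
        ws = 1.0 / c
        wq = (1.0 / c) * r / (1.0 - r)
        return ws, wq

    c = 1000.0                        # processor capacity, messages/sec
    for r in (0.1, 0.5, 0.9):
        ws, wq = delays(c, r)
        # W / w(s) shows how far the constant-delay assumption is off
        print(r, ws + wq, (ws + wq) / ws)   # r=0.5 -> 2x, r=0.9 -> 10x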
1 : "a 4-Node Computer Net" Hi : host i HiH : modem host i - center i H i j : modem center i - center j HiC : modem center i - host i C i : switching center i P i : c e n t r a l processing unit i 34 The computer network i s defined by the following matrices: G = 19(2) I = host message generation mean rates. Jg (3) i Cg(4) J r - f (1,2) f (1,3) f(1,4), host to host F = Jf (2,1) - f(2,3) f (2,4) | = flow matrix |f(3,1) f(3,2) - f(3,4)| (destination «-f (4,1) f(4,2) f(4,3) - J of messages) r- 2 2 4-, B = |1 - 3 3| = routing matrix 12 2 - 41 (current position - destination) •-1 3 3 -J rP(1)-i P = IP (2) I = average processor capacities |p(3) | (in messages/sec) -p(4) r (1,2) - 111(1,4)-, M = | m (2, 1) - m(2,3) | = centre to centre | - m(3,2) - m(3,4) | modem capacities «-m(4,1); - m(4,3) - J i n messages/sec r a(1,2) - d(1,«), D = |d(2,1) - d(2,3) | = centre to centre I - d(3,2) - d(3,4)| distances «-d(4,1)- - d(4,3) - J i n miles L = |m(2,h)| = host to centre modem capacities I m (3, h) | in messages/sec «-m (4,h)-« rm (1,ch C = |m (2,c) J = centre to host modem capacities | m (3, c) | i n messages/sec «-m (4,c) J (generally m (i,h) =m (i,c) , m ( i , j) =m ( j , i) , d ( i , j) =d (j, i ) ) 35 The network i s mapped into a queueing network by mapping each modem and each processing unit into an exponential server. Have propagation delays are mapped into an i n f i n i t e number of p a r a l l e l servers with rate egual to the wave propagation speed divided by the distance. The queueing network i s shown i n Figure 3.2: It can be seen that there are only 4 servers with associated "sources" (i.e. that receive a r r i v a l s from outside the system): s1, s6, s11, s16; and 4 "destination" servers: s2, s7 # s12, s17. 36 2 3 F i g u r e 3.2: n20-Node Queuing Bet" s e r v e r (node) i wave propagation delay 37 We now map the matrices defining the computer network into corresponding matrices defining the queueing network, (the destinations are 2,7,12,17). r 1— T - T -|server | a r r i v a l rates|service rates| routing | |number 1 A I B I m(1,h) I 2 — + — 1 12 17| -I 3 3| | 1 I g d) 3 I 2 I o I m (1 ,c) I o - - - j i 3 I o I P d ) I 2 4 4 5| I *» I o I n(1,2) | - 8 8 -1 I 5 I o I M1 , t ) | - - - 18| I 6 I g(2) I m(2,h) I 8 - 8 8| I 7 l o | m(2,c) | - 0 — — | I 8 I o I P(2) I 10 7 9 9| J 9 I o I m(2,3) | - - 13 13| | 10 I o I m(2,1) I 3 - - -1 I 11 I g (3) I m(3,h) I 13 13 - 13| I 12 I o | m(3,c) | - - o -1 I 13 I o I P(3) I 15 15 12 14| i 14 I o I m(3,4) J - - - 18| I 15 I o I m(3,2) I 8 8 - -1 I 16 I g(<0 I m(4,h) I 18 18 18 - j 1 17 l o | m(4,2) | - - - 0| I 18 i o I PC) I 19 20 20 17| I 19 I o I m(4,1) I 3 - -1 | 20 l o I m(4,3) | - 13 13 -1 L ±.,,, .. X A. i 2 7 12 1 r - f(1,2) f(1,3) F = 6 |f (2,1) - f (2,3) 11 |f(3,1) f(3,2) 16 «-f(4,1) f(4,2) f(4,3) 17 f (1,4)i f (2,4) | = flow matrix f (3 ,4) | (destination - J of messages) The l i m i t i n g expected values for the delays can now be computed following the method described in 38 Appendix 1, with the one change that delays computed at the centre-to-centre modems ( i . e . at s4, s5, s9, s10, s14, s15, s19, s20) must be incremented by the wave propagation delays. Some care must be taken too, when computing the t r a f f i c i f one wants to model positive acknowledgements. A detailed numerical example i s seen i n Appendix 2. 3.3 Limited Buffer Capacities: A Heuristic Approach. So far, we have not considered the p o s s i b i l i t y of packet-blocking at the switching centres. 
3.3 Limited Buffer Capacities: A Heuristic Approach.

So far, we have not considered the possibility of packet-blocking at the switching centres: we assumed that switching centres had infinite capacity buffers and that packets were never denied entrance to a centre. Simple Markovian queues with limited buffer capacities have been studied, and closed formulas for the expected queue length and delay are available. Thus, extending Jackson's theorem to networks of limited capacity queues becomes an attractive line of investigation. This approach, however, would be incorrect in this case, as shown by taking a closer look at our model. The so-called M/M/1/C queue, or limited capacity Markovian queue, considers blocked arrivals as lost arrivals; i.e., when the Poisson source generates an arrival which cannot enter the buffer, this arrival is lost, and the source continues its generation process unaffected. In our case, however, when a packet is denied entrance to a switching centre, it waits at the previous node until space is available (in reality, the packet is lost, but a copy waits at the previous node). This means that even when packets are rejected at a node, they are still being queued up for service at that node in a FIFO way; the only difference is that they wait in another buffer. Hence, messages are buffered in previous nodes and, ultimately, at the host, which we assume has an infinite capacity buffer.

What, then, is the real effect of blocking in a packet switching network? To answer this question, we model the blocking at a node as an unavailable line; i.e., as a blocked modem. The effect, then, is to decrease the actual capacity of the line by the fraction of the time that the buffer is full. For example, let centre "j" in Figure 3.3 have a probability of being full equal to .2:

[Figure 3.3: centre i connected to centre j by a 4800 bits/sec line.]

Hence, the effective capacity of the line becomes (4800)(.8) = 3840 bits/sec. The queueing model, then, is an unlimited capacity model, but with smaller service rates for all servers labelled "M" (i.e. for all servers representing modems). Hereafter, we shall refer to the line capacity as the "given" service rate, and to the modified line capacity as the "effective" service rate. Similarly, we shall refer to the "given" and "effective" utilization factors of the servers.

The problem is now to determine the probabilities of the centres being full. These can be obtained from the steady state probabilities v(n,s) of having "n" messages in queue "s", as follows. The steady state probabilities for M/M/1 are geometrically distributed and given by:

    v(n,s) = [1 - r(s)] r(s)^n

where r(s) is the effective utilization factor of the corresponding server "s". Consider now the switching centre inside the area labelled C1 in Figure 3.2. It can be seen that 4 queues share the buffer space at this centre; namely, the queues at p(1), m(1,4), m(1,c) and m(1,2). In general, for K geometrically distributed random variables with different parameters r(i), the cumulative distribution of the convolution (i.e. the probability of having N or fewer messages in the buffer), w(N,K), is given by (3.3) below (for a proof see Appendix 3):

    w(N,K) = 1 - sum from i=1 to K of { r(i)^(N+K) * product over j not equal to i of [(1 - r(j)) / (r(i) - r(j))] }     (3.3)

for r(1), r(2), ..., r(K) all distinct. For identical parameters r(1) = r(2) = ... = r(K) = r, on the other hand, the K-fold convolution of the geometric distribution with itself is given by the negative binomial distribution with parameters K and r [17]. Let us denote it by f(x;K,r).
,=r, on the other hand, the K-fold convolute of the geometrical d i s t r i b u t i o n with i t s e l f i s given by the negative binomial d i s t r i b u t i o n with parameters K and r [17]. Let us denote i t by f (x;K,r). 42 (x-1) ! K - r (1-r) x-K f(x;K,r) = (K-1) ! (x-k) ! And the probability w(N,K) of having less than or equal to N messages in the buffer i s given by: Now i f a node has some servers with equal u t i l i z a t i o n factors (which makes (3.3) undetermined) and some others with d i f f e r e n t u t i l i z a t i o n factors two d i f f e r e n t approaches can be taken: The f i r s t and perhaps "cheaper" one i s to change the u t i l i z a t i o n factors i n such a way that i f r ( i ) = r(j) then r(j) i s incremented by a small number; and thus apply formula 3.3. The second approach i s to determine the negative binomial convolute within a l l groups of equal r's (cal l them g1, g2....gm) , and to use expression (3.2) to determine the cumulative d i s t r i b u t i o n for the rest of the r's ( c a l l t h i s group gO ). We can thus compute recur s i v e l y the convolution F(N*G) f o r the group G=g1+g2+«»«+gm as follows: N w (N,K) = *Z f (x;K,r) x=0 (3.4) 43 n n- i F(n,K) = > >. — w(i,g1) w(j,g2) •••w(k,gB) (3.5) i=Oj=0 And then compute the o v e r a l l cumulative probability w (N,K) for the centre from: N w(N,K) = ^ F (i,G) « <N-i,gO) (3.6) x=0 This l a s t method, since the number of possible combinations of terms i n (3.5) i s extremely large, i s p r o h i b i t i v e l y expensive i n terms of computing time when there are more than just a few groups (say 2 or 3). Summarizing, we have obtained expressions to compute the probability of having less than or equal to N messages waiting to be served at a switching centre. We can thus compute the blocking p r o b a b i l i t y at the centre which i s simply 1-w(H,K). We notice, however, that w(N,K) i s a function of the e f f e c t i v e u t i l i z a t i o n factor which i s unknown and depends i n turn upon the blocking p r o b a b i l i t i e s we are trying to compute. We can solve t h i s problem by i t e r a t i o n , s t a r t i n g with the given l i n e capacities. From these and the t r a f f i c vector we obtain the given u t i l i z a t i o n factors using the method described i n Appendix 1 and a set of 44 blocking p r o b a b i l i t i e s using either (3.3), (3.4) or (3,6) for each switching centre. With these p r o b a b i l i t i e s we compute the modified l i n e capacities and r e s t a r t the cycle, which i s repeated u n t i l the change in blocking probabilites i s s u f f i c i e n t l y small. An ALGOLW program has been written which uses th i s method and expression 3.3 to solve a packet switching computer network of an a r b i t r a r y number of host computers and switching centres (see Appendix 5). I t has proven to be quite f a s t . In solving networks of 10 hosts and 10 switching centers for example, 3 to 7 i t e r a t i o n s were needed to obtain an accuracy of 99% i n the blocking p r o b a b i l i t i e s . In t h i s case, the program used about .5 seconds of CPU time which i s approximately 1/50 of the time used by a GPSSV simulation program in solving a similar network. More data on program performance i s given i n the next chapter. 45 CHAPTER 4. RESULTS. 4.1 The Queueing Model Vs. 
CHAPTER 4: RESULTS

4.1 The Queueing Model vs. a Simulation Model.

It is not our intention to fully validate our model by comparing it with another model which itself incorporates assumptions and abstractions of reality whose effect is hard to measure. Mutually validating models, on the other hand, are found in the literature, but it should be pointed out that these should be considered as nothing more than what their name says: "mutually validating". Whether either one is a good model of the real situation is a different question. Having made these points clear, we want to show some of the differences and similarities observed when running the same networks on the "queueing theory" implementation of our model and on Chheda's simulation program.

Ten runs were made on a 4-node network described in Appendix 4 (hereafter referred to as the test network). The simulation was run using Poisson arrivals, exponentially distributed message lengths, small packet size (relative to the message size) and utilization factors less than 0.8. It can be observed (see Table 4.1) that the overall delays were similar to those obtained using the "queueing theory" program (maximum observed difference < 14.6%).

TABLE 4.1 "Verification Runs"

    Total demand   Utilization   Overall delay (sec/mes)
    (mes/sec)      factor        Queueing model   Simulation
    10             .13           .128             -
    20             .26           .138             .143
    30             .39           .150             .164
    40             .52           .167             .187
    50             .65           .192             .220
    60             .78           .233             .225
    70             .91           .344             .299
    72             .94           .405             -
    75             .97           .374             -
    77.4           1.01          infinite         -
    80             1.04          infinite         .457
    90             1.17          infinite         .470

This, obviously, was no surprise, since the same assumptions were made in both models. For utilization factors greater than 0.8, larger differences were observed. A graph of these results can be seen in Figure 4.1.

[Figure 4.1, "Verification Runs": overall delay vs. traffic (10-90 messages/sec, utilization factors .13-1.17) for the queueing model and the simulation.]

Notice that the critical region for overall delays starts around utilization factors of 0.8, which is also where the discrepancy between both curves starts. This discrepancy is a result of extremely long transients, which make it economically impossible to achieve steady state in the simulation. There was a great difference, however, in the resources used by the two programs. CPU running time for the simulation program, for utilization factors less than .8 and for 4-node networks, ranged from 30 to 90 seconds, while the "queueing theory" program used .8 seconds in the worst cases (generally .3 to .4 seconds).

When packet sizes were increased (or message sizes reduced), both smaller and larger delays were observed in the simulation runs when compared to the "queueing theory" runs (see Figure 4.2).

[Figure 4.2, "Packet Size Effect": overall delay (sec/mes) vs. packet size (2-18 bytes); the "queueing theory" result is a horizontal line, with the simulation results varying around it.]

This was explained as follows. Packets are subject to constant delays at the switching centres. This means that messages which use only a fraction of the last packet are delayed more than in the case of the smaller packets. Constant packet delays, on the other hand, will result in message delays which are no longer exponentially distributed but tend to be constant. This, according to the Pollaczek-Khinchine formula (see section 2.2), results in shorter queueing delays. Finally, larger packets result in less leader overhead, which also tends to make the delay shorter. Notice that the "queueing theory" line is horizontal, since packet sizes are not explicitly considered in the program.
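The Pollaczek-Khinchine effect invoked here is easy to check numerically: at equal utilization, a constant-service queue holds packets in queue half as long as an exponential one, which brackets the delays as claimed in section 2.2. A minimal sketch (Python, assumed rates):

    # Pollaczek-Khinchine mean waiting time in queue for M/G/1:
    # Wq = r * (1 + cv^2) / (2 * b * (1 - r)), where cv is the coefficient
    # of variation of service times (cv = 1: exponential, cv = 0: constant).
    def wq(arrival, service, cv2):
        r = arrival / service
        return r * (1.0 + cv2) / (2.0 * service * (1.0 - r))

    a, b = 70.0, 100.0      # assumed rates, messages/sec
    print(wq(a, b, 1.0))    # exponential service (M/M/1 queueing delay)
    print(wq(a, b, 0.0))    # constant service: exactly half the M/M/1 value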
4.2 Effect of Queueing Delays at the Switching Centre.

The effect of considering the queueing delay at the switching centre as a constant (independent of the traffic, as in Kleinrock's model) was analyzed by running the "queueing theory" program on the test network. The runs were made using different traffic loads and considering (a) no queueing delays at the centre's processing unit, (b) the processing units as constant servers, and (c) the processing units as exponential servers. The capacities in messages/sec throughout the network are as follows: host processing = 770, switching centre processing = 270, all lines = 151. It can be seen from the results in Table 4.2 that as the traffic (and hence the utilization factors) increases, the discrepancies between the computed overall delays increase significantly.

TABLE 4.2 "Effect of Queueing Delays"

    Host traffic     Mean overall delays (sec)
    (messages/sec)   (a)      (b)      (c)
    84               .177     -        -
    96               .192     .235     -
    108              .216     .337     .458
    114              .233     .808     1.38
    115.2            .237     2.17     4.11
    115.7            .239     1458.    2916.

In fact, as the node utilization factors approach unity (115.8 messages/sec), the error becomes extremely large. It should be pointed out, however, that these capacities were carefully chosen so that the network would be very unbalanced (node-bounded). This is indeed the "worst case". In the next section, a series of runs including balanced and line-bounded networks is presented.

4.3 Runs with Balanced and Unbalanced Networks.

In the next series, a balanced network was used to perform 9 runs with different traffic loads. The capacities were then modified in such a way that a line-bounded and a switching-bounded network were obtained. These were run with identical traffic loads, giving the following results (Tables 4.3 and 4.4):

TABLE 4.3 "Balanced and Unbalanced Networks"

    Traffic   Line-bounded      Balanced          Switching-bounded
    load      A        B        A        B        A        B
    20        .202     .202     .214     .218     .143     .143
    30        .231     .231     .243     .252     .154     .163
    40        .277     .278     .287     .303     .168     .182
    50        .356     .358     .367     .393     .191     .213
    60        .526     .529     .538     .588     .214     .263
    70        1.168    1.170    1.179    1.337    .258     .415
    72        1.590    1.595    1.602    1.843    .271     .515
    75        3.654    3.660    3.666    4.820    .293     1.44

    A: constant queueing delays
    B: queueing delays as in M/M/1 queues

TABLE 4.4 "Errors in Balanced and Unbalanced Nets"

    Load         Utilization   Relative error
    (mes/sec)    factor        Line-bound net   Balanced net   Node-bound net
    40           .52           -                .056           .084
    50           .65           -                .071           .115
    60           .78           -                .093           .286
    70           .91           -                .134           .410
    72           .94           -                .154           .800
    75           .97           -                .310           .900

It can be seen in Table 4.4 that the way in which queueing delays are handled, while negligible for line-bounded networks, is important in the cases of balanced and switching-bounded nets, especially for high utilization factors.

4.4 Limited vs. Unlimited Buffer Capacities.

A set of runs was made on the test network using the following capacities: centre processing = 250 messages/sec, all lines = 120 messages/sec. The total traffic load over the network is 73 mes/sec, making the utilization factors equal to 0.7 and uniform throughout the network. Table 4.5 shows that the network performs well for large buffer capacities, but as the capacities decrease, it becomes overloaded around a capacity of 13 messages at the centres.

TABLE 4.5 "Effect of Buffer Capacities"

    Centre's capacity   Overall wait (sec)
    infinite            .237
    101 messages        .237
    15 messages         .333
    14 messages         1.000
    13 messages         network overloaded

A second set of runs was made, but this time line capacities were reduced to 80 messages/sec, making the network line-bounded, with utilization factors (for lines) around .93. Table 4.6 shows that buffer capacities of more than 100 messages are needed so as not to overload the network.
4.4 Limited vs. Unlimited Buffer Capacities.

A set of runs was made on the test network using the following capacities: centre's processing = 250 messages/sec, all lines = 120 messages/sec. The total traffic load over the network is 73 messages/sec, making the utilization factors equal to 0.7 and uniform throughout the network. Table 4.5 shows that the network performs well for large buffer capacities but, as the capacities decrease, it becomes overloaded around a capacity of 13 messages at the centres.

Centre's capacity    Overall Wait (sec)
   oo                   .237
  101 messages          .237
   15 messages          .333
   14 messages         1.000
   13 messages         network overloaded

TABLE 4.5 "Effect of Buffer Capacities"

A second set of runs was made, but this time line capacities were reduced to 80 messages/second, making the network line-bounded with utilization factors (for lines) around .93. Table 4.6 shows that buffer capacities of more than 100 messages are needed so as not to overload the network.

Centre's capacity    Overall Wait (sec.)
   oo                  1
  150                  1.173
  140                  1.253
  110                  1.442
  105                  1.749
  103                  2.264
  102                  3.014
  101                  7.158
  100                  oo

TABLE 4.6 "Effect of Buffer Capacities - Line-Bounded Network"

When limited buffer capacities are used in the simulation and the results of the two programs are compared, no appreciable difference between the values obtained by the two programs can be observed. The difference in CPU time now is not as large (but still remarkable) since the "queueing theory" program has to go through a series of iterations. Depending on the accuracy required, the number of nodes and the node degrees in the network, CPU time for the "queueing theory" program ranged from .5 to 10 seconds (2 to 9 iterations) while for the simulation it still ranged from 30 to 90 seconds.

The main problem encountered when making this set of runs was that for large utilization factors the simulation had a high probability of reaching a deadlocked state. This is seen in the next section.

We should mention at this point that it became clear, after these and many other test runs, that our queueing model was significantly cheaper in terms of computing resources than the simulation. We also became aware, however, of the large degree of flexibility we had with the simulation program which we did not have in our queueing model. Among these areas of inflexibility, we have already mentioned the problems of packet size to message size ratios and the header overheads. Others were the proper modelling of different types of packets (which we handled by ad-hoc traffic adjustments and modifications of the delay formulas) and the proper modelling of mixed message types, which can range from long transmissions of files to very short messages. This we were not able to handle in our model except by ad-hoc modifications for each particular case.

4.5 Deadlocks: A Limitation of the Model

Limited capacity buffers will often produce deadlock situations, especially in the cases when utilization factors are high (>0.8). A deadlock is a situation where 2 or more switching centres are mutually blocked. Consider for example a simple 2-node deadlock. Both buffers are full. Node "A" tries to send packets to node "B", keeping a copy of the packets in its own buffer (i.e. remaining full). Node "B" on the other hand is trying unsuccessfully to send packets to "A", keeping in turn a copy in its buffer; thus no packet ever gets through (see figure 4.3).
[Figure 4.3 "2-node Deadlock": buffers "A" and "B" are both full of copies of packets waiting to be accepted by the other node, so neither can accept anything.]

These situations cannot be adequately handled in our model. Theoretically, the probability of getting a deadlock in any limited capacity buffer network approaches 1 as time approaches infinity. Ideally, then, one would like to determine the expected time the network will operate before it reaches deadlock as a function of the buffer capacities and the utilization factors. All we are able to say at this point is that this expected time decreases as the blocking probabilities increase.

In order to gain some insight into the deadlock problem, an extensive series of runs was made using the simulation program. The program stops shortly after a deadlock situation arises due to the high congestion that results from it. Furthermore, the program prints statistics every few time units so that the time of the deadlock and its nature (2-way, 3-way, 4-way) can be accurately pinpointed. Figure 4.4 shows the time to deadlock in a series of runs on the test network. The network in this case is a balanced one. The dotted line represents a series of simulation runs with different random number seeds.

[Figure 4.4 "Deadlock Times": time to deadlock vs. buffer capacities of 200 to 700 messages, for several random number seeds.]

It can be observed that the time to deadlock grows as the buffer capacity increases, perhaps in an exponential fashion. It can also be seen that for the same random number seed the time to deadlock is not monotonic. This is due to the fact that for some buffer capacities a 3-way deadlock occurred (3 nodes mutually blocked), while for larger capacities a 2-way deadlock occurred earlier in time, a situation that was somehow avoided when using the smaller capacity.

No conclusions as to the exact nature of the curve in figure 4.4 were obtained. This is due to the large amount of CPU time required to make each run, which did not permit the generation of a meaningful random sample.

An attempt was made to classify deadlocks according to which network components are mainly responsible for their occurrence and how they can be avoided. This classification is derived from the theoretical approach to limited buffer capacities presented in section 3.3. Recall that the effect of limiting the buffer capacities was to increase the "given utilization factors" of the lines (traffic/capacity). This increased factor (the "effective utilization factor") was computed through a series of iterations starting from the "given" factor. Suppose now that one or more "given" utilization factors are greater than or very close to 1; it is obvious that an early deadlock situation will arise (since according to equation 2.4 the corresponding queues are expected to grow without bounds). These situations shall be called "type 1 deadlocks", and it is clear that they cannot be avoided by increasing the buffer capacities. These deadlocks arise because of inherent inadequacies in the line capacities or the switching processing units.
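The iteration just recalled can be sketched as follows (a reconstruction under our own naming, not the thesis listing itself; it mirrors the blocking probability of Appendix 3 and the capacity rescaling done by procedure MODIFY in the Appendix 5 program). The "given" factors are repeatedly inflated by the blocking probability until they either settle or pass 1:

    # Blocking probability at a centre whose buffer of capacity D is
    # shared by K queues with (distinct) utilizations rhos, using the
    # K-fold geometric convolution of appendix 3 (formula A3.6).
    def blocking_prob(rhos, D):
        K, total = len(rhos), 0.0
        for j, rj in enumerate(rhos):
            term = rj ** (D + K - 1)
            for l, rl in enumerate(rhos):
                if l != j:
                    term *= (1 - rl) / (rj - rl)
            total += term
        return total

    # "Effective" utilization factors: line capacities shrink by the
    # factor (1 - Pi), so the given factors grow by 1/(1 - Pi).
    def effective_utilizations(rhos, D, tol=1e-4, max_iter=50):
        eff = list(rhos)
        for _ in range(max_iter):
            pi = blocking_prob(eff, D)
            new = [r / (1 - pi) for r in rhos]
            if any(r >= 1 for r in new):
                return None, "overloaded: type 2 deadlock expected"
            if max(abs(a - b) for a, b in zip(new, eff)) < tol:
                return new, pi
            eff = new
        return eff, pi

    print(effective_utilizations([.65, .70, .75], D=15))   # diverges
    print(effective_utilizations([.65, .70, .75], D=60))   # settles

Run with a small buffer, the given factors of .65 to .75 blow past 1, which is the "type 2" situation described next; with a large buffer the blocking probability is negligible and the effective factors stay essentially at their given values.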
A second type of situation arises when, after the iterative procedure, one or more of the "effective" utilization factors are greater than 1 and the corresponding queues will grow without bounds. These shall be called "type 2 deadlocks". These can indeed be avoided by increasing buffer capacities. Take for example the typical case where given utilization factors range around .65 to .75 and, because of small buffer capacities, become 1 to 1.2 after the series of iterations. This is a case where larger memory sizes at the switching centres should be considered before deciding blindly to lease more powerful lines or get faster CPUs.

Finally, deadlocks that arise in spite of effective utilization factors being less than one shall be called "type 3 deadlocks". These are strictly due to variances in the average flows, which combined with high effective utilization factors (say >.8) may produce a deadlock. The probability, however, of this situation arising decreases as the utilization factors decrease (as shown in figure 4.2). One approach to these situations (aside from prevention and/or recovery schemes) is to operate networks at effective utilization factors which will make the deadlock probability insignificant.

CHAPTER 5: CONCLUSIONS AND SUGGESTIONS FOR FURTHER RESEARCH

It has been found that the field of fast response computer networks is growing extremely fast in North America and Europe. No comparative analysis can yet be made between the three major designs (packet switching, line switching and Newhall loops); each one of them has advantages and disadvantages with respect to the others.

In the search for accurate and inexpensive models for design and analysis of packet switching networks, it was found that queueing theory provides an economical but perhaps a bit inflexible tool. The theory is mainly due to Jackson and his network decomposition theorem. It is not clear that queueing models can replace discrete event simulation models in this area, but they can yield a significant saving in computational resources. If used along with a simulation, a powerful pair of models for analysis of computer networks is obtained. This pair provides a tool which ranges from the very inexpensive and fast (given by the queueing theory) to the flexible and detailed (given by the simulation). It was also found that the queueing model used in the design and analysis of ARPA and other computer networks had two major deficiencies: first, that no queueing delays were considered at the switching processors, and second, that no provision was made to include the possibility of packet blocking at the switching centres. A model was developed which solved these deficiencies. The concepts of balanced, switching-bounded and transmission-bounded networks are presented along with their implications for model accuracy.

The area of deadlocks provides an interesting line for further research. Except for some scattered work (see for example [10]) very little attention has been given to stochastic deadlock models. From a design point of view, for example, a formalization of the deadlock classification discussed in section 4.5 would be extremely useful.
It is strongly felt, after extensive experimentation with the GPSS V model and some digging in the literature (which deals mostly with detection and prevention in a deterministic environment), that efforts should be directed to the correct determination of the expected time to deadlock as a function of the buffer capacities (see figure 4.4 in the previous section).

Along different lines, two of the interesting problems arising from the study of modern computer networks are the determination of an optimal network topology (which nodes should be connected) given the locations of the nodes and the projections of the traffic, and the selection of routing procedures which minimize the overall message delay. Both problems have been separately studied in the literature (see for example [13,18] for routing and [15] for topology). As in many other applications, however, for computer networks these problems are intimately related, and optimizing one of them implies a suboptimal solution for the global problem.

Routing techniques can be divided into two major groups: fixed routing and dynamic routing. Fixed routing is one where the routing matrices or tables do not change in time, while in dynamic routing these tables are adjusted according to the past history and/or present status of the network. An example of fixed routing is the one used throughout this work, given by a current-position - destination matrix. Another fixed routing is the one given by an origin-destination matrix where each entry in the matrix is a vector of intermediate nodes. The only reason to consider dynamic routing techniques in computer networks is the highly variable nature of the traffic from moment to moment. In fact, if message generation rates and message lengths were constant, there would be no congestion due to variances in the traffic, and a fixed routing minimizes the overall message delay. This suggests that for computer networks one must separate the problems of cost optimization (for which fixed routings and average flows can be considered) from the problems of operation under varying traffic, for which dynamic routing schemes can be used. This assertion remains to be proven, suggesting an interesting line for further research.

One approach, then, to the global routing-topology problem would be to determine optimal topologies for given fixed routing schemes, and then to select the best one. This, however, involves large amounts of computation, and more sophisticated optimization algorithms are needed. One such procedure is suggested by Frank et al. [16], but no follow-up on it has been found by this author. The idea behind this method is to use a dynamic algorithm which starts with a given topology and line capacities and, by deleting and adding lines in every step, determines optimal topologies, line capacities and routes.

BIBLIOGRAPHY

1. Anslow, N.G.; Hanscott, J. "Implementation of International Data Exchange Networks". ICCC, October 1972.
2. Barber, D.L.A. "The European Computer Network Project". ICCC, October 1972.
3. Burke, P. "The Output of a Queueing System". Opns. Res. 4.
4. Carr, C.S.; Crocker, S.D.; Cerf, V.G. "Host to Host Communication Protocol in the ARPA Network". Spring Joint Computer Conference, 1972.
5. Chu, W.W. "An Analysis of Buffer Behavior with Batch Poisson Arrivals and Simple Server with Constant Output Rate". IEEE Trans. Com-18, No. 5.
6. Clarke, A.B.; Disney, R.L. "Probability and Random Processes for Engineers and Scientists". John Wiley, 1970.
7. Courtois, P.J. "On the Near-Complete Decomposability of Networks of Queues and of Stochastic Models of Multiprogramming Computer Systems". Department of Computer Science, Carnegie-Mellon University, 1971.
8. Crocker, S.D.; Heafner, J.F.; Metcalfe, R.M.; Postel, J.B. "Function Oriented Protocols for the ARPA Network". Spring Joint Computer Conference, 1972.
9. Dell, F.R.E. "Features of a Proposed Synchronous Data Network". IEEE, October 1971.
10. Ellis, A. "On the Probability of Deadlock in a Computer System". Department of Computer Science, University of Colorado, Boulder, Colorado.
11. Feller, W. "An Introduction to Probability Theory and its Applications". John Wiley, 1968.
12. Fisher, C.R.; Slish, R.L. "The Datran Network". IEEE, October 1971.
13. Frank, H.; Chow, W. "Routing in Computer Networks". Networks 1, pp. 99-112, 1971.
14. Frank, H.; Chow, W. "Cost and Throughput in Computer Communication Networks". Network Analysis Corporation, Glen Cove, N.Y.
15. Frank, H.; Frisch, I.T.; Chow, W. "Topological Considerations in the Design of the ARPA Computer Network". Spring Joint Computer Conference, 1970, pp. 581-587.
16. Frank, H.; Kahn, R.F.; Kleinrock, L. "Computer Communication Network Design - Experience with Theory and Practice". Spring Joint Computer Conference, 1972.
17. Freund, J.E. "Mathematical Statistics". Prentice Hall, 1971.
18. Fultz, F.; Kleinrock, L. "Routing in Computer Networks". Spring Joint Computer Conference, 1972.
19. Gabler, H.G. "The German EDS Network". IEEE, October 1971.
20. Hall, W.; Jervis, B.; Jervis, J. "Simulation of a Newhall Loop". Department of Computer Science, University of British Columbia, 1973.
21. Hayes, J.F.; Sherman, D.H. "Traffic Analysis in a Ring Switched Data Transmission System". The Bell System Technical Journal, November 1971.
22. Hirota, K. "Public Telephone Network and Computer Communication". ICCC, October 1972.
23. Jackson, J.R. "Networks of Waiting Lines". Opns. Res. 5, p. 518.
24. Jackson, P.E.; Fuchs, E. "Estimates of Distributions of Random Variables for Certain Computer Communication Traffic Models". ACM Conf. Proc., October 1969.
25. Jackson, P.E.; Stubbs, C.D. "A Study of Multi-access Computer Communications". AFIPS Conf. Proc. 34, p. 491.
26. Kleinrock, L. "Communication Nets". McGraw-Hill, 1964.
27. Kleinrock, L. "Models for Computer Networks". IEEE, 1969, Boulder, Colorado.
28. Kleinrock, L. "Analytic and Simulation Methods in Computer Networks". Spring Joint Computer Conference, 1970.
29. Martin, J. "Telecommunications and the Computer". Prentice Hall, 1969.
30. Manning, E. "Newhall Loops and Programmable TDM: Two Facets of Canadian Research in Computer Communications". ICCC, October 1972.
31. McKenzie, A.A.; Cosell, B.P.; McQuillan, J.M.; Thrope, M.J. "The Network Control Center for the ARPA Network". ICCC, October 1972.
32. McQuillan, J.M.; Crowther, W.R.; Cosell, B.P.; Walden, D.C. "Improvements in the Design and Performance of the ARPA Network". AFIPS Conf. Proc., Vol. 41.
33. De Mercado, J.; Guindon, R.; Da Silva, J.; Kadoch, M. "Topological Analysis and Design of CANUNET". Report of the Department of Communications, Canadian Government, January 1972.
34. De Mercado, J.; Guindon, R.; Da Silva, J.; Kadoch, M.
"Topological Analysis of CANONET" Vol 2 Report of the Department of Communications, March 1972. Canadian Government. 35. Morse, P. "Stochastic Properties of Waiting Lines". Op_ns._ Res._ 3, p. 256. 36. Newhall, E.E.; Venetsanopoulos, A.N. "Computer Communications Representative Systems." ICCC October, 1972. 37. Ohlmer, A. "Summary of Existing Data Communication Services in Western Europe and Tentative Forecast of New Services for the Next Decade." ICCC^ October 1972. 38. Roberts, L.G.j Wessler, B.D. "Computer Network 69 Development to Achieve Resource Sharing". Spring Joint Computer Conference^. 1970. 39. Scantelbery, R.A.; Wilkinson, P.T. "The Design of a Switching System to Allow Remote Access To Computer Services by Other Computers and Terminal Devices." IEEE, October 1971. 40. Seeley, D.; Alemparte, M. "Simulation and Analysis of the Proposed Canadian Computer Network: CANONET" Canadian Computer Conference. Edmonton, Alberta, 1973. 41. Steward, E.H. "A Loop Transmission System". ICC 1970. Vol 2. 42. Wallace, U.L. ; Rosenberg, R.S. "RQA-1 The Recursive Queue Analyzer". Department of E l e c t r i c a l Engineering. Systems Engineering Laboratory. The University of Hichigan, 1966. 43. Zafiropoulo, P.; Rothhauser, E.H. "Signaling and Frame Structures i n Highly Decentralized Loop Systems" ICCC X October 1972. 70 APPENDIX 1 "Determination of Average Delays in an Open Network of Markovian Queues Using Jackson's Theorem". Let an open network of Markovian queues be a network of exponential servers and t h e i r FIFO queues where transactions (e.g. messages) a r r i v e from outside the system i n a Poisson fashion and flow through the servers according tc a current position - destination routing matrix u n t i l they reach a l a s t (or destination) server after which they leave the system. Let "S" be the set of servers or "nodes" where each server s(i) (i= 1,2,... ,n) has an exponential d i s t r i b u t i o n of service times with mean 1/b (i) and i s associated with a "source" which generates packets i n a Poisson fashion at a rate a ( i ) . The vector "A" and "B" are those whose entries are a (i) and b (i) respectively. The entries f ( i , j ) (i= 1,2,... ,n ; j=1,2,.,.,n) to the "flow" matrix "F" indicate the f r a c t i o n of the messages generated at server s (i) that w i l l leave the system through server s (j) with the condition that s(i)#s(j) i . e . f ( i , i ) = 0 for a l l i . C l early then: n £ f ( i , j ) = 1 3=1 F i n a l l y , l e t r ( i , j ) be the entries i n the current position - destination routing matrix "R" indicating the next server to which the message w i l l go given that i t i s currently at s (i) and the destination i s s ( j ) . The entries t (i) (i=1,2....n) of the t r a f f i c vector t are determined by the following algorithm: 1) 1 t (i) = a (i) (i=1,2.,..n) 2) for each non-zero f ( i , j ) i n F do: i) k=i i i ) t ( r ( k , j ) ) = t ( r ( k , j ) ) + f (i,j)»a(i) i i i ) i f r(k,j) * j then k=r (k, j) and go back to i i ) 71 The delay d(i) (i=1,2....n) at each server can now be determined from the t r a f f i c by applying formulas (3.1 and 3.2) The average delay for each source-destination pair i s computed by adding a l l the delays encountered i n the corresponding route. The algorithm i s s i m i l a r to that f o r determining the t r a f f i c and i s not repeated here. Finaly, the " o v e r a l l network delay" i s determined taking the average between a l l the source destination pairs weighted by the corresponding flows. 
For illustration of the method, consider the following simple example.

[Figure A1.1 "A 4-node queueing net": servers s1 to s4; external Poisson sources feed s1 and s3, and served transactions either move to the next server on their route or leave the system.]

The queueing network is defined by the following matrices:

A = (4, 0, 6, 0)        B = (7.5, 7.5, 7.5, 7.5)

    | 0  .3  .3  .4 |        | -  2  2  4 |
F = | 0   0   0   0 |    R = | 1  -  3  3 |
    | .5 .2   0  .3 |        | 2  2  -  4 |
    | 0   0   0   0 |        | 1  3  3  - |

Traffic computation: let f'(i,j) = f(i,j)*a(i)

t(1):  a(1)=4;  f'(3,1)=3.0                                  ->  7.0
t(2):  a(2)=0;  f'(1,2)=1.2, f'(1,3)=1.2, f'(3,1)=3.0,
       f'(3,2)=1.2                                           ->  6.6
t(3):  a(3)=6;  f'(1,3)=1.2                                  ->  7.2
t(4):  a(4)=0;  f'(1,4)=1.6, f'(3,4)=1.8                     ->  3.4

Results:

           utilization   delay   queue*
server 1      .93         2.0     15.0
server 2      .88         1.1      8.3
server 3      .96         3.3     25.0
server 4      .45         .24      1.8

*) including the item being served.

APPENDIX 2

"Numerical Example Using the 4-Node Computer Network Described in Section 3.2"

Expected message delays are determined for the network described in 3.2. The model yields an associated queueing network with 20 nodes, which is solved using Jackson's theorem. The following are the numerical values for the matrices and parameters defining the computer network:

Average message length: 10 1-byte packets
Generation rates at the hosts: g(1)=g(2)=g(3)=g(4) = 30 messages/sec.

Flow matrix:
    | -  .1  .7  .2 |
F = | .3  -  .5  .2 |
    | .1 .4   -  .5 |
    | .2 .2  .6   - |

Routing matrix:
    | -  2  2  4 |
R = | 1  -  3  3 |
    | 2  2  -  4 |
    | 1  3  3  - |

Average capacity of processing units: 1000 packets/sec, i.e.
p(1)=p(2)=p(3)=p(4) = 100 messages/sec.

Centre to centre modem capacities are given by AT&T 3002-c4 lines (4800 bits/sec):
m(1,2) = m(1,4) = ... = m(4,3) = 60 messages/sec.

[Figure A2.1 "A 4-Node Computer Net"]
Hi  : host i
HiM : modem host i - centre i
Mij : modem centre i - centre j
MiC : modem centre i - host i
Ci  : switching centre i
Pi  : central processing unit i

[Figure A2.2 "20-Node Queueing Net": one server (node) per modem and processing unit; the arcs carry the wave propagation delays.]

Centre to centre distances:
d(1,2) = d(2,1) =  700 miles
d(1,4) = d(4,1) = 1800 miles
d(2,3) = d(3,2) = 2500 miles
d(3,4) = d(4,3) = 1500 miles

Host to centre modem capacities are given by AT&T type-8803 wideband lines (19,200 bits/sec):
m(1,h) = m(1,c) = m(2,h) = ... = m(4,c) = 240 messages/sec.

Thus the matrices A, B, R and F defining the queueing network are given by:
server   arrival   service   routing (next server toward
number   rate A    rate B    destination host 1 2 3 4)
   1       30        240      -   3   3   3
   2        0        240      0   -   -   -
   3        0        100      2   4   4   5
   4        0         60      -   8   8   -
   5        0         60      -   -   -  18
   6       30        240      8   -   8   8
   7        0        240      -   0   -   -
   8        0        100     10   7   9   9
   9        0         60      -   -  13  13
  10        0         60      3   -   -   -
  11       30        240     13  13   -  13
  12        0        240      -   -   0   -
  13        0        100     15  15  12  14
  14        0         60      -   -   -  18
  15        0         60      8   8   -   -
  16       30        240     18  18  18   -
  17        0        240      -   -   -   0
  18        0        100     19  20  20  17
  19        0         60      3   -   -   -
  20        0         60      -  13  13   -

(rates in messages/sec; a "0" in the routing matrix indicates end-of-route)

Following the method of Appendix 1 we obtain the traffic vector T:

T = (30, 18, 48, 24, 6, 30, 21, 75, 33, 12, 30, 54, 96, 21, 21, 30, 27, 57, 6, 24)

Notice that this traffic has been computed without including positive acknowledgment logic. The utilizations, average queues and average delays are given by (the modified delay includes a wave propagation delay of approx. .01 msec/mile):

server   util.   av. qu.   av. del.   modif.
   1     .125     1.143      4.76       4.8
   2     .075     1.081      4.50       4.5
   3     .480     1.923     19.23      19.2
   4     .400     1.667     27.78      34.8
   5     .100     1.111     18.52      36.5
   6     .125     1.143      4.76       4.8
   7     .087     1.095      4.56       4.6
   8     .750     4.000     40.00      40.0
   9     .550     2.222     37.03      62.0
  10     .200     1.250     20.83      27.8
  11     .125     1.143      4.76       4.8
  12     .225     1.290      5.38       5.4
  13     .960    25.000    250.00     250.0
  14     .350     1.538     25.63      40.6
  15     .350     1.538     25.63      50.6
  16     .125     1.143      4.76       4.8
  17     .113     1.127      4.70       4.7
  18     .570     2.326     23.26      23.3
  19     .100     1.111     18.52      36.5
  20     .400     1.667     27.78      62.8

(time units are msec.)

Hence, we compute the origin-destination delay of a message by adding each delay encountered on its route:

    | -      103.4  416.2   88.5 |
D = | 96.3   -      362.2  425.4 |   (in msec.)
    | 396.9  350.0  -      323.4 |
    | 88.3   436.1  346.3  -     |

Applying formula (3.2) one can see that 96% of the delay at server 13, that is, at switching centre 3's processing unit, is due to queueing. In fact, if queueing delays at the processing units are ignored, the origin to destination delays would have been:

    | -      64.2  137.0   66.0 |
D = | 57.1   -      92.2  142.1 |   (in msec.)
    | 117.7  80.0  -       70.1 |
    | 65.8  152.8  93.0   -     |

By observation of the T and B vectors one can see that voice grade 3002-c4 lines could have been used for the host-centre connections without degrading the performance of the network, and the centre to centre connections, except for the pair 2-3, could have used 3002-c2 lines (2400 bits/sec). The delays if these changes are made become:

    | -      284.3  713.0  147.8 |
D = | 238.6  -      593.7  591.7 |   (in msec.)
    | 533.0  459.5  -      448.0 |
    | 141.2  649.5  640.0  -     |

In fact, what we have done by introducing these changes in line capacities is to transform an "unbalanced, switching bounded network" into a more "balanced" one. Finally, let us stress the fact that with this example, rather than presenting a prototype of a packet switching network, we are just illustrating the method of delay determination and the problem of having an unbalanced network. Packet switching networks must include some kind of acknowledgement logic which will add to the traffic vector.
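The columns of the table above can be checked mechanically (our snippet, not part of the appendix): every entry follows from T and B through the M/M/1 relations used throughout, plus roughly .01 msec of propagation per mile of line.

    # utilization t/b, average queue 1/(1-rho) (including the item in
    # service), delay 1/(b-t) in msec, and the distance-modified delay.
    def check_row(t, b, miles=0):
        rho = t / b
        delay_ms = 1000.0 / (b - t)
        return rho, 1 / (1 - rho), delay_ms, delay_ms + 0.01 * miles

    print(check_row(48, 100))        # server 3:  .48, 1.923, 19.23, 19.2
    print(check_row(24, 60, 700))    # server 4:  .40, 1.667, 27.78, 34.8
    print(check_row(96, 100))        # server 13: .96, 25.0, 250.0, 250.0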
APPENDIX 3

"Cumulative K-fold Convolution for Geometrically Distributed Random Variables with Integer Domains and Different Parameters"

Let $p_i(n)$ be the density function for the $i$-th geometrically distributed random variable with parameter $r_i$:

$$p_i(n) = \begin{cases} (1-r_i)\,r_i^{\,n}, & n = 0,1,2,\ldots \\ 0, & \text{otherwise} \end{cases} \qquad (A3.1)$$

and let $P_i(n) = \sum_{m=0}^{n} p_i(m) = 1 - r_i^{\,n+1}$ be its cumulative distribution function, so that

$$1 - P_i(n-1) = r_i^{\,n}. \qquad (A3.2)$$

Let $Q_K(n)$ be the probability of the sum of $K$ random variables $n_1, n_2, \ldots, n_K$ with distributions $p_1(n_1), p_2(n_2), \ldots, p_K(n_K)$ being greater than or equal to $n$. Clearly

$$Q_1(n) = 1 - P_1(n-1) = r_1^{\,n} \qquad (A3.3)$$

and, conditioning on the value of the $K$-th variable,

$$Q_K(n) = \sum_{i=0}^{n-1} p_K(i)\, Q_{K-1}(n-i) + r_K^{\,n}. \qquad (A3.4)$$

Working out $Q_2(n)$ from (A3.4), (A3.2) and (A3.1) gives

$$Q_2(n) = \frac{(1-r_2)\,r_1^{\,n+1} - (1-r_1)\,r_2^{\,n+1}}{r_1 - r_2}, \qquad (A3.5)$$

and carrying the computation one step further to $Q_3(n)$ and rearranging induces a reasonable expression (from a computational point of view) for distinct parameters:

$$Q_K(n) = \sum_{i=1}^{K} r_i^{\,n+K-1} \prod_{\substack{j=1 \\ j \ne i}}^{K} \frac{1-r_j}{r_i - r_j}. \qquad (A3.6)$$

The induction is completed by substituting (A3.6) into the recurrence (A3.4) and rearranging; the intermediate algebra, which proceeds by partial fractions, is omitted here. (This is the expression evaluated by procedure COMPUTE_PI in the program of Appendix 5.)

APPENDIX 4

"The Test Network"

[Diagram: the 4-node test network of hosts and switching centers; all center to center lines are 4.8 kbit/sec.]

Center to center line capacities: 4.8 kbit/sec
Host-center line capacities: 7.2 kbit/sec
Message lengths: exponentially distributed with an average of 4 packets
Packet size: 2 characters = 2 bytes
Center's processing capacities: 616 packets/sec
Center to center packets are acknowledged with one packet
Host to host messages are acknowledged with one packet

APPENDIX 5

"AN ALGOL W PROGRAM TO ANALYZE PACKET SWITCHING NETWORKS USING QUEUING THEORY";
BEGIN
COMMENT PROGRAM TO COMPUTE THE EXPECTED QUEUE LENGTHS AND DELAYS IN A PACKET SWITCHING NETWORK;
COMMENT **** LIMITATIONS OF PROGRAM ****
THE PROGRAM CAN HANDLE SYMMETRIC NETWORKS, I.E. ROUTE(I,J) = ROUTE(J,I).
THE WAVE PROPAGATION DELAYS HAVE NOT BEEN INCLUDED.
ONE HOST PER IMP IS ALLOWED. EACH HOST I IS ASSOCIATED TO IMP I+M.
WITH MINOR CHANGES THIS CAN BE GENERALIZED.
END OF COMMENT * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * ; COMMENT * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * DATA INPUT: (CAPACITIES ARE ALWAYS IN * VIRTUAL* MESSAGES PER UNIT T I M E , THE * VIRTUAL * MESSAGE IS A 'MESS AGE ACKNOWLEDGEMENT *) M = NUMBER OF HOST COMPUTERS N = NUMBER OF I MPS MMAC = RATIO LENGTH OF AVERAGE MESSAGE TO LENGTH OF REAL MESSAGE MTRC = RATIO OF AVERAGE MESSAGE TO RFNM FORMULA (1) = DELAY FORMULA FOR IMP PROCESSING TIME FORMULA ( 2 ) = DELAY FORMULA FOR CHANNEL TRANSMISSION FORMULA ( 3 ) = DELAY FOMULA FOR HOSTS I F FORMULA (I) IS 1 THEN THE DELAY IS AS IN M/M/1 QUEUES I F FORMULA(I) IS 2 THEN THE DELAY IS AS IN M/G/1 QUEUES I F FORMULA (I) IS 3 THEN THE DELAY IS AS IN G / G / 1 QUEUES V A R ( 1 ) , VAR ( 2 ) VARIANCES FOR ARRIVALS AND DEP. FOR IMPS VAR ( 3 ) , VAR ( 4 ) VARIANCES FOR ARRIVALS AND DEP. FOR CHANNELS VAR (5) , V A R ( 6 ) VARIANCES FOR ARRIVALS AND DEP. FOR HOSTS (IF / M / PROCESSES ARE USED THE CORRESPONDING VAR (I) IS REDUNDANT) S P E C I F I C A T I O N S ( I , J ) : HOST TO HOST TRAFFIC I F K=M,J<=M AND I-.=J IN MESS/SEC HOST PROCESSING CAPACITIES IF K = M , J < = M AND I=J MESS/SEC ROUTING MATRIX IF I > M , J < = M IMP-IMP LINE CAPACITIES IF I>K,J>M AND J>I IMP-IMP DISTANCES IF I > M , J > M AND I>J (MILES) IMP PROCESSING CAPACITIES IF I>M,J>M AND I=J MESS/SEC ALL INPUT CAN BE PUNCHED CONTINUOUSLY ACROSS CARDS LEAVING ONE BLANCK BETWEEN NUMBERS. (CHECK TYPE) OUTPUT: ALL INPUT IS ECHO-PRINTED (EXCEPT FOR SPECIFICATIONS) THEN THE HOST TO HOST TRAFFIC DEMAND MATRIX IS PRINTED THEN THE ROUTING MATRIX IS PRINTED SEVERAL COLUMNS FOR NODE VECTORS. LAST COL IS NODE NUMBER (NOTE THAT NODES 1 TO M ARE THE HOSTS) SEVERAL COLUMNS FOR LINE VECTORS, LAST TWO ARE FOR LINE NUMBER THE WAITING TIME AT HOSTS I S AN AVERAGE WHICH INCLUDES THE ROUND TRIP OF THE RFNM. 
OVERALL RAITING TIME (SELF EVIDENT) END OF COMMENT *********************************** LOGICAL OVERLOADED, TEMP, RF, PRINT RO_CHANGES, CONVERGING; INTEGER M ,N, ITER; REAL MMAC, MTRC, COEF; REAL OVERALL RAIT; INTEGER ARRAY FORMULA (1 : : 3) ; REAL ARRAY V A R ( 1 : : 6 ) ; REAL TOLERANCE,ERR,ERROR; READ (ERROR) ; WRITE (" "} ; WRITE ("ERROR = ", ERROR) ; READ (TOLERANCE) ; WRITE ( " " ) ; WRITE ("TOLERANCE = ", TOLERANCE) ; READ (PRINT RO CHANGES); READ (FORMULA (T) ,FORMULA (2) ,FORMULA (3) ) y WRITE ( " " ) ; WRITE ("FORMULAS USED"); WRITE (FORMULA (1) , FORMULA (2) ,FORMULA (3)) ; READ(MMAC); WRITE (" ") ; WRITE ("RATIO VIRTUAL MESSAGE/REAL MESSAGE") ; WRITE (MMAC) ; READ (MTRC) ; WRITE (" ") ; WRITE («!RATIO VIRTUAL MESSAGE/RFNM") ; WRITE (MTRC) ; READ (VAR (1) , VAR (2) , VAR (3) , VAR (4) , VAR (5) , VAR (6)) ; WRITE ( " " ) ; WRITE ("VARIANCES USED"); WRITE (VAR (1) , VAR (2) r VAR (3) ,VAR(4) ,VAR (5) ,VAR (6)) ; READ (M) ; READON (N) ; BEGIN REAL ARRAY DELAYS (1 : : M+N, 1 : : M + N) ; REAL ARRAY LENGTHS(1::M+N,1::M+N); REAL ARRAY AV_NODE_QUEUES (M +1::M+N) ; INTEGER ARRAY~SPECIFICATI08S (1::M+N ,1::M+N); INTEGER ARRAY TRAFFIC (1 :: M +N, 1 :: M+N) ;• REAL ARRAY WAIT_TIME(1::M) ; REAL ARRAY NODE_PI(M+1::M + N) ; INTEGER ARRAY NODE_BUF_CAP(M+1::M+N); COMMENT ************** PROCEDURE DELAY_FORMULA *****************; COMMENT COMPUTES A DELAY ACCORDING TO 1 OF 3 FORMULAS; REAL PROCEDURE DELAY_FORMULA (INTEGER VALUE A,B,C,D); BEGIN REAL CAP,SAVE; INTEGER TRA; IF (C=D) AND (A-.=3) THEN A : = 1 ; CAP: = I F B=1 THEN SPECIFICATIONS (C, D) ELSE SPECIFICATIONS (D,C) TRA:=TRAFFIC(C, D) ; SAVE := CASE FORMULA(A) OF ( (TRA/CAP) / (CAP-TRA) , ( (TR A/CAP) * (1 . + (CAP*VAR (A + 1 ) )**2) )/ (2„* {CAP-TRA) ) , (TRA* (VAR (A) * VAR (A) + VAR (B) * VAR (B) ) ) / (2* (1 . -TRA/CAP) ) ) SAVE:=SAVE + 1/(CAP*COEF) ; SAVE END; COMMENT ************** PROCEDURE LENGTH_FORMULA *****************; REAL PROCEDURE LENGTH_FORMULA(INTEGER VALUE A,B,C,D); BEGIN REAL CAP,SAVE,SAV; INTEGER TRA; I F (C=D) AND (A-.=3) THEN A: = 1 ; CAP: = I F B=1 THEN SPECIFICATIONS (C,D) ELSE SPECIFICATIONS (D,C) TRA :=TR AFFIC (C, D) ; SAV := CASE FORMULA(A) OF (TRA/(CAP-TRA) , TRA/(CAP-TRA) , TRA/CAP); SAVE:=IF C-=D THEN SAV-TRA/CAP ELSE SAV; SAVE END; COMMENT *************PROCEDURE MODIFY_CAP_COMPUTE *******; PROCEDURE MODIFY_CAP_COMPUTE; BEGIN REAL TEMPERR; INTEGER ARRAY CAP(1::M + N,1::M+N) ; REAL ARRAY TEMP_PI (M +1: :M+N) ;• REAL ARRAY ERRO (M +1: : M + N) ; PROCEDURE MODIFY; BEGIN FOR J:= M+1 UNTIL M+N DO FOR I := 1 UNTIL J - 1 DO I F SPECIFICATIONS ( I , J) 0 THEN BEGIN SPECIFICATIONS ( I , J) : = ENTIER (CAP ( I , J) * (1 - NODE_PI (J) ) ) ; IF SPECIFICATIONS(I,J) <= TRAFFIC(I,J) THEN BEGIN OVERLOADED;= TRUE ; WRITE (*• ") ; WRITE ("NETWORK OVERLOADED") ; WRITE ("DETECTED AT ",I,J,"ITERATION",ITER) END END END; FOR I : = 1 UNTIL M+N DO FOR J : = 1 UNTIL M+N DO CAP ( I , J) ^ --SPECIFICATIONS ( I , J) ; FOR I:=M+1 UNTIL M+N DO TEMP_PI (I) : = MAXREAL/2. 
; WHILE CONDITION DO BEGIN TEMPERR:=0; ITER: = ITER+1; COMPUTE_DELAYS_QUEUES; COMPUTE_NODE_QU£UE_PI ; FOR I:=H+1 UNTIL M+N DO ERRO (I) : = TEMP_PI (I)-NODE_PI (I) ; FOR I:= M+1 UNTIL M+N DO TEMPERR:=IF ERRO (I) >TEMPERR THEN ERRO (I) ELSE TEMP ERR I F TEMPERR>=ERR THEN BEGIN WRITE ("NOT CONVERGING") ; CONVERGING:= FALSE END ELSE BEGIN ERR := TEMPERR; FOR I:=M+1 UNTIL M+N DO TEMP PI (I) z= NODE_PI(I); MODIFY END END END; COMMENT ************* PROCEDURE CONDITION ************; LOGICAL PROCEDURE CONDITION; BEGIN LOGICAL T; T: = IF (ERR>ERROR) AND -.OVERLOADED AND CONVERGING THEN TRUE ELSE FALSE; T END; COMMENT ********** PROCEDURE PRINT_DEMANDS ****************; PROCEDURE PRINT_DEMANDS; BEGIN INTEGER TEMP; WRITE (" " ) ; WRITE (" ") ; WRITE ("DEM ANDS") ; WRITE (" ") ; FOR I:=1 UNTIL M DO FOR J:=1 UNTIL M DO BEGIN TEMP:=IF I=J THEN 0 ELSE S P E C I F I C A T I O N S ( I , J ) ; I F J=1 THEtf WRITE (TEMP) ELSE WRITEON (TEMP) END; WRITE(" ") ; WRITE(" ") ; WRITE ("ROUTING") ; WRITE (" " ) ; FOR I:=M+1 UNTIL M+N DO FOR J:=1 UNTIL M DO IF J=1 THEN WRITE(SPECIFICATIONS(I,J) ) ELSE WRITEON {SPECIFICATIONS(I,J)) ; WRITE(" ») ; WRITE(" " ) ; WRITE ("NODE BUFFER CAPACITIES") WRITE (" ") ; FOR I:=M+1 UNTIL M+N DO WRITE (NODE_BUF_CAP (I) ) END; COMMENT ************* PROCEDURE PRINT_CAPACITIES_TRAFFIC ********; PROCEDURE PRINT_CAPACITIES_TRAFFIC; BEGIN INTEGER T,TT; WRITE (" ") ; WRITE (" ») ; WRITE ("CAPACITIES AND TRAFFIC") ; WRITE (" " ) ; WRITE ("NODES") ; FOR I:=1 UNTIL M+N DO BEGIN I F I=M + 1 THEN WRITE (" ") ; T := SPECIFICATIONS ( I , I) ; TT := T R A F F I C ( 1 , 1 ) ; WRITE ( I , " ", T,TT,TT/T); I F TT>=T THEN BEGIN WRITEON ("NETWORK OVERLOADED"); OVERLOADED:= TRUE END END; WRITE (" ") ; WRITE ("LINES"); FOR I:=1 UNTIL M+N DO FOR J:=1 UNTIL M+N DO I F (I-«=J) AND ( (I>M) OR (J>M>) THEN BEGIN T := IF K J THEN SPECIFICATIONS(I,J) ELSE SPECIFICATIONS ( J , I) TT := TRAFPIC ( I , J) ; IF T-»=0 THEN BEGIN WRITE (I,J,T,TT,TT/T) ; I F TT>=T THEN BEGIN WRITEON("NETWORK OVERLOADED"); OVERLOADED:= TRUE END END END END; COMMENT ******************** PROCEDURE ADD_RFNM_FLOW ************; PROCEDURE ADD_RFNMj_FLOW ; BEGIN FOR I:=1 UNTIL M-1 DO FOR J:=I+1 UNTIL M DO SPECIFICATIONS (I,J) := SPECIFICATIONS(I,J)+SPECIFICATIONS (J,I) ; FOR I:=2 UNTIL M DO FOR J:=1 UNTIL 1-1 DO SPECIFICATIONS(I,J) := SPECIFICATIONS(J,I) END; COMMENT ********************* PROCEDURE PRINT_TABLES ********; PROCEDURE PRINT TABLES; BEGIN INTEGER L, LL; INTEGER T,TT; WRITE (" " ) ; WRITE ("ITERATIONS =",rITER) ; WRITE (" ") ; WRITE (» ") ; WRITE ("NODE VECTORS"); WRITE {" ") ; WRITE (" NUMBER MOD CAP TRAF UTILIZATION") ; WRITEON(" DELAY AV. LENGTH ") ; WRITE (" " ) ; FOR I:=1 UNTIL M+N DO BEGIN IF I=M + 1 THEN SRITE(" » ) ; L: = SPECIFICATIONS(1,1) ; LL: = TRAFFIC (I, I) ; WRITE(I," ",L,LL,LL/L,DELAYS(1,1) ,LENGTHS (I,I)) END; WRITE (" ") ; WRITE ("LINE VECTORS") ; WRITE (" ") ; FOR I : = 1 UNTIL M+N DO FOR J:=1 UNTIL M+N DO IF (I--=J) AND ((I>M) OR (J>M)) THEN BEGIN T: = IF K J THEN SPECIFICATIONS(I,J) ELSE SPECIFICATIONS ( J , I ) ; IF T-.=0 THEN BEGIN TT:=TRAFFIC(I,J) ; WRITE (I , J , T ,TT ,TT/T, DELAYS (iV'J) ,LENGTHS (I , J) ) END END; WRITE (" ") ; WRITE (" ") ; WRITE ("WAITING TIMES AT HOSTS") ; WRITE ( " " ) ; • • • FOR I:=1 UNTIL M DO WRITE (WAIT_TIME (I) ) ; WRITE(" " ) ; WRITE(" " ) ; WRITE("OVERALL WAIT = ", OVERALL_WAIT); WRITE (" ") ; WRITE (" ") ; WRITE("AVERAGE NUMBER OF PACKETS IN BUFFERS", NODE'S PROBABILITY OF BLOCKING"); WRITE (" " ) ; . 
• • FOR I:=M+1 UNTIL M+N DO WRITE(AV_NODE_QUEUES(I) ," ", •» ",NODE_PI (I) ) ; WRITE(" " ) ; WRITEl" ») END; COMMENT ********** PROCEDURE WAITING_TIMES **************; PROCEDURE WAITING_TIMES; BEGIN REAL T,S,TT,SS; TT:=SS:=0; FOR I:=1 UNTIL M DO BEGIN T:=S:=0; FOR J : = 1 UNTIL M DO IF I-*=J THEN BEGIN T:=T + DELAYS(I,J)*SPECIFICATIONS (I,J) ; S: = S +SPECIFICATIONS (I,J) END; WAIT_TIME (I) :=T/S; TT:=TT+T; SS:=SS+S END; OVERALL_WAIT:=TT/SS END; COMMENT ************* PROCEDURE ADD_ACKNOWLEDGMENTS *********; PROCEDURE ADD ACKNOWLEDGEMENTS; BEGIN FOR I: = 1 UNTIL M+N DO FOR J:=1 UNTIL I DO TRAFFIC {I, J) := TRAFFIC (I, J) + TRAFFIC (J, I) ; FOR I: = 1 UNTIL M+N-1 DO FOR J:=I+1 UNTIL M+N DO > . • TRAFFIC(I,J):=TRAFFIC(J,I) END; COMMENT ************ PROCEDURE COMPUTE_DELAYS_QDEUES ************* PROCEDURE COMPUTE_DELAYS_QUEUES; BEGIN FOR J:=1 UNTIL M+N DO FOR I:=1 UNTIL M+N DO BEGIN LENGTHS (I,J):=0; DELAYS (I, J) := 0 END; FOR J:=M+1 UNTIL M+N DO FOR I:=1 UNTIL J DO IF SPECIFICATIONS(I,J)=0 THEN BEGIN DELAYS (I, J) : = 0; LENGTHS (I,J) :=0 END ELSE IF (SPECIFICATIONS (I,J)-TRAFFIC(I,J) ) <=0 THEN BEGIN DELAYS (I, J)-": = MAXREAL ; LENGTHS (I,J) :=MAXREAL END ELSE BEGIN DELAYS (I, J)>:=DELAY_FORMULA (2,1,1, J) ; LENGTHS (I,J) :=LENGTH_FORMULA(2,1,I,J) END; FOR I:=M+1 UNTIL M+N DO FOR J:=1 UNTIL 1-1 DO IF SPECIFICATIONS (J,I) = 0 THEN BEGIN DELAYS (I, J) : = 0; LENGTHS (I,J) :=0 END ELSE IF (SPECIFICATIONS (<3,I) -TRAFFIC (I, J) ) <=0 THEN BEGIN DELAYS (I, J);:=MAXREAL; LENGTHS (I,J) : = MAXREAL END ELSE BEGIN DELAYS (I, J ) : = DELAY_FORMULA (2,2,1,J) ; LENGTHS (I, J) :=LENGTH_FORMULA (2*2 ,1, J) END; FOR I:=1 UNTIL M DO IF SPECIFICATIONS(1,1)=0 THEN BEGIN DELAYS (1,1) : = 0; LENGTHS (I, I) :=0 END ELSE IE (SPECIFICATIONS (I eI)-TRAFFIC (1,1))<=0 THEN BEGIN DELAYS (I, I) :=MAXREAL; LENGTHS (1,1) : = MAXREAL END ELSE BEGIN DELAYS (1,1) : = DELAY_FORMULA (3, 1,1,1) ; LENGTHS (1,1) :=LENGTH_FORMULA (3, 1 ,1,1) END ~ END; COMMENT ************PROCEDURE HOST_HOST_DELAYS ****************; PROCEDURE HOST_HOST_DELAYS; BEGIN PROCEDURE CONTINUATION(INTEGER VALUE A,B,C); BEGIN INTEGER P; P: = SPECIFICATIONS(A,B) ; DELAYS (C, B):=DELAYS (C,B)+DELAYS (A,P)+DELAYS (P,P) ; IF P-*=B THEN CONTINUATION (P,B,C) END; FOR I:=1 UNTIL M DO FOR J:=1 UNTIL M DO IF I-*=J' THEN BEGIN INTEGER K, U, V; IF RF THEN BEGIN U : = ' J; V := I END ELSE BEGIN U .: = I; V := J END; K:=M+U; DELAYS(I,J): = DELAYS (I,J)+DELAYS (U,U)+DELAYS(U,K)+DELAYS (K,K) CONTINUATION (K,V,U) END END; COMMENT *************** PROCEDURE ADD_RFNMS_DELAYS ************; PROCEDURE ADD_RFNM_DELAYS; BEGIN REAL ARRAY TEMPDEL (1::M+ N,1::M + N) ; FOR I:=1 UNTIL M+N DO FOR J:=1 UNTIL M+N DO TEHPDEL (I, J) :=DELAYS (I, J) ; COEF:=MTRC; RF:= TRUE; COMPUTE_DELAYS_QUEUES; HOST_HOST_DEL AYS; FOR I:=1 UNTIL M+N DO FOR J:=1 UNTIL M+N DO IF (K=H) AND <J<=M) AND (I- = J) THEN DELAYS (I, J) : = DELAYS (I, J) + TEMPDEL (I,J) ELSE DELAYS ( I , J) : = TEHPDEL ( I , J) END; COMMENT *************** PROCEDURE COMPUTE_TRAFFIC ************; PROCEDURE COMPUTE_TRAFFIC; v BEGIN PROCEDURE FOLLOW (INTEGER VALUE A,B,C); BEGIN INTEGER P fQ,CHECK; P:=SPECIFICATIONS(A,B); . 
TRAFFIC(P fP):=TRAFFIC(P ,P) + C; CHECK:=IF A<P THEN SPECIFICATIONS(A,P) ELSE SPECIFICATIONS (P,A) ; I F CHECK-O THEN BEGIN WRITE (">!); WRITE ("*** INCOMPATIBLE ROUTING***") ; WRITE ("DETECTED AT " , A , P) ; WRITE (" ") END; TRAFFIC(A,P):=TRAFFIC(A ,P)+ C ; I F P-=B THEN FOLLOW (P,B,C) END; FOR I:= 1 UNTIL M+N DO FOR J:= 1 UNTIL M+N DO TRAFFIC ( I , J) : = 0 ; FOR I:=1 UNTIL M DO FOR J:=1 UNTIL M DO I F I-.=J THEN BEGIN INTEGER K r L ; L:=SPECIF1CATI0NS ( I , J ) ; K:= M+I; TR A F F I C ( I , I ) := TRAFFIC (1 ,1 ) + L; TRAFFIC (K,K):=TRAFFIC (K f K) +L;' TRAFFIC (I,K):=TRAFFIC (I,K)+L; * FOLLOW (K,J,L) END END; COMMENT *************PROCEDURE COMPUTE_NODE_QUEUE_PI ************ PROCEDURE COMPUTE_NODE_QUEUE_PI; BEGIN LOGICAL FLAG; INTEGER MU,T,K; REAL ARRAY RO (0::M) ; I F PRINT_RO_CHANGES THEN WRITE (" ") ; FOR I:=M+1 UNTIL M+N DO BEGIN : PROCEDURE CHECK_ROS; BEGIN REAL TEP; FOR IND:=1 UNTIL K DO FOR KUM:=0 UNTIL IND-1 DO I F ABS (RO (IND)-RO (KUM)) < TOLERANCE THEN BEGIN TEP: = RO (IND)+2*TOLERANCE; IF PRINT_RO_CHANGES THEN WRITE ("RO CHANGE IN NODE ",I,"FROM n,BO (IND), "TO ",TEP) ; RO(IND) := TEP END END; PROCEDURE COMPUTE_PI; BEGIN INTEGER D; REAL PROD; NODE_PI(I):=0; D: = NODE_BUF_CAP(I) ; FOR J:=0 UNTIL K DO BEGIN PROD:=1; FOR L:=0 UNTIL K DO IF L -*= J THEN PROD := PROD * (1-RO(L)) / (RO (J) - RO (L) ) ; PROD:=PROD*RO (J) ** (D + K-1) ; NODE_PI (I) : = NODE_PI (I)+PROD END END; T := SPECIFICATIONS(1,1) ; K: = 1; RO (0) := TRAFFIC (1,1)/SPECIFICATIONS (1,1) ; MU:=IF I>T THEN SPECIFICATIONS (T,I) ELSE SPECIFICATIONS(I,T) ; RO(1):=TRAFFIC(I,T)/MU; AV NODE QUEUES (I):=LENGTHS (1,1)+LENGTHS(I,T); FOR J:=2 UNTIL M DO BEGIN FLAG:= TRUE ; FOR Q := 1 UNTIL J-1 DO IF SPECIFICATIONS(I,J)^SPECIFICATIONS (I,Q) TH FLAG:= FALSE; IF FLAG THEN BEGIN K:=K+1; T := SPECIFICATIONS(I,J) ; HU: = IF I>T THEN SPECIFICATIONS (T,I) ELSE SPECIFICATIONS(I,T) ; HO (K) :=TRAFFIC (I,T)/HU; AV_NODE_QUEUES (I) :=AV_NODE_QUEUES (I) •LENGTHS (I,T) END END; CHECK_ROS; COMPUTE_PI END END; COMMENT ************* MAIN PROGRAM ****************; ERR:= MAXREAL; CONVERGING := TRUE ; ITER:=0; RF := FALSE; INTFIELDSIZE:=7; OVERLOADED := FALSE ; READ(NODE_BUF_CAP (M +1)); FOR I:=M+2 UNTIL M+N DO REA DON (NODE_BUF_CAP (I) ) ; FOR I: = 1 UNTIL M+N DO BEGIN READ (SPECIFICATIONS ( 1 , 1 ) ) ; FOR J : = 2 UNTIL M+N DO READON (SPECIFICATIONS (I, J) ) END; PRINT_DEMANDS ; ADD_RFNM_FLOW; COMPUT E_TRAFFIC; ADD_ACKNOWLEDGE MENTS; PRINT_CAPACITIES_TR AFFIC; IF -.OVERLOADED THEN BEGIN CCEF := MMAC; MODIFY_CAP_COMPUTE; IF -^OVERLOADED THEN BEGIN HOST_HOST_DELAYS; ADD_RFNM_DELAYS; HAITING_TIMES END; PRINT_TABLES END END END. 9 5 APPENDIX 6 "A GPSSV SIMULATION OF A PACKET SWITCHING NETWORK" REALLOCATE COM,50000,CHA ,99 REALLOCATE XAC,1000,BLO,200,FAC,99,TAB,99,VAR,25,HMS,15 SIMULATE EXP FUNCTION RN1,C21 ,0/. 
1,.101/.2,.222/.3,.355/.1,.509/.5,.69/.6,.915/.7,1.2/.75,1.38 .8,1.6/.81,1.83/.88,2.12/.9,2.3/.92,2.52/.91,2.81/.95,2.99/.96,3.2 .97,3.5/.98,3.9/.99,4.6/.995,5.3/.998,6.2/.999,7/.9997,8 * 1 MATRIX MH,1,1 N BY 1 MATRIX FOR IMP SEQ # 2 MATRIX MH,1,1 N BY 1 MATRIX FOR HOST SEQ t SENT MATRIX MH,1,1 N BY N MATRIX FOR # OF MESSGS SENT RECV MATRIX MH , 1 , 4 N BY N MATRIX FOR # OF MESSGS RECD LINES MATRIX MH,1,1 N BY N MATRIX LINES BETWEEN NODES MODEM MATRIX MH,1,4 N BY N MATRIX FOR CAPACITY OF MODEMS DIST MATRIX MH,1,4 N BY N MATRIX FOR DISTANCES OF LINES ROUTE MATRIX MH,1,1 N BY N ROUTING FIND NEXT DEST HPR MATRIX MH,1,2 N BY 2 HOST PROCESS AND SENTQ IPR MATRIX MH,1,2 N BY 2 IMP PROCESS AND SENTQ HOST MATRIX MH,1,1 N BY N TABLE #«S FOR HOST DEL LINK MATRIX MH,1,1 N BY N TABLE #«S FOR LINK DEL IMPT MATRIX MH,1,1 N BY 1 TABLE #»S OF IMP DELAYS INITIAL XH2,8 LENGTH OF HEADER INITIAL XH3,32 LENGTH OF PACKET INITIAL XH1,10 PACKET LENGTH INCLUDING HEADER INITIAL XH5,16 LENGTH OF ACK INCLUDING HEADER INITIAL XH6,16 LENGTH OF RFNM INITIAL XH7,32 MEAN OF ' MESSAGE LENGTH DISTRIBUTION INITIAL XH8,250 INTERVAL OG NUMBERS 9 IMP S HOSTS •A INITIAL XH9,500 XH8*2 w INITIAL MH1-MH2(1-1,1 -i) ,o INITIAL MH1 (1,1) ,1/MH1 (2,1) ,501/MH1 (3, 1) , 1001/MH1 (1, 1) , 1501 INITIAL HH$LINES(1,1) ,1/MH$LINES (2,2) ,11/MH$LINES (3,3) ,21 INITIAL MH$LINES (1,1) ,31/MH$LINES (1,2) ,11/MH$LINES (1,3) ,12 INITIAL MHSLINES (1,1) ,13/MH$LINES (2 , 1) ,11/MH$LINES (2,3) ,15 INITIAL "MHSLINES (2,1) ,16/MH$LINES (3, 1) , 17/MHSLINES (3,2) ,18 INITIAL MH5LINES (3,1) ,19/MH$LINES (1,1) ,50/MH$LINES (1,2) ,51 INITIAL MH$LINES (1,3) ,52 INITIAL MH$MODEM (1-1, 1-1) ,32/MH$MODEM (1, 1) , 16 INITIAL MH$MODEM (2,2) ,16/MH$MODEM (3,3) ,16/MH$MODEM (1,1) ,16 INITIAL MH$DIST (1-1 , 1 -1) ,5/MH$DIST (1, 1) ,0/MH$DIST (2, 2),0 INITIAL MH$DIST(3,3) ,0/MH$DIST (1 ,1) ,0 INITIAL MH$ROUTE (1,1) ,1/MH$ROUTE (1,2 !"Dr2 INITIAL MHSROUTE (2,1) , 1 /MH$ROUTE (2,2) ,2/MH$ROUTE (2 ,3) ,3 INITIAL MHSROUTE (2,1) ,1/KH$ROUTE (3, 1 1-1), 2/MHSROUTE (3,3),3 INITIAL MHSROUTE (4,1-3) ,2/MH$ROUTE (1,1) ,1 INITIAL MH$SENT(1-1,1 -1) ,0/MH$RECV (1 -1,1-1) ,0 INITIAL MH$HPR (1—4,1) ,5/MH$IPR(1-1,1), 10 * 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 * SORC INITIAL INITIAL INITIAL INITIAL INITIAL INITIAL INITIAL INITIAL INITIAL INITIAL INITIAL INITIAL INITIAL INITIAL INITIAL TABLE TABLE TABLE TABLE TABLE TABLE TABLE TABLE TABLE TABLE TABLE TABLE TABLE TABLE TABLE TABLE TABLE TABLE TABLE TABLE TABLE TABLE TABLE TABLE TABLE TABLE TABLE TABLE TABLE TABLE TABLE TABLE TABLE TABLE TABLE TABLE TABLE TABLE TABLE TABLE STORAGE FUNCTION MH$HPR (1-4,2) MH$HOST (1 ,1 MHSHOST (1 ,4 MH$HOST (2,3 MHSHOST (3,2 MH$HOST(4,1 MHSHOST (4,4 MH$LINK(1,1 MH$LINK (4,4 MH$LINK (1,4 MHSLINK (2,4 HH$LINK (3,4 HH$LINK (4,3 HHSIHPT (1 ,1 MH$IMPT(4,1 M1,200,200,5 M1,200,200,5 •HI, 200, 200, 5 H1,200,200,5 HI,200,200,5 H1,200,200,5 M1,200,200,5 M1,200,200,5 M1,200,200,5 MT, 200,200, 5 M1,200,200,5 H1,200,200,5 .HI ,200,200,5 H1,200,200,5 M1,200,200,5 H1,200,200,5 V14,30,30,5 V14,30,30,5 V14,30,30,5 V14,30,30,5 V14,30,30,5 V14,30,30,5 V14,30,30,5 V14,30,30,5 V14,50,50,5 V14,50,50,5 V14,50,50,5 V14,50,50,5 V14,50,50,5 V14,50,50,5 V14,50,50,5 V14,50,50,5 V14,50,50,5 V14,50,50,5 714,50,50,5 V14,50,50,5 V14,40,40,5 V14,40,40,5 V14,40,40 ,5 V14,40,40,5 S1-S4 ,600 RN4,D3 ,2) ,100 2/MH$HOST (1, 3) ,3 ,5/MH$HOST (2,2) ,6 8/MH$HOST (3, 1 ) ,9 , 11/HH$HOST (3,4) , ,50/HH$IPR (1-4  1/MHSHOST (1,2), 4/HH$HOST (2,1)  7/HHJHOS'T (2,4) , 10/MH$HOST 
(3, 3) M 12 13/HHSHOST (4,2) ,14/MH$HOST (4,3) ,15 16 96 ,19/MH$LINK (3,3) ,21 , 25/MH$LINK (1, 3) ,26 ,28/HH$LINK (2,3) ,29 ,31/MH$LINK (3,2) ,32 , 34/MHSLINK (4, 2) , 35 17/MH$LINK (2,2) 23/MHSLINK (1,2) 27/HH$LINK (2,1) 30/MH$LINK (3, 1) 33/MH$LINK (4, 1) 36 37/MHSIMPT (2, 1) , 38/MH$IHPT (3,1), 39 40 BUFFER MAGNITUDE a) EACH IMP DISTRIBUTION OF ORIGIN .333,1/0.6667,2/1,3 1 FUNCTION RN4,D2 DESTINATION FOR SOURCE 1 97 .5,2/1 ,3 2 FUNCTION RN4,D2 DESTINATION FOR SOURCE 2 .5,1/1,3 3 FUNCTION RN4,D2 DESTINATION FOR SOURCE 3 .5,1/1,2 LEN FUNCTION RN3,C2 .0,32/1,129 HOST NET7 * NET 1 NET 12 NET2 NET 3 NET4 GENERATE 25,FN$EXP,,10,, 23 ASSIGN 2,FN$SORC ASSIGN THE ORIGIN OF THE MESSAGE ASSIGN 3,FN*PH2 ASSIGN THE DESTINATION MSAVEVALUE SENT+,PH2,PH2,1 ,MH tOF MESSAGES SENT AWAY FROM *2 MSAVEVALUE SENT+,PH2,PH3,1 ,MH # OF MESSGS SENT FROM *2 TO *3 ASSIGN 6 ,V$LEN ASSIGN LENGTH OF MESSAGE ASSIGN 1 ,2 MESSAGE TYPE - TWO FOR REG. MESSAGE HSAVEVALUE 2+,PH2,1,1,MH OBTAIN NEW MESSAGE # FOR MESSAGE 3) ASSIGN 8,MH2(PH2,1) ASSIGN MESSAGE SEQ # ASSIGN M ,PH2 CURRENT SOURCE ASSIGN 5,PH2 CURRENT DEST - IMP OF THE HOST ASSIGN 15,V2 HOST OUT CHAIN # ASSIGN 16,V3 HOST PROCESSOR CHAIN ASSIGN 17,V4 HOST SENTQ ASSIGN 18,MH$LINES (PH5,PH5) RECV CHAIN S 3 HOST ASSIGN 22,V1 ASSEMBLE CHAIN # FOR COMBINING LIN K PH16,FIFO,NET7 JOIN PROCESSOR CHAIN SEIZE PH16 OCCUPY HOST PROCESSOR ADVANCE V$HPDL HOST PROCESSING DELAY RELEASE PH16 RELEASE HOST PROCESSOR UNLINK PH16,NET7,1 SEND NEXT TO PROCESSOR TEST NE PH1U,1,NET75 IF RECEIVED PACKET FROM IMP , JUMP ASSIGN 1 1, V9 MESSAGE CONSISTING OF TOTAL # OF ASSIGN 7,V10 LENGTH OF LAST PACKET TEST NE PH6,0,NET12 CHECK FOR NOLL MESSAGES TEST NE PH7,0,NET2 CHECK FOR LESS THAN PACKET LENGTH ASSIGN 1 1 + , 1 ASSIGN 7+,XH2 ADD OVERHEAD FOR HEADER TRANSFER , NET3 ASSIGN 7,XH4 PACKET LENGTH INCLUDING HEADER ASSIGN 10,PH11 FOR MORE THAN ONE PACKETS ASSIGN 12,MR$LINES(PHU ,PH5) LINE t FOR HOST TO IMP IS ONE MORE ASSIGN 12+, 1 IMP TO HOST ASSIGN 23,MH$LINK(PHU, PH5) ASSIGN TABLE # FOR LINE DELAY ASSIGN 2 3+, 1 TABLE # FOR HOST TO IMP IS ONE GRT ASSIGN 9 ,V$HSNDL ASSIGN HOST SENTQ DELAY SPLIT 1,NET5 BREAK MESSAGE INTO PACKETS ASSIGN 7,XH4 PACKET LENGTH INCLUDING HEADER LOOP 10PH,NET4 LOOP AROUND TO MAKE *10 COPIES TERMINATE NET5 MSAVEVALUE 2 ,PH2,2,V13, MH GET NEW HOST SEQ # ASSIGN 20rMH2(PH2,2) ASSIGN HOST SEQ # TRANSFER ,NEXT8 SEND OUT FOR TRANSMISSION *£ NET75 TEST E PH1,0,NET8 IF NOT ACK JUMP UNLINK PH17,HACK,1, 20PH,, ERRHS SPLIT 1,RECHK RELEASE IF SAME ON OUTQ TERMINATE HACK TEST NE PH1,1,RFNM IF ACK FOR RFNM TERMINATE ASSEMBLE PH11 ASSEMBLE ALL PACKETS AFTER ACK -Ju LINK PH22,8PH MESSAGE SAFELY ACK WAITING FOR RFNM NET 8 UNLINK PH18,NEX45,1 ,20PH, ,NET85 IF DUPLICATE TERMINATE TERMINATE NET 8 5 SPLIT 1,NEX44 SEND COPY FOR RECV CHAIN SPLIT 1,HACKR SEND ACK TO IMP TEST E PH1,1,NET10 IF NOT RFNM JUMP TABULATE MH$HOST (PH3,1) ENTRY FOR HOST DELAY TABLE UNLINK PH22,RFNM,1, 8PH,,ERRHT RFNM RECEIVED RELEASE MESSAGE RFNM TERMINATE NET 10 ASSEMBLE PH11 WAIT FOR OTHER PACKETS MSAVEVALUE RECV+,PH3,PH3,1,MH INCREMENT #OF MESSAGES RECD a) *3 MSAVEVALUE RECV+,PH2,PH3,1,MH # OF MESSGS RECD FROM *2 AT *3 ASSIGN RFNM PRIORITY FOR MESSAGE ASSIGN 3,PH2 ASSIGN THE DESTINATION FOR THE RFNM ASSIGN 2,PH5 ASSIGN ORIGIN FOR THE RFNM ASSIGN 14,0 GOING TO IMP ASSIGN 6,XH6 ASSIGN LENGTH FOR RFNM TRANSFER , NET1 HACKR ASSIGN 14,0 GOING TO IMP ASSIGN T2+,1 LINE •# OF HOST TO IMP ASSIGN 23+,1 TABLE # FOR HOST TO IMP IS ONE GRT TRANSFER , ACK EBRHS ADVANCE 1 RETRY AFTER 1 TIME UNIT UNLINK PH17,HACK,1, 20PH,, ERRH TRY 
TO UNLINK a IMP TERMINATE ERRH TERMINATE 10000 IF CANNOT UNLINK S HOST THEN ERROR ERRHT * TERMINATE 10000 - RFNM DID NOT FIND MESSAGE 3 HOST * BEGIN RELEASE PH12 RELEASE THE LINE UNLINK PH15,RESET,1 , 12PH, ,NEXT FREE THE NEXT ONE FOR THIS LINE FUNAVAIL PH12 NOT AVAILABLE TO ANY OTHER PACKE NEXT TABULATE PH23 ENTRY FOR LINE DELAY TABLE TEST E PH14,0,HOST IF ONE BRANCH TO HOST TEST NE PH1,0,IMPE ACK ALWAYS ACCEPTED 3 IMP IMP TEST GE V16,PH7,REJEG- . IF NO STORAGE AVAILABLE 3 IMP IMPE QUEUE PH5 ENTER THE QUEUE TO IMP BUFFER ENTER PH5,PH7 OCCUPY THE IMP BUFFER DEPART PH5 LEAVE THE QUEUE TO IMP EUFFER MARK 19PH MARK TIME FOR PROCESSING DELAY ASSIGN 15, V5 ASSIGN THE IMP OUT CHAIN NUM3ER ASSIGN 16,V6 ASSIGN IMP PROCESSOR CHAIN NUMBER ASSIGN 17, V7 ASSIGN IMP SENTQ CHAIN NUMBER 99 ASSIGN 18,V8 ASSIGN IMP RECEIVING CHAIN NUMBER *& LINK PH16,FIF0,NEXT1 JOIN IMP PROCESSING QUEUE NEXT 1 SEIZE PH16 SEIZE THE IMP PROCESSOR ADVANCE V$IPDL PROCESSING DELAY 3 IMP RELEASE PH16 IMP PROCESSING IS OVER UNLINK PH16,NEXT1,1 LET THE NEXT ONE SEIZE THE PROCESSOl TABULATE MH$IMPT(PH5,1) ENTRY FOR IMP PROCESSING BELAY TABL TEST E PH1,0,MESSG IF NOT ACK THEN JUMP ASSIGN 4,PH5 IACK UNLINK PH17,ACKRL,1,20PH,, ERRIM ACK FREES THE MESSAGE ON SENTQ SPLIT 1 ,RECHK RELEASE IF SAME ON OUTQ ACKRL LEAVE PH4,PH7 RELEASE THE BUFFER SPACE 9 IMP TERMINATE END OF THAT ACK AND MESSAGE MESS G UNLINK PH18,NEX45,1,20PH,, NEXT4 IF DUPLICATE FREE IT AND LEAVE PH5,PH7 ALSO RELEASE EUFFER REJEC * TERMINATE TP NEXT 4 SPLIT 1 ,NEXT5 MAKE A COPY FOR RECEIVE CHAIN NEX44 SPLIT 1 ,NEX46 SEND COPY FOR RECV DELAY NEX45 LINK PH18,4PH JOIN THE RECEIVE CHAIN NEX46 ADVANCE V.15 DELAY FOR DUPLICATES TO ARRIVE UNLINK PH18,RETRM,1,20PH IN AND SO TERMINATE AFTER THAT TIME RETRM TERMINATE NEXT5 SPLIT 1 , RTACK SEND COPY TO RETURN ACK TT ASSIGN 4 ,PH5 ASSIGN NEW CURRENT SOURCE ASSIGN 5 ,MH$ROUTE(PH4,PH3) ASSIGN NEW CURRENT DESTINATION MSAVEVALUE 1 ,PH4,2,V12,MH GET NEW IMP SEQUENCE # ASSIGN 20,MH1 (PH4,2) ASSIGN IMP SEQUENCE # ASSIGN 12,MH$LINES(PH4,PH5) ASSIGN LINE # FOR TRANSMISSION ASSIGN 23,MH$LINK(PH4,PH5) ASSIGN TABLE # FOR LINE DELAY «*. 
ASSIGN 9,V$ISNDL ASSIGN IMP SENTQ DELAY TEST E PH4,PH5,NEXT8 CHECK IF THIS IS FINAL DESTINATION ASSIGN 14,1 NEXT8 ASSIGN 13,V$LNDEL ASSIGN THE LINE DELAY FOR UNLINK PH17,NRPL,ALL,20PH CHECK FOR OLD MESSAGE OF SAME SEQ # NEXT9 SPLIT 1,READY SEND COPY FOR TRANSMISSION NEX1 0 SPLIT 1 ,NEX11 ONE COPY FOR SENTQ DELAY - WAIT LINK PH17,FIFO JOIN SENTQ NEX1 1 ADVANCE PH9 WAIT FOR SENTQ DELAY UNLINK PH17,READY,1,20PH,, REJEC RELEASE FROM SENTQ FOR TRANSFER ,NEX10 RETRANSMIT READY MARK 19PH MARK TIME FOR LINE DELAY TRANSFER BOTH,GETLN,WAIT TRY FOR THE LINE ELSE WAIT ON CHAIN WAIT LINK PH15,1PH WAIT FOR LINE ACCORDING TO PRIORITY RESET FA VAIL PH12 UNLINKED PACKET TAKE THE FACILITY GETLN SEIZE PH12 OCCUPY THE LINE FOR TRANSMISSION ADVANCE PH13 DELAY ON LINE TRANSMISSION TRANSFER ,BEGIN RTACK TEST E PH4,PH5,NEX12 CHECK TO SEE IF GOING TO HOST ASSIGN 14,1 INDICATOR FOR GOING TO HOST TRANSFER ,NEX13 100 NEX1 2 SAVE VALUE 1 , PH4,XH SAVE LAST SOURCE TO SEND ACK ASSIGN 4,PH5 ASSIGN CURRENT SOURCE ASSIGN 5 ,XH1 DESTINATION FOR ACK NEX13 ASSIGN 1 2, MHSLINES (PH4 ,PH5) LINE # TO BE TAKEN FOR TRANSMISSION ASSIGN 23,MH$LINK (PH4,PH5) ASSIGN TABLE # FOR LINE DELAY ACK ASSIGN 1,0 PRIORITY IS THE HIGHEST ASSIGN 7 ,XH5 ACK LENGTH INCLUDING HEADER ASSIGN 13,V$LNDEL TRANSMISSION DELAY ON LINE TRANSFER ,READY SEND IT FOR TRANSMISSION ERRIM ADVANCE 1 RETRY AFTER 1 TIME UNIT UNLINK PH17,ACKR1,1,20PH,,ERRI RETRY TO UNLINK MESSAGE ACKR1 LEAVE PH4,PH7 RELEASE THE BUFFER SPACE 3 IMP TERMINATE ERRI ASSIGN 25,0 IF CANNOT UNLINK THEN ERROR TERMINATE 1C000 RECHK UNLINK PH15,NRPL,10,20PH RELEASE IF SAME ON OUTQ NRPL TERMINATE MESSAGES NEVER ACK ' D * GENERATE 6 00 PRINT , , c PRINT RELATIVE AND CLOCK TIME PRINT ,,B,N PRINT BLOCK COUNTS PRINT , ,0,N PRINT USER CHAIN STATISTICS PRINT , ,F,N PRINT FACI ITY STATS PRINT , ,Q,N PRINT QUEUE S IMP STATS PRINT ,,S,N PRINT STOR GE STATS PRINT ,,T,N PRINT 1,4,MH,N TERMINATE GENERATE 99 TERMINATE 1 1 VARIABLE MHSLINES(PH5,PH5)+9 HOST ASSEMBLY CHAIN # 2 VARIABLE MHSLINES (PH5,PH5) +2 HOST OUT CHAIN # 3 VARIABLE MHSLINES(PH5,PH5) + 3 HOST PROCESSOR CHAIN # 4 VARIABLE MHSLINES(PH5,PH5)+4 HOST SENTQ CHAIN # 5 VARIABLE MH$LINES(PH5,PH5)+5 IMP OUT CHAIN # , 6 VARIABLE MHSLINES(PH5,PH5)46 IMP PROCESSOR CHAIN # 7 VARIABLE MHSLINES(PH5,PH5)+7 IMP SENTQ CHAIN # 8 VARIABLE MHSLINES(PH5,PH5)+8 IMP RECEIVE CHAIN # 9 VARIABLE PH6/XH3 TOTAL # OF PACKETS FOR MESSAGE 10 VARIABLE PH6-PH11*XH3 LENGTH OF THE LAST PACKET 12 VARIABLE MH1 (PH4, 1)+XH8+MH1 (PH4,2) SXH8 NEW IMPSEQ # 13 VARIABLE MH1(PH2,1)+MH2(PH2,2)3XH8 NEW HOST SEQ * 14 VARIABLE (AC1-PH19) 332767 MOD 2**16-1 FOR HALFWORD PARAMETER 15 VARIABLE PH9*4 RECV DELAY 4 TIMES LAST SENTQ 16 VARIABLE R*PH5-XH5 ALLOW FOR AT LEAST ONE ACK LN DEL FVARIABLE MHSDIST(PH4,PH5)+MH$MODEM (PH4,PH5)*PH7*FN SEXP/XH4 LINE RPDL FVARIABLE MHSHPB (PH4, 1) HOST PROCESSING DELAY IPDL FVARIABLE MHSIPR (PH5, 1) IMP PROCESSING DELAY HSNDL FVARIABLE MH$HPR (PH2, 2) HOST SENTQ DELAY ISNDL FVARIABLE MHSIPR (PH4,2) IMP SENTQ DELAY LEH FVARIABLE Xfl7*FNSEXP+0. 5 LENGTH DISTRIBUTION START 1 
