Open Collections — UBC Theses and Dissertations

Real Time Distributed Network Simulation with PC Clusters — Hollman, Jorge Ariel, 1999

REAL TIME DISTRIBUTED NETWORK SIMULATION WITH PC CLUSTERS

by

JORGE ARIEL HOLLMAN
Ingeniero Industrial, Orientacion Electrica, Universidad Nacional del Comahue, Argentina, 1996

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF APPLIED SCIENCE in THE FACULTY OF GRADUATE STUDIES, DEPARTMENT OF ELECTRICAL ENGINEERING of THE UNIVERSITY OF BRITISH COLUMBIA

We accept this thesis as conforming to the required standard

The University of British Columbia
December 1999
© Jorge Ariel Hollman, 1999

In presenting this thesis in partial fulfilment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the head of my department or by his or her representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

Department of Electrical Engineering
The University of British Columbia
Vancouver, Canada
Date: 21/12/1999

Abstract

REAL TIME DISTRIBUTED NETWORK SIMULATION WITH PC CLUSTERS
by Elec. Eng. Jorge Ariel Hollman
MASTER of APPLIED SCIENCE, University of British Columbia, Canada
Professor J. R. Marti, Chair

This work presents a new architecture layout to perform real-time network simulations distributed among multiple IBM-compatible desktop computers.
A network simula1  tion with a P C cluster scheme can successfully cope with the size of growing power systems and fast transient studies. A powerful product has been developed using off-theshelf Pentium II 400 M h z personal workstations with a commercially available real-time operating system, standard I/O interfaces, a multimachine scheme and the R T N S real2  time power system simulation software as core solver. Models based on the standard tool worldwide for power systems transients simulations, the E M T P  program [1],[2], and  improved ones for real time performance assure accurate simulation results. Taking advantage of the hardware and software characteristic of the designed simulator, fast and accurate simulations can be executed in a very portable, efficient and economic platform,  1. International Business Machines Corporation 2. Real Time Network Simulator 3. Electromagnetic Transients Program  ii  placing the presented simulator in a better competitive position than expensive T N A ' s supercomputer simulator systems.  1. Transient Network Analyzer  ill  Table of Contents ABSTRACT  n  T A B L E OF CONTENTS  ilia**  LIST OF TABLES  iv  LIST OF FIGURES  v  ACKNOWLEDGMENTS 1 INTRODUCTION  vi 1  1.1 1.2 1.3  1 2 5  2  2.1 2.2 2.3 2.4 2.5 3  3.1 3.2 3.3 3.4 3.5 3.6 3.7 3.8 3.9 4 5  Previous Work Background Paradigm R E A L - T I M E DISTRIBUTED NETWORK SIMULATOR ARCHITECTURE  8  Real-time Simulation under P C Cluster Architecture Network Topology in a P C Cluster Integration Rule Accuracy vs. 
Nyquist Frequency
  2.4 Input/Output Interface Port Selection
  2.5 I/O Interface Latency
3 I/O Interface Card Implementation
  3.1 Input/Output Interface Card
  3.2 Design Process and Prototype Implementation
  3.3 Double Port Memory Block
  3.4 Synchronization Block
  3.5 Process Panel Indicator Block
  3.6 Digital/Analog & Analog/Digital Cards
  3.7 Graphical User Interface
  3.8 Link Line Block
  3.9 I/O Card Performance
4 Test Cases
5 Conclusions and Recommendations
References
Appendix I. Schematics and PCB Files
  Final Netlist I/O Interface Card
Appendix II. Test Case Files
  Preprocessor Input Files
  Test Case 1
  Test Case 1a
  Test Case 1b
  Test Case 2b
  Real Time Input File Test Case 1a

List of Tables

Table 1.1 Single PC vs. PC cluster, test case 1
Table 2.1 IDE Transaction timings (PCI clock)
Table 3.1 Data Transfer Rates
Table 3.2 82C54 Operation Modes
Table 3.3 MnLink Flag options
Table 3.4 Communication times vs. links layout
Table 3.5 Communication Times vs. Number of Subsystem Nodes
Table 4.1 Test Case 1 timing results
Table 4.2 Test Case 2 timing results

List of Figures

Fig. 1.1 UBC Power system research group past, present & future
Fig. 1.2 Power Network Topology
Fig. 2.1 Frequency response of integration rules
Fig. 2.2 Proposed Multimachine solution architecture
Fig. 2.3 Double Port Memory functionality
Fig. 3.1 Connectivity alternatives with a PII 400 MHz
Fig. 3.2 Connectivity alternatives with a PIII 600 MHz
Fig. 3.3 Picture of the I/O interface card, component side
Fig. 3.4 Picture of the I/O interface card, bottom side
Fig. 3.5 Dual Port Memory Block Diagram
Fig. 3.6 Memory width Expansion
Fig. 3.7 82C54 CHMOS Programmable Interval Timer
Fig. 3.8 Snapshot of the implemented Control GUI
Fig. 3.9 Lossless Transmission Line model in phase domain
Fig. 3.10 Phase and Modal domain connection for a three phase line
Fig. 3.11 Link line block implementation
Fig. 3.12 I/O interface Write operation
Fig. 3.13 Communication Timings, Single PC vs. PC cluster solution
Fig. 4.1 Test Case 1
Fig. 4.2 Test Cases 1a & 1b
Fig. 4.3 PC Cluster of two computers running Test Cases 1a & 1b, front view
Fig. 4.4 PC Cluster of two computers, rear view
Fig. 4.5 PC Cluster of three computers
Fig. 4.6 Execution times, Single PC vs. PC clustered scheme
Fig. 4.7 Single PC vs. PC cluster, solution using the same time step
Fig. 4.8 RTNS vs. RTDNS, plots are superimposed and indistinguishable
Fig. 4.9 Single PC (70 microsec) vs. PC Cluster (50 microsec)
Fig. 4.10 Analog Outputs from test case 1, PC cluster simulation
Fig. 4.11 Test Cases 2a, 2b & 2c
Fig. I.1 Top PCB Plot, I/O Interface Card
Fig. I.2 General Block Schematic of the IDE Interface Card
Fig. I.3 Schematic of the Synchronization Block
Fig. I.4 Top PCB Plot, I/O Interface Card
Appendix II.
Test Case Files

Acknowledgments

I would like to express my gratitude for the continuous support I received from my family, my sponsor and the UBC power group members, without whom this thesis would not have been possible. I would especially like to thank:

- My beloved wife Sandra and my daughter Rocio Belen, for their love and unconditional support. They are the light and joy of my life.
- My parents, Esteban Antonio and Maria del Carmen, for their love and wisdom.
- Dr. J. R. Marti, for his guidance and continuous support; for deeply sharing his knowledge; for teaching me the fundamentals of network discretizations; and for trusting me and encouraging me in this work.
- Dr. H. W. Dommel, for sharing his knowledge freely and deeply on every possible occasion.
- FUNDACION Y.P.F., for its financial support for the completion of this work.
- M.A.Sc. L. Linares, for sharing and teaching me the secrets of highly efficient programming for real-time power system simulation, and M.A.Sc. J. Calviño-Fraga, for sharing with me his extensive knowledge of real-time hardware systems; for their friendship and unconditional support.

Jorge Ariel Hollman

INTRODUCTION

1.1 Previous Work

The present thesis describes the development of a PC cluster for a real-time power system simulator. The need to achieve real-time simulation of fast power system transients using a distributed solver architecture stems from the fact that simulations performed on a single computer with a given step size can deal only with a natural maximum number of power system nodes. Unless more sophisticated solver algorithms or faster hardware are developed, this size limit can be a severe restriction. A solution to this problem is presented in this thesis.
This solution is based on the concept of mapping a network of inexpensive PCs (a PC cluster) onto the particular structure of the power system solution network. Day after day, industry and utilities demand more accurate simulators, capable of representing the behavior of larger electrical grids. In addition, it is always desirable to simulate a given system with a smaller time step, since the error introduced in the simulation decreases accordingly. Even though in our lab a system of 40 nodes can be simulated within 50 microseconds on a Pentium II 400 MHz, this system cannot be used to investigate transients faster than the Nyquist frequency limit imposed by the chosen time step. A PC cluster simulator proves to be a solution to this limitation. Today, speed and accuracy are simply not enough: the capability of simulating bigger systems has become an ultimate goal.

The UBC RTNS software can achieve real-time performance using a single PC with a time step of 50 microseconds for a maximum of 40 nodes and 6 outputs on a Pentium II 400 MHz. In an attempt to improve this performance, test case 1 presented in this work simulates a 54-node system in real time with a time step of 47 microseconds and 6 outputs using two Pentium II 400 MHz PCs. This test case requires 68 microseconds on a single Pentium II 400 MHz. Many other research groups around the world are working towards an efficient and economic real-time simulation solution. Among them, the UBC power research group plays a leading role, since its solution is not only accurate but also fast and inexpensive in comparison to the others.

1.2 Background

There are mainly five well-known research groups working in the field of real-time power system simulators using different approaches. EDF (Electricité de France) and IREQ (Hydro-Québec Research Institute) have chosen the very expensive and not so portable supercomputer architecture.
The Manitoba research group has developed a hybrid solution based on a convenient arrangement of either DSPs (digital signal processors) or transputers. Mitsubishi started with supercomputers and, after joint work with UBC, moved their research to PCs. And last but not least, UBC's research group was the first one to choose an inexpensive PC solution and achieve real-time performance with this scheme.

Recently, EDF presented results of a parallel approach based on a shared-memory architecture. This approach is still based on supercomputers and shows no extra gain in speed in proportion to the number of CPUs in service, because the time needed for communication between the CPUs increases with the degree of parallelization [10]. In contrast, the PC cluster architecture proposed in this thesis exhibits a constant communication time for each type of configuration link between the subsystem nodes, independently of the number of node solvers included in the array.

The Mitsubishi research group achieved a PC cluster system based on six interconnected machines using a Myricom Myrinet gigabit network. This approach, as will be explained later in this work, presents a serious performance problem due to its round-trip overhead of 15 microseconds [8].

UBC's real-time simulator is based on solid grounds. During the last three decades, the software created by H. Dommel [1],[2], the EMTP, has been steadily recognized and supported by industry as well as the academic milieu. That support emerged in response to the accuracy and simplicity of the models. All this knowledge evolved in the natural direction within the real-time group to produce a faster network simulator, which indeed is the software known as RTNS [4], created by J. Marti and L. Linares in 1993.

Figure 1-1.
UBC Power system research group past, present & future

This was the very first PC-based real-time network solver. Stemming from this work, several other research projects originated, examples of which are the RTNS extensions and hardware implementation for testing relays [9] developed by J. Calviño-Fraga; the multilayered tearing solution software developed by L. Linares [3]; and the present work, the RTDNS (Real-Time Distributed Network Simulator) implementation for multi-PC clusters. See Figure 1-1. These efforts are part of the OVNI (Object Virtual Network Integrator) project [16] to develop a full-system real-time power system simulator.

1.3 Paradigm

The main reason for our increased computational efficiency is that UBC's solution algorithm is structured in the same way as the system it is modelling. In this PC cluster-based layout, dense computational nodes represent the power system substations, while transmission lines connecting substations are represented by simple links to the other dense nodes. See Figure 1-2. This segmentation is based on the natural decoupling introduced in the network by the transmission lines. This framework also allows for easy scalability of the computational resources to match the size of the problem. Thus, the PC cluster-based concept perfectly matches the computational paradigm.

Table 1-1 presents the results obtained with a single PC vs. a PC cluster scheme using two computers. The timings were obtained with a non-symmetric subsystem node load, which is not the most efficient situation since the minimum possible time step cannot be used. As is evident, by using the PC cluster scheme it is feasible for the system size considered to achieve real-time simulation with time steps under 50 microseconds, while with a single computer this would not have been possible.

Table 1-1. Single PC vs.
PC cluster, test case 1

Tested Configuration                 Total nodes simulated per computer   Number of Outputs   Time needed to solve the system
Single Pentium II 400 MHz            54                                   6                   62.8 µs
Two Pentium II 400 MHz clustered(a)  30 / 24                              6                   46 / 43.6 µs

a. For a perfectly symmetric distribution of loads the time step decreases to the minimum. In this benchmark the distribution of load is asymmetric and the time step is imposed by the largest subsystem node.

One crucial aspect of the PC cluster design is its ability to achieve linear scalability. The solution time for each subsystem consists of two parts: the time needed by the computer to solve its own subsystem, and the time needed by the computer to communicate its state to the neighboring computers. Because the communication overhead is independent of the number of machines that make up the cluster, the proposed solution shows linear scalability of the system size to be simulated with respect to the number of PCs used.

Figure 1-2. Power Network Topology

With the proposed layout, whenever we need to increase the extent of the system representation by one nodal subsystem, we only need to add one more computational node to the model, which means just adding another PC to the array. To achieve real-time performance, all the computational processes must be completed within the clock time step. Moreover, when real-time performance is required with a PC cluster scheme, in addition to all the computational operations, the communication between all the elements in the array must also be completed within the clock time step. The proposed solution to cope with this task is based on three fundamental concepts:

• Implementation of double port memory blocks
• Stable and accurate shared synchronization between the array of computers
• Control of the interrupt requests

The present thesis evolved from previous research work presented by L. Linares and J. Calviño-Fraga in their respective M.A.Sc.
theses [4], [9]. The core network solver software employed for the Real-Time Distributed Network Simulator is the RTNS [4] code originally written by L. Linares. Additional software developed for the RTNS Relay Tester [9], written by J. Calviño-Fraga, is also used in the present work. New software blocks, a multimachine link line model, and modifications to the original RTNS code were introduced in the present Real-Time Distributed Network Simulator. Finally, a new I/O interface was developed to achieve efficient and accurate communication between the nodes of the PC cluster.

2 REAL-TIME DISTRIBUTED NETWORK SIMULATOR ARCHITECTURE

2.1 Real-time Simulation under PC Cluster Architecture

Two imperative concepts to consider in a real-time network solution with a multimachine scheme are the inherent communication time and the synchronization at each time step. To achieve real-time performance on more than one machine, solving the system of equations in less than the desired time step is not sufficient. In fact, the transfer of all the necessary data between computers must be done at each time step without violating the imposed time step limit. This situation represents a new challenge, since it means that both the software and the hardware designs must achieve their best performance in order to obtain small solution time steps.

Synchronization is an essential issue in real-time simulations under a PC cluster scheme. Perfect synchronization must be assured in order to present all the outputs scheduled at each tick of the real-time clock, even without knowing the particular performance of each individual computer. An effective source of synchronism must be present in the design to provide the correct clock signals to start the simulation, triggering the network solver processes in all machines at the same time and for all the simulation steps. Furthermore, it is also desirable to have the possibility of stopping the simulation in all the components of the multimachine arrangement at any time.

At each real-time clock pulse, adjusted to match the desired time step through the synchronization block, all the computers must:

• Output selected node variables to the D/A card.
• Write the needed history terms to the I/O interface card. These values will be needed by the neighboring subsystem node to solve for future time steps.
• Read the needed history terms from the I/O interface card. These data values are provided by the other linked subsystem node.
• Solve the system.

In the case of different computer performances, or even in the presence of asymmetrical computational loads, the simulation must allow the slowest subsystem node solver to compute its solution plus transfer the data to the other nodes in less than the time step.

A particular problem present in almost all I/O interface technologies is the latency inherent to the read/write process (the time interval between the instant at which an instruction control unit issues a call for data and the instant at which the transfer of data starts). The architecture designed in this thesis introduces a simple and efficient way to minimize this undesirable overhead time, i.e., time spent on tasks that do not contribute directly to the progress of the solution.

2.2 Network Topology in a PC Cluster

Power networks can be visualized as separate blocks interconnected by transmission lines. See Figure 1-2. This network representation proves convenient for performing a natural and efficient partition of the simulated system among several machines, since each individual block is time-decoupled by the model of the transmission line.
The MATE (Multi-Area Thevenin Equivalent) concept [16] proposes a general approach to solve a network composed of dense subsystems connected by sparse links as a number of computational nodes (e.g., PCs) connected by a link subsystem. The transmission line link is an example of this general concept and is the one used in this thesis. Future work will further explore tearing along other types of links.

Under the proposed PC cluster layout, a master unit is in charge of pre-processing the input data, performing the user interface functions, and distributing the solution subsystems among the node solvers (slave computers). This distribution of cases can be accomplished through any standard communications port. For instance, this task can be implemented using either parallel or serial ports for smaller PC cluster arrays and TCP/IP for larger ones. In the present implementation, the master unit is also in charge of setting up the synchronization block through one of the available LPT (parallel) ports. Each subsystem node solver calculates its part of the network solution while interacting with its neighbors through the developed I/O interface card. Under standard PC motherboard configurations, it is possible to interconnect each slave computer with up to two other subsystem nodes. When interaction with analog equipment is desired, such as a relay, it is feasible to include a D/A card, as proposed by J. Calvino-Fraga [9], in the corresponding subsystem node. See Figure 2-2.

The master computer can run Windows 95/98/NT, while for the subsystem nodes Phar Lap TNT ETS 8.5 [11], a real-time operating system, is chosen. The highest computational power is required for the subsystem computer nodes, but no extra peripheral devices are needed: these nodes can consist of only the CPU, motherboard, floppy disk and RAM memory. To avoid the inclusion of a floppy disk unit in each slave computer, a boot PROM (programmable read-only memory, loaded once with the real-time operating system) is implemented.

2.3 Integration Rule Accuracy vs. Nyquist Frequency

The Nyquist frequency is the bandwidth of a sampled signal, equal to half of the sampling frequency of that signal. Since in step-by-step time-domain simulation the sampled signal represents a continuous spectral range starting at 0 Hz, the Nyquist frequency is the highest frequency that the time-domain solution will contain.
High-voltage direct-current systems 7. Transient Recovery Voltage  achieve accurate results, with an acceptable distortion error, unless a faster C P U or a faster solution algorithm is implemented. A n alternative solution to this situation is to simulate the power system with a P C cluster architecture, which speeds up the global solution. For instance, test case 1 simulated in a single Pentium II 400 M h z can not achieve real-time for a distortion error less than 3% at a frequency of 2kHz, but if the same system is simulated with a P C cluster, real-time can be achieve with the desired accuracy.  2.4 Input/Output Interface Port Selection To achieve real-time performance using the P C cluster scheme with a time step of 50 microseconds with 6 outputs per subsystem node, and solving the network system with R T N S , a maximum communication time of 0.45 microseconds per byte transferred is 8  imposed. This limit permits keeping the maximum distortion error introduced by the integration rule under 3%, as well as the use of three phase single and double circuit line links between subsystem nodes. The necessary bandwidth is determined by the amount of data to be transferred between solver nodes. A n input/output standard interface such as a parallel port in its configuration E P P  9  and E C P , including the outstanding PCI-EPP/ECP interface technology, is not fast 1 0  enough for the requirements of R T N S with a multimachine scheme. Certain input/output standard interfaces offer high transfer rates. However, these transfer rates are valid only for very large memory blocks. This results from the fact that  8. Real-time Network Simulator 9. Enhanced Parallel Port 10. Extended Capabilities Port  13  they are meant for video, massive storage devices, or the video-conference applications in which great volumes of data need to be exchanged. Examples of the above I/O standards are PCI , SCSI 11  12  and A G P . 
13  •ZKKkpit & Syrtchroni?alioh  - Oislnbulion Oooo Dua  m ... g  Interface Cardl  L—!P. J—J E  Interface Card 2  Computer 3  Computer 2  Computer 1  Figure 2-2. Proposed  Multimachine  solution  architecture  In applications like RTNS, the needs are different since instead of transferring large amounts of interchanged data the objective is to transfer at high transfer speed small memory blocks. In order to complete each transfer cycle even in the case of small-sized blocks these massive I/O standard interfaces consume much more time than the allowed  11. Personal Computer Interface 12. Small Computer System Interface 13. Accelerated Graphics Port  14  time step size for most of the power simulation cases. Consequently, these devices do not meet the real-time power system simulation requirements. Although the SCSI I/O interface seems more attractive than IDE because of its capability of transferring 32 data bits at each hand shake, it is not entirely suitable for our application since it is not directly connected to the C P U bus. There is no point in using a more expensive SCSI based system when the lower-cost A T A / I D E interface will do a better job. The I D E I/O interface is directly connected to the computer bus and it is not necessary to use an extra driver or card to write or read the desired data. Not only is the I D E interface faster and more economical, but also the system is not tied to any third part vendor or manufacturer of the driver as in the SCSI alternative. A l l standard P C computers have at least two I D E ports in their motherboard. Each of them can address two devices. Some of our measured data transfer rates for the E P P / E C P , P C I - E P P / E C P and IDE I/O interfaces are shown in Table 3-1 on page 21. Special consideration is given to communication networks like the costly Myricom Myrinet networks chosen by some research groups such as Mitsubishi. These type of networks are not suitable for our real-time network requirements. 
Even though they can achieve transfer rates between 78 MBytes/sec and 120 MBytes/sec, their roundtrip time is remarkably large, from 15 to 25 microseconds. For instance, using a Myricom Myrinet network the necessary time to transfer 4 KBytes is 25.6 microseconds while with the pin-down option the roundtrip time is increased up to 47 microseconds [7], where the p i n - d o w n cost 14  depends on processors and operating systems. For smaller memory blocks the latency is  14. User virtual memory must be copy to a physical memory location before the message is sent or received. 15  more important and the consequent bandwidth smaller. These type of networks are designed to transfer extensive amount of data but are not fast enough to reconfigure themselves in a few microseconds for repetitive small size data block transfers as required in our problem of power system transients simulation. For all the above mentioned reasons, the IDE I/O interface was chosen in the present thesis to interconnect the subsystem computers.  2.5 I/O Interface Latency Latency is understood as the time required to read from or write to a storage device after the proper controls and addresses have been applied. This overhead time is also present when data transfer between computers is performed. In real-time applications with time steps in the order of microseconds, the lack of care for this accessing time can be the difference between the success of the communication process or its failure. The communication channel cannot be used until the initialization process finishes and the hardware-software implementation needs a rather complicated synchronization procedure in order to avoid simultaneous access requests from the communicating computers. 
For example, in the case of two computers that finish their step calculation at the same time, they need to wait twice the necessary data transfer time before they complete their cycle and start the next time step calculation, this in addition to the time required to send and receive the communication request acknowledge. A n effective way to reduce the latency problem as well'as to provide the communication system with a pseudo simultaneous bidirectional response is through the use of a  16  double-port memory array between computers. Dual Port Static Random Access Memories allow two independent devices to have synchronous access to the same memory. Both devices can then communicate with each other by passing data through the common memory. A D P M  1 5  has two sets of addresses, data and read/write signals, each of which access  the same set of memory cells. Each set of memory controls can independently and simultaneously read any word in memory including the case where both sides are accessing the same memory location at the same time. See Figure 3-5 on page 26. The I/O interface card developed in this work uses a D P M configuration with two pages, where one of the pages is dedicated to write the data and the other one to read it. Since the memory is transparent to both computers, a re-direction to the appropriate page for each computer can be implemented with suitable hardware programming. See Figure 2-3 on page 18. In that way the data transfer can be done simultaneously for both machines thus reducing the latency communication time even to a negligible value. During the real-time simulation, both subsystem nodes perform the writing and reading process at the same time. This feature permits the code for any of the subsystem nodes to be transparent to the process, without having to include any special waiting cycle to perform the data transfer. 
In the proposed implementation, the latency problem is considerably reduced because whenever either of the computers needs to read or write data, the memory is always available. The access time of the double-port memory is 35 nanoseconds for the chosen IDT double-port memory chip [12], and the inherent IDE port round-trip latency using PIO^16 mode 2 is approximately 3 microseconds for a memory block of 48 Bytes.

15. Dual Port Memory
16. Programmed Input/Output

Figure 2-3. Double Port Memory functionality

Some motherboards implement a common PCI bus on top of which the IDE bus runs. To make the IDE peripheral devices compatible with the PCI bus speed, the IDE signals are controlled with the granularity of the PCI clock. For instance, a data port compatible IDE transaction takes a total of 25 PCI clock pulses [15]. In this situation an extra overhead is introduced, since the PCI latency must be considered. The options to improve the IDE transfer performance are:
• use the most appropriate PIO mode.
• choose a different system architecture, such as the Intel 810e chipset, which allows the IDE port to access a faster system bus directly.
• configure the PCI latency cycles to a faster transfer mode.

There are five PIO IDE timing modes: 0, 1, 2, 3, and 4. Modes 0 through 4 provide successively increased performance. IDE data port transaction latency consists of start-up latency, cycle latency, and shutdown latency. Start-up latency is incurred when a PCI master cycle targeting the IDE data port is decoded and the data address and chip select lines are not set up. Start-up latency provides the setup time for the data address and chip select lines prior to assertion of the read and write strobes. Cycle latency consists of the I/O command strobe assertion length and recovery time.
Recovery time is provided so that transactions may occur back-to-back on the IDE interface (without incurring start-up and shutdown latency) without violating the minimum cycle periods of the IDE interface. The command strobe assertion width for the enhanced timing mode is selected by the IDETIM register and may be set to 2, 3, 4, or 5 PCI clocks. The recovery time is selected by the IDETIM register and may be set to 1, 2, 3, or 4 PCI clocks. If IORDY is asserted when the initial sample point is reached, no wait states are added to the command strobe assertion length. If IORDY is negated when the initial sample point is reached, additional wait states are added. Since the rising edge of IORDY must be synchronized, at least two additional PCI clocks are added.

Table 2-1. IDE Transaction Timings (PCI clocks)^a

IDE Transaction Type     | Start-up Latency | Command Strobe | RCT | Shutdown Latency
No-Data Port compatible  | 4                | 11             | 22  | 2
Data Port compatible     | 3                | 6              | 14  | 2
Fast Timing Mode         | 2                | 2-5            | 1-4 | 2

a. For instance, with a PCI bus of 33 MHz, each PCI clock takes 30 nanoseconds.

I/O INTERFACE CARD IMPLEMENTATION

3.1 Input/Output Interface Card

The IDE port was chosen for several reasons: to achieve portability, to ensure the necessary transfer rate, and to follow the policy of using only off-the-shelf computer components. Since the IDE port is available in all current motherboards, it is a perfect choice from the portability point of view, and it also allows a low design cost to be maintained. Prior to the selection of the IDE port, the parallel port was tested applying the same block design, but the maximum transfer rates obtained with both the standard LPT and the fast PCI LPT ports were below or near the limit imposed by the real-time requirements. This is shown in Table 3-1 on page 21.
Transfer rates between 0.23 and 0.32 µs/Byte were achieved with the adopted approach in almost every standard Pentium Pro and Pentium II PC, giving extraordinary portability to the design. The obtained rates depend on the selected operating mode and the system bus architecture. A minimum time step of 50 µs for the test cases was chosen in order to keep the maximum distortion error under 3% for the trapezoidal integration rule when representing frequencies up to 2 kHz. This constraint places the desired transfer rate at less than 0.45 µs/Byte for the case without further data optimization, and at a smaller transfer rate for the case with data optimization.

Table 3-1. Data Transfer Rates

I/O Interface           | Dell Pentium MMX 233 MHz^a  | Bus Width [bits] | Time needed for a 3-phase transmission line
Standard LPT (parallel) | 2.4 µs/Byte [0.416 MB/sec]  | 8                | 57.6 µs
Fast PCI LPT (parallel) | 1 µs/Byte [1 MB/sec]        | 8                | 24 µs
IDE                     | 0.3 µs/Byte [3.3 MB/sec]    | 16               | 7.2 µs

a. The CPU speed does not impose a great difference in the port transfer timings.

The feasible number of line links to be connected between subsystem nodes in the PC cluster scheme depends on four variables:
• Size of the subsystem
• CPU speed
• Number of outputs
• Symmetry of the distribution of loads among the subsystem nodes

A more flexible link connection scheme is obtained when the following conditions are met: a faster subsystem node CPU, a smaller number of outputs, and a smaller size of the case system loaded in each CPU node. Perfect symmetry of the computational loads guarantees the maximum speed gain of the cluster array. For instance, using a Pentium II 400 MHz, three outputs, loading each subsystem node with a case of size 30 nodes, and a time step of 50 microseconds, it is possible to implement the following configuration of links:
• Two 3-phase circuits: communication of the subsystem node with two other subsystem nodes.
Figure 3-1, scheme I.
• One 3-phase double circuit: communication with another subsystem node. Figure 3-1, scheme II.

Figure 3-1. Connectivity alternatives with a PII 400 MHz (Scheme I and Scheme II)

Just by upgrading the CPU from a Pentium II 400 MHz to a Pentium III 600 MHz, within a time step of 50 microseconds and keeping the previous setting of three outputs and a 24-node subsystem load, the schemes shown in Figure 3-2 on page 23 become feasible to implement. The inherent flexibility of the approach is based on the simplicity of the communication concept and on the fixed communication time characteristic of a given subsystem node connectivity layout, which is independent of the number of computers added to the PC cluster array. Better performance can be achieved with the implementation of data optimization algorithms, such as delta modulation or data compression processes.

Figure 3-2. Connectivity alternatives with a PIII 600 MHz (Scheme I and Scheme II)

3.2 Design Process and Prototype Implementation

Three CAD tools were used to produce the prototype of the hardware interface card implementation shown in Figure 3-3 on page 24 and Figure 3-4 on page 24. During the period of design and evaluation of the individual functional blocks, both OrCad and an experimental protoboard were used. See Figure 1-1 on page 60. Once the final prototype version presented in this thesis was defined, OrCad and Tango were run to obtain the translation of the schematic into the netlist and the printed circuit board version, respectively. Appendix I includes illustrations of the circuit schematics and PCB^1 files.

1. Printed Circuit Board

Figure 3-3. Picture of the I/O interface card, component side

3.3 Double Port Memory Block

The DPM^2 block is the central element of the present I/O interface card. This block is in charge of storing and transferring data between the node subsystems. Each subsystem writes and reads data from different memory pages.
See Figure 2-3 on page 18. The available memory resource for each subsystem node using the chosen IDT7132 [12] is 16K (2K x 16 bit); each page can address 512 memory cells. The IDT7132/7142^3, the master and slave DPMs, provide two ports with separate controls, addresses, and I/O pins that permit independent access to read or write any location in the DPM. Since the implemented software for both subsystem nodes writes and reads to/from the DPM in different memory pages, and since both simulations are perfectly synchronized through the external synchronization block, there is no possibility of simultaneous access conflicts in the DPM. At every time step during the data transfer, subsystem node I writes its data into memory page I, while subsystem node II writes its data into memory page II. Next, subsystem node I reads the data from page II while subsystem node II reads it from page I. To determine which memory cell should be addressed, a hardware counter is used every time the memory is accessed. The counter is implemented with a pair of synchronous binary 4-bit counters, such as the 74LS161. When the writing process is started at every time step, the counter is reset to the first position of page I on one side of the memory array, while on the other half of the memory it is reset to the first position of page II. This counter is incremented after each memory cell is filled. Once this process is accomplished, the counters are reset again to the correct values in order to read from the proper pages. The DPM is available to each subsystem node at any moment, and it can be accessed simultaneously. In this implementation the simultaneous access condition is limited to different pages; simultaneous access to an individual memory cell is not allowed.

2. Dual Port Memory
3. Applying Width Expansion with the IDT7142 "slave" DPM.
To clean up the signals and assure proper functionality, all the DPM control lines (Chip Select, Write, Read, and Busy) are passed through an octal D-type flip-flop. The chosen 74LS273 successfully copes with this task. See Figure 1-3 on page 62.

Figure 3-5. Dual Port Memory Block Diagram (left and right CPU/I/O devices with independent address decode, data I/O, and R/W lines into the dual-port RAM memory cells; the control logic provides the BUSY, interrupt, and semaphore signals for each side)

DPMs can be combined to form larger dual-port memories using master and slave memory components. In the design described here, a set of two DPMs is used in each I/O interface card in order to obtain an array of 2K x 16 bit memory blocks. See Figure 3-6 on page 27. This arrangement is particularly useful because, since the IDE port is used, 16 bits can be written or read in each handshake without any further delay. Even though a depth expansion with this type of device is feasible, for the present application it is not necessary to incorporate this option in the design.

Figure 3-6. Memory Width Expansion (master and slave DPMs share the address, R/W, OE, CE, and BUSY lines and split the data bus to double the word width)

3.4 Synchronization Block

In real-time simulation all the necessary operations must be completed within the adopted time step. Moreover, as the size of the network assigned to each machine can be different, as well as the capability of each computer to perform the subnetwork calculation, it is desirable to have an independent synchronization source.
When the subsystem calculations are finished, this source is in charge of triggering a signal to all the cluster computers so that they start each individual subsystem calculation for the upcoming time step synchronously with the real-time clock. The simulation program must only verify that the slowest computer in the system is capable of solving its subsystem and sending the data within the time step. All the other computers will then follow the slowest one, or the one with the largest calculation load. Another function incorporated into the synchronization block is the start-off signal. This signal allows the user to start and interrupt the simulation on all the machines simultaneously. The main advantage of using an external time source is that it can be selected with the appropriate accuracy and it can easily be shared by all the computers in the parallel array. In the present work, a CHMOS Programmable Interval Timer was selected to provide the external real-time clock, whose resolution can be expressed as follows:

resolution = 1 / frequency

Since the first IBM PC, based on the 8088 microprocessor, appeared, and up to the current high-performance Pentium-based PCs, this type of counter has been available and has remained unchanged. For the earlier PCs the 82C53 was used first, and later it was replaced by the 82C54, which exhibits an improved and backward-compatible design. Six programmable timer modes allow the 82C54 to be used for several applications (e.g., as a counter, as an elapsed time indicator, or as a programmable one-shot). See Table 3-2 on page 29. The selected working mode for the I/O card design presented in this work is mode 3, a square wave generator. The alternative of using an internal time source provided by each subsystem node is not suitable for PC cluster layouts, since it makes the synchronization among the slave nodes much more difficult, and it also increases the complexity of the communications.

Table 3-2.
82C54 Operation Modes

MODE 0   Interrupt on Terminal Count
MODE 1   Hardware Retriggerable One-Shot
MODE 2   Rate Generator
MODE 3   Square Wave Mode
MODE 4   Software Triggered Strobe
MODE 5   Hardware Triggered Strobe

The real-time source is configured through one of the parallel ports available in the master unit. During the configuration of the synchronization block the interrupts are disabled. An error will be associated with the delta t, since the hardware time step is obtained through an integer value programmed into the counter, and the desired time step is not always an integer multiple of the clock resolution. However, the error introduced is not relevant (e.g., +/- 0.0625 µs). Another error, of around 25 ppm^4/°C, is introduced in the real-time clock by the crystal. The real-time clock resolution for the chosen crystal is:

δ = (8 MHz)^-1 = 0.125 µs

Programming of the 82C54 is available to the user through the simulator's GUI^5, since it is necessary to reprogram the external timing source when a new delta t is chosen for a simulation. The control of the synchronization is implemented in the master unit computer.

4. parts per million.
5. Graphical User Interface

The 82C54 can be fed with a crystal of a frequency of up to 10 MHz, achieving a maximum resolution of 100 nanoseconds for the external timing source. The designed circuit allows flexibility, since higher-frequency crystal oscillators can also be used through a convenient frequency divider. For instance, with a crystal oscillator of 20 MHz and a division factor of 2, a final precision of 100 nanoseconds is obtained. This feature is achieved through a synchronous 4-bit counter, the 74LS161. See Figure 1-3 on page 62.

Figure 3-7.
82C54 CHMOS Programmable Interval Timer (block diagram: data bus buffer, read/write logic, control word register, and counters 0 through 2)

When a control word is written to one of the counters, all control logic is immediately reset and the output goes to a known initial state. The 82C54 counters are programmed by writing a control word and then an initial count. The control words are written into the control word register, which is selected when pins A1 and A0 are set to 1, and the control word itself specifies which counter is being programmed. Each of the three timers included in the 82C54 has a resolution of 16 bits, one in 65536 multiples of the input frequency period. After writing a control word and initial count, the counter will be loaded on the next clock pulse. This allows the counter to be synchronized by software. The control word format is as follows:

Control Word Format (A1, A0 = 11; CS = 0; RD = 1; WR = 0)

D7    D6    D5    D4    D3    D2    D1    D0
SC1   SC0   RW1   RW0   M2    M1    M0    BCD

SC - Select Counter:
SC1 SC0
0   0    Select Counter 0
0   1    Select Counter 1
1   0    Select Counter 2
1   1    Read-Back Command

RW - Read/Write:
RW1 RW0
0   0    Counter Latch Command
0   1    Read/Write least significant byte only
1   0    Read/Write most significant byte only
1   1    Read/Write least significant byte first, then most significant byte

M - Mode:
M2 M1 M0
0  0  0   Mode 0
0  0  1   Mode 1
X  1  0   Mode 2
X  1  1   Mode 3
1  0  0   Mode 4
1  0  1   Mode 5

BCD:
0    Binary counter, 16 bits
1    Binary Coded Decimal (BCD) counter

The following is part of the C code that accomplishes the 82C54 programming:

// Available LPT ports
int Base[3] = { 0x338, 0x378, 0x278 };
RoundRealToNearestlnteger timer_low = timer_value%0xl00; timer_high = timer_value/0x100;  (deltaT*CLKIN);  //82C54 S e l e c t i o n Mode outp outp outp outp  ( Base ( Base ( Base ( Base  //Divider  [Port] [Port] [Port]  +  [Port]  +  + +  Output Data Output Output  • o |  ko ) ; MODE 3 ) ; . 1 1 ko ) ; - 0 | ko  ) ;  S e t t i n g s 8254  / / Low B y t e outp ( Base outp ( Base  [Port]  +  Output  [Port]  +  Data  outp ( Base outp ( Base //High Byte  [Port] [Port]  +  Output Output  , ,  outp ( Base outp ( Base  [Port] [Port]  +  Output Data  ,  +  OxA | ko ) ; timer _high )  outp  [Port]  +  Output  [Port]  +  Output  , ,  OxB | ko ) ; OxA ko ) ;  outp  ( Base ( Base  Enablelnterrupts  +  ,  OxA | ko ) ; t i m e r _ l o w ); OxB | ko OxA | ko  ); ) ;  j  () ;  3.5 Process Panel Indicator Block To obtain an external visualization of the transfer data process status, two leds, which are used to indicate the active read or write processes, are added to each half of the design. This function is implemented with four retriggerable monostable multivibrators, such as the 74LS123. Since the transfer process is too fast, a small time delay is introduced to the monostables multivibrators to assure that their status can be easily visualized.  32  3.6 Digital/Analog & Analog/Digital Cards In order to access the analog outputs and digital inputs of the simulation computer, the monitoring computer requires the appropriate data acquisition hardware. The selected hardware is a multi I/O board from National Instruments, the A T - M I O - 1 6 [17]. This board provides 16 analog inputs with a resolution of 12 bits (11 bits + sign). The drivers provided by National Instruments are used to access the AT-MIO-16.  3.7 Graphical User Interface To cope with the task of setting up and running the study cases with the real-time distributed network simulator, a basic G U I was implemented. This G U I was developed using the C V I software tool [13] from National Instruments. 
The main functions included in the GUI are the following:
• Synchronization block set-up
• Editing and compilation of Real-Time Distributed Network Simulator cases
• Simulation condition set-up
• Links to software tools such as Matlab, the schematic editor, Microtran, and WinPlot
• Loading and distribution of cases among the PC cluster array
• Start/stop simulation control

The introduced GUI is a beta implementation intended merely to simplify the testing and research processes, but it could serve as a base element to develop a more flexible and powerful GUI for future RTDNS versions. Figure 3-8 on page 34 shows a snapshot of the above mentioned interface.

Figure 3-8. Snapshot of the implemented Control GUI

3.8 Link Line Block

The link line block is a lossless transmission line model implementation with the losses lumped at the line ends. The decoupling provided by the lossless line makes possible the solution of the network using a PC cluster array. The lossless transmission line model is clearly explained by Dommel and Marti in [2], [14] and can be represented by the following equations in the modal domain:
The connection between modal and phase domain can be graphically represented as follows: k1  "ki  [Gphase]  'k2 1 'k3  Linear Transformation [Q]  k2  'ml  ml Transmission Line  k3  m2 m3  Linear Transformation [Q]  'm2 I 'm3  [Gphase] [h (t)l m  y P h a s e Domain  Modal Domain  Figure 3-10. Phase and Modal domain connection  35  P h a s e Domain  for a three phase  line  The modal matrix [G] is pre-calculated and fixed for a given time step. The history terms, both in modal and phase domain, must be re-calculated at each time step. The needed number of history values to remember can be expressed as follows:  To split the calculation of the lossless transmission line between two computational nodes at each time step, three voltages and three voltage history terms need to be transferred. Since the time price inherent to the data transfer is by far more expensive than the computational time, it was important to realize in the implementation that only the voltages at the other end need to be transmitted while the histories can be recalculated at the local node. In that way for a three phase transmission line only three phase voltages are needed to be transferred at each time step. Then each solver node will perform the calculation of an equivalent full line but receiving half of the voltage nodes from the other solver node through the I/O interface card. The functional diagram of the link line block implementation is shown in Figure 311 on page 37.  36  Yes-  Write sender voltages to the other solver node  Read receiving voltages from the other solver node  1  Solve Transmission Line  • •  Update history sources  Accumulate nodal currents  •  Calculate nodal voltages  Figure 3-11.  Link line block  implementation  The aim was to implement the link line block making it fully compatible with the R T N S line model as well as with the developed hardware. 
In this way the user can still dis-  37  tribute the original R T N S cases in the new P C cluster array running R T D N S by only adding the type of line used. For instance, all lines have an extra flag parameter -MmLink- which defines whether the line is used as a multimachine link or is fully solved in a single P C . The same flag is used to identify to which IDE port the link line must be connected. The rest of the transmission line data follows the same structure used by R T N S .  phases: 6 MmLink: 1 Zc: 987.9 328.4 275.9 222.6 237.2 244.1  Table 3-3.  MnLink  Flag  options  Link Line Options  Mini ink  Normal Lossless transmission line solved in a single PC  0  Lossless transmission line solved in PC cluster array Link between PC nodes-, connected to IDE port 1  1  Lossless transmission line solved in PC cluster array Link between PC nodes-, connected lo IDF. port 2  2  V.IIIL-  The following C code shows how the process of reading and writing the data from and to the I/O interface card is implemented. if {  ( l i n e [ i ] .mmlink==l) outp(IDE0_CS1, ReadMm3phLine ReadMm3phLine ReadMm3phLine  4 | 0x80 ); ( ( u n s i g n e d s h o r t i n t *)&vl_mm, IDE0_CS0) ( ( u n s i g n e d s h o r t i n t *)&v3_mm, IDE0_CS0) ( ( u n s i g n e d s h o r t i n t *)&v5_mm, IDE0_CS0)  }  e l s e i f (line[i].mmlink==2) { o u t p ( I D E l _ C S l , 4 | 0x80 ); ReadMm3phLine ( ( u n s i g n e d s h o r t i n t *)&vl_mm, IDE1_CS0) ReadMm3phLine ( ( u n s i g n e d s h o r t i n t *)&v3_mm, IDE1_CS0) ReadMm3phLine ( ( u n s i g n e d s h o r t i n t *)&v5_mm, IDE1_CS0) }  38  Under the present architecture a check-sum is needed in order to verify the accuracy of the transferred data between nodes.  3.9 I/O Card Performance The inherent time for the write operation of a byte through the I/O interface card is shown in Figure 3-12 on page 40, the data was acquired with a Tektronik 340A digital Scope. 
For power system simulation, in which the links between substations usually involve no more than a double three-phase circuit, it is much more convenient to design a communication interface that achieves a scant latency time. This approach is preferable even though the final data transfer rate may be lower than that of a high-speed communication network. This situation is especially relevant to our case since, thanks to the decoupling introduced by the transmission links, only a few bytes per time step need to be transferred to the other subsystem node. If the load distribution among the PC cluster were to involve a high number of transmission links between solver nodes, as in the case of lower-voltage distribution systems, the implementation of a high-speed communication network could be feasible. This is not the case, however, for high-voltage power systems, for which the presented topology works appropriately. While a typical Myrinet communication network can achieve up to 138 Mb/sec with a round-trip latency of 20 microseconds [7], the I/O interface card presented in this work achieves 4.5 Mb/sec with a round-trip latency of around 1.5 microseconds. For example, while the Myrinet network is still under configuration, the developed I/O card interface is already able to transfer all the information needed for the calculation. This round-trip IDE latency is fixed by the CPU bus frequency: the faster the bus, the smaller the latency introduced.

Figure 3-12. I/O interface write operation (oscilloscope capture; Ch1 pulse width 329.2 ns)

Once the links layout is defined in the simulator, the associated communication time is constant, independently of the number of computers added to the array. See Table 3-4.
Table 3-4. Communication times vs. links layout

Links per subsystem node | 3-phase line, single circuit | 3-phase line, double circuit
1                        | 7 µs                         | 14 µs
2                        | 14 µs                        | 28 µs

As can be observed in Table 3-4 on page 40, in the case of a substation node with one incoming and one outgoing three-phase double-circuit link, the time involved in the communication is 28 microseconds, a few microseconds more than the time needed just to configure a Myrinet high-speed communication network. Table 3-5 below and Figure 3-13 on page 42 show the relation between the number of nodes connected, the timings, and the connection layout of the nodes in the array. It is clear that, for a given load size (expressed in terms of the computational time needed to solve it) applied to each node element of the PC cluster array, there is a defined time step which can be maintained in real-time simulation, independently of the number of PCs added to the array.

Table 3-5. Communication Times vs. Number of Subsystem Nodes^6
Maximum of 2 line links per subsystem; transfer rate 0.30 microseconds/Byte; nodal load of 20.00 microseconds, equivalent to 30 nodes. All times in microseconds.

Number | Delta T   | 3-phase MMLine (24 Bytes)           | 6-phase MMLine (48 Bytes)
of PCs | Single PC | Delta T  | Comm.  | % Time           | Delta T  | Comm.  | % Time
       |           | Real MM  | Time   | Improvement      | Real MM  | Time   | Improvement
 1     |  20.00    | 20.00    |  0     |  0.00            | 20.00    |  0     |  0.00
 2     |  40.00    | 27.20    |  7.2   | 32.00            | 34.40    | 14.4   | 14.00
 3     |  60.00    | 34.40    | 14.4   | 42.67            | 48.80    | 28.8   | 18.67
 4     |  80.00    | 34.40    | 14.4   | 57.00            | 48.80    | 28.8   | 39.00
 5     | 100.00    | 34.40    | 14.4   | 65.60            | 48.80    | 28.8   | 51.20
 6     | 120.00    | 34.40    | 14.4   | 71.33            | 48.80    | 28.8   | 59.33
 7     | 140.00    | 34.40    | 14.4   | 75.43            | 48.80    | 28.8   | 65.14
 8     | 160.00    | 34.40    | 14.4   | 78.50            | 48.80    | 28.8   | 69.50
 9     | 180.00    | 34.40    | 14.4   | 80.89            | 48.80    | 28.8   | 72.89
10     | 200.00    | 34.40    | 14.4   | 82.80            | 48.80    | 28.8   | 75.60
11     | 220.00    | 34.40    | 14.4   | 84.36            | 48.80    | 28.8   | 77.82
12     | 240.00    | 34.40    | 14.4   | 85.67            | 48.80    | 28.8   | 79.67
13     | 260.00    | 34.40    | 14.4   | 86.77            | 48.80    | 28.8   | 81.23
14     | 280.00    | 34.40    | 14.4   | 87.71            | 48.80    | 28.8   | 82.57
15     | 300.00    | 34.40    | 14.4   | 88.53            | 48.80    | 28.8   | 83.73

6. Timings for a Pentium II 400 MHz, PIO mode 2.
This feature is advantageous, since it allows the simulation of systems of any size with the desired time step, just by correctly defining the basic load block to be assigned to each PC in the cluster and connecting the necessary PC solvers to the array.

Figure 3-13. Communication Timings, Single PC vs. PC cluster solution (curves: Delta T single PC; Delta T Real MM, 3-phase single circuit; Delta T Real MM, 6-phase double circuit)

TEST CASES

Test Case 1 and its subcases are described in this chapter. Each test case was first run on a single PC using the RTNS software, applying the minimum possible time step that achieves real-time performance. After this, the same system was pre-processed to run on a PC cluster scheme running the RTDNS software. Two cases were considered. The first used the same time step as the generic single-PC case, to verify the accuracy of the distributed scheme implementation. The second adopted the minimum possible time step that achieves real-time performance with the PC cluster scheme, in order to quantify the speed gain of the proposed distributed architecture. The adopted nomenclature is as follows:
• Test Case 1 - Full system simulated in one PC using RTNS. See Figure 4-1 on page 45.
• Test Case 1a - Subsystem of the full system, simulated in Node I of the PC cluster array. See Figure 4-2 on page 46.
• Test Case 1b - Subsystem of the full system, simulated in Node II of the PC cluster array. See Figure 4-2 on page 46.
Test Case 1 is a 54-node/88-branch system which includes sources, three-phase transmission lines in single and double circuits, MOVs, and Thevenin equivalent circuits for certain parts of the network. When a PC cluster solution is implemented, one of the three-phase transmission lines is chosen as the link line between subsystem calculation nodes. See Figure 4-2 on page 46. Under these circumstances the load of the subsystems is not symmetric, and it does not produce the most effective distribution. In spite of this situation, the speed gain is still very good. For instance, in Test Case 1 the speed gain under the tested load distribution is 32.89%, while under a perfectly symmetrical distribution of the load the speed gain can reach a value around 34.7%^1. The obtained speed performances for Test Case 1 are shown in Table 4-1. To perform the simulations, Pentium II 400 MHz machines, Asus P3B motherboards, 64 MB RAM, Phar Lap TNT ETS 8.5 [11] as the real-time operating system, and RTDNS were used. The reported times include 6 outputs for the single-PC case, while 3 outputs were included for each subsystem case (1a & 1b).

1. Adopting a PC cluster layout with a single three-phase link line and a perfectly symmetric subsystem node load.

Table 4-1. Test Case 1 timing results

Architecture                     | Nodes per Machine | Minimum Time Step to achieve Real Time | Adopted Time Step for test simulations
Single PC                        | 54                | 68.7 µsec                              | 70 µsec
PC cluster scheme with two Nodes | 24                | 43.6 µsec                              | 50 µsec
                                 | 30                | 46.1 µsec                              | 50 µsec
Speed Gain: 32.89%

A single-phase fault was applied for 10 milliseconds in Test Case 1. To compare the accuracy of RTDNS against RTNS, the results obtained by both the single PC and PC
There is no visual difference between the two results, as can be seen in the detailed zoom graph. The percentage difference between the full system simulated on one PC using RTNS and the same case distributed over two machines is zero. This proves that the Link Line model is correct and that the I/O interface card does not introduce any error into the simulation result. RTNS itself was validated against Microtran EMTP in [4].

Figure 4-1. Test Case 1

A comparison between running the same system at the delta t required by the single-PC solution of RTNS, which is 70 microseconds, and at the delta t of the two-PC solution of RTDNS, which is 50 microseconds, is shown in Figure 4-9 on page 51. Figure 4-6 on page 48 presents the execution times of Test Case 1 with both the single-PC and the PC cluster scheme solutions. The timing difference between the PC cluster nodes is due to the non-symmetric distribution of the load. The minimum possible real-time step is defined by a perfectly symmetric load distribution and lies between the presented Node 1 and Node 2 timings.

Figure 4-2. Test Cases 1a & 1b (Subsystem Node I: 30 nodes / 49 branches)

Figure 4-3. PC Cluster of two computers running Test Cases 1a & 1b, front view

Figure 4-4. PC Cluster of two computers, rear view

Figure 4-5. PC Cluster of three computers

Figure 4-6. Execution times, Single PC vs. PC clustered scheme

Figure 4-7. Single PC vs. PC cluster, solution using the same time step

Figure 4-8. RTNS vs. RTDNS; plots are superimposed and indistinguishable

The analog output voltages obtained at the evaluated nodes using a National Instruments multi-I/O board, the AT-MIO-16, are presented in Figure 4-10.

Figure 4-10. Analog outputs from Test Case 1, PC cluster simulation (oscilloscope capture, 21 Nov 1999)

Also, Test Case 2 and its subcases are presented. The simulated system, using a PC cluster of three Pentium II 400 MHz machines, is shown in Figure 4-11 on page 53. The speed gain is presented in Table 4-2 on page 53. Since the load distribution is non-symmetric, the obtained speed gain differs from the theoretical 42%.

Table 4-2. Test Case 2 timing results(a)

Architecture              Nodes per    Minimum Time Step      Adopted Time Step
                          Machine      to achieve Real Time   for test simulations
Single PC                 78           60 μsec                65 μsec
PC Cluster with           30           33.42 μsec             40 μsec
three Solver Nodes        18           37.77 μsec             40 μsec
                          30           33.44 μsec             40 μsec
Speed Gain: 37.05%

a. Simulations without analog outputs, using Pentium II 400 MHz machines, PIO mode 2 and a non-symmetric load.

CONCLUSIONS AND RECOMMENDATIONS

This thesis introduces an efficient and accurate PC cluster solution for real-time network simulations, using off-the-shelf equipment plus an efficient I/O interface to perform the communication between the nodes of the array. The proposed layout shows better and less expensive performance than the only currently competing PC cluster solution in the field. An additional relevant time saving is introduced by a more efficient solver algorithm. The constant inherent time of the communication interface represents a great advantage, since an unlimited number of PCs can be connected to the array without affecting the communication time within the time-step solution. So far, a PC cluster of up to three PCs has been tested. The present RTDNS software allows communication between nodes of the PC cluster with up to two computers for each subsystem node, achieving real-time performance for three-phase single-circuit links.
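As with Test Case 1, the Table 4-2 speed gain is set by the slowest subsystem node, since every node must finish its subsystem before the shared synchronization tick. A quick check against the table values:

```python
# Minimum real-time steps from Table 4-2, in microseconds.
t_single = 60.0
t_nodes = [33.42, 37.77, 33.44]  # the three cluster solver nodes

# Each step, the cluster must wait for its slowest node.
t_cluster = max(t_nodes)
gain = (t_single - t_cluster) / t_single * 100.0
print(f"{gain:.2f}%")  # matches the 37.05% Speed Gain row
```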
A simple CPU upgrade to a Pentium III 600 MHz would allow the simulation of systems with up to two double-circuit links.

The importance of multi-PC synchronization should be stressed: even if small differences, such as one time step, were allowed, they could introduce numerical oscillations and accumulated run-off errors. To address this problem, a robust external shared synchronism source is implemented, based on a CHMOS programmable interval timer (82C54).

Part of the appeal of the proposed solution is that the concept can be applied to future, faster CPU busses with minor or no modifications. Future research should be done to develop:

•  A better and friendlier user interface for the simulator.

•  An implementation of new models for fully frequency-dependent transmission lines in real time, and their corresponding Link Line version.

•  An implementation of data compression algorithms to increase the capabilities of the I/O interface.

•  An exploration of Direct Memory Access¹ performance.

A crucial topic that calls for future research is the development of a load distribution algorithm which automatically finds the optimal load distribution among the subsystem nodes. In order to implement bigger PC cluster arrays, it is also necessary to distribute the data cases among the PC cluster array using a more flexible scheme, such as FTP or TCP/IP.

1. A method for transferring data between an external device and memory without interrupting program flow or requiring CPU intervention.

References

[1]  H.W. Dommel, (April 1969). Digital computer solution of electromagnetic transients in single and multiphase networks. IEEE Trans. on Power Apparatus and Systems, Vol. PAS-88, No. 4, pp. 388-399.

[2]  H.W. Dommel, (1992). EMTP Theory Book (2nd ed.), Vancouver, Microtran Power System Analysis Corporation.

[3]  J.R. Marti, L.R. Linares, (August 1994). Real Time EMTP-based transients simulation. IEEE Trans. on Power Systems, Vol. 9, No. 3, pp. 1309-1317.

[4]  L.R. Linares R., (April 1993). A Real Time Simulator for Power Electric Networks. M.A.Sc. Thesis, UBC Library.

[5]  W.H. Press, S.A. Teukolsky, W.T. Vetterling, B.P. Flannery, (1997). Numerical Recipes in C (2nd ed.), Cambridge.

[6]  K.K. Fung, S.Y.R. Hui, (1996). Transputer simulation of decoupled electrical circuits. Mathematics and Computers in Simulation 42, 1-13.

[7]  Hiroshi Tezuka, Francis O'Carroll, Atsushi Hori and Yutaka Ishikawa, (1997). Pin-down Cache: A Virtual Memory Management Technique for Zero-copy Communication.

[8]  Yasushi Fujimoto, Yuan Bin, Hisao Taoka, (1999). Real-time Power System Simulator on a PC cluster. IPST'99, Budapest.

[9]  J. Calvino-Fraga, (June 1999). Implementation of a Real Time Simulator for Protective Relay Testing. M.A.Sc. Thesis, UBC Library.

[10]  Fromonto, Levacher, Strunz, (1999). An efficient Parallel Computer Architecture for real time simulations of electromagnetic transients in power systems. Electricite de France, 31st North American Power Symposium, pp. 490-496.

[11]  Phar Lap, (1999). Phar Lap TNT Embedded ToolSuite. Available: http://www.pharlap.com/

[12]  IDT, (1999). IDT 7132/42 Data Sheet. Available: http://www.idt.com/products/pages/Multi-Port-7132.html

[13]  National Instruments, (1999). CVI home page. Available: http://www.ni.com/cvi/

[14]  Marti, J.R., (1991). Simplified Updating Schemes for EMTP's models. The University of British Columbia, Department of Electrical Engineering Internal Report.

[15]  Intel, (1999). IDE Controller Functional Description. Available: http://developer.intel.com/design/intarch/techinfo/440bx/idefunc.htm#716485

[16]  Marti, J.R., Linares, L.R., Calvino-Fraga, J., Dommel, H.W., (1998). OVNI: an Object Approach to Real-Time Power System Simulators. Proceedings of the 1998 International Conference on Power System Technology, Powercon'98, Beijing, China, April 18-21, pp. 977-981.

[17]  National Instruments, (September 1994). AT-AO-6/10 User Manual.

[18]  Shanley, T., & Anderson, D., (1997). PCI System Architecture (3rd ed.), Addison-Wesley.

[19]  Texas Instruments, (1985). The TTL Data Book (Vol. 2).

[20]  D'Ajuz, A., Fonseca, C., Carvalho, F.M., Filho, J., Dias, L.E., Pereira, M., et al., (1987). Transitórios Elétricos e Coordenação de Isolamento: aplicação em sistemas de potência de alta tensão. Universidade Federal Fluminense, Editora Universitária.

Appendix I. Schematics and PCB Files

Figure I-1. Top PCB Plot, I/O Interface Card

Figure I-2. General Block Schematic of the IDE Interface Card

Figure I-3.
Schematic of the Synchronization Block

Final Netlist I/O Interface Card
Revised: June 24, 1999

[C1 cap300 100uF] [C10 cap100 22p] [C11 cap100 22p] [C12 cap100 22p] [C13 cap100 22p] [C14 cap100 22p] [C15 cap100 1u] [C16 cap100 1u] [C17 cap100 1u] [C18 cap100 1u] [C19 cap100 22p] [C2 cap100 22p] [C20 cap100 22p] [C3 cap100 22p] [C4 cap100 22p] [C5 cap100 22p] [C6 cap100 22p] [C7 cap100 22p] [C8 cap100 22p] [C9 cap100 22p] [D1 led100 LED Green] [D2 led100 LED Green] [D3 led100 LED Yellow] [D4 led100 LED Red] [D5 led100 LED Red] [J1 molex40 CON40A] [J2 molex40 CON40A] [J3 dip14 XTAL 16MHz] [JP1 idc26 DB25] [R1 res300 390] [R10 res300 150k] [R11 res300 330] [R2 res300 390] [R3 res300 150k] [R4 res300 330] [R5 res300 150k] [R6 res300 330] [R7 res300 330] [R8 res300 150k] [R9 res300 330] [S1 dip8 SW DIP-4] [U1 dip48 IDT7132] [U10 dip16 74LS161] [U11 dip16 74LS161] [U12 dip20 74LS273] [U13 dip14 74LS00] [U14 dip16 74LS123] [U15 dip16 74LS123] [U16 dip24 82C54] [U17 dip16 74LS161] [U2 dip48 IDT7142] [U3 dip16 74LS161] [U4 dip14 74LS00]

(N10656 U10,15 U11,10) (N10703 U13,1 U12,6 U13,2) (N10669 U13,4 U12,19) (N10657 J2,31 U13,6) (N10681 U12,1 J2,1) (N07333 U4,8 U4,5) (N07366 U4,4 (N07420 U9,3 U4,13) (N07407 U8,6 U7,1) (N07218 U8,3 U6,11) (N07311 U3,2 U4,6 (N07134 U3,15 U5,10) (N07209 J1,38 U3,1 (N07413 U4,12 U6,19) (N07421 U4,11 J1,31) (N07343 U6,1 (N00332 U16,8 J1,1) JP1,2) (N00359 JP1,14 U16,19) (N00351 (N00372 JP1,8 U16,2) JP1,4 U16,6) (N00327 JP1,1 U16,23) (N00354 U16,1 JP1,9) (N00365 JP1,17 U16,20) (N00333 JP1,3 U16,7) (N00338 JP1,5 U16,5) (N00284 U17,13 S1,7) (N00283 U17,12 S1,6) (N00346 JP1,7 U16,3) (N00257 S1,2 S1,3 (N00285 U17,14 S1,8) (N00282 U17,11 S1,5) (N01635
J3 , 8  U17,2)  (N00343  JP1, 6  U16,4)  U4,3)  U5,2) U5,l  U8,2  U8,4)  SI,4  U16.9  Sl,l  64  U16.15  U16.18)  Figure 1-4. Top PCB Plot, I/O Interface Card  65  Appendix II. Test Case Files  66  Preprocessor Input Files Test Case 1 .BEGIN FILE .BEGIN GENERAL-DATA d e l t a T : 70.0E-6 t o t a l T i m e : 30000 numLumped: 3 0 nujnLines: 6 numSources: 6 numOutNodes: 6 .END GENERAL-DATA .BEGIN LUMPED R 6.5 n3a n2a c a l c C u r r no MOV no R 6.5 n3b n2b c a l c C u r r no MOV no R 6.5 n3c n2c c a l c C u r r no MOV no L 345 n2a n l a c a l c C u r r no MOV no L 345 n2b n i b c a l c C u r r no MOV no L 345 n2c n l c c a l c C u r r no MOV no C 66. 0 n6a n8a c a l c C u r r no MOV yes 250000 C 66 . 0 n6b n8b c a l c C u r r no MOV yes 250000 c 66 .0 n6c n8c c a l c C u r r no MOV yes 250000 c 66 .0 n7a n8a c a l c C u r r no MOV yes 250000 c 66 . 0 n7b n8b c a l c C u r r no MOV yes 250000 c 66 . 0 n7c n8c c a l c C u r r no MOV yes 250000 L 31 .11 NodBa n l 2 a c a l c C u r r no MOV no L 31 .11 NodBb n l 2 b c a l c C u r r no MOV no L 31 .11 NodBc n l 2 c c a l c C u r r no MOV no R 56 . 0 n l 2 a GROUND c a l c C u r r no MOV no R 56 . 0 n l 2 b GROUND c a l c C u r r no MOV no R 56 . 0 n l 2 c GROUND c a l c C u r r no MOV no R 6 . 5 n l 5 a n l 4 a c a l c C u r r no MOV no R 6 . 5 n l 5 b n l 4 b c a l c C u r r no MOV no R 6.5 n l 5 c n l 4 c c a l c C u r r no MOV no L 345 n l 4 a n9a c a l c C u r r : no MOV: no L 345 n l 4 b n9b c a l c C u r r : no MOV: no L 345 n l 4 c n9c c a l c C u r r : no MOV: no no MOV yes 250000 C 66 0 n l 8 a NodAa c a l c C u r r no MOV yes 250000 C 66 0 n l 8 b NodAb c a l c C u r r no MOV yes 250000 C 66 0 n l 8 c NodAc c a l c C u r r no MOV yes 250000 C 66 0 n l 9 a NodAa c a l c C u r r no MOV yes 250000 C 66 0 n l 9 b NodAb c a l c C u r r no MOV yes 250000 C 66 0 n l 9 c NodAc c a l c C u r r .END LUMPED .BEGIN LINES . 
BEGIN LINE-0 phases: 6 MmLink: OMmLink: 1 Zc: 987.9 328.4 275.9 222.6 237.2 244.1 d e l a y : 1.4069e-3 0.8699e-3 0.8506e-3 0.8418e-3 0.8418e-3 0.8420e-3 nodes: n l a no n4a no n i b no n4b no n l c no n4c no n l a no n5a no n i b no n5b no n l c no n5c no q-matrix: 0 23244529 -0 41893693 -0 28289203 0 51146427 -0 48902472 -0 55148454 0 01379915 -0 39499073 -0 48191931 0 26900888 0 58057410 0 33293163 0 45421967 0 48894408 -0 28793145 0 35712111 0 51054438 -0 16496383 0 16496383 -0 45421967 -0 48894408 -0 28793145 0 35712111 0 51054438 0 39499073 0 48191931 -0 26900888 0 58057410 0 33293163 0 01379915 0 55148454 -0 23244529 0 41893693 -0 28289203 0 51146427 -0 48902472 . END L I N E - 0 . BEGIN L I N E - 1 phases: 6 MmLink: OMmLink: 0 Zc: 987.9 328.4 275.9 222.6 237.2 244.1 delay: 1.4069e-3 0.8699e-3 0.8506e-3 nodes:  0.8418e-3 0.8418e-3 0.8420e-3 n 4 a no n6a no n4b no n6b no n4c no n6c no  n5a q-matrix:  no n7a no n5b no n7b no n5c no n7c no  0 51146427  48902472  -0  55148454  0 33293163 0 35712111  0 01379915 0 51054438  -0  39499073  -0  16496383  0 35712111  0 51054438  0 16496383  48191931 0 45421967 -0 45421967  0 33293163 0 51146427  0 01379915 -0 48902472  0 39499073 0 55148454  0 48191931 -0 23244529  -0  0 23244529 -0  .END LINE-1 .BEGIN LINE-2  67  -0  41893693  0 26900888 0 48894408 -0  48894408  -0  26900888  0 41893693  -0  28289203  0 58057410 -0  28793145  -0  28793145  0 58057410 -0  28289203  phases: 3 MmLink: 0 Zc:  621.9 275.3 290.9  delay:  0.4710e-3  nodes:  n8a no NodBa no n8b no NodBb no n8c no NodBc  0.3416e-3  0.3399e-3  •  q-matrix:  0.58702696 -0.40302458 0 7 0 7 1 0 6 7 8 0.55427582 0.82086139 0 0 0 0 0 0 0 0 0 0.58702696 -0.40302458 -0 7 0 7 1 0 6 7 8 .END LINE-2 .BEGIN LINE-3 phases: 6 MmLink: 0 Zc: 987.9 328.4 275.9 222.6 237.2 244.1 delay: 1.4069e-3 0.8699e-3 0.8506e-3 0.8418e-3 0.8418e-3 0.8420e-3 nodes: n9a no nl6a no n9b no nl6b no n9c no nl6c no n9a no nl7a no n9b no nl7b no n9c no nl7c no q-matrix: 0 51146427 -0 48902472 -0 
55148454 0 23244529 -0 0 33293163 0 01379915 -0 39499073 -0 48191931 0 0 35712111 0 51054438 -0 16496383 0 45421967 0 0 35712111 0 51054438 0 16496383 -0 45421967 -0 0 33293163 0 01379915 0 39499073 0 48191931 -0 0 51146427 -0 48902472 0 55148454 -0 23244529 0 . END LINE-3 . BEGIN LINE-4 phases: 6 MmLink: 0  41893693 26900888 48894408 48894408 26900888 41893693  -0 0 -0 -0 0 -0  28289203 58057410 28793145 28793145 58057410 28289203  41893693 26900888 48894408 48894408 26900888 41893693  -0 0 -0 -0 0 -0  28289203 58057410 28793145 28793145 58057410 28289203  Zc: 987.9 328.4 275.9 222.6 237.2 244.1 delay: 1.4069e-3 0.8699e-3 0 . 8506e-3 0.8418e-3 0.8418e-3 0.8420e-3 nodes: nl6a no nl8a no nl6b no nl8b no nl6c no nl8c nl7a no nl9a no nl7b no nl9b no nl7c no nl9c q-matrix:  0 51146427 -0 48902472 -0 55148454 0 23244529 -0 0 33293163 0 01379915 -0 39499073 -0 48191931 0 0 35712111 0 51054438 -0 16496383 0 45421967 0 0 35712111 0 51054438 0 16496383 -0 45421967 -0 0 33293163 0 01379915 0 39499073 0 48191931 -0 o' 51146427 -0 48902472 0 55148454 -0 23244529 0 . END LINE-4 .BEGIN LINE-5 phases: 3 MmLink: 0 Zc: 621.9 275.3 290.9 delay: 0.4710e-3 0.3416e-3 0.3399e-3 nodes: NodAa no NodBa no NodAb no NodBb no NodAc no q-matrix: 0.58702696 -0,.40302458 0.70710678 0.55427582 0.82086139 0.00000000 0.58702696 -0.40302458 -0.70710678 .END LINE-5 .END LINES .BEGIN SOURCES 408248 0 n3a 408248 120 n3b 408248 -120 n3c 408248 0 nl5a 408248 120 nl5b 408248 -120 nl5c . END SOURCES .BEGIN SWITCHES total: 6 n5a GROUND close 1 open: close: 3000 open: 3 600 n5b GROUND close: 1 open: 1 close: 3000 open: 3600 n5c GROUND close: 1 open: 1 close: 3000 open: 3600 GROUND nl7a close: 1 open: 1 close: 0.10 open: 0.20 GROUND nl7b close: 1 open: 68  c l o s e : 3000 open: 3 600 GROUND n l 7 c c l o s e ; 1 open: c l o s e : 3000 open: 3600 . 
END SWITCHES .BEGIN OUTPUT NodAa NodAb NodAc NodBa NodBb NodBc .END OUTPUT .BEGIN DACS total: 6 .BEGIN TO type: CCVT port: 0 3.00E-10 C l : 9.97E-08 Cc: 1.30E-10 0.708 Rc: Rf: 37.5 0.73 Cf: R: 100 r a t i o b : 110 r a t i o a : 5000 .END tO .BEGIN T l type: CCVT port: 1 C2: 3.00E-10 C l : 9.97E-08 0.708 Rc Cc: 1.30E-10 0.73 Cf Rf: 37.5 R: 100 r a t i o b : 110 r a t i o a : 5000 .END t l .BEGIN T2 type: CCVT port: 2 C2: 3.00E-10 9.97E-08 1.30E-10 0.708 Rc : 37.5 0 .73 Cf ; 100 r a t i o b : 110 r a t i o a : 5000 . END t2 .BEGIN T3 type: CCVT port: 3 C2: 3.00E-10 C l : 9.97E-08 Lc: 0.708 Rc: Cc: 1.30E-10 L f : 0.73 Cf: Rf: 37.5 R: 100 r a t i o b : 110 r a t i o a : 5000 . END t3 .BEGIN T4 type: CCVT port: 4 C2: 3.00E-10 C l : 9.97E-08 Lc: 0.708 Cc: 1.30E-10 Rc : L f : 0.73 Rf: 37.5 Cf : R: 100 r a t i o b : 110 r a t i o a : 5000 . END t4 . BEGIN T5 type: CCVT port: 5 C2: 3.00E-10 9.97E-08 Cl 1.30E-10 0 .708 Cc Rc : Rf 37 . 5 0.73 Cf : R: 100 5000 r a t i o b : 110 ratioa .END t5 .END DACS .END FILE  628  9.6E-06  628 9.6E-06  628 9 . 6E-06  628 9.6E-06  628 9.6E-06  628 9.6E-06  69  Test Case la .BEGIN FILE .BEGIN GENERAL-DATA d e l t a T : 50.0E-6 t o t a l T i m e : 30000 numLumped: 18 numLines: 4 numsources: 3 numOutNodes: 3 .END GENERAL-DATA .BEGIN LUMPED R 6.5 n3a n2a c a l c C u r r : no R 6.5 n3b n2b c a l c C u r r : no R 6.5 n3c n2c c a l c C u r r : no L 345 n2a n l a c a l c C u r r : no L 345 n2b n i b c a l c C u r r : no L 345 n2c n l c c a l c C u r r : no C 66 .0 n6a n8a c a l c C u r r no C 66 .0 n6b n8b c a l c C u r r no c 66 .0 n6c n8c c a l c C u r r no C 66. 0 n7a n8a c a l c C u r r no C 66 .0 n7b n8b c a l c C u r r no C 66 .0 n7c n8c c a l c C u r r no L 31. 11 NodBa nlOa c a l c C u r r L 31. 11 NodBb nlOb c a l c C u r r L 31 .11 NodBc nlOc c a l c C u r r R 56. 0 nlOa GROUND c a l c C u r r R 56. 
0 nlOb GROUND c a l c C u r r R 56 .0 nlOc GROUND c a l c C u r r  MOV MOV MOV MOV MOV MOV MOV MOV MOV MOV MOV MOV no no no no no no  no no no no no no yes yes yes yes yes yes MOV MOV MOV MOV MOV MOV  250000 250000 250000 250000 250000 250000 no no no no no no  . END LUMPED .BEGIN LINES .BEGIN LINE-0 phases: 6 MmLink: 0 Zc: 987.9 328.4 275.9 222.6 237.2 244.1 d e l a y : 1.4069e-3 0.8699e-3, 0.8506e-3 0.8418e-3 0.8418e-3 0.8420e-3 nodes: n l a no n4a no n i b no n4b no n l c no n4c no n l a no n5a no n i b no n5b no n l c no n5c no q-matrix: 0 51146427 0 33293163  - 0 48902472 0 01379915  0 35712111 0 35712111 0 33293163 0 51146427  0 51054438 0 51054438 0 01379915 -0 48902472  -0 -0  55148454 39499073  -0 16496383 0 16496383 0 39499073 0 55148454  -0  0 23244529 48191931  -0 41893693 0 26900888  0 45421967 -0 45421967  -0  0 48894408 48894408  0 48191931 -0 26900888 -0 23244529 0 41893693  .END LINE-0 . BEGIN LINE-1 phases: 6 MmLink: 0 Zc: 987.9 328.4 275 9 222.6 237 .2 244.1 0.8699e-3 0 8506e-3 d e l a y : 1.4069e-3 0.8418e-3 0 8420e-3 0.8418e-3 nodes: n4a no n6a no n4b no n6b no n4c no n6c no n5a no n7a no n5b no n7b no n5c no n7c no q-matrix: 0.51146427 48902472 - 0 . 5 5 1 4 8 4 5 4 0.23244529 0. 48191931 0. 0.33293163 0.01379915 -0.39499073 4 5 4 2 1 9 6 7 0 0.35712111 0.51054438 -0.16496383 4 5 4 2 1 9 6 7 0 0.35712111 0.51054438 0.16496383 0.33293163 0.01379915 0.39499073 0 . 4 8 1 9 1 9 3 1 - 0 0 0.51146427 -0.48902472 0.55148454 - 0 . 2 3 2 4 4 5 2 9 .END LINE-1 . BEGIN LINE-2 phases: 3 MmLink: 0 Zc: 621.9 275.3 290.9 d e l a y : 0.4710e-3 0.3416e-3 0.3399e-3 nodes: n8a no NodBa no n8b no NodBb no n8c no NodBc q-matrix: 70710678 0.58702696 - 0 . 4 0 3 0 2 4 5 8 0.55427582 0 . 82086139 00000000 0.58702696 - 0 . 4 0 3 0 2 4 5 8 70710678 . END LINE-2 . 
BEGIN LINE-3 phases: 3 MmLink: 1 Zc: 621.9 275  70  41893693  -0 28289203 0 58057410 -0  28793145  -0 28793145 0 58057410 -0 28289203  -0.28289203  26900888  0.58057410  48894408  -0 .28793145  48894408  -0.28793145  26900888  0.58057410  41893693  -0.28289203  d e l a y : 0.4710e-3 0.3416e-3 0.3399e-3 nodes: NodBa no GUNO no NodBb no GDOS no NodBc no GTRES no q-matrix: 0.58702696 -0.40302458 0.70710678 0.55427582 0.82086139 0.00000000 0.58702696 -0.40302458 -0.70710678 .END LINE-3 .END LINES .BEGIN SOURCES 408248 0 n3a 408248 120 n3b 408248 -120 n3c . END SOURCES .BEGIN SWITCHES total: 3 n5a GROUND c l o s e c l o s e : 3000 open: 3 600  1 open:  1  n5b GROUND c l o s e : c l o s e : 3000 open: 3600  1 open:  1  n5c GROUND c l o s e : c l o s e : 3000 open: 3600 .END SWITCHES .BEGIN OUTPUT NodBa NodBb NodBc .END OUTPUT .BEGIN DACS total: 3 .BEGIN TO t y p e : CCVT port: 0 9.97E-08 Cl 1.30E-10 Cc Rf 37.5 R: 100 ratioa 5000 .END tO .BEGIN T l t y p e : CCVT port: 1 C l : 9.97E-08 C c : 1.30E-10 R f : 37.5 R: 100 ratioa 5000 .END t l .BEGIN T2 t y p e : CCVT port: 2 9.97E-08 Cl 1.30E-10 Cc 37.5 Rf R: 100 r a t i o a : 5000 .END DACS . END FILE  1 open:  1  C2 Lc Lf  0.708 0.73  ratiob:  Rc Cf  628 9.6E-06  110  Rc : 62 8 Cf ; 9.6E-06 ratiob:  110  00E-10 708 Rc: 6 2 8 73 Cf : 9 . 6 E - 0 6 ratiob:  110  Test Case lb .BEGIN FILE .BEGIN GENERAL-DATA d e l t a T : 50.0E-6 t o t a l T i m e : 30000 numLumped: 12 numLines: 3 numsources: 3 numOutNodes: 3 .END GENERAL-DATA .BEGIN LUMPED R 6.5 n3a n2a c a l c C u r r : R 6.5 n3b n2b c a l c C u r r : R 6 . 
5 n3c n2c c a l c C u r r : L 345 n2a n l a c a l c C u r r : L 345 n2b n i b c a l c C u r r : L 345 n2c n l c c a l c C u r r :  no no no no no no  MOV: MOV: MOV: MOV: MOV: MOV:  no no no no no no  71  c c c c c c  no MOV yes 250000 66 0 n6a NodAa c a l c C u r r no MOV yes 250000 66 0 n6b NodAb c a l c C u r r no MOV yes 250000 66 0 n6c NodAc c a l c C u r r no MOV yes 250000 66 0 n7a NodAa c a l c C u r r no MOV yes 250000 66 0 n7b NodAb c a l c C u r r c a l c C u r r no MOV yes 250000 66 0 n7c NodAc .END LUMPED .BEGIN LINES .BEGIN LINE-0 phases: 6 MmLink: 0 Zc: 987.9 328.4 275 9 222.6 237.2 244.1 0.8699e-3 0.8506e-3 d e l a y : 1.4069e-3 0.8418e-3 0.8420e-3 0.8418e-3 n l a no n4a no n i b no n4b no n l c no n4c no nodes: n l a no n5a no n i b no n5b no n l c no n5c no 0  51146427' -0  48902472  -0  55148454  0  23244529  -0  41893693  -0  0 0  33293163  01379915  -0  39499073  58057410  16496383 16496383  0 0  0  -0 0  48191931 45421967 45421967  26900888  51054438 51054438  -0 0  48894408  -0  48894408  -0 -0  28793145 28793145  01379915 48902472  0  39499073 55148454  48191931  -0 0  26900888 41893693  0 -0  58057410  23244529  28289203 58057410  0 0 0  35712111 35712111 33293163 51146427  0 0 0 0 -0  0  -0 0 -0  28289203  28289203  .END LINE-0 .BEGIN LINE-1 phases: 6 MmLink: 0 Zc: 987.9 328.4 275.9 222.6 237.2 2 4 4 . 
1 d e l a y : 1.4069e-3 0.8699e-3 0 8506e-3 0.8418e-3 0.8418e-3 0 8420e-3 nodes: n4a no n6a no n4b no n6b no n4c no n6c no n5a no n7a no n5b no n7b no n5c no n7c no q-matrix: 0  51146427  -0  48902472  -0  55148454  0  23244529  -0  41893693  -0  0  33293163  0  -0  39499073  -0  26900888  0  35712111 35712111  0 0  -0 16496383 0 16496383  0 -0  48894408 48894408  33293163 51146427  0 -0  01379915 48902472  0 -0 0  48191931 45421967 45421967  0  0 0 0  01379915 51054438 51054438  -0 0  26900888 41893693  -0 -0 0  0  0 39499073 0 55148454  -0  48191931 23244529  .END LINE-1 .BEGIN LINE-2 phases: 3 MmLink: 1 Zc: 621.9 275.3 290.9 d e l a y : 0.4710e-3 0.3416e-3 0.3399e-3 nodes: NodAa no GUNO no NodAb no GDOS no NodAc no GTRES no q-matrix: 0.58702696 -0.40302458 0.70710678 0.55427582 0.82086139 0.00000000 0.58702696 -0.40302458 -0.70710678 .END LINE-2 .END LINES . BEGIN SOURCES 408248 0 n3a 408248 120 n3b 408248 -120 n3c .END SOURCES .BEGIN SWITCHES total: 3 GROUND n5a c l o s e : 1 open: 1 c l o s e : 0.10 open: 0.20 GROUND n5b c l o s e : 1 open: 1 c l o s e : 3000 open: 3 600 GROUND n5c c l o s e : 1 open: 1 c l o s e : 3000 open: 3600 . END SWITCHES .BEGIN OUTPUT NodAa NodAb NodAc .END OUTPUT .BEGIN DACS total : 3 .BEGIN TO type: CCVT port: 0 C l : 9.97E-08 C2: 3.00E-10 Cc: 1.30E-10 L c : 0.708 Rc:  628  72  -0  28793145 28793145 58057410 28289203  Rf: 37.5 R: 100 r a t i o a : 5000 . END tO .BEGIN T l type: CCVT port: 1 9.97E-08 Cl 1.30E-10 Cc 37.5 Rf R: 100 5000 ratioa .END t l .BEGIN T2 type: CCVT port: 2 C l : 9.97E-08 Cc: 1.3 0E-10 Rf: 37.5 R: 100 r a t i o a : 5000 .END t2 .END DACS .END FILE  Cf:  L f : 0.73  9.6E-06  r a t i o b : 110  00E-10 708 Rc: 628 73 Cf : 9.6E-06 ratiob:  110  00E708 73  Rc : 628 Cf : 9.6E-06  r a t i o b : 110  Test Case 2b  1  .BEGIN F I L E .BEGIN GENERAL-DATA d e l t a T : 40.0E-6 t o t a l T i m e : 30000 numLumped: 12 numLines: 3 numSources: 3 numOutNodes : 1 .END GENERAL-DATA .BEGIN LUMPED R 6 . 
5 n2a n l a c a l c C u r r : no MOV no R 6.5 n2b' n i b c a l c C u r r : no MOV no R 6.5 n2c n l c c a l c C u r r : no MOV no L 345 n l a nOa c a l c C u r r : no MOV no L 345 n i b nOb c a l c C u r r : no MOV no L 345 n l c nOc c a l c C u r r : no MOV no C 66 0 n3a NodAa c a l c C u r r : no MOV yes 250000 C 66 0 n3b NodAb c a l c C u r r : no MOV yes 250000 C 66 0 n3c NodAc c a l c C u r r : no MOV yes 250000 c 66 0 n4a NodAa c a l c C u r r : no MOV yes 250000 c 66 0 n4b NodAb c a l c C u r r : no MOV yes 250000 c 66 0 n4c NodAc c a l c C u r r : no MOV yes 250000 .END LUMPED .BEGIN LINES .BEGIN LINE-0 phases: 6 MmLink: 0 Zc: 987.9 328.4 275.9 222.6 237.2 244.1 delay: 1.4069e-3 0.8699e-3 0.8506e-3 0.8418e-3 0.8418e-3 0.8420e-3 nodes: nOa no n3a no nOb no n3b no nOc no n3c no nOa no n4a no nOb no n4b no nOc no n4c no q-matrix: 0 23244529 0 51146427 -0 48902472 -0 55148454 0 01379915 -0 39499073 -0 48191931 0 33293163 0 45421967 0 35712111 0 51054438 -0 16496383 0 16496383 -0 45421967 0 35712111 0 51054438 0 01379915 0 39499073 0 48191931 0 33293163 0 55148454 -0 23244529 0 51146427 -0 48902472 . END LINE-0 .BEGIN LINE-1 phases: 3 MmLink: 1 .Zc: 621.9 275.3 290.9 d e l a y : 0.4710e-3 0.3416e-3 0.3399e-3 nodes: NodAa no GUNO no NodAb no GDOS no NodAc no q-matrix:  1. Case with two line links 73  -0 0 0 -0 -0 0  41893693 26900888 48894408 48894408 26900888 41893693  GTRES no  -0 0 -0 -0 0 -0  28289203 58057410 28793145 28793145 58057410 28289203  0.58702696 -0.40302458 0.70710678 0.55427582 0.82086139 0.00000000 0.58702696 -0.40302458 -0.70710678 .END LINE-1 .BEGIN LINE-2 phases: 3 MmLink: 2 Zc: 621.9 275.3 290.9 d e l a y : 0.4710e-3 0.3416e-3 0.3399e-3 nodes: NodAa no GUNO no NodAb no GDOS no NodAc no GTRES no q-matrix: 0.58702696 -0.40302458 0.70710678 0.55427582 0.82086139 0.00000000 0.58702696 -0.40302458 -0.70710678 .END LINE-2 . END LINES .BEGIN SOURCES 408248 0 n2a 408248 120 n2b 408248 -120 n2c . END SOURCES . 
BEGIN SWITCHES total: 0 .END SWITCHES .BEGIN OUTPUT NodAa . END OUTPUT .BEGIN DACS total: 1 .BEGIN TO type: CCVT port: 0 9.97E-08 0.708 1.30E-10 Rc : 628 0.73 37.5 Cf : 9.6E-06 100 r a t i o b : 110 r a t i o a : 5000 .END tO .END DACS .END FILE  Real Time Input File  2  Test Case la .BEGIN FILE .BEGIN GENERAL-DATA i n i t C o n d i t i o n s : no totalTime: 3.000000000e+004 deltaT: 4.990019960e-00S lumped: 18 lines: 4 3ph-blocks: 1 6ph-blocks: 3 9ph-blocks: 1 12ph-blocks: 0 15ph-blocks: 0 sources: 3 .END GENERAL-DATA .BEGIN NODES 2 n3b 3 n3c 4 nla 5 n4a 6 1 n3a n5a 11 7 8 nlc 9 n4c 10 n5b 12 n4b 16 n7a 17 n7b 18 n7c 14 n6b 15 n6c 23 n8c 24 NodBc 25 GUNO 21 n8b 22 NodBb 30 n2c 31 nlOa 32' nlOb n2a 29 n2b 28 .END NODES .BEGIN BLOCKS .BEGIN BLOCK-30 size: 3 s o u r c e s : no nodes: 25 GUNO 2 6 GDOS 27 GTRES numSwStatus: 1  2. Only Tesis Case la RTDNS input file is shown  74  nib n5c 19 26 33  13 n6a n8a 20 NodBa GDOS 27 GTRES nlOc  initSwStatus: 0 A: switchstatus: 0 3.98585768052983550e+002 1.16776794444526690e+002 3.93826596145230500e+002 1.07 6857 6707 67 04830e+002 1.16776794444526670e+002  3.985857 68052983 500e+002  .END BLOCK-30 .BEGIN BLOCK-60 size: 6 sources: yes nodes: 28 n2a 4 1 n3a 2 shots: 334 numSwStatus: 1 initSwStatus: 0 A:  nla 6 n3b 3  nib n3c  8  nlc  29  n2b  30  n2c  switchstatus: 0 6.49699865385394750e+000 1.12165735520932090e-001 2.38724923349250280e+002 5.96027514653236780e-002 1.26853911391759140e+002 5.71927667756561430e-002 1.21724685361708410e+002 2 . 38724 9233492503 60e+002  2.54664140164076660e+002 1.26853911391759210e+002  0.00000000000000000e+000 5.96027514653236710e-002 1.19654831977974570e-001 5 . 
96027514653236920e-002 6.49700217263196400e+000 0.00000000000000000e+000 5.71927667756561430e-002
5.96027514653237190e-002 1.12165735520932130e-001 0.00000000000000000e+000 6.49699865385394660e+000
Gab:
-1.53846e-001 0.00000e+000 0.00000e+000 0.00000e+000 0.00000e+000 0.00000e+000
0.00000e+000 0.00000e+000 0.00000e+000 0.00000e+000 -1.53846e-001 0.00000e+000
0.00000e+000 0.00000e+000 0.00000e+000 0.00000e+000 0.00000e+000 -1.53846e-001
sources:
4.08248e+005 0.00 n3a
4.08248e+005 120.00 n3b
4.08248e+005 -120.00 n3c
.END BLOCK-60
.BEGIN BLOCK-61
size: 6
sources: no
nodes: 5 n4a 7 n4b 9 n4c 10 n5a 11 n5b 12 n5c
numSwStatus: 8
initSwStatus: 0
A:
switchStatus: 0
1.90167503452413290e+002
7.04866426837186280e+001 1.97657212577195200e+002
5.48146643140818670e+001 7.51762535182828340e+001 2.00758557055152380e+002
5.00904767326908950e+001 6.21942973234862540e+001 7.64400398359593250e+001 2.00758557055152410e+002
5.80571208401811490e+001 6.45009165784957050e+001 6.21942973234862390e+001 7.51762535182828200e+001 1.97657212577195200e+002
6.90018318131449320e+001 5.80571208401811700e+001 5.00904767326908950e+001 5.48146643140818740e+001 7.04866426837186420e+001 1.90167503452413310e+002
switchStatus: 1
1.77669625921748660e+002
5.49687885574195430e+001 1.78389637255364450e+002
3.57424110910364160e+001 5.14953970969511660e+001 1.71653547655638530e+002
0.00000000000000000e+000 0.00000000000000000e+000 0.00000000000000000e+000 0.00000000000000000e+000
3.93001899628889360e+001 4.12115767288503460e+001 3.35704822395939710e+001 0.00000000000000000e+000 1.69506636203543740e+002
5.53252408488032050e+001 4.10757299592906760e+001 2.92194603973220910e+001 0.00000000000000000e+000 4.99606877185066980e+001 1.75201030949624570e+002
switchStatus: 2
1.73114600417392550e+002
5.15410273971645980e+001 1.76608811736061090e+002
3.65465637565594080e+001 5.48805651848575380e+001 1.81188663454373680e+002
2.80092342730034010e+001 3.76622441105388650e+001 5.27852781299789410e+001 1.72166283495502030e+002
0.00000000000000000e+000 0.00000000000000000e+000 0.00000000000000000e+000 0.00000000000000000e+000 0.00000000000000000e+000
4.82980513714922490e+001 2.80060209513998900e+001 3.50554150063242390e+001 0.00000000000000000e+000 2.79113356062652260e+001 1.65031224679475430e+002
switchStatus: 3
1.68557859174961750e+002
4.54138659279058100e+001 1.68370005608385840e+002
2.79590796610056530e+001 4.33335189803903660e+001 1.65004974460866260e+002
0.00000000000000000e+000 0.00000000000000000e+000 0.00000000000000000e+000 0.00000000000000000e+000
0.00000000000000000e+000 0.00000000000000000e+000 0.00000000000000000e+000 0.00000000000000000e+000 0.00000000000000000e+000
4.37418328950511470e+001 0.00000000000000000e+000 2.89289564673964070e+001 0.00000000000000000e+000 1.93248366982355190e+001 1.60475528909050410e+002
switchStatus: 4
1.65130350904628100e+002
4.94207527969532590e+001 1.79932684363599290e+002
3.66394525234181430e+001 5.98838991053753200e+001 1.87564631627530080e+002
3.02011045779959910e+001 4.54596737304864930e+001 6.20017545475275330e+001 1.84958552483827990e+002
3.24812081566964610e+001 4.29817218811148650e+001 5.48588933637309280e+001 4.36279835673727230e+001 1.71530946487439820e+002
0.00000000000000000e+000 0.00000000000000000e+000 0.00000000000000000e+000 0.00000000000000000e+000 0.00000000000000000e+000 0.00000000000000000e+000
switchStatus: 5
1.60198939485787120e+002
4.19978337592621360e+001 1.68759467943326030e+002
2.65154466423013840e+001 4.46449199146468770e+001 1.66780420703799080e+002
0.00000000000000000e+000 0.00000000000000000e+000 0.00000000000000000e+000 0.00000000000000000e+000
2.35235300538899810e+001 2.94983367656911710e+001 2.52382005840503790e+001 0.00000000000000000e+000 1.55259743341230400e+002
0.00000000000000000e+000 0.00000000000000000e+000 0.00000000000000000e+000 0.00000000000000000e+000 0.00000000000000000e+000 0.00000000000000000e+000
switchStatus: 6
1.58979688852542380e+002
4.12817068466882160e+001 1.69162450457143340e+002
2.83780300528522820e+001 4.89517268395155580e+001 1.76468086140361550e+002
1.98129899486405530e+001 3.17132930012326110e+001 4.80486869104154200e+001 1.67413624045427700e+002
0.00000000000000000e+000 0.00000000000000000e+000 0.00000000000000000e+000 0.00000000000000000e+000 0.00000000000000000e+000
0.00000000000000000e+000 0.00000000000000000e+000 0.00000000000000000e+000 0.00000000000000000e+000 0.00000000000000000e+000 0.00000000000000000e+000
switchStatus: 7
1.56634870305406620e+002
3.75285167520683420e+001 1.63154976700353250e+002
2.26915863346091630e+001 3.98498267104017150e+001 1.62677832647081540e+002
0.00000000000000000e+000 0.00000000000000000e+000 0.00000000000000000e+000 0.00000000000000000e+000
0.00000000000000000e+000 0.00000000000000000e+000 0.00000000000000000e+000 0.00000000000000000e+000 0.00000000000000000e+000
0.00000000000000000e+000 0.00000000000000000e+000 0.00000000000000000e+000 0.00000000000000000e+000 0.00000000000000000e+000 0.00000000000000000e+000
.END BLOCK-61
.BEGIN BLOCK-62
size: 6
sources: no
nodes: 20 NodBa 22 NodBb 24 NodBc 31 n10a 32 n10b 33 n10c
numSwStatus: 1
initSwStatus: 0
A:
switchStatus: 0
1.69800896813590920e+002
4.25952896594734090e+001 1.67743457332343240e+002
3.89577705146050950e+001 4.25952896594734090e+001 1.69800896813590980e+002
7.29828226442739680e+000 1.83080568420767050e+000 1.67445997603009980e+000 5.39067314162251850e+001
1.83080568420767120e+000 7.20985061089727970e+000 1.83080568420767120e+000 7.86906129791195980e-002 5.39029304989115370e+001
1.67445997603010000e+000 1.83080568420767160e+000 7.29828226442740040e+000 7.86906129791196530e-002 7.19706537178657210e-002 5.39067314162251990e+001
.END BLOCK-62
.BEGIN BLOCK-90
size: 9
sources: no
nodes: 13 n6a 14 n6b 15 n6c 16 n7a 17 n7b 18 n7c 19 n8a 21 n8b 23 n8c
numSwStatus: 1
initSwStatus: 0
A:
switchStatus: 0
1.48082042496012490e+002
6.66765934059378790e+001 1.54344824666412190e+002
6.39381644667309810e+001 6.66823415088062030e+001 1.48096876522046640e+002
1.47711931687234740e+002 6.66820418062505380e+001 6.39423751249307840e+001 1.48096876522061820e+002
6.66767843805432680e+001 1.53967339069060330e+002 6.66820418062457490e+001 6.66823415088109640e+001 1.54344824666412280e+002
6.39340870019702120e+001 6.66767843805385070e+001 1.47711931687219590e+002 6.39381644667310030e+001 6.66765934059331610e+001 1.48082042495997430e+002
1.47832483263018390e+002 6.66702433196850990e+001 6.39302844294020640e+001 1.47839897419550880e+002 6.66704884989597420e+001 6.39261410991537600e+001 1.47960631408367760e+002
6.66667795629496710e+001 1.54087509455538480e+002 6.66722807507821640e+001 6.66722807507869390e+001 1.54087509455538510e+002 6.66667795629449240e+001 6.66604755062300280e+001 1.54207883485791650e+002
6.39261410991537460e+001 6.66704884989549670e+001 1.47839897419535730e+002 6.39302844294020640e+001 6.66702433196803530e+001 1.47832483263003300e+002 6.39182455133034890e+001 6.66604755062252820e+001 1.47960631408352640e+002
.END BLOCK-90
.END BLOCKS
.BEGIN OUTPUT
numOutNodes: 3
NodBa NodBb NodBc
recordEveryOther: 1
.END OUTPUT
.BEGIN LUMPED
n2a n3a current: no MOV: no R 6.50000e+000
n2b n3b current: no MOV: no R 6.50000e+000
n2c n3c current: no MOV: no R 6.50000e+000
n1a n2a current: no MOV: no L 3.45000e+002
n1b n2b current: no MOV: no L 3.45000e+002
n1c n2c current: no MOV: no L 3.45000e+002
n6a n8a current: no MOV: yes C 6.60000e+001 2.50000000e+005
n6b n8b current: no MOV: yes C 6.60000e+001 2.50000000e+005
n6c n8c current: no MOV: yes C 6.60000e+001 2.50000000e+005
n7a n8a current: no MOV: yes C 6.60000e+001 2.50000000e+005
n7b n8b current: no MOV: yes C 6.60000e+001 2.50000000e+005
n7c n8c current: no MOV: yes C 6.60000e+001 2.50000000e+005
NodBa n10a current: no MOV: no L 3.11100e+001
NodBb n10b current: no MOV: no L 3.11100e+001
NodBc n10c current: no MOV: no L 3.11100e+001
n10a GROUND current: no MOV: no R 5.60000e+001
n10b GROUND current: no MOV: no R 5.60000e+001
n10c GROUND current: no MOV: no R 5.60000e+001
.END LUMPED
.BEGIN LINES
.BEGIN LINE-0
phases: 6
MmLink: 0
delay: 1.4069e-003 8.6990e-004 8.5060e-004 8.4180e-004 8.4180e-004 8.4200e-004
Zc: 987.90 328.40 275.90 222.60 237.20 244.10
Q:
5.1146427e-001 -4.8902472e-001 -5.5148454e-001 2.3244529e-001 -4.1893693e-001 -2.8289203e-001
3.3293163e-001 1.3799150e-002 -3.9499073e-001 -4.8191931e-001 2.6900888e-001 5.8057410e-001
3.5712111e-001 5.1054438e-001 -1.6496383e-001 4.5421967e-001 4.8894408e-001 -2.8793145e-001
3.5712111e-001 5.1054438e-001 1.6496383e-001 -4.5421967e-001 -4.8894408e-001 -2.8793145e-001
3.3293163e-001 1.3799150e-002 3.9499073e-001 4.8191931e-001 -2.6900888e-001 5.8057410e-001
5.1146427e-001 -4.8902472e-001 5.5148454e-001 -2.3244529e-001 4.1893693e-001 -2.8289203e-001
g:
3.4058426e-003
-7.0983944e-004 3.4075382e-003
-3.0119034e-004 -7.3570212e-004 3.2957907e-003
-1.8216425e-004 -3.5033407e-004 -7.7090140e-004 3.2957907e-003
-3.3219722e-004 -4.2026982e-004 -3.5033407e-004 -7.3570212e-004 3.4075382e-003
-7.6411917e-004 -3.3219722e-004 -1.8216425e-004 -3.0119034e-004 -7.0983944e-004 3.4058426e-003
calcCurr: no no no no no no no no no no no no
nodes: n1a n4a n1b n4b n1c n4c n1a n5a n1b n5b n1c n5c
.END LINE-0
.BEGIN LINE-1
phases: 6
MmLink: 0
delay: 1.4069e-003 8.6990e-004 8.5060e-004 8.4180e-004 8.4180e-004 8.4200e-004
Zc: 987.90 328.40 275.90 222.60 237.20 244.10
Q:
5.1146427e-001 -4.8902472e-001 -5.5148454e-001 2.3244529e-001 -4.1893693e-001 -2.8289203e-001
3.3293163e-001 1.3799150e-002 -3.9499073e-001 -4.8191931e-001 2.6900888e-001 5.8057410e-001
3.5712111e-001 5.1054438e-001 -1.6496383e-001 4.5421967e-001 4.8894408e-001 -2.8793145e-001
3.5712111e-001 5.1054438e-001 1.6496383e-001 -4.5421967e-001 -4.8894408e-001 -2.8793145e-001
3.3293163e-001 1.3799150e-002 3.9499073e-001 4.8191931e-001 -2.6900888e-001 5.8057410e-001
5.1146427e-001 -4.8902472e-001 5.5148454e-001 -2.3244529e-001 4.1893693e-001 -2.8289203e-001
g:
3.4058426e-003
-7.0983944e-004 3.4075382e-003
-3.0119034e-004 -7.3570212e-004 3.2957907e-003
-1.8216425e-004 -3.5033407e-004 -7.7090140e-004 3.2957907e-003
-3.3219722e-004 -4.2026982e-004 -3.5033407e-004 -7.3570212e-004 3.4075382e-003
-7.6411917e-004 -3.3219722e-004 -1.8216425e-004 -3.0119034e-004 -7.0983944e-004 3.4058426e-003
calcCurr: no no no no no no no no no no no no
nodes: n4a n6a n4b n6b n4c n6c n5a n7a n5b n7b n5c n7c
.END LINE-1
.BEGIN LINE-2
phases: 3
MmLink: 0
delay: 4.7100e-004 3.4160e-004 3.3990e-004
Zc: 621.90 275.30 290.90
Q:
5.8702696e-001 -4.0302458e-001 7.0710678e-001
5.5427582e-001 8.2086139e-001 0.0000000e+000
5.8702696e-001 -4.0302458e-001 -7.0710678e-001
g:
2.8629197e-003
-6.7850268e-004 2.9415655e-003
-5.7468770e-004 -6.7850268e-004 2.8629197e-003
calcCurr: no no no no no no
nodes: n8a NodBa n8b NodBb n8c NodBc
.END LINE-2
.BEGIN LINE-3
phases: 3
MmLink: 1
delay: 4.7100e-004 3.4160e-004 3.3990e-004
Zc: 621.90 275.30 290.90
Q:
5.8702696e-001 -4.0302458e-001 7.0710678e-001
5.5427582e-001 8.2086139e-001 0.0000000e+000
5.8702696e-001 -4.0302458e-001 -7.0710678e-001
g:
2.8629197e-003
-6.7850268e-004 2.9415655e-003
-5.7468770e-004 -6.7850268e-004 2.8629197e-003
calcCurr: no no no no no no
nodes: NodBa GUNO NodBb GDOS NodBc GTRES
.END LINE-3
.END LINES
.BEGIN SWITCHES
total: 3
switch: 0 n5a GROUND initiallyClosed: no blk: 2 pos: numGaLink: 36 add: no gaLink: 25 add: no
switch: 1 n5b GROUND initiallyClosed: no blk: 2 pos: numGaLink: 38 add: no gaLink: 27 add: no
switch: 2 n5c GROUND initiallyClosed: no blk: 2 pos: numGaLink: 40 add: no gaLink: 29 add: no
events: 6
t: 3.00e+003 SW: 0 openOperation: no
t: 3.00e+003 SW: 1 openOperation: no
t: 3.00e+003 SW: 2 openOperation: no
t: 3.60e+003 SW: 0 openOperation: yes
t: 3.60e+003 SW: 2 openOperation: yes
t: 3.60e+003 SW: 1 openOperation: yes
.END SWITCHES
.BEGIN DACS
total: 3
.BEGIN T0
type: CCVT
port: 0
C1: 9.970000e-008 C2: 3.000000e-010 Cc: 1.300000e-010 Lc: 7.080000e-001 Rc: 6.280000e+002
Rf: 3.750000e+001 Lf: 7.300000e-001 Cf: 9.600000e-006 R: 1.000000e+002
ratioa: 5.000000e+003 ratiob: 1.100000e+002
.END T0
.BEGIN T1
type: CCVT
port: 1
C1: 9.970000e-008 C2: 3.000000e-010 Cc: 1.300000e-010 Lc: 7.080000e-001 Rc: 6.280000e+002
Rf: 3.750000e+001 Lf: 7.300000e-001 Cf: 9.600000e-006 R: 1.000000e+002
ratioa: 5.000000e+003 ratiob: 1.100000e+002
.END T1
.BEGIN T2
type: CCVT
port: 2
C1: 9.970000e-008 C2: 3.000000e-010 Cc: 1.300000e-010 Lc: 7.080000e-001 Rc: 6.280000e+002
Rf: 3.750000e+001 Lf: 7.300000e-001 Cf: 9.600000e-006 R: 1.000000e+002
ratioa: 5.000000e+003 ratiob: 1.100000e+002
.END T2
.END DACS
.END FILE
