UBC Theses and Dissertations

Interaction-Based Simulation. Lee, Gene S. 2002
Interaction-Based Simulation

by Gene S. Lee

M.Sc. (Computer Science), The University of British Columbia, 1994
Sc.B. (Computer Science and Engineering), Massachusetts Institute of Technology, 1989

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF Doctor of Philosophy in THE FACULTY OF GRADUATE STUDIES (Department of Computer Science)

We accept this thesis as conforming to the required standard.

THE UNIVERSITY OF BRITISH COLUMBIA
July 27, 2001
© Gene S. Lee, 2001

In presenting this thesis in partial fulfillment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the Head of my department or by his or her representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

Department of Computer Science
The University of British Columbia
2366 Main Mall
Vancouver, British Columbia
Canada V6T 1Z4

Abstract

This thesis presents a technique for producing visual simulations — programs that visualize the execution or behavior of time-varying scenes — from software interactions, first-class structures that moderate the flow of information among software components. The building of programs with software interactions gives rise to Interaction-Based Programming (IBP), a programming methodology that separates the concerns of computation from coordination. Software components compute independently, while software interactions coordinate communications. The integration of IBP with the methods of computer simulation produces Interaction-Based Simulation (IBS), an approach to visual simulation that binds the execution of software interactions to the advancement of time. As time advances, software interactions control the flow of information among components and the dissemination of temporal information. The utility of IBS is presented in two ways: one, through a logical description of the effect of software interactions on software development and systems created using the approach; and two, through empirical evidence demonstrating the ability of IBS to produce a wide variety of programs that encourage the reuse of software.

Contents

Abstract
Contents
List of Tables
List of Figures
Acknowledgements
Dedication

1 Introduction
1.1 Difficulties of Creating Visual Simulations
1.2 Overview of Existing Approaches
1.3 Shortcomings of Existing Approaches
1.3.1 Animating Methods
1.3.2 Complexity Management
1.3.3 Software Reuse
1.4 Thesis Statement
1.4.1 Interaction-Based Simulation
1.4.2 Benefits of Interaction-Based Simulation
1.5 Thesis Overview

2 Background
2.1 Computer Animation
2.1.1 Structural Overview
2.1.2 Animating Methods
2.1.3 Complexity Management
2.1.4 Software Reuse
2.2 Computer Simulation
2.2.1 Structural Overview
2.2.2 Simulation Methods
2.2.3 Complexity Management
2.2.4 Software Reuse
2.3 Software Communications

3 Illustrative Examples
3.1 Descriptions
3.1.1 General Example
3.1.2 Extended Example
3.1.3 Contextual Example
3.2 The RASP Toolkit
3.2.1 Application Development
3.2.2 Software Library

4 Interaction-Based Programming
4.1 Overview
4.1.1 Program Development
4.1.2 Programming Benefits
4.2 Activities for Production
4.2.1 Software Components
4.2.2 Software Interactions
4.2.3 General Example Revisited
4.3 Activities for Coordination
4.3.1 Scripts
4.3.2 Meta-Scripts
4.3.3 General Example Revisited
4.4 Program Execution
4.4.1 Run-Time Evaluator
4.4.2 Three Phases of Execution
4.4.3 Dynamic Topology
4.4.4 General Example Revisited

5 Interaction Constraints
5.1 Foundation
5.2 Virtual Ports
5.2.1 Backwards Propagation
5.2.2 Extended Example Revisited
5.3 Virtual Events
5.3.1 Executive Request vs. Logical Request
5.3.2 Utility
5.3.3 Extended Example Revisited

6 Multi-Modeling Methods
6.1 Forms of Software Interactions
6.1.1 Categories
6.1.2 Time-Varying
6.1.3 Extended Example Revisited
6.2 Scripts
6.2.1 Kinds of Scripts
6.2.2 Declarative
6.2.3 Functional
6.2.4 Extended Example Revisited
6.3 Software Composition
6.3.1 Aggregation
6.3.2 Assembly
6.3.3 Extended Example Revisited

7 General Specification
7.1 Model Graph
7.1.1 Parts
7.1.2 Evaluation
7.1.3 File Format
7.2 Action Graph
7.2.1 Parts
7.2.2 Evaluation
7.2.3 File Format
7.2.4 Extended Example Revisited
7.3 Time Graph
7.3.1 Parts
7.3.2 Evaluation
7.3.3 File Format
7.3.4 Contextual Example Revisited
7.4 Comparison to VRML and JAVA3D
7.4.1 Similarities
7.4.2 Differences

8 Reuse Guidelines
8.1 Interactional
8.1.1 Properties
8.1.2 Contextual Example Revisited
8.2 Behavioral
8.2.1 Properties
8.2.2 Contextual Example Revisited
8.3 Temporal
8.3.1 Properties
8.3.2 Contextual Example Revisited

9 Implementing Animating Methods: A Comparison
9.1 Overview
9.1.1 Implementation and Analysis
9.2 Key-Frame Animation
9.2.1 CORY's Implementation
9.2.2 MEL's Implementation
9.2.3 RASP's Implementation
9.2.4 Comparison
9.3 Procedural Animation
9.3.1 MAM/VRS's Implementation
9.3.2 FRAN's Implementation
9.3.3 RASP's Implementation
9.3.4 Comparison
9.4 Particle System Animation
9.4.1 ASAS's Implementation
9.4.2 PARTICLE API's Implementation
9.4.3 RASP's Implementation
9.4.4 Comparison
9.5 Summary of Evaluation

10 Implementing Domain Applications: A Comparison
10.1 Overview
10.2 Algorithmic Animation
10.2.1 POLKA's Implementation
10.2.2 OBLIQ-3D's Implementation
10.2.3 RASP's Implementation
10.2.4 Comparison
10.3 Scientific Visualization
10.3.1 VTK's Implementation
10.3.2 VRML's Implementation
10.3.3 RASP's Implementation
10.3.4 Comparison
10.4 Summary of Evaluation

11 Easing Application Development: An Assessment
11.1 Managing Complexity with Dynamic Topology
11.1.1 Run-time Configuration
11.1.2 Application to Visual Simulations
11.2 Managing Complexity with Multi-Modeling
11.2.1 Application to Visual Simulation
11.3 Supporting Reuse with the Guidelines
11.3.1 Production
11.3.2 Adaptation
11.3.3 Preliminary Experiences

12 Related Work
12.1 Interconnections in Software Engineering
12.1.1 Semantic Data Modeling
12.1.2 Module Interconnection Languages
12.1.3 Architecture Descriptions
12.1.4 Dynamic Software Architectures
12.1.5 Connectivity Modeling
12.2 Interconnections in Visual Simulation
12.2.1 Computer Graphics
12.2.2 Computer Simulation
12.3 Interconnections in Dataflow Architectures

13 Conclusion
13.1 Visual Simulation
13.2 Summary of the Thesis
13.2.1 Objective
13.2.2 Solution
13.2.3 Contributions
13.3 Impact
13.3.1 Visual Simulation
13.3.2 Software Development
13.4 Future Investigation
13.4.1 Drawbacks
13.4.2 Enhancements
13.4.3 User Studies

Bibliography

Appendix A Source Code Supplement
A.1 Animating Methods
A.1.1 Key-Frame Animation with CORY
A.1.2 Procedural Animation with MAM/VRS
A.1.3 Procedural Animation with FRAN
A.1.4 Particle System Animation with ASAS
A.1.5 Particle System Animation with RASP
A.2 Domain Applications
A.2.1 Algorithmic Animation with POLKA
A.2.2 Algorithmic Animation with OBLIQ-3D
A.2.3 Scientific Visualization with RASP

Appendix B EBNF Supplement
B.1 General
B.2 Model Graph
B.2.1 Nodes
B.2.2 Fields and Ports
B.3 Action Graph
B.3.1 Locals
B.3.2 Events
B.3.3 Virtuals
B.3.4 Activities
B.4 Time Graph

Glossary

Index

List of Tables

4.1 Two means of passing information from a and b
4.2 Common query types of first-class operations
4.3 Attributes of software interactions
4.4 Attributes of the software interactions in Figure 4.3
4.5 Attributes of software interactions
6.1 Various kinds of events
6.2 Various kinds of scripts
6.3 Description of process states
7.1 Kinds of model graph nodes
7.2 Kinds of time graph nodes
7.3 The effects of temporal relations on active and inactive scripts
11.1 Various simulations produced with RASP
11.2 Property violations of the original code for "stampede" and "mesh compress"

List of Figures

1.1 Three software components under the influence of two software interactions
1.2 The three layers of Interaction-Based Simulation
1.3 A software network arises from the arrangement of components with software interactions
1.4 The activities for coordination consist of scripts that determine the execution of a software network
1.5 As individual scripts execute, they produce topological changes to the interconnections amongst software components
1.6 An interaction constraint that relates the sum of compA and compB
1.7 An example of a multi-model simulation
1.8 The general specification partitions a visual simulation into three graphs
2.1 Four phases of simulation development
2.2 The methods of temporal management [169]
2.3 An example of a multi-model system
3.1 A software network of a simple animation
3.2 The software network of the extended example
3.3 An illustration of the contextual example
3.4 Four steps of application development with RASP
4.1 A typical software network
4.2 A component with five unidirectional ports
4.3 The general example revisited
4.4 The coding of the general example of Figure 3.1
4.5 A meta-script for animating the movements of a robot
4.6 The timeline of Figure 4.5
4.7 An interaction-based program
4.8 The three phases of program execution with IBP
4.9 A comparison between a program that changes topology and a program that consists of a single network
5.1 Exemplary definitions of three virtual ports
5.2 A usage of virtual ports that propagates backwards for values
5.3 The virtual ports of Figure 5.2
5.4 An implementation of the extended example that uses three virtual ports
5.5 The virtual ports of the extended example
5.6 An exemplary definition of two virtual ports
5.7 The virtual events of Figure 5.6
5.8 An implementation of the extended example that uses two virtual events
5.9 The virtual events of the extended example
6.1 The software interactions of the extended example
6.2 A MultiActivity with two threads
6.3 Exemplary definitions of group scripts
6.4 A script that moves a ball between three poles
6.5 The finite state automaton of Figure 6.4
6.6 A visualization of the script in Figure 6.4; the numbers adjacent to the software interactions correspond to the states of the FiniteStateAct of Figure 6.5
6.7 A script that moves information between SINK and SOURCE via QUEUE
6.8 The four phases of Figure 6.7
6.9 An implementation of the extended example that applies two scripts
6.10 The aggregation of models that forms a multi-model script
6.11 An assembly for producing a helical motion
6.12 The assembly of Figure 6.11
6.13 A script using the assembly of Figure 6.11 to move a ball towards a pole helically
6.14 A hierarchical assembly
6.15 An assembly formed from the script of the extended example
6.16 The assembly of Figure 6.15
6.17 A usage of the assembly for the extended example
6.18 Complete listing of code for the extended example
7.1 The three graphs of the general specification
7.2 The syntax of a node from the model graph
7.3 The syntax of an action graph
7.4 The syntax of the EVENTS of an action graph
7.5 The syntax of the VIRTUALS of an action graph
7.6 The syntax of the ACTIVITIES of an action graph
7.7 The action graph of Figure 4.4
7.8 The file format for the script of Figure 7.7
7.9 An exemplary time graph that uses temporal relations
7.10 The syntax of a time graph
7.11 A time graph of the extended example
7.12 The file format for the time graph of Figure 7.7
8.1 Interface methods for communicating information
8.2 A component that obeys PROPERTY B-1
8.3 An inconsistent component with racing activities
8.4 Explicit and implicit means for linking computations to time
8.5 A timeline involving non-linear temporal advancement
8.6 The use of interpolation to estimate the value of a fixed timestep integrator
8.7 Two means for extracting a value from a parametric curve
9.1 The camera adjusts its view as it follows the ball along a translating path
9.2 An implementation in CORY
9.3 The formulas applied by Figure 9.2 in computing the orientation of the camera
9.4 A partial implementation of the evaluators of Figure 9.2
9.5 An implementation in MEL
9.6 An implementation in RASP
9.7 The software network produced by the code of Figure 9.6
9.8 A robot moves procedurally between two pucks
9.9 An implementation in MAM/VRS
9.10 The method called upon by the callback function of Figure 9.9
9.11 An implementation in FRAN
9.12 A finite state automaton
9.13 An implementation in RASP
9.14 The software network produced by the code of Figure 9.13
9.15 A particle system avoids a moving pole
9.16 An implementation in ASAS
9.17 An implementation with the PARTICLE API
9.18 An implementation in RASP
9.19 A partial implementation of the preamble to the code of Figure 9.18
9.20 The software network produced by the code of Figure 9.18
10.1 A visualization of selection sort
10.2 An implementation in POLKA
10.3 An implementation in POLKA
10.4 An implementation in POLKA
10.5 An implementation in OBLIQ-3D
10.6 An implementation in OBLIQ-3D
10.7 An implementation in RASP
10.8 An implementation in RASP
10.9 The software network produced by the code of Figure 10.8
10.10 A scene exemplifying scientific visualization
10.11 An implementation in VTK
10.12 The dataflow network produced by the code of Figure 10.11
10.13 An implementation in VTK
10.14 An implementation in VRML
10.15 An implementation in VRML
10.16 An implementation in VRML
10.17 The network produced by the routes of Figure 10.16
10.18 An implementation in RASP
10.19 An implementation in RASP
10.20 The software network produced by the code of Figure 10.19
11.1 The coding of a conventional program having fixed topology
11.2 The networks produced by the code of Figure 11.1
11.3 Pseudo-code that adapts the code of Figure 11.1 to have dynamic topology
11.4 The timeline and active networks of a program undergoing dynamic topology
11.5 The partitioning of a software network into several smaller ones
11.6 A multi-model scene
11.7 Interactions for moving the rocket and having the rocket flash ten times
11.8 The script of Figure 9.13 augmented with the interactions of Figure 11.7
11.9 An update to the script of Figure 11.6a that repeatedly executes two sets of interactions concurrently
11.10 Samples of applications built with RASP
11.11 Adapted applications with RASP
13.1 The three layers of Interaction-Based Simulation
A.1 An implementation in CORY
A.2 An implementation in CORY
A.3 An implementation in MAM/VRS
A.4 An implementation in FRAN
A.5 An implementation in ASAS
A.6 An implementation in RASP
A.7 An implementation in POLKA
A.8 An implementation in OBLIQ-3D
A.9 An implementation in RASP

Acknowledgements

The ideas and words of this thesis are my own, but none of this thesis would have been possible without the help and support of a great many individuals. To these people, I owe much thanks.

First and foremost, I thank my advisor, Gail Murphy. She is a great teacher and software engineer: dedicated, hard-working, yet compassionate and kind. Even while she labored with newborn twins, she continually challenged me to improve the quality of my work, the clarity of my thoughts, and the presentation of my ideas. I am thrilled to have met Gail, and it is an honor to have her as my mentor and friend.

I thank my supervisory committee (Dinesh Pai and Alan Mackworth of UBC, and David Forsey of Radical Entertainment) and my external examiners (Mark Greenstreet and Dale Cherchas of UBC, and Giancarlo Succi of the University of Alberta) for reading this thesis and providing many useful comments on content and presentation. I am especially grateful to Dave Forsey for sparking my interest in the subject matter and for serving as my first advisor at UBC.

I also thank the past and present graduate students and faculty of the Imager Laboratory of Computer Graphics and the Software Engineering Research Group. With these people, I spent countless numbers of days in conversation, laughter, and bewilderment. I owe special thanks to Robert Walker, a co-member of both labs, for taking interest in my work and commenting on my ideas.

I am grateful to the faculty and staff of the Department of Computer Science and of CICSR for providing me with financial support and granting me the opportunity to teach. One learns much as one attempts to mold the minds of the many. I am especially grateful to Kirsty Barclay, the CICSR reader, for proof-reading my thesis and providing helpful comments on grammar and presentation.
I am grateful to my close friends: Jason Smolensky, Jeff Sember, Danae Slater, Torre Zuk, Gretchen Greer, and Dave Morales. Over the years, they have been wonderful friends and a constant source of support. Many good meals were eaten together at The Great Wall, Mongolie BBQ.

I thank my sister, Gia Lee, for her kindness and her wisdom. Her endless words of encouragement and her commitment to helping me finish gave me the strength to complete my studies. I thank her for proof-reading my writing — a seemingly endless stream of text — and for treating me to a wonderful vacation to Europe — a timely reward for the completion of this thesis.

Finally, I give my greatest thanks to my parents, Mansuk and Kyungja Lee, for their patience and moral guidance. Despite the years and a great physical distance, I always felt their presence and thoughts at my side and in my heart.

GENE S. LEE
The University of British Columbia
July 2001

To my parents and to the memories of my grandparents

Chapter 1

Introduction

The number and variety of applications employing techniques from computer graphics are growing rapidly. Traditional fields, such as computer animation and scientific visualization, and emerging fields, such as simulation visualization [38, 123, 157] and virtual reality [62], rely upon computer graphics to visualize the properties of dynamic systems. All of these fields produce visual simulations, computer simulations with visual properties. More specifically, visual simulations are computer simulations that visualize (in two or three dimensions) the experimental results of a time-varying system, a system that employs numerical modeling methods, either deterministic or stochastic, to update its state to the advancement of simulation time. Visual simulation is useful in that it serves as an effective medium for expressing scientific results, studying the behaviors of complex systems, and simulating the appearance of virtual worlds.

To meet the growing demand for visual simulation, academics and professionals are continually inventing new approaches to visual simulation so as to simplify their development and improve their performance. These efforts have produced faster hardware, better development tools, and superior design methods. Faster hardware facilitates visual simulations of greater complexity, increasing speeds, and enhanced visual displays. For example, REALITY ENGINE [4], a high-performance workstation, is an effective device for producing an immersive virtual environment. Better development tools improve the quality and accuracy of a visual simulation while decreasing developmental costs. Exemplary tools include VRML [159] and JAVA3D [33], both of which streamline the development of interactive applications. Superior design methods support the reuse of software, developed as components or scripts, and help manage software complexity. For instance, nodekits [145], abstractions that encapsulate an ordering of nodes, are useful for reusing code and controlling its complexity so as to produce visual simulations of moderate scale and size. The nodekits are applied like templates to re-create useful motions and geometries.

1.1 Difficulties of Creating Visual Simulations

Despite the emerging popularity of visual simulations and the numerous advances in software and hardware supporting their development, the production of a visual simulation remains a difficult endeavor, requiring much time and effort.
From its inception to its finish, a visual simulation may require many hours, days, or even years of work to produce. The costs of producing visual simulations, such as the animations of Toy Story [30], or of buying development packages for visual simulation, such as Alias|wavefront's MAYA [6], are high due in part to the time and expertise required to implement them.

Visual simulations are difficult to implement, especially with existing technologies, for three key reasons. First, visual simulations vary significantly in design and application. There is no single animating method that effectively models the complexity of all visual simulations. For instance, some simulations, such as those that visualize the execution of an algorithm, are driven with high-level expressions, while others, such as those that animate the movements of an articulated figure, apply low-level instructions to set state changes explicitly over time.

Second, the development of visual simulations involves the use of advanced programming concepts, which include the meticulous advancement of simulation time, the execution of parallel behaviors, and the continuous update of state information. Simulation time synchronizes the global behavior of a system. It determines when a system changes state and in what order a succession of state changes occurs. Parallel behaviors enact simultaneous changes to a system's state with the advancement of simulation time. They induce concurrent computations and concurrent dataflow. State information determines the future behavior of a system. It requires continuous update to evolve properly and to produce accurate results.

Third, visual simulations are unlike traditional simulations, which develop systems that rely heavily upon their initial conditions and the occurrence of random events. Visual simulations are unusual in that they frequently apply controls and application frameworks to guide specifically the evolution of state values over time. The results of visual simulations are commonly known before they begin. In some cases, such as in the positioning of articulated figures [155], the aim of a visual simulation is to recover the proper initial conditions from a given result. For visual simulation, the methods of traditional simulation are useful in introducing abstractions and validating mathematical models that stress computation, not control or aesthetics.

1.2 Overview of Existing Approaches

The tools of visual simulation vary widely, as visual simulations are difficult to produce and the predominant tools for and approaches to building them rely upon conventional programming languages and design methodologies. Popular tools, such as OPENGL [105] and JAVA3D [33], focus their efforts on defining and manipulating the shape and appearance of geometric entities. They facilitate visualization but provide few means for describing and coordinating the time-varying behaviors of the entities.

Tools that animate, such as TBAG [42] and OBLIQ-3D [104], tend to fix the architecture of a visual simulation to a single application framework. From the start of a visual simulation to its end, one framework decides how animation occurs and what techniques may be applied to animate state. For instance, the application framework of ASAS [127] supports processes, in contrast with the works of Fiume [47] and Kalra [74], which support temporal relations [7].
Each of these abstractions helps manage the behavior and performance of visual simulations at a high level of abstraction. Other animating tools, such as GROOP [84] and UGA [172], provide a less restrictive approach to visual simulation by expanding their capabilities with special mechanisms, such as hooks and callbacks. These mechanisms permit developers of visual simulations to introduce new code, such as new animating methods. The mechanisms augment the functionality of the tools and vary the means for updating state. In GROOP, for instance, user-defined objects animate by inheriting functions that access and increment a global clock.

Be it for modeling or animation, the predominant approaches to visual simulation, such as those offered by INVENTOR [145] and ALICE [114], focus on producing prototypes. They neglect the concerns of software reuse and problems of growing complexity in favor of simplifying the rapid development and testing of new algorithms and interactive techniques. They provide abstractions that are often optimally defined for a single purpose. For instance, tools such as SWAMP [16] and CORY [56] animate state with only keyframing, an explicit mapping between time and state, while others, such as ASAS [127] and OBLIQ-3D [104], animate state primarily with animated types, special variables that change state over time.

1.3 Shortcomings of Existing Approaches

Coding visual simulations with existing approaches is marked by three major shortcomings. First, existing approaches generally do not allow for the use and integration of several animating methods. The intricacies of visual simulation compound when multiple animating methods are made available to animate state. Hence, it is not uncommon for the tools of visual simulation to support only a few animating methods. The tools limit developers from selecting the animating methods that work best for their visual simulations.

Second, existing approaches do not provide for the effective management of software complexity. Too often, the concerns of rapid prototyping overshadow the concerns of building manageable software. Practical approaches, such as abstraction building and architectural design, are either missing or difficult to apply. The developers of visual simulations are left to handle complexity by their own means, leading to inconsistency among applications.

Third, existing approaches do not support the benefits of software reuse. Building reusable software requires methodological planning; current approaches to visual simulation do not plan for reuse. Without an effective plan for reuse, visual simulations are difficult to decompose into reusable parts for sharing and improving the design of the simulations.

1.3.1 Animating Methods

Not every visual simulation animates well with the same animating method. For instance, one kind of visual simulation may animate well with keyframing, the pairing of key states with simulation time, while another is better suited to constraints, the building of dependencies between states. Keyframing fixes the state of animation from beginning to end, while constraints adapt the state of animation to run-time events, such as the collision of moving objects. Other kinds of visual simulations may best operate by combining the two approaches and applying them simultaneously. The two methods may act independently upon different scenes, or both may work together to produce cumulative effects.
Hence, it is beneficial for developers to be able to select and apply the methods that best match the needs of individual scenes. For instance, for an animation involving a camera that views a ball moving along a path, the camera's orientation is best set by placing constraints between it and the ball, while the ball's movements are best performed with keyframing (a sketch at the end of this section makes the contrast concrete). Since general tools to mix animating methods are unavailable, developers exercise great care when using the tools of visual simulation, as noted by Green [62]. They carefully scrutinize their options and apply the tool (or tools) that best matches their needs. Once an appropriate tool is selected, developers produce visual simulations either by applying the tool and its animating methods directly to the system undergoing development, or by adapting the system so as to accommodate the tool. Both approaches produce adequate results, but not without drawbacks. The former approach leads to inefficiencies and errors when the tool and its animating methods are not applied effectively. For example, keyframing is an inappropriate method for animating a liquid. As noted by Kass and Miller [77], the movements of a liquid are non-linear and irreproducible with keyframes alone. The number of keyframes required to produce reasonable results is excessive and tedious to apply. The latter approach, adapting a system to accommodate the tool that builds it, frequently diminishes the accuracy of a visual simulation. When a system accommodates an external influence, such as the tool, it may adapt in a way that its design becomes overly complicated or unduly simplified. As suggested by many simulationists [169], the potential for error in such systems is large enough to throw into question the system's validity and to challenge the system's results. For example, if a liquid were animated as a few spheres to accommodate the use of fewer keyframes, its motion would probably be poor and suspect to misinterpretation.

Given such drawbacks, it is not uncommon for developers to forgo the tools of visual simulation for tools that create only shape and appearance, such as OPENGL [105] and QUICKDRAW-3D [11]. The developers animate their visual simulation by writing code specific to the needs of individual applications. Each application applies the animating method that seems best. That option provides developers with great flexibility, but its usefulness is limited. The coding process revolves around the development of individual applications: each application is written optimally to satisfy a single task. This approach hinders software reuse, or hinders the application of a new or alternative animating method. These drawbacks result in higher costs and longer development times.

In addition, computer simulation generally applies only one of several modeling techniques, such as those involving processes [90, 130], to build a complete system. That limitation, also characteristic of the many tools of visual simulation [84, 104, 42], precludes the production of systems that intermix multiple modeling techniques. Recent work in multi-modeling [46], the building of simulations with multiple modeling methods, indicates that intermixing is possible and a great benefit to simulation design. For visual simulations, the multi-modeling approach demonstrates the utility of intermixing the many methods for animating state and the many means for synchronizing parallel behaviors.
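To make the contrast concrete, the sketch below pairs the two methods for the camera-and-ball scene described earlier in this section: the ball's position is keyframed, while the camera's gaze is constrained to track the ball. The sketch is a minimal illustration written for this discussion, independent of the tools surveyed above; every type and name in it (KeyframeTrack, LookAtConstraint, and so on) is hypothetical.

    #include <cmath>
    #include <iterator>
    #include <map>

    struct Vec3 { double x, y, z; };

    // Keyframing: an explicit mapping from time to state, here linearly
    // interpolated between the nearest pair of keys.
    struct KeyframeTrack {
        std::map<double, Vec3> keys;                 // time -> position
        Vec3 sample(double t) const {
            auto hi = keys.lower_bound(t);
            if (hi == keys.end()) return std::prev(hi)->second;
            if (hi == keys.begin()) return hi->second;
            auto lo = std::prev(hi);
            double u = (t - lo->first) / (hi->first - lo->first);
            return { lo->second.x + u * (hi->second.x - lo->second.x),
                     lo->second.y + u * (hi->second.y - lo->second.y),
                     lo->second.z + u * (hi->second.z - lo->second.z) };
        }
    };

    // A constraint: a dependency between states, re-evaluated whenever
    // the scene changes, rather than a fixed schedule of values.
    struct LookAtConstraint {
        const Vec3* eye;                             // camera position
        const Vec3* target;                          // object being watched
        Vec3 gaze() const {                          // unit eye-to-target
            Vec3 d{ target->x - eye->x, target->y - eye->y, target->z - eye->z };
            double len = std::sqrt(d.x * d.x + d.y * d.y + d.z * d.z);
            return { d.x / len, d.y / len, d.z / len };
        }
    };

    int main() {
        KeyframeTrack path;                          // the ball is keyframed
        path.keys = { {0.0, {0, 0, 0}}, {2.0, {4, 1, 0}}, {5.0, {8, 0, 3}} };

        Vec3 ball{};
        Vec3 cameraPos{0, 5, -10};
        LookAtConstraint follow{&cameraPos, &ball};  // the camera is constrained

        for (double t = 0.0; t <= 5.0; t += 1.0) {
            ball = path.sample(t);    // keyframing fixes state from time
            Vec3 g = follow.gaze();   // the constraint adapts to that state
            (void)g;                  // a renderer would consume both here
        }
    }

Keyframing the ball's path with three keys is trivial; keyframing the camera's orientation to match would require a key for every frame, which is precisely the tedium that motivates mixing the two methods.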
1.3.2 Complexity Management

With current approaches, as a visual simulation grows in size, it becomes more difficult to manage its complexity and make changes to its design. Much of the research in software engineering oriented towards managing complexity, such as framework design [20] and design patterns [51], has not yet been applied directly to the problems of visual simulation, in part because current approaches favor the building of prototypes. For instance, the procedural callbacks of DORE [85] and the animated types of MIRA [94] do not enable developers to structure a visual simulation into representative parts and then determine how the parts assemble. Developers are left with the tasks of managing the complexity of code produced in these approaches and of ensuring that the approach they take is applied consistently. Unfortunately, consistency is difficult to maintain without a proper understanding of the growing complexities of a visual simulation and of how the complexities are handled. An approach applied inconsistently produces visual simulations that are difficult to build and troublesome to integrate. For example, the integration of MAM/VRS [36] with ANIGRAPH [24] does not occur easily although each applies a "graph-based" approach to temporal sequencing. Operating dissimilarly, the graphs of MAM sequence the execution of behaviors while the graphs of ANIGRAPH sequence the continuity of dataflow.

A further problem that may arise when developers create their own approaches to handle complexity is architectural mismatch, the use of software in contexts that mismatch those intended by its creators [53]. In basing the structure and assembly of their programs on mismatched assumptions, developers, for example, may use incompatible control loops or may combine inappropriate parts. Architectural mismatch can produce code susceptible to error and sensitive to design changes. Errors result, for instance, if the low-level primitives of CLOCKWORKS [56] are altered by controls that neglect to communicate their changes to CLOCKWORKS's high-level abstractions, "cues" and "scenes." Unaware of any changes caused by the controls, the high-level abstractions use stale information to compute faulty results. Design changes, such as those changing the timing of visual simulations, cause components to compute erroneously if the components relate their computations to a particular time or timestep.

To avoid the difficulties of managing complexity, some developers may choose either to resolve such problems as they occur, or to devise separate approaches for each visual simulation. Although both options can lead to quick results, each provides only short-term gains. The first option manages the overall complexity of a visual simulation with only local solutions, while the second supports inconsistencies in the development of visual simulation. With regard to the former, local solutions remedy a small problem without consideration of their global effects on other visual simulations. Thus, this approach does not take into account how a visual simulation is formed from parts or how the parts cooperate to perform specific tasks, limiting a developer's ability to secure long-term gains. For example, the application of inheritance and templates to animation languages, such as CHARLI [29] and TBAG [42], avoids the replication of code but provides no assistance in assembling the overall design of a visual simulation. As noted by the proponents of framework design [72, 119, 20], a proper understanding of a program and its assembly is essential to manage the complexity
As noted by the proponents of framework design [72, 119, 20], a proper understanding of a program and its assembly is essential to manage the complexity  CHAPTER  1.  INTRODUCTION  7  of a system and to understand its design. With regard to the latter option of devising separates approaches for each visual simulation, the inconsistencies that result seriously hamper the effective development of complex visual simulations. The parts of a system and their assembly ultimately determine what a visual simulation computes and how animation occurs. Each part performs an important role in determining what a visual simulation can do easily and what it cannot dd without difficulty.  If the design and use of parts are inconsistent or varying across  applications, developers are unable to apply changes to visual simulations readily, especially changes concerning who is animating, what animation is occurring, and when the animation performs. A varying use of parts, moreover, requires practitioners to learn the variants and to recall which parts of each visual simulation implement its variations. Ultimately, that requirement diminishes the efficacy of software development and amplifies the difficulties of software complexity. For example, inheritance differs greatly from delegation when applied to manage the complexities of visual simulation. Inheritance, as applied by M A M [36], encapsulates behaviors within separate classes while delegation, as applied dynamically by U G A [172],  1.3.3  introduces behaviors to class instances with time-varying messages.  Software Reuse  The reuse of a visual simulation or its parts is not an easy task. Few tools of visual simulation directly apply the popular approaches of software reuse, such as component software [1] or domain engineering [106], to the reuse of a visual simulation and its parts. In preparing the development of prototypes, most tools ignore the concerns of generality and adaptability, two important forms of software reuse [59]. The few that do, such as V R M L [159] and O P E N G L [105],  are effective only in developing and reusing the parts of a visual simulation  that form and render geometric shapes. They provide developers with few means to reuse the parts of a visual simulation that animate. Developers of visual simulation must decide independently how to separate their animation into parts and to re-apply them elsewhere. Unfortunately, experience [163, 19, 148] shows that such an approach to software reuse rarely succeeds. Effective reuse requires the use of an explicit agenda for software reuse and a systematic reuse-directed software process [71]. Lacking tools to support reuse, developers encounter many difficulties in attempting to separate their visual simulations into reusable parts. Some tools and approaches to visual simulation establish bindings that are useful only to the needs of specific applications. The bindings fix the order and times in which the many parts of a program communicate. In doing so, the bindings complicate the design of the parts and hamper the general applicability  CHAPTER 1. INTRODUCTION  8  of the parts to a wider variety of applications. To reuse parts effectively, developers must extract or alter the bindings so as to remove unwanted communications. Developers do so, for example, with As AS  [127]  and P O L K A  [143].  Each tool contains numerous elements  that fix the communications between their parts. 
Some other tools and approaches of visual simulation tightly integrate the statements that control the interaction and behavior of their parts. The separation of these statements is difficult and frequently requires the parts to undergo design changes when applied to new applications. The approaches of FRAN [41] and UGA [172], for example, intertwine the parts of visual simulations that update state with those that manage time.

Beyond the difficulty of separating a visual simulation into its representative parts, problems also arise because the parts of a visual simulation rarely accommodate reuse in alternative contexts. The contexts in which the parts operate may vary widely from one simulation to another. In one context, the interactions of the parts may vary, while in another, the parts' operation may change over time. Not designed for reuse, the parts rarely adapt well to evolutionary changes, such as the application of animation techniques with simulation modeling. Animation techniques, such as keyframes [16, 56] and animated types [127, 104], form diverse mappings between time and state. Simulation modeling, including functional modeling [136, 73] and constraint methods [117], can apply such mappings to build complex systems. Further troubles arise if the parts rely heavily upon one specific context. For example, the parts of a visual simulation that require constant updates over time, such as those that increment a positional value by a constant for a set interval of time, commonly falter when integrated into an environment that updates time unevenly. The parts incorrectly perceive the passage of time as constant and thereby produce inconsistent results, as exemplified by the advancement of the positional value with inaccurate increments (a sketch at the end of this section illustrates the hazard).

In light of the difficulties of applying software reuse and the troubles of securing reuse from tools that primarily support prototype development, it is not uncommon for developers to overlook the process of engaging in software reuse. Instead, they opt to produce, or assemble, their visual simulations by coding each application anew. Avoiding the obstacles of software reuse, they accept the difficulties accompanying the development of new code, such as design flaws and coding errors.
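The timing hazard noted above is easy to state in code. The sketch below, a hypothetical illustration rather than an excerpt from any tool discussed in this chapter, contrasts a part that silently assumes a constant timestep with one that derives its increment from the time actually elapsed.

    #include <cstdio>

    // A fragile part: it assumes time advances by a fixed step, so it adds
    // a constant increment per update. The assumption is invisible in its
    // interface and fails silently when time advances unevenly.
    struct FixedStepMover {
        double position = 0.0;
        void update() { position += 0.1; }   // hidden assumption: dt == 0.1
    };

    // A robust part: it derives its increment from the time that actually
    // elapsed, so any environment may advance the clock however it likes.
    struct ElapsedTimeMover {
        double position = 0.0;
        double velocity = 1.0;               // units per second
        void update(double now, double before) {
            position += velocity * (now - before);
        }
    };

    int main() {
        FixedStepMover fragile;
        ElapsedTimeMover robust;
        const double times[] = {0.0, 0.1, 0.35, 0.4, 1.0};   // uneven updates
        for (int i = 1; i < 5; ++i) {
            fragile.update();                 // pretends 0.1s passed each time
            robust.update(times[i], times[i - 1]);
        }
        // One second of simulated time has elapsed in total:
        std::printf("fragile: %.2f  robust: %.2f\n",
                    fragile.position, robust.position);   // 0.40 vs 1.00
    }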
To demonstrate the utility of software interactions for building visual simulations, this thesis introduces interaction-based simulation (IBS), a new approach to developing time-varying systems. An implementation of IBS is presented in the form of the RASP  toolkit, a software library of stylized C++.  I Software f ~ i Software (~~ Component ~~J~ 1 h •*^}Component~ -1 .-f •*•[] Component I Interaction I I Interaction L~I . L  Figure 1.1: Three software components under the influence of two software interactions  1.4.1  Interaction-Based S i m u l a t i o n  As shown in Figure 1.2, IBS consists of five basic parts, which are organized in three layers. Layer one forms the basis of IBS with interaction-based programming (IBP), a new approach to program development that works with software interactions. Layer two augments the first layer with interaction constraints, multi-modeling methods, and a general specificationTogether, the three parts apply IBP to building visual simulations. Interaction constraints and multi-modeling methods apply the methods of constraint technology [132] and computer simulation to the application of software interactions, while the general specification partitions visual simulations into the product of three graphs: the first graph for assembling parts, the second for organizing software interactions, and the third for deciding time-varying behaviors. Layer three provides structure to the application of the first two layers. It presents reuse guidelines, software properties for preparing reusable software.  CHAPTER  1.  INTRODUCTION  Reuse Guidelines  Level 3 Level 2  10  Chapter S  Interaction Constraints  Multi-Modeling Methods  General Specification  Chapter 6  Chapter 7  Chapter 5  Level 1  INTERACTION-BASED SIMULATION  Interaction-Based Programming Chapter 4  Figure 1.2: The three layers of Interaction-Based Simulation Interaction-Based Programming IBP is an approach to programming that supports the building and reuse of software by parts. An IBP program is made up of two parts, activities for production and activities for coordination, which are developed separately. The former activities identify a program by its parts; the latter manage a program by its communications. The two activities act collectively to produce a program that continually mediates the flow of information among its parts. The activities for production consist of software components, which establish state and transform input to output, and software interactions, which identify the communications among components.  As shown in Figure 1.3, the two constituents establish a software  network that defines the topology of the program and decides its execution. When the network operates, by way of the activities for coordination and a run-time evaluator (RTE), the software interactions force the software components to communicate their state and to react to their communications.  Component  Figure 1.3: A software network arises from the arrangement of components with software interactions The activities for coordination produce and manage the execution of software interactions. As shown in Figure 1.4, they consist of two kinds of abstractions, scripts and meta-scripts. Scripts configure an arrangement and ordering of software interactions so as  CHAPTER  1.  INTRODUCTION  11  to produce a specific sequence of communications. These communications decide what a program computes and ultimately, how it animates its state. 
Meta-scripts decide how the RTE executes individual scripts or how the RTE switches its focus from one meta-script to the next. They present rules, statements, and conditions for producing a program as a sequence of scripts for controlling the use of software interactions. Meta-scripts support animation by relating the execution of multiple scripts to the value of simulation time. As time flows, the meta-scripts indicate which scripts execute in sequence and which scripts execute in parallel. For example, the meta-script of Figure 1.4 indicates that script X precedes the parallel executions of scripts Y and Z.

Figure 1.4: The activities for coordination consist of scripts that determine the execution of a software network. (Its meta-script reads: at T=100, ScriptX; at T=200, ScriptY + ScriptZ; at T=300, ScriptZ.)

As the RTE sequences through several scripts, the topology of a program changes. As shown in Figure 1.5, the progress of simulation time updates the configuration of a visual simulation to include new connections or new components. Unlike the process in conventional programming approaches, the couplings between components in an IBP program are neither fixed nor always deterministic. The program may dynamically switch from one software network to another, permitting a visual simulation to vary radically its behavior and its use of animating methods with the passage of time.

Figure 1.5: As individual scripts execute, they produce topological changes to the interconnections amongst software components.

Interaction Constraints

Interaction constraints apply constraint technology to the application and use of software interactions. Constraint technologies, such as unidirectional constraints with forward propagation, coordinate complex communications with equations of interactions. The equations, established with mathematical operators and primitive functions, determine how software interactions relate and disseminate information. For example, the interaction constraint in Figure 1.6 establishes VP, a virtual port that permits interactionB to communicate the sum of compA and compB. Each time interactionB requires a value, VP applies the operator + to the two components.

Figure 1.6: An interaction constraint that relates the sum of compA and compB. (The script declares the constraint VP: compA + compB and the interaction interactionB: VP to compC.)
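A hypothetical rendering of the constraint in Figure 1.6 appears below: a virtual port, built with the operator +, that draws fresh values from the ports it relates each time it is read. The Port alias and the operator overload are assumptions made for this illustration, not RASP syntax.

    #include <cstdio>
    #include <functional>

    // A virtual port looks to an interaction like any other output port,
    // but its value is an expression over other ports, evaluated on demand.
    using Port = std::function<double()>;

    // The constraint operator: relates two ports into one virtual port.
    Port operator+(Port lhs, Port rhs) {
        return [lhs, rhs] { return lhs() + rhs(); };
    }

    int main() {
        double compA = 2.0, compB = 5.0;
        Port portA = [&] { return compA; };
        Port portB = [&] { return compB; };

        Port vp = portA + portB;       // VP: compA + compB, as in Figure 1.6

        // interactionB would read from vp exactly as from a plain port;
        // each request re-applies + to the components' current values.
        std::printf("%.1f\n", vp());   // 7.0
        compA = 10.0;
        std::printf("%.1f\n", vp());   // 15.0: the constraint tracks changes
    }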
Multi-Modeling Methods

Multi-modeling methods support the development and reuse of multi-model simulation, a simulation composed of multifarious subsystems. Each subsystem employs a design method for organizing software interactions that best matches the problem it models. As noted by Fishwick [46], the subsystems collaborate to produce a simulation of greater accuracy and simpler design. The design of a simulation and its mapping of variables to animating methods are chosen by the developers, not by the tools the developers employ. The multi-model simulation of Figure 1.7, for example, combines several design methodologies into one system. The primary design method, "functional," partitions the simulation into five subsystems, with data flowing sequentially from one subsystem to the next. Each subsystem, in turn, applies a different method for mediating the software interaction of various animating methods. Subsystems Q and S, for instance, apply the methods "declarative" and "constraint" respectively.

Figure 1.7: An example of a multi-model simulation (five subsystems joined functionally; subsystem Q applies the declarative method and subsystem S the constraint method)

General Specification

The general specification establishes an extensible means for describing the contents and behaviors of a visual simulation. Supporting the development of software by composition, it partitions a visual simulation into three graphs: scene, action, and time. The scene graph assembles software components to define geometric shapes with physical attributes; the action graph applies interaction constraints and multi-modeling methods to produce scripts; and the time graph uses temporal operators to produce meta-scripts. The temporal operators determine when the scripts of the action graph communicate information to and from the shapes of the scene graph. As shown in Figure 1.8, the three graphs establish a logical approach to constructing a visual simulation incrementally. Developers simply add (and remove) nodes from the graphs to vary or change a program's complexity or behavior.

Figure 1.8: The general specification partitions a visual simulation into three graphs: (a) the scene graph, (b) the action graph, and (c) the time graph
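The sketch below suggests the division of labor among the three graphs for a toy scene. The node types are invented for the illustration; Chapter 7 defines the actual syntax of the general specification.

    #include <functional>
    #include <string>
    #include <vector>

    // Invented stand-ins for the three kinds of nodes: shapes for the scene
    // graph, scripts of interactions for the action graph, and timed entries
    // (a crude temporal operator) for the time graph.
    struct Shape  { std::string name; double value = 0.0; };
    struct Script {
        std::vector<std::function<void()>> interactions;
        void run() { for (auto& i : interactions) i(); }
    };
    struct Timed  { double start, stop; Script* script; };

    int main() {
        // Scene graph: components defining shapes with attributes.
        Shape ball{"ball", 0.0}, pole{"pole", 8.0};

        // Action graph: a script whose interaction couples the two shapes.
        Script chase;
        chase.interactions.push_back(
            [&] { ball.value += 0.25 * (pole.value - ball.value); });

        // Time graph: decides when the script communicates with the scene.
        std::vector<Timed> timeline = { {0.0, 2.0, &chase} };

        // Adding or removing nodes in any one graph changes the program
        // without rewriting the other two.
        for (double t = 0.0; t <= 3.0; t += 0.5)
            for (Timed& e : timeline)
                if (t >= e.start && t <= e.stop)
                    e.script->run();
    }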
Reuse Guidelines

Reuse guidelines improve the reusability of software components involved with software interactions. The guidelines are divided into three sets of properties: interactional, behavioral, and temporal. The properties determine how software components communicate information, complete computational tasks, and behave over time. The guidelines ensure that the use of software components is consistent with the foundations of IBP: software components should interact freely with software interactions and with the environments that software interactions create. When software components operate consistently, they are simpler to apply in varying contexts, especially those that change their communications over time.

1.4.2 Benefits of Interaction-Based Simulation

The benefits of IBS are four-fold: it provides developers with a programming approach that eases the production of visual simulations, a means to integrate multiple animating methods, a basis to manage complexity, and a means to apply software reuse.

Programming Approach

IBS eases the task of programming time-varying systems. Its design separates the specification of communications from its execution. With the separation, developers can update and abstract the communications of a program without preparing controls for its execution. The execution of a program varies according to its specification, not to the ordering or sequencing of function calls or of operational controls supporting its specification. Hence, the specification of an IBS program contains no elements, such as semaphores, that apply to the execution of communications, nor reference statements, such as co-routines, that cause its underlying program to switch between multiple contexts. The absence of these types of commands, which unnecessarily obfuscate the meaning and application of a program's communications, is a significant benefit of IBS.

Multiple Animating Methods

IBS eases the application and integration of multiple animating methods. Activities of coordination structure a program as a collection of software networks, each reproducing the communications of a particular animating method. Animation occurs as the software networks communicate the advancement of simulation time to their components. The animation varies its methods as software networks vary their communications and act in parallel. The use of multi-modeling methods allows for practical means to vary and to integrate the animating methods of several software networks. Applied with controls for relating the networks over time, the multi-modeling methods either embed the networks hierarchically or link the networks in sequence.

Complexity Management

IBS introduces three means to ease the task of managing the complexity of visual simulations. First, it partitions a program into two parts. One part establishes computational activities while the other organizes communicational activities. The two parts act independently, each providing an alternative means for managing software complexity. Second, IBS supports the use of interaction constraints so as to simplify the application and execution of software networks. The interaction constraints reduce the number and complexity of software interactions in order to clearly represent complex communications. Third, IBS encourages the development of a general specification for visual simulation. Application development involves the building of software via software composition, the building of code with abstractions that generalize objects, agents, and functions. Using the general specification, developers methodically organize software interactions into layers, orderly arrangements, and recursive structures.

Software Reuse

IBS facilitates the reuse of software by producing programs with IBP and by advocating the use of reuse guidelines. With IBP, computational components operate without direct reference to their environment. Therefore, they are easier to extract and to apply in new applications. The reuse guidelines ensure the consistency of components as they operate in varying contexts. With IBP, the context of components changes as they either encounter new software interactions over time or operate in new applications.

1.5 Thesis Overview

The remainder of this thesis consists of twelve chapters. Chapter 2 provides an overview of previous work. It discusses the strengths and drawbacks of various tools for visual simulation that are commonly used in the fields of computer graphics and computer simulation. Chapter 3 provides an overview of IBS and introduces a general example for presenting its concepts. Chapters 4 through 8 describe the five major parts of IBS: interaction-based programming, interaction constraints, multi-modeling methods, the general specification, and reuse guidelines. Each of the five chapters discusses the details of one part of IBS and demonstrates its application through further elaboration of the general example set forth in Chapter 3. Chapters 9 through 11 provide evidence for the claims of this thesis.
The first two chapters (9 and 10) use empirical results and logical arguments to validate the design of IBS. Each of these two chapters presents a set of examples and directly compares an implementation of the examples using IBS to implementations using the currently available visual simulation tools. The examples are representative of common scenarios appearing in visual simulations. The tools chosen for comparison are representative of the currently available approaches to visual simulation. The comparisons demonstrate that IBS is capable of expressing common scenarios with additional desirable properties in comparison to existing approaches. The third chapter (11) assesses the utility of IBS in easing the development of visual simulations. It presents logical arguments to describe how IBS simplifies the development of visual simulations and presents qualitative results from the use of IBS that support the claims of this thesis. Chapter 12 compares IBS to other approaches in software engineering, computer graphics, and computer simulation. The software interactions presented here share some similarities with those in previous works, but none of those works apply software interactions in precisely the same way. Chapter 13 summarizes this thesis and presents future work. It also details the contributions of the thesis to the development of visual simulations.

Chapter 2

Background

The literature abounds with assorted descriptions of development tools for visual simulation. Practically all of these tools originate in the fields of computer graphics and computer simulation. Earlier tools from these fields were useful but limited. Those from computer graphics primarily visualized information while those from computer simulation primarily organized state changes over time. More recent tools from each field attempt to do both. Computer graphics employs simulation techniques to animate its scenes while computer simulation employs visualization techniques to animate its execution. Despite the convergence of the two fields, each field produces visual simulations for different purposes. The computer graphics community creates guided simulations while the computer simulation community creates undirected simulations. Guided simulations manage information to reproduce deterministic results. Their outputs are generally known before they begin. Undirected simulations manage information to present or compute results. Their outputs are not usually known before they begin and are discovered only after they end. Modern computer games blend both graphics and simulation. Most of the play of a game is guided; secondary motions, which appear in the background or produce ornamental effects, are developed as an undirected simulation. This approach is taken to ensure the outputs of a game are restricted to those that produce "interesting" results.

This chapter discusses briefly the designs of previous development tools for visual simulation. For both computer graphics and computer simulation, the discussion presents a structural overview of the tools and the software approaches the tools enlist to establish and to control dynamic systems. It explains how the separate aims of the two fields encourage differences in the designs of their tools and in the software engineering techniques they employ. The chapter concludes with a short commentary on the role of communications in software programs. It discusses how the tools of visual simulation provide features for its control.
2.1 Computer Animation

The computer graphics community produces computer animations - plausible, visual simulations. The animations emphasize aesthetics, interaction, and control over computation and analysis. The outcome of the animations is usually known before they begin. (Unpredictable results may arise with keyframing in the form of secondary effects. The overall movements of an articulated figure, for instance, are unpredictable if each of its joints is animated simultaneously with keyframing.) Precision and accuracy are desirable goals but not absolutely essential. Imprecise movements and exaggerated motions are often acceptable if they convey reasonable results.

The aim of visualization is to present the results of controlled computation. As computations occur, visuals animate. Each time step in a computer animation updates the values affecting computation. For instance, a free-falling object's position updates as time progresses, and an articulated figure grasps as a planning algorithm calculates. For many computer animations, there exists a simple mapping between state variables and visual attributes. As the variables update, the visual attributes change.

2.1.1 Structural Overview

Emulating the steps of traditional hand-drawn animation, the graphics community decomposes computer animations into three production phases: modeling, rendering, and animating. Modeling involves the building of virtual scenes with lights, cameras, and shapes. Rendering entails the interpretation of virtual scenes to yield synthetic imagery. Animation involves the changing of virtual scenes over time, as when animating the entities of modeling and the visuals of rendering. The interplay of the three phases produces a series of discrete images that visualize the contents and behaviors of virtual scenes.

Many tools for graphics, such as GROOP [84] and GRAMS [40], address only a subset of the three phases. They offer primary support for one or two phases and secondary support for the others that remain. In building a complete application, the tools expect developers to introduce additional tools for managing the phases that are incomplete. Developers either produce the additional tools themselves or find other tools to address their needs. Secondary support often comes in the form of callbacks, functions, or extensions. Developers apply these three forms in integrating their programs with the tools. All three forms enable developers to access and exchange information with the tools so as to accommodate the needs of specific applications. For example, VRML [159, 18], a specification for modeling, animates by providing hooks for scripting utilities. Developers apply the scripting utility of their choice in integrating the hooks with their code. (The scripting utilities often applied with VRML include Java [12] and ECMAScript [39].)

Of the tools for graphics that address multiple phases, most aim to support only modeling and rendering. Animation is a secondary concern and often given minimal support, usually in the form of functions or interfaces. Functions, as provided by BAGS [144] and GRAMS [40], create specific effects, such as spinning an object around a point. Functions are simple to use but difficult to apply as computer animations increase in complexity. They frequently apply their effects directly to state variables and thus tend not to accommodate the application of their effects elsewhere, nor their collaboration with others in producing higher-order effects, such as spinning an object sporadically around a moving point.
In addition, the functions are difficult to manage as the rising complexity of an animation produces a greater number of function calls. Few means are available to correlate their use or to apply them in forming higher-level abstractions. Interfaces, as supported by TWIXT [61], UGA [172], and OBLIQ-3D [104], provide hooks for developers to associate state variables to controllers, specialized components that regulate the evolution of state. As the controllers compute new states, the interfaces relay the changes to the state variables. This approach shares several similarities with the approach presented in this thesis, but there are several differences. IBS presents additional elements, such as scripts and meta-scripts, that compose and organize interactions between interfaces and controllers. Most tools supporting interfaces require developers manually to establish, manage, and control these interactions. IBS also employs time to control the dynamic properties of the interactions. Time produces communications, not state changes. State changes happen as communications occur.

2.1.2 Animating Methods

The tools for graphics that address animation do so directly by applying a variety of methods for animating state. Each method applies a different construct in establishing a mapping between state and time. The methods decide when the mappings change, and for many, what the mappings compute. The constructs which distinguish the methods are keyframes, messages, animated types, and declarative expressions. Keyframes are "time, value" pairs; messages are coded signals; animated types are self-modifying variables; and declarative expressions are specialized statements. Each of the constructs makes trade-offs between ease of use, expressiveness, and potential for reuse.

Keyframes are data pairs that form explicit mappings between fixed states and future times. The mapping establishes a low-level specification that requires most, if not all, of its states to be known before an animation starts. Animation based on keyframes is referred to as event-based: a series of keyframes records the events that signal a change in state. Keyframes are attractive in that they are simple to create, quick to interpret, and easy to store. They manage state changes effectively and support numerous interpolation techniques [81, 168]. Animation systems that rely upon keyframes include SWAMP [15], TWIXT [61], and CLOCKWORKS [56]. The systems differ in their development of keyframes and in their means of enlisting keyframes to animate state. In developing keyframes, SWAMP produces them as "streams," TWIXT produces them as "tracks," and CLOCKWORKS produces them as "evaluators." Despite their great utility, keyframes are not useful for producing all types of animation, especially those of higher complexity. Few primitives exist to manage the use of keyframes at multiple levels of abstraction. For instance, signal processing methods [168, 25] prove useful in adapting keyframes to new motions, but not in associating the keyframes with logical expressions or temporal controls. Keyframes are also not useful in encouraging a separation between state and time. Primitives are generally unavailable to adapt keyframes to run-time conditions, such as the activation of an event or the appearance of obstacles.
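To make the construct concrete, the following minimal sketch implements a keyframe track with linear interpolation; the class and member names are illustrative and are not drawn from SWAMP, TWIXT, or CLOCKWORKS.

#include <vector>
#include <algorithm>

// A minimal keyframe track: each keyframe is a "time, value" pair, and
// intermediate states are produced by linear interpolation. Assumes the
// keyed times are distinct.
struct Keyframe { double time; double value; };

class KeyframeTrack {
    std::vector<Keyframe> keys;   // kept sorted by time
public:
    void add(double t, double v) {
        keys.push_back({t, v});
        std::sort(keys.begin(), keys.end(),
                  [](const Keyframe& a, const Keyframe& b) { return a.time < b.time; });
    }
    // Evaluate the track at time t; values clamp outside the keyed range.
    double evaluate(double t) const {
        if (keys.empty()) return 0.0;
        if (t <= keys.front().time) return keys.front().value;
        if (t >= keys.back().time)  return keys.back().value;
        for (size_t i = 1; i < keys.size(); ++i) {
            if (t <= keys[i].time) {
                double u = (t - keys[i-1].time) / (keys[i].time - keys[i-1].time);
                return keys[i-1].value + u * (keys[i].value - keys[i-1].value);
            }
        }
        return keys.back().value;
    }
}; 

Note that every state the track visits must be keyed before the animation starts; adapting the track to run-time conditions would require machinery beyond the "time, value" pairs themselves.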
Messages are signals passing between the interfaces of components. The signals are useful in establishing communications, declaring state changes, and synchronizing temporal events. Messages are often associated with time stamps that determine when they appear and how they act. Animation systems using messages to animate state include ASAS [127] and UGA [172]. Messages are useful and simple to apply but arduous to handle as the complexities of an animation grow [166]. Few means are available to integrate the effects of multiple messages or to coordinate effectively the passing of many messages. In addition, rules do not exist to standardize the way that components interpret messages and react to communications. Such rules are important in improving the reuse of the components in multiple contexts and having the components work properly in time-varying systems.

Animated types are variables bound with time-varying structures. As the structures compute new values over time, the variables change state. Animated types range from simple interpolating variables, such as the Newtons of ASAS [127], to complex constraints, such as the behavioral functions of T-BAG [42] and OBLIQ-3D [104]. With animated types, the changing of state is usually automatic and fixed with time. Few mechanisms exist to regulate carefully the timing of state updates and to control dynamically the associations between variables and time-varying structures. Of the few that do exist, such as the behaviors of T-BAG, most provide limited support for developing a wide variety of scenes.
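The animated-type construct can be suggested with a small sketch: a variable bound to a function of time. The template below is purely illustrative and does not reproduce the interfaces of ASAS, T-BAG, or OBLIQ-3D.

#include <functional>

// An illustrative animated type: a value bound to a function of time.
// Reading the variable at time t evaluates the bound behavior, so state
// changes are automatic and fixed with time, as described above.
template <typename T>
class Animated {
    std::function<T(double)> behavior;  // maps simulation time to a value
public:
    explicit Animated(std::function<T(double)> b) : behavior(b) {}
    T at(double t) const { return behavior(t); }
};

// Usage: a position that interpolates from 0 to 10 over five time units.
// Once bound, the association between variable and behavior is fixed;
// retiming or rebinding it dynamically is exactly what most systems lack.
Animated<double> x([](double t) {
    double u = (t < 0.0) ? 0.0 : (t > 5.0 ? 1.0 : t / 5.0);
    return 10.0 * u;
});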
Declarative expressions are statements detailing the contents of an animation and presenting its dynamic properties. Organized into a language, the expressions indicate "what" is happening and "when" it occurs. A specialized interpreter evaluates the expressions in deciding "how" an animation progresses over time. Most systems employing declarative expressions, such as ANIGRAPH [24], FRAN [41], MAM/VRS [36], and ARYA [13], organize the expressions into node graphs or recursive lists. The two data structures ease the application of expressions and support the development of software by composition. The drawbacks of applying declarative expressions are the same as those of applying a conventional programming language to animation. The expressions are not well-suited for specifying concurrent state changes and encouraging the reuse of high-level abstractions. Most lack rules or guidelines for handling the intricacies of a complex system. Thus, as a system grows complex, the declarative expressions produce a large set of statements that are difficult to comprehend, debug, and reuse.

2.1.3 Complexity Management

With most tools for animation, the management of complexity is overshadowed by the goals of prototype design. The tools strive solely to simplify the development of animation with compact notations and simple structures. The most common approach to handling complexity uses extension mechanisms, such as templates and callbacks. Such mechanisms not only introduce extensions, but also facilitate the management of a moderate growth in complexity. Other mechanisms providing utility include the use of constraints [42, 76, 58], lazy evaluation [14, 41], spatial operators [14], behavioral transformations [14], and object-oriented design [16, 29, 56, 172, 113]. While useful, extension mechanisms provide insufficient means to manage animations of great complexity. The mechanisms are unsuitable for manipulating animations at multiple levels of abstraction, a task providing innumerable benefits in dealing with complex systems. Computer simulation, for instance, introduces higher-order mechanisms, such as processes, to manage the intricacies of a simulation at a level higher than the handling of events and state changes. The extension mechanisms are also ineffective in introducing new animating methods, and certainly cannot alter the application of animating methods over time. The complexity of an animation grows dramatically if only one method, such as keyframing, is applied to reproduce the effects of other methods, such as those involving physically-based motions [152], particle systems [126], or algorithmic animation [143].

A number of tools in graphics, such as FRAMES [118] and CONDOR [76], manage the complexity of an animation with software composition, the methodical integration of software components that encapsulate functions and commands. Integration involves the organization of components into hierarchies, orderly arrangements, or nodes of a dataflow graph. An evaluator parses the organization of the components so as to build and execute a successful animation. The primary use of software composition, when applied by the tools, is the establishment of an ordering of state changes, or the development of a set of relations. The ordering of state changes, as exercised by PHIGS [154] and GKS [69], models an animation at a low level of abstraction. The state changes directly affect the values of specific states. The ordering of state changes neither establishes nor modifies the behaviors of higher-order abstractions. Conversely, the ordering of relations, as employed by INVENTOR [145] and GROOP [84], models an animation at a high level of abstraction. The ordering determines how the components relate individually or in groups. Most often, the relations that develop between components are fixed. It is difficult to introduce new relations that are not provided directly by the tools. Moreover, most relations among components are static. Few tools, not just those based on software composition, provide any means to augment the variety of existing relations or to manipulate these relations dynamically. This deficiency dramatically increases the complexity of an animation that attempts to vary these relations over time.
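The two styles of composition can be contrasted in a brief sketch; all names below are invented for illustration, standing in for the immediate state changes of a PHIGS-style tool and the fixed relations of an INVENTOR-style scene graph.

#include <string>
#include <vector>

// Style 1: an ordering of state changes (low level). Each call directly
// rewrites a specific state value; no relation between entities is recorded,
// so higher-order behaviors cannot be derived from the program's structure.
double robotX = 0.0, handAngle = 0.0;
void animateFrame(double t) {
    robotX    = 0.1 * t;    // immediate update of a specific state variable
    handAngle = 15.0 * t;   // another direct, uncorrelated update
}

// Style 2: an ordering of relations (high level). The graph records that the
// hand is grouped under the robot; the relation persists across frames, but
// it is fixed: re-parenting the hand at run time is typically unsupported.
struct SceneNode {
    std::string name;
    std::vector<SceneNode*> children;
    explicit SceneNode(std::string n) : name(std::move(n)) {}
    void addChild(SceneNode* c) { children.push_back(c); }
};

SceneNode* buildScene() {
    SceneNode* robot = new SceneNode("robot");
    robot->addChild(new SceneNode("hand"));  // relation established at build time
    return robot;
}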
2.1.4 Software Reuse

Rapid development is a primary goal of the current tools of animation. Prototyping helps developers test and apply new algorithms or techniques to specific scenes or contexts. However, when developers wish to re-apply the same algorithms or techniques, such as those controlling the movement of a crowd or the collaboration between two figures, to a wider variety of scenes or to a greater number of contexts, the existing tools are limiting. Except for those employing declarative expressions, existing tools primarily offer low-level structures that are simple or compact. As the complexities of the animations grow, the animations and their parts often become difficult to manage and impractical to reuse. (Occasionally the term "reuse" is applied by the animation community with regard to the development of "reusable motion libraries" [168, 25] and "reusable motion controllers" [155]. Rather than addressing the reuse of code, these terms refer to the ability of an animation to apply and integrate the motions of an articulated figure or the streams of motion-captured data.) Tools using declarative expressions offer some utility in that they may apply practical approaches in software development, such as software composition, to the building of reusable code. But like those providing low-level structures, the tools using declarative expressions are designed for building prototypes, not reusable applications. As such, a specific agenda of reuse is unsupported by the tools of animation. Some tools, such as VRML, accommodate reuse for modeling and rendering, but not for animation. This is partly because the processes of modeling and rendering are better understood. In several tools, especially those applying software composition, there is an implicit suggestion of reuse. Components that compose well should decompose easily into their representative parts. This approach works well if the parts that developers wish to reuse do not involve cross-cutting concerns. Except for a few, such as ARYA [14] and ASADAL/PROTO [80], the tools of animation do not propose a specification that controls independently the "who," "what," and "when" of a simulation. In facilitating prototype design, most specifications concisely indicate who changes, in what way they change, and when the changes occur. It is difficult to partition the specification and reuse its parts separately.

2.2 Computer Simulation

Unlike the computer graphics community, the computer simulation community creates visual simulations to verify, present, and validate mathematical models. (To validate a model means to prove that it is an adequate representation of a system under investigation. To verify a model means to check the consistency of its design, i.e. accuracy, numerical solutions, etc.) They stress computation, not control or aesthetics. Simulations establish initial conditions, execute state changes, and monitor results. Unlike their counterparts in computer graphics, they observe, not control, the production of results. Simulations execute repeatedly to ensure that they observe the proper relationship between initial conditions and final results.

Attempts to integrate general simulation techniques with computer graphics have produced mixed results. Many attempts, such as those based loosely on physics [152], serve limited use. It is difficult to predict and to vary the outcome of such simulations. However, some integrated approaches permit great flexibility. Simulations, such as those based on L-systems [124] or particle systems [126], provide adequate controls for users to carefully produce deterministic results.

Simulation research, unlike graphics research, attempts to analyze, classify, optimize, and verify a simulation's design. From a simulation's design, it is possible to identify the type of systems it can simulate, the degree of accuracy to which it performs, and the roles of its components. Recognition of the roles of components and their interactions is also important for software reuse and integration. It defines how components communicate, relate, and behave. As advocated by many [106], reuse of components is greatest when components move to and from similar environments, undergo similar working constraints, and accept similar roles and responsibilities.
2.2.1 Structural Overview

The computer simulation community dissects a simulation into four phases: model design, model execution, model verification, and data analysis [119, 151, 46]. Model design produces the simulation; model execution executes the simulation; model verification confirms the validity of the simulation; and data analysis reviews the results of the simulation. As illustrated in Figure 2.1, all four phases are essential for building a reliable simulation. Model design is critical to the overall accuracy and usefulness of a simulation. An improperly formed design critically affects each of the remaining phases. Similarly, model verification is critical to assessing the reliability of a simulation. Thus, as indicated by the backwards-facing arrowheads, simulation development is an iterative process. Each time a deficiency or problem is encountered, the four phases repeat.

Figure 2.1: Four phases of simulation development

Model design, the phase of utmost importance in developing visual simulations, consists of the development of entities, an executive logic, a temporal management strategy, and an approach to display building. Entities are the elements of primary interest within a simulation. They represent tangible things, such as machines and parts, or intangible things, such as queues and routes. The executive logic refers to the computational algorithms applied by a simulation for altering the states of its entities. As a simulation progresses, the executive logic controls the mapping of state to time. The temporal management strategy is the method applied by a simulation in determining when and how simulation time advances. The temporal management strategy affects when and how the executive logic acts. Display building is the process of visualizing a simulation for presentation and analysis. Most often, the visualization conveys the behaviors and attributes of the entities or of the executive logic.

Classification

Many classifications of simulation [169] identify the temporal management strategy as the distinguishing characteristic. As noted in Figure 2.2, temporal management strategies divide into two types, discrete-time and continuous-time. In discrete-time simulation, time updates incrementally at fixed intervals. All state changes occur only at the times set by the incremental updates. In continuous-time simulation, time flows smoothly at varying intervals. All state changes occur freely and arise at any moment in time. (In a literal sense, all computer simulations advance time discretely, since computers and programming languages behave discretely. However, it is the reaction of state variables to temporal advancement that separates the two simulation types.)

Figure 2.2: The methods of temporal management [169]. Time-varying simulation divides into discrete time and continuous time; continuous time divides into continuous state and discrete event; and discrete event divides into event scheduling, activity scanning, and process interaction.

Of the two types, discrete-time simulation is simpler to build and is used frequently by the tools of computer graphics. A discrete-time simulation can be as simple as a program that iterates continuously over a set of functions. Each iteration embodies an incremental progression of time. The popularity of discrete-time simulation in graphics [84] is probably due to the nature of animation, which involves the periodic display of visual imagery.
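A minimal sketch of such a loop follows; the update functions and the step size are hypothetical.

#include <vector>
#include <functional>

// A discrete-time simulation reduced to its essentials: a fixed time step
// and a set of update functions invoked once per increment. Every state
// change in the system is forced to align with the increments of 'dt'.
int main() {
    std::vector<std::function<void(double)>> updates;
    updates.push_back([](double t) { /* advance a free-falling object */ });
    updates.push_back([](double t) { /* redraw the scene */ });

    const double dt = 0.1;                    // fixed interval
    for (double t = 0.0; t < 10.0; t += dt)   // each iteration is one step of time
        for (auto& update : updates)
            update(t);
    return 0;
}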
Unfortunately, discrete-time simulation is not apt in producing simulations of superior quality and size. It is not favored because it aligns state changes to fixed increments in time. A simulation is not permitted to reproduce accurately any event occurring in between time steps. Ultimately, this limits the worth of a simulation's results and reduces the validity of its design. In addition, discrete-time simulation is not favored because it fixes the pace of a simulation to a constant rate. A simulation cannot update faster than the rate of its slowest part. Otherwise, the simulation would suffer from temporal aliasing, the buildup of artifacts arising from a lack of temporal resolution.

Continuous-time simulations, which involve greater efforts to build, are of two types, continuous-state and discrete-event. In continuous-state simulation, state variables undergo smooth changes. Most often, the changes derive from the solutions of mathematical equations, which are either differential or algebraic. As exemplified by the tools CSMP [141] and DESIRE [83], continuous-state simulations are normally associated with block diagram methods for control and system engineering. In discrete-event simulation, state variables update discretely and simulation time jumps irregularly. The popular forms of discrete-event simulation are event scheduling, activity scanning, and process interaction. In event scheduling, simulation time advances according to the scheduling of events, the times of important state changes. Time advances incrementally from one event to the next. In activity scanning, time advances according to the evaluation of activities, sets of events paired with conditionals. Activities with truthful conditionals apply their events in advancing time. In process interaction, simulation time advances according to the performance of processes, tasks that encapsulate conditionals and events. As the processes interact, they compete for resources and schedule the timing of events. The tools of simulation that perform discrete-event simulation include GPSS [133], SLAM [122], and SIMSCRIPT [78].
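Event scheduling can likewise be sketched with a priority queue of time-stamped events; the structure below is a generic illustration, not the mechanism of GPSS, SLAM, or SIMSCRIPT.

#include <queue>
#include <vector>
#include <functional>

// Event scheduling in miniature: simulation time jumps irregularly from one
// scheduled event to the next, rather than advancing by a fixed increment.
struct Event {
    double time;                    // when the state change occurs
    std::function<void()> action;   // the state change itself
    bool operator>(const Event& e) const { return time > e.time; }
};

int main() {
    std::priority_queue<Event, std::vector<Event>, std::greater<Event>> agenda;
    double now = 0.0;

    agenda.push({2.5, [] { /* part arrives at machine */ }});
    agenda.push({7.0, [] { /* machine completes service */ }});

    while (!agenda.empty()) {
        Event e = agenda.top();
        agenda.pop();
        now = e.time;   // time advances directly to the next event
        e.action();     // the action may schedule further events
    }
    return 0;
}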
2.2.2 Simulation Methods

The methods that the tools of simulation apply in developing simulation vary. Often, the system under investigation influences a simulation's design and the representation of its parts. For example, Petri nets [116] easily model systems involving resource contention, while engineering block diagrams [137] easily model systems involving continuous variables. Therefore, the tools of simulation that support several methods are favored over those that support only one. Those that support only one are uni-model while those that support multiple methods, which may be either discrete-event or continuous-state, are multi-model. As noted by various researchers [109, 46, 171], multi-model tools support multiple approaches within one developmental framework. They provide a unified approach to simulation development that provides greater flexibility and superior design. They permit developers to partition simulations into parts and model each part with the approach that best emulates the part's design and function.

Uni-model Approaches

Fishwick [46] classifies four uni-model approaches based upon the implementation and cooperation of the four parts of every simulation. The four approaches are declarative, functional, constraint, and spatial. Declarative approaches, as exemplified by transition functions [112] and event graphs [135], focus on the concept of state. For each state, there are rules that identify when it occurs and when it changes. Collectively, the rules establish a total ordering of the state space. Each rule updates the system from one state to the next. Declarative approaches often employ transition tables to model this ordering of state. Functional approaches, as employed by CSIM [136] and SIM++ [90], produce networks of coupled blocks. Data, representing functions or variables, flow through the network from one block to the next. As data flow, states update and events occur. Functional approaches best model systems in which a directionality of flow exists between system components, such as queues and control engineering. By contrast, constraint approaches model non-directional relationships, such as the laws of nature, with difference or differential equations. They update state by converting the equations into first-order forms, and then solving the forms with numerical integration techniques. Spatial approaches, as employed by L-systems [124], decompose simulated environments into many basic elements acting in parallel. The properties of the system under investigation determine how decomposition occurs and how basic elements interact. The two most common decompositions divide environments into many spaces, such as a lattice of cellular automata, and into many entities, such as a branching network of L-systems.

Multi-model Approaches

Multi-model tools encourage developers to mix and match the subsystems of a simulation with the individual methods that best match a subsystem's behavior and structure. Architecturally, this mix-and-match process forms a network of subsystems that cooperate to solve a larger task. The integration of uni-model approaches is done with one of two methods, combination or refinement. Combination methods present a unified approach to simulation modeling. As exemplified by GASP [122], this usually entails a combination of continuous-state modeling with discrete-event operations. Refinement methods, such as those offered by SIMPACK [45], MAST [10], and SIM II [44], hierarchically embed uni-model approaches within each other. As illustrated in Figure 2.3, each embedded approach augments the definition of one or more subsystems of the original approach. Successive refinements produce a modeling hierarchy. Each level of the hierarchy describes the operations of the model at a different level of abstraction. For developers, the hierarchy permits them to analyze the system's behavior from several perspectives and to manage effectively the growth of a complex system.

Figure 2.3: An example of a multi-model system (subsystems applying declarative, constraint, and functional methods)

Two types of refinement methods occur in multi-model modeling. When an embedded approach matches the design of the approach it refines, the refinement is referred to as homogeneous. For example, subsystem "T" of Figure 2.3 uses the same functional approach to simulation modeling as does the system governing its actions. The subsystem improves the definition of the system with homogeneous refinement. By contrast, when a subsystem models differently than its governing system, as illustrated by subsystems "Q" and "S" of Figure 2.3, the refinement is called heterogeneous.
The subsystem improves the definition of the system with an alternative method.

2.2.3 Complexity Management

Like graphics tools, many tools for computer simulation, especially those based upon scenario languages [130], employ software composition to improve the design and reuse of simulations. They decompose systems into numerous interacting structures, many of which are identified as either processes, blocks, or activities. Communications among the structures are inferred from their organization, such as in dataflow architectures, or decided upon by explicit means, as occurs with function calls or callbacks. Simulation results from the direct execution of the structures by active transactions, or from the evaluation of the structures by an interpreter.

Like the tools of computer graphics, the tools of simulation using software composition suffer several drawbacks. First, the roles and goals of components vary from one tool to the next. Many tools, such as GPSS [133] and SLAM II [122], primarily support only one of the many methods for simulation modeling. To produce a multi-model simulation from these tools requires extra effort and time. Few composition techniques support facilities to integrate additional methods. They expect users to develop their simulation from a single level of abstraction. Second, the relations among compositional units usually remain fixed. Topological changes that vary the structure of a simulation over time are difficult or impossible to produce. Third, the building of programs as networks of compositional units is useful to model sequence, not hierarchy. The refinement of a program involves the replacement of a short sequence with another sequence of greater length and complexity. For some situations, this replacement scheme obfuscates the design of a program and complicates its re-verification.

2.2.4 Software Reuse

In simulation, the reuse of software is an important issue, but there is no specific agenda of reuse. Reuse is commonly realized through the reuse of building blocks, and occasionally, through the reuse of composition units that operate in similar contexts. The tools rarely provide mechanisms or methods to effectively reuse abstractions of greater size, such as those custom-built by developers, or to reuse the abstractions in varying contexts. The tools aim primarily to support short-term concerns, such as rapid prototyping. However, there is a growing trend in the field to provide such mechanisms. For example, simulation frameworks [119, 31], composition techniques [45], and experimental frames [170, 146] introduce methods to describe simulations as the interplay of environments and interacting components, many of which are potentially reusable and many of which are useful for software integration.

Similar to graphics tools, few simulation tools fully support a separable specification that reuses easily. "When" mechanisms are separate, but "what" and "how" mechanisms heavily intertwine. This approach permits the tools to reconfigure dynamically the representation of a system, but not without difficulty. To separate the "what" from the "when," numerous tools, such as PRISM [156], must employ many logical conditions. The conditions produce code with many breakpoints that, as McCabe's complexity measure [97] indicates, are cumbersome to read and difficult to reuse. (A breakpoint is a branching statement that increases the number of execution paths through a program.)
2.3 Software Communications

Except for the tools mentioned later in section 12.2, most computer graphics tools and computer simulation tools provide little assistance in designing and building the communications that occur amongst the parts of a system. Most tools consist of components without communications, components with fixed communications, or components with regulated communications. In using the tools that provide only components, such as OBLIQ-3D [104], developers establish and manage all forms of communications with general programming structures. They use either implicit [54] or explicit invocation, or data encapsulation to control communications among components. These types of tools permit great flexibility, but provide limited assistance in managing the growth of complexity or encouraging the reuse of software. They suffer drawbacks similar to those that discourage the widespread use of component libraries. For instance, two components of a library coupled tightly are usually difficult to apply separately and to use in alternative contexts.

In using the tools that provide components with fixed communications, such as GROOP [84], developers either parameterize or organize the use of components. Methods to directly influence or change communications are not normally present. These types of tools are useful for rapid prototyping, but not for building programs of wide variety. The fixed communications limit the kinds of programs that may be produced. Introducing alternate forms of communications, such as dynamic communications, requires great effort.

In using the tools that provide components with regulated communications, such as T-BAG [42], developers structure the organization of components. They organize components to signify the occurrence of state changes and the timing of interesting events. Normally, an evaluator interprets the components and decides how and when the communications occur. These types of tools execute more slowly, but provide the greatest flexibility.

Without adequate facilities to control communications, tools of graphics and simulation provide only partial benefit to the building of a complete program. As the software engineering community has acknowledged for many years [19], components alone are simply inadequate to ensure the effective building and reuse of software programs. Similar sentiments have also been expressed by members of the coordination language community [27] and the community of object-oriented design [37]. Employing only components, the tools for visual simulation distribute the code for coordinating communications throughout the body of a program. Hence, the code is neither localized nor easily changed. With this drawback, the specification of topological changes to the system of a program is difficult or nearly impossible. Topological change is important to the building of programs with many changing communications, as found in many visual simulations of moderate complexity.

Chapter 3

Illustrative Examples

This chapter and the five chapters that follow present the design of IBS and a description of its parts. This chapter introduces an illustrative example, which appears in the five successive chapters, that clarifies the presentation of IBS and demonstrates its application. The illustrative example consists of several scenes involving the movements of an articulated robot.
This chapter also briefly describes the RASP TOOLKIT, a software package providing an implementation of IBS with a stylized version of C++. Many of the code segments of this thesis apply the abstractions of the RASP toolkit to describe concepts and to present a working implementation of the illustrative example.

3.1 Descriptions

The illustrative example consists of three scenes: a general example, an extended example, and a contextual example. The general example involves the movements of an articulated robot searching to activate a stationary light. Upon reaching the light, the robot selects a button that toggles the light's state. The extended example augments the general example to introduce movements to the light, while the contextual example places the robot in a larger environment that reacts to the actions of the robot. The three scenes appear in successive chapters to help describe the design and application of IBS. The general example presents elementary concepts, while the other examples establish a larger context for presenting the concepts and themes of computer simulation employed by IBS.

The description of the three scenes, which appears in this chapter, provides an overview of the illustrative example. The overview describes the goals of each scene and presents the software networks comprising each scene's implementation. Finer details, such as the nature of the components that comprise the software networks and the coding of the scene's software interactions, appear in successive chapters as greater information on the design and application of IBS emerges.

3.1.1 General Example

The diagram in Figure 3.1 identifies the software network of a simple animation in which an articulated robot selects a button to turn on a flashing light. The system consists of six components: two controllers (HAND-CONTROLLER and POSITIONAL-CONTROLLER), three geometries (ROBOT, BUTTON, and LIGHT), and one data plotter (GRAPHER). POSITIONAL-CONTROLLER updates the world position of the robot's body while HAND-CONTROLLER updates the location of the robot's hand. GRAPHER charts the position of the robot's XY coordinate over time. The directed arrows identify the relationships among components that the software interactions control. The solid rays identify the passing of information. One component acts as the source or controller of another by providing useful information. The stippled arrow identifies an implicit relation. It indicates that one component depends on another's state, not necessarily on the value that the other produces. For example, the light flashes by the activation of the button, not by the passing of information from the button to the light.

Figure 3.1: A software network of a simple animation

3.1.2 Extended Example

The diagram in Figure 3.2 extends the original software network of Figure 3.1 with four additional software components, as indicated by the gray boxes, and with seven new relationships, as indicated by the directed arrows of greater thickness. The additional components and relationships enable the original robot to react to the changes in the light's state, as produced by the POSCONTROL, ORIENTCONTROL, and COLORCONTROL. The three controllers update the position, orientation, and color of the light, respectively.
The first two controllers operate at intervals of twenty units of time with ten units of time spaced between them. Thus, successive changes in position and orientation alternate every ten units of time. When the orientation or the position of the light changes, the illumination state of the light remains fixed. Therefore, the button, which toggles the light, produces no changes until the light halts and remains stationary. The third controller, COLORCONTROL, updates the light's color to modify gradually the scene's illumination.

Figure 3.2: The software network of the extended example

The extended example also introduces ROBOTB, a secondary robot. When the button attains success in changing the light's state, the button induces either COLORCONTROL to change the red channel of the light's color or POSITIONAL-CONTROLLER to advance the original robot's movements such that the original robot keeps moving towards ROBOTB. (It is common in graphics to identify an individual color as an intermixing of three primary colors, each of which is called a channel. One of the most popular representations of color is RGB, which applies red, green, and blue as its three channels.) When the original robot moves, it moves until it meets ROBOTB or until sixty units of time pass. When either event occurs, POSITIONAL-CONTROLLER restarts and continues moving the original robot towards the button. The entire process repeats until the light switches state ten times.

3.1.3 Contextual Example

The diagram in Figure 3.3 illustrates a larger scene applying the extended example of Figure 3.2. The scene contains two robots, a light, a large crane on a dock, a barge, and several boxes. The movements of the first robot delimit the actions of the other entities. When the robot stops toggling the light, the scene halts and the entire animation stops. When the robot is working and the light is on, the crane attempts to load the barge with the boxes. Loading is successful if both the barge and any box are accessible to the crane. The barge moves to and from the dock periodically. It starts its motions shortly after the two robots begin to move. The second robot, ROBOTB, makes a box accessible to the crane by repetitively placing a box near the crane's base. The robot places a box by picking it up and moving it to an appropriate place. Unlike the first robot, ROBOTB suffers from periodic breakdowns. When this occurs, ROBOTB must wait for time to pass before it resumes its normal activity. The final movement of the barge brings an end to the movements of the first robot. The first robot stops as the barge comes to a halt.

Figure 3.3: An illustration of the contextual example

3.2 The RASP Toolkit

The testing and development of IBS is made possible with the RASP TOOLKIT, an experimental software package for visual simulation. RASP supports an interaction-based style of programming for animating virtual scenes. When using RASP, developers write components and organize software interactions with a stylized form of C++. The stylized code is then conjoined with a run-time evaluator, also written in C++, that interprets and executes software interactions.
A complete program is tested and compiled with a standard C++ compiler. The result of compilation is an executable application with software interactions governing the communications and topology of the application according to the advancement of simulation time.

3.2.1 Application Development

As illustrated in Figure 3.4, the development of an application with RASP occurs in four steps. In step one, a developer establishes the visual look of the simulation. The developer details the scene with shapes, colors, and lights. A renderer is chosen to synthesize images and an output device is selected either to store or to display the images for viewing. In step two, the developer considers the design of the software network that defines the application. The developer identifies the proper components and determines their interactions. If a component or an interaction is unavailable within RASP, the developer produces new code in accordance with the principles of IBP. The design of new components involves careful attention to the integration of time and to the fostering of software reuse (see chapter 8). In step three, the developer produces the scripts of the application. The developer organizes the software interactions of the software network into orderly arrangements. In step four, the developer produces the meta-scripts of the application. This involves the binding of scripts with temporal structures, such as a time graph, and the binding of the temporal structures with the toolkit's run-time evaluator.

Figure 3.4: Four steps of application development with RASP (step 1: visuals; step 2: network preparation; step 3: scripts; step 4: meta-scripts)

Upon completion of the four steps, the developer iteratively refines the design of a RASP-based application by adjusting its parameters or re-implementing any of the four steps.

3.2.2 Software Library

The current version of the RASP TOOLKIT is 2.0. Its implementation consists of 73,500 lines of stylized C++ code partitioned into 360 classes, of which 35 represent software interactions, 100 introduce components for visualization, such as geometries and cameras, and 65 introduce abstractions for animating state, such as ports (section 4.2.2) and time graphs (section 7.3). The remainder of the classes introduce basic utilities and elementary data structures, such as arrays and vectors, common to all visual simulations.

Chapter 4

Interaction-Based Programming

It has long been recognized that the kinds of mechanisms available to support communication between software components affect the reusability of components [165, 60, 121]. Most common programming approaches couple communication mechanisms directly to components, thereby complicating the development and reusability of the components. Building reusable components within those environments requires significant effort. Purtilo, for instance, states that "[the programmer must] anticipate the program unit context of use: system calls must be installed for communicating with other program units, parameters must be marshaled for transmission, data representations may need to be coerced, and so on ..." [125, p. 152].

Interaction-based programming (IBP) is an approach intended to improve the reusability of software components by encouraging a separation between computation and coordination. It relies upon first-class language constructs, software interactions, to moderate communications between the interfaces of software components.
Software interactions permit the specification of relationships among components that are typically spread across multiple statements of multiple components when using common programming language constructs, such as function invocations. Software interactions localize and consolidate the statements that determine when and how communications occur. The collective operations of software interactions form a network of communications, as shown in Figure 4.1. With software components as nodes and software interactions as links, the "software network" simulates a dataflow network. Data flows between the software components that the software interactions join. The term topology is used to refer to a particular configuration of a software network. With IBP, the topology of the network may change as the program executes.

Figure 4.1: A typical software network

This chapter presents IBP and exemplifies its use in developing the general example. The chapter begins with an overview of IBP that entails a discussion of program development and of programming benefits. With IBP, program development involves the building of programs as two parts, one part for computation and another part for coordination. Following the overview, this chapter provides a detailed description of the two parts of an IBP program and presents an application for their use. The part for computation involves the development and use of software interactions and software components, and the part for coordination involves the development and use of scripts and meta-scripts. The chapter concludes with a discussion of program execution. It explains how a special evaluator interprets the two parts of an IBP program to establish a working system that may change its topology at run-time.

4.1 Overview

An interaction-based program organizes software components with software interactions. The program relies upon this organization to form systems of communicating components. The main body of an interaction-based program is unlike that of programs written with conventional programming languages. As discussed in section 4.4.1, its execution does not produce immediate results or initiate communications. The execution of an interaction-based program imbues software interactions with argument values and uses them to form descriptions of software networks. Communications occur when the descriptions are interpreted, not when the descriptions are formed. Thus, an interaction-based program shares greater commonalities with the script of a theatrical production. It describes the behaviors and interactions of its constituents by indicating for each scene who participates, what happens, and when the scene occurs.

4.1.1 Program Development

The focus of an interaction-based program resembles that intended by the designers of many scripting languages, such as TCL [110] and PERL [160]. As noted by Ousterhout, the developer of Tcl, scripting languages simplify the production of connections between components and provide the means to build applications quickly [111]. General coding techniques offer the best means to define the data structures and algorithms of software components. Employing these techniques, the development of an interaction-based program involves the building of two activities, activities for production and activities for coordination.
The former decides the computational state of a system while the latter decides the communications of the system that regulate the computational state. The activities for production establish the computational state by transforming inputs to outputs, and by communicating information between the outputs and the inputs of individual components. The activities for coordination establish the communications of a system by structuring the use of the activities for production. Collectively, the two activities naturally partition a program into parts for computation and parts for communication. Unlike a program developed with common software practices, such as procedural or object-oriented programming, an interaction-based program enables its computation and communications to be produced and updated individually. The code for each part is localized and isolated from the other.

Activities for Production

The development of the activities for production involves the design and implementation of software components and software interactions. Software components maintain and update state information according to a set of changing input values. As presented in the example of section 3.1.1, software components receive messages as inputs. The messages emanate from software interactions while communicating information from the components' environment or from the components' peers. Software interactions send messages to software components by referencing the software components by their operations. Upon invoking the operations, the software interactions call a variety of actions either to update or to alter the communication of information, such as filtering the information to reduce its value or limiting the information to pass only when specific run-time conditions apply.
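As a hedged illustration of such an action, the sketch below shows an interaction that filters (here, scales down) the value it relays between two ports. The port classes are minimal stand-ins for those shown later in Figure 4.2, and FilterInteraction itself is hypothetical rather than part of RASP.

// Minimal stand-ins for the port abstractions of Figure 4.2 (the real RASP
// classes are richer); each simply wraps a double for the sketch.
struct Outport { double value = 0.0; double getValue() const { return value; } };
struct Inport  { double value = 0.0; void setValue(double v) { value = v; } };

// A hypothetical interaction that relays information from an outport to an
// inport, filtering the value in transit. The interaction, not the
// components, decides how the information is altered.
class FilterInteraction {
    Outport* src = nullptr;
    Inport*  dst = nullptr;
    double   scale;
public:
    explicit FilterInteraction(double s) : scale(s) {}
    void set(Outport* out, Inport* in) { src = out; dst = in; }  // specification
    void execute() {                                             // communication
        if (src && dst)
            dst->setValue(scale * src->getValue());  // reduce the value in transit
    }
};

int main() {
    Outport source;  source.value = 40.0;
    Inport  sink;
    FilterInteraction halve(0.5);   // reduce values by half in transit
    halve.set(&source, &sink);      // identify the two ports
    halve.execute();                // sink.value becomes 20.0
    return 0;
}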
Hence, there is less clutter to dissect and to extract when installing changes to activities of either type. Second, the separation supports popular component technologies, such as corba [107], which apply a building block approach to developing application programs. Components are fundamental abstractions that IBP uses to produce the activities for production. IBP requires only that components adhere to the tenets of IBP, as discussed in chapter 8.  4.2  Activities for Production  The activities for production consist of software components and software interactions. Software components compute the state of a system while software interactions configure the communications of the system. The activities for coordination assemble and structure programs similarly to several concurrency models, such as ACTOR-based systems [2], C S P [ 6 5 ] , and 7T-CALCULUS [102]. Each of the concurrency models apply either high-order abstractions or special notations to express or monitor patterns of communications between collections of components (or processes). x  CHAPTER  4.2.1  4.  INTERACTION-BASED  PROGRAMMING  40  Software C o m p o n e n t s  In IBP, software components are elemental abstractions that partition a system into a collection of interacting parts. Similar to those of component-based systems, such as MILs (section 12.1.2) and ADLs (section 12.1.3), these elemental abstractions determine a program's design and configuration. Software components are the primary entities of a program that establish state and exhibit behavior. IBP requires that software components send and receive information via ports, first-class operations for defining a component's interface. The ports permit software interactions to reference and to invoke a software component by its operations. A software interaction explicitly manipulates a port to send and accept information when required. Conceptually similar to the ports of software architectures [9, 52], the ports of IBP establish a logical interaction between a component and its environment. The illustration in Figure 4.2 presents a component with an interface consisting of five unidirectional ports (see section 4.2.2).  The ports permit information to flow in and  out of the component in only one direction. The code at the right of Figure 4.2 exemplifies the usage of the component by its ports. The variables begin and v a l reference respectively two individuals ports, inBegin and out V a l (lines 2-3). Normally done within the body of a software interaction, the two variables invoke functions on the ports either to communicate information or to query a port's state. The variable begin invokes setValue to communicate the value 10 to the component (line 4) while the variable v a l invokes getValue and getType to obtain a value and determine what type of information the port provides (line 5).  (1) Controller hand; (2) Inport *begin = hand->inBegin(); (3) Outport *val = hand->outVal(); (4) begin->setValue( 10); (5) cout « val->getValue() « val->getType() << endl;  Figure 4.2: A component with five unidirectional ports  4.2.2  Software Interactions  A software interaction is a first-class mechanism that forms a basic, indirect relation between software components. It communicates information among components to incite collaboration and to initiate the sharing of data. A software interaction permits components to communicate anonymously. Components neither maintain references to their peers nor inform their peers to signal the occurrence of important events. 
4.2.2 Software Interactions

A software interaction is a first-class mechanism that forms a basic, indirect relation between software components. It communicates information among components to incite collaboration and to initiate the sharing of data. A software interaction permits components to communicate anonymously. Components neither maintain references to their peers nor inform their peers of the occurrence of important events. All communications among components occur indirectly via the execution of a software interaction.

The design and use of a software interaction is unlike those of general mechanisms for communication, such as implicit or explicit invocation [108]. A software interaction separates the specification and the execution of the commands that form communications. A software interaction remains inactive until it receives a notice to communicate, which may arrive from a variety of sources. As explored in this thesis, a notice to communicate may originate from high-level abstractions that relate the execution of software interactions to the advancement of time or to the occurrence of interesting events.

For instance, the code in Table 4.1 presents two separate code segments for communicating information between two components, a and b. The first segment applies a software interaction between the components, while the second segment has the components communicate directly. The first segment initializes the software interaction si with an invocation of si.set(). The command provides si with the identity of two ports, one port of component a and one port of component b. That command is neither responsible for nor capable of inciting the actual communications. Communications occur when the command si.execute() instructs the software interaction to communicate. Each time the command is invoked, the command passes information between the two ports. In contrast, the second segment applies an explicit invocation to communicate information directly from a to b. Embedded within the definition of component a, the communications between the two components are direct and immediate. Component a recognizes the identity of b, and the invocation occurs immediately after it is specified. Repetitive calls to communicate, as indicated by the two calls to b->in(10), require that the receiver of information be repetitively identified.

    Software Interaction                 Explicit Invocation

    Interaction si;                      define component a() {
    si.set( a->out(), b->in() );             b->in( 10 );
    si.execute();                            b->in( 10 );
    si.execute();                        }

Table 4.1: Two means of passing information from a to b

By separating the specification of communications from their execution, a software interaction may delay before operating. It may communicate information long after it is initially applied. For visual simulation, delayed communications are critical. Not all communication in a dynamic system occurs immediately. Most communications remain inactive until important times or states arise. Throughout the period of a program's execution, a software interaction may repetitively toggle between an active and an inactive state. The toggling of a software interaction's state produces a program with dynamically-changing communications.
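The left column of Table 4.1 is easy to realize concretely. The sketch below suggests one plausible shape for such an interaction, reusing the hypothetical Inport and Outport interfaces sketched earlier; it illustrates the specify-then-execute split rather than reproducing the toolkit's Event class.

    // Sketch: an interaction that records its ports at set() time and
    // moves data only at execute() time, possibly much later.
    class Interaction {
        Outport *src;
        Inport  *dst;
    public:
        Interaction() : src(0), dst(0) {}

        // Specification: remember which ports to relate. No data moves here.
        void set(Outport *out, Inport *in) { src = out; dst = in; }

        // Execution: move one value between the recorded ports. A script
        // or run-time evaluator may invoke this repeatedly over time.
        void execute() {
            if (src && dst)
                dst->setValue(src->getValue());
        }
    };

Because set() and execute() are distinct, the interaction can be specified once and then toggled on and off as simulation time advances, which is the delayed, repeatable communication the surrounding text describes.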
Operations

A software interaction operates by accessing the ports of components. It invokes the methods of the ports to decide whether the ports are ready and able to communicate information properly. Table 4.2 presents three common queries performed by a software interaction on a port. A DATA TYPE query identifies the data types that a port accepts or returns. A software interaction ascertains that information to ensure that the components it interconnects are compatible. A RELATION query identifies the ports of components that exhibit similar behaviors, such as those that update the same state variables. Two ports that behave similarly should probably not be invoked at the same time. A PAST ACTION query identifies the last instance when a port communicated information properly. In a time-varying system, one port should not be permitted to accept two values at the same moment in simulation time.

    Query          Description
    DATA TYPE      what type of data does the port accept or return?
    RELATION       with what other port does the operation behave similarly?
    PAST ACTION    at what time did the port operate last?

Table 4.2: Common query types of first-class operations

For a software interaction to operate properly, it is preferable that ports be either unidirectional read or unidirectional write. Both kinds of ports ensure only one-way flow of information to or from a component.² It is the software interaction, not the port, that ultimately determines when and how information flows. A unidirectional read port provides information to a software interaction without requiring any input parameters: the port need not consume input in order to supply output. A unidirectional write port behaves similarly, with information flowing in the opposite direction. In this case, a software interaction passes information to the operation without the expectation that the port will return any outgoing results.

²With the aid of a software interaction, the binding of an outport and an inport from the same component produces a unidirectional cycle. The cycle is not a producer of bi-directional flow because information always travels in and out of the component in one direction.

A software interaction does not readily accept the use of bi-directional ports. Those kinds of ports hinder a software interaction from operating freely. Bi-directional ports require that information flow in and out (or vice-versa) of a component in a fixed sequence. A software interaction is not freely permitted to control each flow individually. Hence, such a port undermines the authority of the interaction to choose when and how communications occur. For example, a bi-directional read port requires an argument value before it returns a result. A software interaction is not permitted simply to read the result without writing an argument. The design of the bi-directional read port also precludes software interactions from observing the results of previously written arguments. To achieve the proper results, the software interaction must either supply the port with duplicate arguments or inform the port to return its most recent result. Either way, each approach introduces additional complications. The former approach requires an interaction to share argument values while the latter approach requires an interaction to identify itself as a "primary" or "secondary" reader. A primary reader requests new information while a secondary reader queries for information given to previous requests.

In some circumstances, a software interaction benefits from the use of a bi-directional write port. The port returns a result "before" it accepts information. Unlike a function call, it supplies the caller with argument values. For a software interaction, those operations help to perform a partial write, the writing of incomplete information (see chapter 8 for further details).
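As a rough illustration of how an interaction might consult the queries of Table 4.2 before moving data, the sketch below guards execution with a data-type check and a past-action check. The port interfaces here are assumptions that extend the earlier sketches with type and time-stamp queries; they are not documented RASP operations.

    // Sketch: validating ports before communicating, per Table 4.2.
    #include <string>

    struct Outport {
        virtual double getValue() const = 0;
        virtual std::string getType() const = 0;     // DATA TYPE query
        virtual ~Outport() {}
    };
    struct Inport {
        virtual void setValue(double v, double when) = 0;
        virtual std::string getType() const = 0;     // DATA TYPE query
        virtual double lastActionTime() const = 0;   // PAST ACTION query
        virtual ~Inport() {}
    };

    class CheckedInteraction {
        Outport *src;
        Inport  *dst;
    public:
        CheckedInteraction(Outport *out, Inport *in) : src(out), dst(in) {}

        // Returns true only if the communication actually occurred.
        bool execute(double now) {
            if (src->getType() != dst->getType())
                return false;                 // incompatible ports
            if (dst->lastActionTime() == now)
                return false;                 // inport already written at this instant
            dst->setValue(src->getValue(), now);
            return true;
        }
    };

A fuller version might also consult a RELATION query to refuse two interactions that drive similarly behaving ports within the same time step.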
Attributes

There are many kinds of software interactions. Most are small in code size and produce a specific effect that is easy to comprehend and relatively easy to adapt. Some software interactions are domain-specific while others apply equally well to several domains. The ones presented in this work are well-suited to visual simulation and other dynamic systems. They observe the passage of time and produce time-varying communications. These software interactions are characterized by eight attributes, which appear in Table 4.3. Software interactions with comparable attributes are of the same kind and facilitate communications similarly.

    Attribute      Description
    ACTION         specifies the task of the interaction and the method for achieving the task
    PORT           identifies the number and kinds of ports used by the interaction
    MODE           identifies the partition of ports according to their arrangement and usage
    FILTER         specifies the kind of filter, if any, used by the interaction
    CONDITION      identifies the conditions that affect the interaction's operations
    TEMPORAL       specifies the temporal aspects, if any, of the interaction
    FAILURE        indicates how the interaction responds to failure
    PARALLELISM    indicates if an interaction performs any of its actions in parallel

Table 4.3: Attributes of software interactions

The ACTION attribute expresses the task of a software interaction and identifies the method by which the software interaction achieves its task. For example, a simple software interaction passes information between components with standard function calls. An interaction of greater complexity may employ a remote procedure call to move the information among components that reside on a distributed network, or an implicit invocation to register callbacks from one component to another.

The PORT attribute identifies the number and kinds of first-class operations that a software interaction accepts and the types of information that the software interaction requires from its operations. Most often, this attribute is divided into "inports" and "outports." The former refers to (unidirectional) write operations while the latter refers to (unidirectional) read operations. Unless an appropriate filter is available, an operation that does not satisfy the data type requirement of the software interaction is identified as unacceptable or improper.

The MODE attribute expresses an arrangement and use of ports by a software interaction. An arrangement determines how ports are applied as a software interaction operates. It is not uncommon for a software interaction that accepts multiple ports to partition its ports into several modes, of which only one is usually active. The active mode is referred to as the operating mode.

The FILTER attribute indicates whether a software interaction accepts a filter and, if so, what type of filter it accepts. A filter is useful for transforming or updating information that does not transfer easily between normally incompatible ports. For example, information does not flow properly between an outport producing a three-dimensional value (x,y,z) and an inport accepting only a one-dimensional value. A filter enabling a software interaction to transfer information properly between the two ports would simplify a three-dimensional value into a single value using a decimation technique, such as selection or reduction [67].

The CONDITION attribute indicates whether a software interaction works under any conditions, and if so, it identifies the conditions. Normally, a condition influences the behavior of a software interaction by altering how the software interaction mediates communications.
A condition may either halt the execution of the software interaction or alter its mode of execution. Two common conditions that affect a software interaction are "time" and "state." The former identifies time as a regulatory factor. Under that condition, a software interaction operates according to the passage of time or the advent of future moments. The latter identifies state information as a determining factor. Under that condition, a software interaction operates according to the current or upcoming value of state information, which is usually obtained from one or more inports.

The TEMPORAL attribute indicates whether a software interaction communicates temporal information, and if so, indicates what the temporal information represents. Normally, a software interaction uses temporal information to synchronize the behaviors of software components operating in time-varying environments. For instance, a software interaction may communicate a duration of time to a port upon which a software component bases its computations.

The FAILURE attribute indicates whether a software interaction reacts to a failure, and if so, identifies how the software interaction reacts. Normally, a failure arises if a software interaction is unable to execute properly, to access its ports, or to satisfy its conditions. For most software interactions, failure is met by doing nothing, waiting for success, or signaling an error. A software interaction doing nothing remains idle. A software interaction waiting for success hopes for the failure to be resolved; while the software interaction waits, it usually halts the execution of successive interactions until the failure disappears. A software interaction signaling an error indicates a major problem. It reacts by halting the evaluation of additional software interactions or by terminating the execution of the entire program.

The PARALLELISM attribute indicates whether a software interaction performs any of its actions in parallel, and if so, it identifies those actions. For instance, a software interaction working with multiple ports may communicate information between the ports concurrently, if its underlying hardware permits.

4.2.3 General Example Revisited

The diagram in Figure 4.3 adds details to the original diagram of Figure 3.1. It presents software components with unidirectional ports, and communications with software interactions. The unidirectional ports permit the software interactions to control freely the flow of information to and from a component. For example, the ports inHand and outHand of the ROBOT component permit a software interaction to read or to write separately the position of the robot's hand. The software interaction D provides information directly to inHand without receiving any results. The software interactions A and F freely determine when and where to send the results of the positional update. They operate properly without knowing which software interactions initiated the update or which components provided the ROBOT with an updated value. The attributes of the software interactions found in Figure 4.3 appear in Table 4.4.

Figure 4.3: The general example revisited

The most commonly applied software interaction, which forwards information from an outport to an inport, is of type Event. In the diagram, this kind of interaction appears eight times (A-F, P-Q).
The software interaction F differs slightly from the others in that it employs a special filter to extract a two-dimensional value from outHand, an outport that identifies the position of the robot's hand in three dimensions.

    Event (A-F, P-Q):   1 outport & 1 inport; passes info from the outport to the inport; may apply a filter [transforms info].
    FanEvent (I):       1 outport & multiple inports; passes info from the outport to the multiple inports; parallelism in passing info to the inports; may apply a filter.
    DurationEvent (X):  multiple inports; passes the current duration to its inports; parallelism in passing info to the inports; TEMPORAL: duration.
    TimeEvent (Y):      multiple inports; passes the current time step to its inports; parallelism in passing time to the inports; TEMPORAL: timestep.
    CondEvent (M):      1 "bool" outport; invokes other events according to the value of the outport; MODE: true || false.
    CallEvent (M'):     multiple "void" inports; invokes an inport; parallelism in calling the inports.

Table 4.4: Attributes of the software interactions in Figure 4.3 (attributes not listed are not applicable)

The software interaction I is a FanEvent, a specialized interaction that accepts multiple inports. Upon activation, the software interaction passes information concurrently from a single outport to each of its inports. The software interactions X and Y provide the HANDCONTROLLER and the POSITIONALCONTROLLER with temporal information. X, of type DurationEvent, communicates a duration of time for the two controllers to apply to their computations, while Y, of type TimeEvent, communicates the advancement of time. The software interaction M, of type CondEvent, is a conditional event. That software interaction accepts one outport and two sets of interactions. Upon activation, M invokes one of the two sets of interactions according to the logical value of the outport. When the outport is true, M invokes M' to invoke the toggle port of the light. M', of type CallEvent, is a software interaction that invokes a "void" inport, an operation requiring no input values. When provided with information, the inport sends a dataless signal to its component.

As discussed in section 3.1.1, the goal of the software network in Figure 4.3 is to coerce a robotic device to toggle a light source. The software network accomplishes its goal by communicating information among eight software components. Interactions C and D establish links between the two controllers and the robot. The robot's orientation and position are set by the arrival of information from the two controllers, HANDCONTROLLER and POSITIONALCONTROLLER. Interactions A and B parameterize the controllers with initial values. Each controller is set to start with an initial value that originates from the robot. Interactions X and Y disseminate temporal information to the two controllers. X provides a durational value while Y provides a value of temporal advancement. The repeated execution of Y decides how often the controllers update to the advancement of time. Finally, interactions M and F operate to toggle the light and provide the GRAPHER with coordinate values.

The components DISTANCER and SUBTRACTOR are secondary software components that perform numerical operations on positional data. Interactions E and I provide DISTANCER with two positions for computing a distance while P and I provide SUBTRACTOR with two positions for computing an offset. Interaction Q communicates the result of SUBTRACTOR to HANDCONTROLLER. Both DISTANCER and SUBTRACTOR serve only to adjust the values communicated between components. As will be discussed in chapter 5, these components become unnecessary when applying interaction constraints to the network of connections.
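The filter attached to F suggests a simple shape for filters in general: a small object that adapts one port's output to another port's input. The sketch below shows how a FilterXY of this sort might decimate a three-dimensional position to the two-dimensional value expected by the GRAPHER; the Vec3/Vec2 types and the Filter interface are illustrative assumptions, not the toolkit's definitions.

    // Sketch: decimation by selection, dropping the z coordinate.
    struct Vec3 { double x, y, z; };
    struct Vec2 { double x, y; };

    struct Filter {
        virtual Vec2 apply(const Vec3 &in) const = 0;
        virtual ~Filter() {}
    };

    struct FilterXY : Filter {
        Vec2 apply(const Vec3 &in) const { return Vec2{in.x, in.y}; }
    };

    // An interaction holding such a filter would route each value through
    // it before writing, e.g. grapherIn = filt->apply(robotHandOut).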
4.3 Activities for Coordination

The activities for coordination consist of scripts and meta-scripts. Scripts organize software interactions while meta-scripts control scripts (and other meta-scripts).

4.3.1 Scripts

A script is an orderly arrangement of software interactions that decides how numerous software interactions collaborate to produce a software network, a system of communicating components. The arrangement of software interactions within a script determines how a software network forms and what topology the software network assumes. The software network forms when a run-time evaluator (RTE) interprets the script and controls the execution of software interactions (see section 4.4.1). As the RTE operates, it toggles the script between two states, active and inactive. While active, the script produces communications. As the script toggles between the two states, the script alters the topology of the program so as to replace old communications with new ones.

A script assigns meaning to a sequence of communications. Unlike a procedure, a script excludes commands that explicitly define and establish a computational state. A script isolates the communications of a system from the commands that control state. In a procedure, the intent and meaning of communications are presented as a collection of uncorrelated messages or function calls. The communicational statements of the procedure are cluttered with commands for computing state values. As noted by various academics [22], a sequence of uncorrelated method invocations is difficult to decipher and reuse when the complexity of a system grows. A general mechanism, such as a script, is critical to the effective reuse and understanding of the communications that occur between collaborating components.

Arrangements

The means to organize and control the arrangement of software interactions in a script vary widely. Each means presents an alternative set of criteria for deciding when and how software interactions operate. The criteria apply to software interactions on their own or as a part of a larger set. Various criteria that are useful, especially to visual simulation, appear in Table 4.5.

    Criterion      Description
    SEQUENCE       orders software interactions
    CONCURRENCY    executes software interactions simultaneously
    DEPENDENCY     relates the execution of software interactions
    FREQUENCY      establishes a pattern of repetition for software interactions

Table 4.5: Criteria for arranging software interactions

The SEQUENCE criterion identifies an ordering of software interactions. Once begun, the software interactions activate sequentially. The successive activation of several software interactions is useful for controlling the flow of information through a network of coupled components. Each software interaction moves information from one component to the next.
The CONCURRENCY criterion expresses the simultaneous execution of multiple software interactions. As the software interactions execute, they regulate communications concurrently. For many dynamic systems, concurrency is an important concern that affects performance and decides the results of analysis.

The DEPENDENCY criterion identifies all relations among software interactions that influence their activation. One interaction may depend upon the activation or deactivation of others before it operates. The specification of relations is useful for describing systems with many software interactions that rely upon the past (or future) activations of others.

The FREQUENCY criterion identifies a pattern of repetition for a software interaction. It indicates how often the interaction repeats and the number of times it occurs. In many systems, such as those that model continuous state, repetitive communications occur frequently. The frequency with which components communicate often decides the precision of state changes. Many numerical methods [120] are sensitive to the frequency of their execution.

4.3.2 Meta-Scripts

A meta-script establishes a plan for the execution of one or more scripts. It overlays the scripts with information that decides when and how the scripts of a program toggle on and off, and occasionally, when and how multiple meta-scripts execute. The information applied by a meta-script is specific to the system being produced. For visual simulation, a meta-script associates scripts with activation times and with temporal relations. The activation times determine how scripts execute over time. As simulation time progresses, scripts with early activation times execute first. Temporal relations associate the activation time of a script with the activation times of other scripts. For instance, one script may begin after the termination of another.

A meta-script is similar to a script in that each applies similar organizational criteria (see section 4.3.1). The organizational criteria of a script, such as sequence and concurrency, apply equally well to deciding the execution of meta-scripts. However, a meta-script differs from a script in that a meta-script applies its criteria to an "interval of execution." Unlike a software interaction, a script is a continuous action with a beginning and an end.³ Therefore, the criteria for organizing a script apply equally well to its beginning and to its end, and occasionally, to both. For example, sequencing two successive scripts by their beginnings does not naturally produce a simple linear sequence. If the second script terminates early, the first script will finish last.

³When the beginning and end of a script coincide, the script acts instantaneously.
The script operates equally well with other software interactions or with other software components providing similar ports. The script simply organizes the definition and application of the software interactions. The script LightByRobot organizes software interactions with an activity, entitled act  (lines 16-18). The activity partitions the software interactions into two sequences that  begin at different times and execute with different frequencies. The first sequence activates its software interactions immediately after the script begins (line 16).  The activation of  those software interactions occurs only once. The sequence begins with evf I and ends with evA + evB. The + operator indicates the parallel activation of software interactions. The software interaction executes in parallel if the underlying hardware permits. The second sequence executes its software interactions repeatedly while the script remains active (lines 17-18).  For every time step, the sequence begins with tevY and ends with evQ. The code in Figure 4.5 presents the meta-script for "RoboWorld," a simple animation  involving a robot that sits, spins, waves, and turns on a light. The meta-script animates the robot with four scripts, of which LightByRobot is one. The meta-script associates temporal information with the scripts by way of TimingActs (lines 5-6,8-9).  The TimingActs store  the times when the scripts begin and end, and the rates at which the scripts advance with simulation time. The temporal instructions of the meta-script instruct the robot to turn on the light for one hundred units of time (line 9) after the robot spins around its axis for seventy-five units of time (line 6).  While the robot spins about its axis, the meta-script  instructs the robot to wave its hand simultaneously. The simultaneity of the scripts for each action is accomplished with a relation that delimits one script to the execution of another (line 7). Another relation, M E E T , forms a dependency between lightByRobot and s i t (line 10). The robot sits down after it finishes its motions to trigger the flashing light.  CHAPTER  4. INTERACTION-BASED  PROGRAMMING  Activity* LightByRobot()  Component Definitions class Controller { . . . } class Robot { • • • } class Button { • • • } class Light { • • • } class Grapher { . . . } class Distancer { . . . } class Subtractor { . . . } Software Interactions class Event { . . . } class FanEvent { .... } class TimeEvent { • • • } class DurationEvent { • • • } class CondEvent { • • • } class CallEvent { . . . }  (1) (2) (3) (4) (5) (6)  (?) 
/* component definitions */
class Controller { ... }     class Robot { ... }        class Button { ... }
class Light { ... }          class Grapher { ... }      class Distancer { ... }
class Subtractor { ... }

/* software interactions */
class Event { ... }          class FanEvent { ... }     class TimeEvent { ... }
class DurationEvent { ... }  class CondEvent { ... }    class CallEvent { ... }

/* component instances */
Controller hCtr, pCtr;   Robot rob;   Button btn;   Light lght;
Grapher grph;   Distancer dist;   Subtractor sub;

Activity* LightByRobot()
{
(1)    FilterXY *filtXY = new FilterXY();
       /* pass info from outport to inport */
(2)    Event evA( rob->outPos(), pCtr->inBegin() );
(3)    Event evB( rob->outHand(), hCtr->inBegin() );
(4)    Event evC( pCtr->outVal(), rob->inPos() );
(5)    Event evD( hCtr->outVal(), rob->inHand() );
(6)    Event evE( rob->outHand(), dist->inPos2() );
(7)    Event evF( rob->outHand(), grph->inXY(), filtXY );
(8)    Event evP( rob->outPos(), sub->inPos1() );
(9)    Event evQ( sub->outPos(), hCtr->inEnd() );
       /* pass info from outport to inports */
(10)   FanEvent evfI( btn->outPos(), sub->inPos2(), pCtr->inEnd(), dist->inPos1() );
       /* invoke inport if outport is true */
(11)   CondEvent cevM( dist->outZero?() );
(12)   cevM.setTrue( CallEvent( lght->inToggle() ) );
       /* pass duration to inports */
(13)   DurationEvent devX( hCtr->inDuration(), pCtr->inDuration() );
       /* pass time step to inports */
(14)   TimeEvent tevY( hCtr->inTime(), pCtr->inTime() );
       /* organize software interactions */
(15)   Activity *act = new Activity( "lightByRobot" );
(16)   act->setInitEvt( evfI, devX, evA + evB );
(17)   act->setActEvt( tevY, evC, evD, evE + evF, cevM );
(18)   act->setActEvt( evP, evQ );
(19)   return act;
}

Figure 4.4: The coding of the general example of Figure 3.1

/* production activities of RoboWorld */
(1)    Activity *lightByRobot  = LightByRobot();
(2)    Activity *spinningRobot = SpinningRobot();
(3)    Activity *wavingRobot   = WavingRobot();
(4)    Activity *sittingRobot  = SittingRobot();

(5)    TimingAct *wave = new TimingAct( wavingRobot );
(6)    TimingAct *spin = new TimingAct( spinningRobot, 100, 175 );
       /* wave and spin occur simultaneously */
(7)    spin->setRel( DELIMITS, wave );

(8)    TimingAct *sit   = new TimingAct( sittingRobot );
(9)    TimingAct *light = new TimingAct( lightByRobot, 250, 350 );
       /* sit occurs after light */
(10)   light->setRel( MEETS, sit, 75 );

Figure 4.5: A meta-script for animating the movements of a robot

Figure 4.6: The timeline of Figure 4.5 (waving and spinning span times 100 to 175; lighting spans 250 to 350 and meets sitting, which ends at 425)

The overall timeline of the animation is shown in Figure 4.6. As simulation time advances, the robot begins by waving and spinning, and finishes by lighting and sitting.

4.4 Program Execution

The execution of an interaction-based program differs from that of conventional programs in three ways. First, an interaction-based program executes by way of a run-time evaluator (RTE), a special module that interprets the scripts and meta-scripts of a program so as to compel the software interactions to communicate information. Second, an interaction-based program operates in three phases. The first phase creates scripts and meta-scripts while the remaining two phases apply the scripts and meta-scripts in structuring the program's topology and controlling the program's state. Third, an interaction-based program experiences changes to its topology. As scripts and meta-scripts dynamically mediate communications, the topology of the program changes with its execution.

4.4.1 Run-Time Evaluator

The integration of the RTE with an interaction-based program produces an executable application. As shown in Figure 4.7, a standard compiler integrates the RTE with an IBP program by parsing and linking the definition of each.
The compiler ensures that the program and the RTE properly follow the syntax of an appropriate host language, such as C++. When the executable application runs, it prepares the RTE to interpret a program by its activities for coordination. By examining the activities for coordination carefully, the RTE is capable of executing software interactions in parallel, optimizing run-time performance, and flagging run-time errors.

Figure 4.7: An interaction-based program (stylized C++ action units, compiled and linked with the run-time evaluator, form the executable)

The design of the RTE varies according to the domain of its application, not by its intended use. The design varies according to the methods by which a program manages its scripts. In time-varying systems, for example, the RTE manages the advancement of time. As time advances, the RTE examines the timing values of scripts and evaluates those ready for activation. Timing values, such as sequence and temporal intervals [7], are set with the meta-scripts of a program. The application of the RTE is relatively transparent to the designers of an interaction-based program. The designers simply invoke the RTE to initiate the execution of IBP programs. The RTE operates independently of the design and behavior of the software components that define the program.

4.4.2 Three Phases of Execution

As illustrated in Figure 4.8, an IBP program operates in three phases, referred to as script execution, RTE execution, and system execution. During script execution, an IBP program creates scripts and meta-scripts. It executes code to form, organize, and specify the arrangement and execution of software interactions. During RTE execution, the RTE interprets the scripts and meta-scripts of the first phase. The RTE executes software interactions so as to communicate information between software components. During system execution, the software components of the program communicate and compute state. It is during system execution that a program resolves its primary task: the algorithmic solution to a computational problem.

Figure 4.8: The three phases of program execution with IBP (script execution develops the task; RTE execution enforces communications; system execution manages the task and yields the computational results)

The latter two phases, RTE execution and system execution, occur concurrently. As the RTE induces communications, the software components send and receive information as they actively compute the program's state. As the RTE progressively interprets the scripts (and meta-scripts) of a program, the topology of the system changes and the components receive new communications, upon which they compute new state information.
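Sections 4.4.1 and 4.4.2 suggest a simple shape for a time-driven RTE. The sketch below loops over simulation time, toggling scripts active or inactive by their timing values and executing the interactions of every active script; the Script interface and the fixed time step are illustrative assumptions, not the evaluator used by RASP.

    // Sketch: a time-driven run-time evaluator.
    #include <vector>

    struct Script {
        double begin, end;                      // activation interval (from meta-scripts)
        virtual void initEvents() = 0;          // run-once interactions
        virtual void stepEvents(double t) = 0;  // per-time-step interactions
        virtual ~Script() {}
    };

    class RunTimeEvaluator {
        std::vector<Script*> scripts;
    public:
        void add(Script *s) { scripts.push_back(s); }

        // Advance simulation time from t0 to t1 in steps of dt.
        void run(double t0, double t1, double dt) {
            for (double t = t0; t <= t1; t += dt) {
                for (Script *s : scripts) {
                    if (t < s->begin || t > s->end)
                        continue;                   // script inactive at time t
                    if (t - dt < s->begin)
                        s->initEvents();            // script just became active
                    s->stepEvents(t);               // script active this step
                }
            }
        }
    };

Note that the evaluator never touches component state directly: it only decides which interactions fire, which is what keeps the RTE independent of the components it serves.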
4.4.3 Dynamic Topology

As an interaction-based program executes, it produces a software network resembling a dynamic, integrated dataflow network. Information passes dynamically between software components interconnected by the software interactions of the program. The continual activation and deactivation of the scripts (and meta-scripts) of a program dynamically alters the size and topology of the network's configuration. With each new topology change, the software network assumes a new configuration and a new set of interactions. The newly activated interactions replace existing ones, and thereby establish new dependencies between software components. As the new dependencies form, the software components acquire new roles and undergo new settings. They experience an alternative form of software reuse, one which occurs during the program's execution. Each of the scripts of a program may introduce many alternative uses for one component in varying contexts. As noted in chapter 8, IBP requires that software components accept and accommodate this form of execution to improve the use and understanding of IBP code.

If an RTE executes scripts in parallel, as is done with visual simulations, a software network undergoes concurrent changes to its topology. As the RTE interprets each script, it decides when and how to further advance the topology of the overall network. In a time-based system, the RTE executes every script according to the advancement of simulation time. As time flows, the RTE handles the simultaneous exchange of information amongst the many parts of a program. Whenever the RTE discovers an inconsistency in the networks, such as the simultaneous access and update of a single port, it signals an error in the program's design or execution. If a program's design is in error, its implementation is incorrect: the program is not designed well enough to perform its intended task. If a program's execution is in error, its ordering of scripts and meta-scripts is incorrect: the program does not apply its communications properly.

Benefits

For many application domains, such as visual simulation, topological change is an important tool for the development and maintenance of complex systems. As noted by many [21, 164, 115], it permits a system to introduce new components by either updating or removing old components, and by reconfiguring the mapping of software to hardware.
T h e d i a g r a m on the right portrays the p r o g r a m as a single network. T h e network integrates the three networks of the first approach a n d their respective controls into one system.  T o decipher or update the system requires great effort because the controls  of a l l three i n d i v i d u a l networks are present for p r o d u c i n g the network as a single system. Those controls coincide w i t h the statements that decide when and how the network evolves a n d the i n d i v i d u a l components communicate.  Topological Changes  Complete Integration  F i g u r e 4.9: A comparison between a p r o g r a m that changes topology and a program that consists of a single network  A p r o g r a m that accommodates new configurations is helpful i n assisting developers i n  CHAPTER  4.  INTERACTION-BASED  PROGRAMMING  56  understanding how a program works and what parts a program applies during its execution. While it executes, a program with dynamic topology may answer queries about its structure and about the parts that are actively engaged in communications. The program does this by examining the activity of its scripts and the attributes of its software interactions. Developers alter undesirable configurations by adding, removing, or altering scripts and meta-scripts. A program that changes its configuration is also capable of accommodating a broader variety of changes to its definition than that of a program devised as a single system. A single system grows large and complex as it accommodates new features and enhancements. Unforeseen changes. cause trouble if the program is unable to accommodate the changes smoothly. A program with changing topology accepts unforeseen changes by introducing a new script or meta-script that structures the program appropriately. The unforeseen changes neither require nor cause extensive modifications to the program's design and complexity, unlike what often occurs with a program designed as a single system. 4.4.4  General Example Revisited  The coding of the general example that appears in Figures 4.4 and 4.5 executes by an RTE designed specifically for visual simulations. The RTE executes in conjunction with the passage of simulation time. When the complete program begins, it begins with "script execution." The program defines the scripts and meta-scripts of the program, such as LightByRobot, and prepares them for the RTE to examine. Afterwards, the program performs "RTE execution." The RTE mediates communications, which in turn launches "system execution." While the RTE invokes the interactions of LightByRobot, the software components of the program, such as the hand controller, compute and update the program's state. . The program undergoes topological changes as the many scripts and meta-scripts of the program operate over time. The robot, for example, can acquire a new role with each change to the topology of the network. As with the network shown in Figure 4.3, the robot might begin by consuming information from a positioner controller. After a short delay, another script could augment the network to have the robot also provide information to another robot seeking to meet the original robot's position.  Chapter 5  Interaction Constraints Interaction constraints produce temporary relations between software interactions and between the ports of software components. 
They simplify the production of scripts by diminishing the number of ports, software interactions, and software components required to develop programs with complex communications. Interaction constraints consist of virtual ports and virtual events. Virtual ports relate the binding of ports, while virtual events relate the scheduling of software interactions. Collectively, both establish conditions, logical and numerical, to check, restrict, or compel the flow of information between two or more software components.

5.1 Foundation

Interaction constraints are founded upon one-way constraints, a popular approach for applying and evaluating unidirectional relationships [117]. With one-way constraints, the relations among ports and software interactions are represented as equations with variables, of which one variable is the primary recipient of a particular relation. The recipient variable updates state whenever any of the remaining variables, referred to as independents, change state. A one-way constraint is satisfied when its state reflects the value of its independents. For example, in the one-way constraint z=x+y, z receives the value of x plus y. If either x or y changes, the equation recomputes z. In a one-way constraint, the resolution of a relation occurs in only one direction: the direction of the variable receiving information. One-way constraints help simplify the expression of complex interactions as they permit the interactions to be stated declaratively.

One-way constraints have been applied successfully to many forms of visual simulation, especially to the development of graphical user interfaces. In 2D graphics (see [131] for a survey), one-way constraints, such as those of SKETCHPAD [150] and COOL [75], have been applied to geometric layout and interactive design. In 3D graphics, one-way constraints, such as those of CONDOR [76] and AVS [153], help maintain dependencies between dataflow elements and computational structures.
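The z=x+y example above is easy to picture in code. The following sketch, an illustration rather than any of the cited systems, recomputes a recipient from its independents whenever the constraint is satisfied:

    // Sketch: the one-way constraint z = x + y, recomputed on demand.
    #include <functional>
    #include <iostream>

    struct Cell { double value; };

    struct OneWayConstraint {
        Cell *recipient;
        std::function<double()> formula;   // reads the independents

        // Satisfy the constraint: resolution flows only toward the recipient.
        void satisfy() { recipient->value = formula(); }
    };

    int main() {
        Cell x{3}, y{4}, z{0};
        OneWayConstraint c{&z, [&] { return x.value + y.value; }};
        c.satisfy();                 // z becomes 7
        x.value = 10;                // an independent changes...
        c.satisfy();                 // ...and z is recomputed to 14
        std::cout << z.value << "\n";
        return 0;
    }

Note the one-way character: assigning to z never pushes values back into x or y.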
5.2 Virtual Ports

A virtual port is a short-hand mechanism for expressing one or more interactions among several software components. The virtual port employs operators — mathematical, functional, or logical — to express symbolic relationships among (unidirectional) ports. The operators form a one-way constraint among the ports that is satisfied each time the virtual port receives a request for information. The request may come from either a software interaction or another virtual port. A virtual port responds to a request for information by applying the operators to the ports that it relates. It applies the operators when they are required, not when the operators are first interpreted. Hence, the operators do not produce immediate results, as they normally do in standard imperative programming. A virtual port acts lazily until its value is required.

The code in Figure 5.1 presents exemplary definitions of three virtual ports: add, examine, and either (lines 1-3). Each of the virtual ports applies a single operator to the positional values of two poles: add sums the positions while examine and either compare¹ and select the positions, respectively. Each of the ports applies its operator when responding to a request. either, for instance, returns non-deterministically the position of either pole when the interaction evt1 requests the port's value (line 4).

/* examples of virtual ports */
(1)    VPort *add     = *(*pole1->outPosition() +  *pole2->outPosition());
(2)    VPort *examine = *(*pole1->outPosition() <  *pole2->outPosition());
(3)    VPort *either  = *(*pole1->outPosition() || *pole2->outPosition());

/* interaction using a virtual port */
(4)    Event *evt1 = new Event( either, ball->inPosition() );

Figure 5.1: Exemplary definitions of three virtual ports

¹One position is less than another position if all its dimensions are less.

5.2.1 Backwards Propagation

When a virtual port applies its operators to other virtual ports, it initiates a request for information that propagates backwards. Each of the other virtual ports acknowledges the request for information by applying its own operators to its own ports, which may consist of additional references to other virtual ports. Applying the same algorithm as that of THINGLAB [23], the backwards search terminates when the value of every involved virtual port is found.²

²Circular references arise if a virtual port initiates a search that returns to itself. Such instances are not permitted with the current implementation of interaction constraints. However, as noted in section 13.4.2, the resolution of circular references to interaction constraints is an area worthy of future investigation.
T h e three ports replace the software interactions E a n d (part of) I, a n d the software component Distancer of the general Circular references arise if a virtual port initiates a search that returns to itself. Such instances are not permitted with the current implementation of interaction constraints. However, as noted in section 13.4.2, the resolution of circular references to interaction constraints is an area worthy of future investigation. 2  CHAPTER  5. INTERACTION  60  CONSTRAINTS  bNear - >  <  distFuii n c  middle =>  /  Figure 5.3: The virtual ports of Figure 5.2 example, with three one-way constraints. The first constraint identifies a simple equation to determine the position of the robot's hand in world coordinates (line 11). Unlike the code of the original example, the two objects meet when the robot's hand, not its body, touches the button. The second constraint applies a function Dist for computing the distance between two ports (line 12). Dist provides the virtual port d i s t with an equation that d i s t evaluates when it receives a request for information. The final constraint identifies a logical relation between the port d i s t and the constant zero (line 13). When cevM requests a value, the final constraint initiates a chain of requests that propagates backwards to the first two virtual ports (line 28). From the response to these requests, the final constraint indicates clearly to cevM whether the distance between the robot's hand and the button is zero.  The diagram in Figure 5.5 illustrates the data flowing to and from the three virtual ports. When cevM executes, it requests a value from zero to determine if it is true or false. If zero is true, the software interaction initiates l i g h t Tog, an interaction that toggles the state of the light, zero satisfies the request by initiating its own request to d i s t and comparing the result to zero, a constant, d i s t responds by evaluating the equation provided by Dist and by requesting a value from hand. The final request to hand computes a value and terminates the backwards propagation for information. The entire process repeats each time that cevM requires a value from zero. Each request to zero accesses the most recent values for all ports involved. Thus, zero always reflects properly the meeting of the robot and button. The movements of either object do not create problems or require changes to zero's definition.  5.3  Virtual Events  A virtual event is a short-hand mechanism for expressing symbolic relationships among software interactions. Applying operators similar to that of virtual ports, a virtual event forms  CHAPTER  5.  
/* pass info from outport to inport(s) */
(2)    Event evA( rob->outPos(), pCtr->inBegin() );
(3)    Event evB( rob->outHand(), hCtr->inBegin() );
(4)    Event evC( pCtr->outVal(), rob->inPos() );
(5)    Event evD( hCtr->outVal(), rob->inHand() );
(6)    Event evE( rob->outHand(), dist->inPos2() );
(7)    Event evF( rob->outHand(), grph->inXY(), filtXY );
(8)    Event evP( rob->outPos(), sub->inPos1() );
(9)    Event evQ( sub->outPos(), hCtr->inEnd() );
(10)   FanEvent evfI( btn->outPos(), hCtr->inEnd(), pCtr->inEnd() );

/* use virtual ports to determine if the robot touches the button */
(11)   OPort *hand = *rob->outHand() + *rob->outPos();
(12)   OPort *dist = Dist( hand, btn->outPos() );
(13)   OPort *zero = (*dist == 0);

/* pass duration to inports */
(18)   DurationEvent devX( hCtr->inDuration(), pCtr->inDuration() );
/* pass time step to inports */
(19)   TimeEvent tevY( hCtr->inTime(), pCtr->inTime() );

/* invoke inport if outport is true */
(28)   CondEvent cevM( zero );
(29)   cevM.setTrue( CallEvent( lght->inToggle() ) );

/* organize software interactions */
(32)   Activity *act = new Activity;
(33)   act->setInitEvt( evfI, devX, evA && evB );
(34)   act->setActEvt( tevY, evC, evD, evE + evF, cevM, evP, evQ );

Figure 5.4: An implementation of the extended example that uses three virtual ports

5.3 Virtual Events

A virtual event is a short-hand mechanism for expressing symbolic relationships among software interactions. Applying operators similar to those of virtual ports, a virtual event forms a one-way constraint among software interactions that is satisfied each time the virtual event receives a request for action from a script or another virtual event. Similar to the requests for information that virtual ports receive, a request for action instructs a virtual event to execute a behavior, as opposed to simply computing a (numerical) value. In addition, requests for action divide into two kinds: executive and logical. An executive request initiates a sequence of software interactions while a logical request examines an ordering of software interactions and decides if the ordering is a recent occurrence.

5.3.1 Executive Request vs. Logical Request

An executive request constrains the execution of software interactions. The request applies the operators of a virtual event to decide upon an ordering of software interactions. The operators determine which interactions appear before others and which interactions execute stochastically. The expression "vEv = *ev1 AND *ev2," for instance, indicates that the interactions ev1 and ev2 execute before vEv. Had the expression applied "OR" instead of "AND," then either of the two interactions — but not both — would execute prior to vEv. In contrast, a logical request performs a query on the recent execution of particular software interactions. The query confirms whether the interactions of the virtual event are currently active. For visual simulations, the interactions of a virtual event are active if they share the same execution times as those of the virtual event. The operators of the virtual event apply logical operations to the query performed on each interaction. For example, the expression "vEv = *ev1 OR *ev2" returns a positive response if either ev1 or ev2 occurred before vEv received a request for action.
The short code segment in Figure 5.6 exemplifies the production and application of virtual events in a script. It presents two virtual events, vEv1 and vEv2 (lines 2-3), as illustrated in Figure 5.7. vEv2 receives both types of requests for action. An executive request is initiated by the activity act. When act sequences through its set of interactions, established by line 7, it initiates vEv2 to activate either vEv1 or evt3. If the virtual event activates the former, the initial request propagates to vEv1, which responds by activating evt1 and evt2 in sequence. A logical request is performed by the conditional event cEvt (line 4). Upon execution, the interaction associates vEv2 with the activation of evt4. vEv2 returns a positive response if either vEv1 or evt3 precedes the activation of vEv2. Since vEv1 is a virtual event, it is considered to precede vEv2 if both of its constituents, evt1 and evt2, are already active.

/* declares various events */
(1)    Event *evt1, *evt2, *evt3, *evt4;
/* defines virtual events */
(2)    VEvent *vEv1 = *evt1 && *evt2;
(3)    VEvent *vEv2 = *vEv1 || *evt3;
/* employs virtual event as boolean signal */
(4)    CondEvent *cEvt = new CondEvent( vEv2 );
(5)    cEvt->setTrueEvent( evt4 );
/* add interactions to activity */
(6)    Activity *act = new Activity;
(7)    act->addActEvent( vEv2, cEvt );

Figure 5.6: An exemplary definition of two virtual events

    vEv1 = evt1 && evt2        vEv2 = vEv1 || evt3

    Executive action:  vEv2 executes either vEv1 or evt3;
                       vEv1 executes evt1 and evt2.
    Logical action:    vEv2 asks whether either vEv1 or evt3 has executed;
                       vEv1 asks whether both evt1 and evt2 have executed.

Figure 5.7: The virtual events of Figure 5.6

5.3.2 Utility

A virtual event is useful for encapsulating a sequence of software interactions and for introducing variability to the sequencing of software interactions. A virtual event may apply multiple relations, or reference the relations of other virtual events, to form and to encapsulate a lengthy sequence of constraints. To access the lengthy sequence, a script forwards a request to the virtual event that heads the sequence. To repeat the sequence, the script simply forwards another request to the same virtual event. The script avoids having to recreate any sequence that it needs to apply multiple times. A virtual event may also vary its use of relations so as to introduce variability to a lengthy sequence of constraints. As the virtual event performs, it applies statistical methods in evaluating its relations. As the relations vary, the virtual events vary the sequencing of communications.

5.3.3 Extended Example Revisited

The code in Figure 5.8 updates the code of Figure 5.4 to use three virtual events: vEv, vEv2, and ready (lines 25-27). The three events decide and influence the execution of cevM (line 28). Rather than directly accessing the virtual port zero for its logical condition, cevM accesses the virtual event ready, a logical expression that integrates the value of zero with the execution states of two events affecting the light's position and orientation, mvLight and rotLight.
      /* new events for extended example */
(11)  Event mvLight( lposCtr->outVal(), lght->inPos() );        /* move light */
(12)  Event rotLight( loreCtr->outVal(), lght->inOrient() );    /* rotate light */
(13)  Event chgnCol( clrCtr->outClr(), lght->inColor() );       /* change light's color */
(14)  CallEvent upClr( clrCtr->inChgClr() );                    /* update color control */
(15)  CallEvent lightTog( lght->inToggle() );                   /* toggle light state */
(16)  ChainEvent chnEvts( upClr, chgnCol );                     /* chain color and update */

      /* pass remaining duration to inports */
(17)  RemDurEvent remDur( pCtr->inDuration() );                 /* inform controller of time remaining */

      /* virtual events to constrain events */
(25)  VirEvent *vEv = evA >> remDur;                            /* sequence events */
(26)  VirEvent *vEv2 = lightTog && (chnEvts || vEv);            /* after toggle, do either chnEvts or vEv */

      /* invoke inport info if outport is true */
(27)  OEvent *ready = zero && !(mvLight || rotLight);           /* touches button and no light animation */
(28)  CondEvent cevM( ready );
(29)  cevM.setTrue( vEv2 );

Figure 5.8: An implementation of the extended example that uses two virtual events

As illustrated in Figure 5.9a, the first two virtual events encapsulate two separate sequences of communications. vEv identifies a simple sequence involving two software interactions. In response to an executive request, vEv begins with evA and finishes with remDur. vEv2 identifies a parallel execution of two events, lightTog and either chnEvts or vEv. While lightTog executes, a statistical method invokes either of the two remaining interactions.

[Figure 5.9 depicts (a) the executive composition of vEv and vEv2 from lightTog, chnEvts, evA, and remDur, and (b) the logical composition of ready from zero, mvLight, and rotLight.]
Figure 5.9: The virtual events of the extended example

The virtual event ready, as illustrated in Figure 5.9b, identifies a logical expression over the virtual port zero (see Figure 5.4) and the software interactions mvLight and rotLight. ready is true when zero evaluates to true and neither of the two software interactions produces communications. In other words, the sequence of events that surrounds the toggling of the light occurs only when the robot touches the button and the light is stationary.

Chapter 6

Multi-Modeling Methods

The separation between computation and coordination, offered by the use of software interactions, presents a variety of opportunities for applying computational algorithms to the coordination of communications. The methods of world view modeling [169], for instance, augment IBP with the algorithms of computer simulation. Rather than updating the state of a program directly, the methods of world view modeling, such as finite state automata and Petri nets [116], work readily in coordinating the workings of software interactions. As the algorithms vary the execution of software interactions, the components of a program send and receive information to update the computational state. The scripts and meta-scripts of IBP freely apply these algorithms in producing programs effectively and in applying software composition to the building of code.

With the introduction of computational algorithms to IBP, the software interactions of IBP undertake various forms. Those forms separate into several categories, each of which classifies a different behavior. This chapter begins with a discussion of the working forms of software interactions and follows with a presentation of the various uses of computational algorithms in updating the design of scripts and in applying software composition to the building of IBS programs.
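The division of labor that the chapter builds upon can also be rendered in miniature. The following sketch is an illustrative reduction in plain C++ rather than the RASP API, and every name in it is hypothetical: components expose ports, and an event, when executed, moderates a single communication from an outport to an inport. A script would simply execute a list of such events as time advances.

#include <functional>

// Ports as simple callables: an outport produces a value, an inport consumes one.
using OutPort = std::function<double()>;
using InPort  = std::function<void(double)>;

// A minimal event: when executed, it moderates one communication,
// moving a value from an outport to an inport.
struct Event {
    OutPort src;
    InPort  dst;
    void execute() { dst(src()); }
};

int main() {
    double robotPos = 0.0;
    OutPort ctrlOut = []()          { return 4.2; };   // a component computes a value
    InPort  robotIn = [&](double v) { robotPos = v; }; // a component stores state
    Event ev{ ctrlOut, robotIn };
    ev.execute();   // a script would execute this as time advances
    return robotPos == 4.2 ? 0 : 1;
}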
6.1 Forms of Software Interactions

In applying the computational methods of computer simulation to IBP, the software interactions of IBS develop into four distinct categories: communicational, temporal, organizational, and informational. Communicational interactions communicate information, temporal interactions synchronize components, organizational interactions collate interactions, and informational interactions support debugging. Augmenting any of these interactions with time-varying information produces a time-varying interaction. The information specifies when, how often, and under what conditions a software interaction executes.

6.1.1 Categories

The information in Table 6.1 identifies the four categories of software interactions and presents examples of each. Each software interaction is presented with a number that indicates how many ports it accesses during its execution or how many other interactions it manages.

Kind             SubKind     Type
Communicational  Simple      Event(2); FanEvent(>2); StateEvent(1); CallEvent(1); DataEvent(1)
                 Logical     BoolEvent(1); CondEvent(1); BiCondEvent(2); WhileEvent(1); ChangeEvent(1)
Temporal         Time        TimeEvent(1); StepEvent(1); ReStepEvent(1); DurationEvent(1); RemainDurEvent(1)
                 Functional  WaitEvent(1); HoldEvent(1); SuspendEvent(1)
Organizational   Group       ToggleEvent(*); ChainEvent(*); SwitchEvent(*)
                 Random      RandToggleEvent(*); RandChainEvent(*); RandSwitchEvent(*)
Informational    Trace       TextEvent(1); DebugEvent(1)

Table 6.1: Various kinds of events

Communicational software interactions pass information among the ports of software components. They simply ensure that information passes properly from a single outport to one or more inports. Communicational software interactions divide into two sub-groups: simple and logical. Simple communicational interactions perform basic actions, such as passing information among ports (EVENT) or invoking ports to induce change (CALLEVENT). They accept the use of filters to modify the passing of information among components. Logical software interactions, such as BOOLEVENT and WHILEEVENT, evaluate logical values (from logical outports) to produce communications. The logical values determine what the interactions produce, and for some, how often the interactions operate.

Temporal software interactions synchronize software components with the advancement of time. They determine when software components update state and how often the updates occur. Temporal software interactions divide into two sub-groups: time and functional. Time interactions describe and control the advancement of time. Some interactions, such as TIMEEVENT and STEPEVENT, identify the current time and its advancement. Others, such as DURATIONEVENT and REMAINDUREVENT, identify the duration of their executing script and the time remaining before the script ends. Functional interactions modify the execution of their governing action unit. They modify their script to wait for time to pass (HOLDEVENT), to await a logical value (WAITEVENT), or to await a signal to resume execution (SUSPENDEVENT).

Organizational software interactions arrange other software interactions into primitive structures. Unlike the mechanisms that script the execution of software interactions, the primitive structures neither sequence software interactions nor establish attribute values.
The structures generally manage the software interactions as a single structure with one or more operational modes. Organizational software interactions divide into two sub-groups: group and random. Group interactions, such as CHAINEVENT and SWITCHEVENT, collate collections of software interactions into a single unit. When executed, the single unit selectively decides which of its software interactions to trigger. Random interactions operate similarly to group interactions. They perform the same actions, but with some randomness. For example, RANDSWITCHEVENT decides randomly which event to trigger from its collection of interactions.

Informational software interactions communicate the values of (out)ports. They consist of one sub-group: trace. Trace interactions identify the run-time properties of an IBS program. Primarily for debugging purposes, they write information to a file or an output device. For example, DEBUGEVENT presents the states of several outports. As the program executes, the software interaction presents a periodic history of state information.

6.1.2 Time-Varying

Especially useful for visual simulation, a time-varying interaction binds temporal information to another interaction so as to vary the interaction's use. The information delays the execution of a software interaction until specific times pass. Unlike a software interaction with temporal attributes, a time-varying interaction separates easily into two parts, an interaction and a specification of time. In many instances, the development and maintenance of each part occurs separately. In an interaction with temporal attributes, the temporal information is fixed and normally of only one specific kind or value. Since a time-varying interaction develops from parts, it supports the reuse of temporal information. Temporal information may be separated from one interaction and bound to another.

For animation and other dynamic systems, the reuse of temporal information is as important as the reuse of interactions and components. The nature of a scene is characterized equally by the temporal information of its interactions and by the communications of its software components. Binding an interaction with the temporal information of another produces a scene with temporal characteristics identical to those of the other. The actions and results of the two scenes may differ, but their temporal characteristics are the same. Similarly, updating the temporal information of an existing interaction produces a scene similar in action to the original. The two scenes share the same actions but differ in their temporal characteristics.

6.1.3 Extended Example Revisited

The diagram in Figure 6.1 details the diagram of Figure 3.2 with twelve software interactions: nine communicational and three temporal. The communicational interactions pass information among the robots, light, and controllers.¹ Consisting of four EVENTS, two FANEVENTS,² two CALLEVENTS, and one CONDEVENT, the communicational interactions configure the software network to associate the light's position, orientation, and color with the controllers. Many of the interactions, such as evFI_but and upClr, provide the controllers with initial control values.
With those values, the controllers compute an appropriate location, orientation, and color, which mvLight, rotLight, and chgnCol communicate to the light. The temporal interactions synchronize the controllers to the advancement of time. Consisting of one DURATIONEVENT, one REMAINDUREVENT, and one TIMEEVENT, the temporal interactions permit the controllers to know how fast time advances and how much time they should expect to complete their actions. For example, the DURATIONEVENT and TIMEEVENT decide how long and how often POSCONTROL and ORIENTCONTROL produce positional and orientational values. The periodic execution of the communicational interactions determines when and how often the values reach the light.

¹ The software components and software interactions for updating the positions and orientations of the robots appear in Figure 4.3.
² The FANEVENTS of Figure 6.1 also interconnect the components of the original example (see Figure 3.2).

[Figure 6.1 shows the components of the extended example (the button, the two robots, the light, and the positional, orientation, and color controllers) joined by the nine communicational interactions (EVENTS, FANEVENTS, CALLEVENTS, and the CONDEVENT cevM) and the three temporal interactions (a DURATIONEVENT, a REMDUREVENT, and a TIMEEVENT).]
Figure 6.1: The software interactions of the extended example

6.2 Scripts

With the introduction of computational algorithms to IBP, the scripts of IBS divide into four distinct kinds: natural, declarative, functional, and group. Each kind of script outlines an alternative plan to synchronize software interactions to the passage of time.

6.2.1 Kinds of Scripts

The information in Table 6.2 identifies four kinds of scripts and presents two variants of each. The two variants, single and multiple, indicate the number of threads of execution that each script controls. A single-threaded script executes one set of interactions. Any changes to the ordering or execution of the interactions apply equally to the entire set. A multiple-threaded script executes multiple sets of interactions. Each set is managed as an individual group that does not affect the execution of the others.

Kind         Single                                       Multiple
Natural      Activity; RandAct                            MultiActivity
Declarative  DetFSAutAct; NonFSAutAct; PetriAct
Functional   Process                                      MultProcess
Group        GroupAct; ToggleAct; SwitchAct; RandGrpAct   MultiGroupAct

Table 6.2: Various kinds of scripts

Natural scripts order software interactions sequentially. They repetitively parse and execute an entire sequence of software interactions. The code of Figure 6.2 presents the use of MULTIACTIVITY, a natural script with several threads of control. Individual threads are identified by the constant THREAD_#, where # corresponds to a particular thread. When the script begins, it simultaneously executes two threads of initial events (lines 2-3). Afterwards, it continuously sequences through the two threads of acting events (lines 4-5).
(1)  MultiActivity act;
(2)  act.addInitEvent( THREAD_0, ev1, ev2, ev3 );
(3)  act.addInitEvent( THREAD_1, ev5, ev6 );
(4)  act.addActEvent( THREAD_0, ev3, ev4 );
(5)  act.addActEvent( THREAD_1, ev7, ev8 );

Figure 6.2: A MultiActivity with two threads

Declarative scripts apply declarative methods, such as event graphs [135], to partition the ordering and execution of software interactions into several states. Each state maintains a distinct list and ordering of the software interactions. For each state, there are rules that indicate how it arises and when it changes. Collectively, the rules establish a total ordering of the state space. As the rules advance the script from one state to the next, they initiate a new ordering and execution of software interactions.

Functional scripts use functional methods, such as those performed by SIM++ [90], to regulate the execution of software interactions. The scripts work in conjunction with functional software interactions (section 6.1.1) to regulate the flow of information between software components. Information flows according to the satisfaction of operational constraints, such as the availability of resources. Functional scripts best model systems in which a directionality of flow exists between system components, such as queues and control engineering.

Group scripts collate collections of scripts into a single unit. When executed, the single unit selectively decides which of its scripts to apply. For instance, the code appearing in Figure 6.3 presents two group scripts, GROUPACT and RANDGRPACT. Each group script assembles the same set of scripts but applies the scripts differently. GROUPACT executes every script while RANDGRPACT executes only one, which is chosen at random. The behavior of the former script is deterministic while that of the latter is not.

(1)  GroupActivity gAct;
(2)  gAct.addScript( act1, act2, act3 );

(4)  RandGrpAct rAct;
(5)  rAct.addScript( act1, act2, act3 );

Figure 6.3: Exemplary definitions of group scripts

6.2.2 Declarative

Declarative methods, which normally govern the ordering of state changes, apply equally well to the scheduling of software interactions. Rather than issuing commands to update state, the methods decide how and why software interactions operate. The scheduling of software interactions occurs as the declarative methods evaluate transition functions, conditional statements that alter a method's behavior. The transition functions and the modeling methodology they represent may be encapsulated within abstractions that accept software interactions and ports as parameter values. When the abstractions are executed, they interpret their transition functions and schedule an appropriate response.
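Before turning to the concrete example of Figure 6.4, the interpretation step described above can be summarized in a short, self-contained sketch. The code below is a simplified illustration of the declarative pattern, not the toolkit's implementation, and all of its names (SimpleFsa, Transition, and so on) are hypothetical. On each time step, the script interprets its transition functions to settle the current state and then executes the interactions registered for that state.

#include <functional>
#include <map>
#include <vector>

struct Interaction { std::function<void()> execute; };

struct Transition {
    int from, to;
    std::function<bool(double)> condition;   // a transition function, evaluated against time
};

class SimpleFsa {
public:
    void setState(int id, std::vector<Interaction*> evts) { states_[id] = evts; }
    void addTransition(Transition t) { transitions_.push_back(t); }

    // Called once per time step by the enclosing script.
    void evaluate(double now) {
        // 1. Interpret the transition functions to decide the current state.
        for (const Transition& t : transitions_)
            if (t.from == current_ && t.condition(now)) { current_ = t.to; break; }
        // 2. Execute the interactions registered for that state.
        for (Interaction* e : states_[current_]) e->execute();
    }
private:
    int current_ = 1;
    std::map<int, std::vector<Interaction*>> states_;
    std::vector<Transition> transitions_;
};

int main() {
    Interaction toPole2{ []{ /* e.g., execute eSet and ef2 */ } };
    SimpleFsa fsa;
    fsa.setState(1, { &toPole2 });
    // Advance from state 1 to state 2 once twelve units of time have passed.
    fsa.addTransition({ 1, 2, [](double now) { return now >= 12.0; } });
    for (double t = 0.0; t < 24.0; t += 1.0) fsa.evaluate(t);
    return 0;
}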
      /* produces sequences of values */
(1)   EvolutePt *ev = new EvolutePt;

      /* interactions to parameterize interpolator */
(2)   Event *eSet = new Event( ball->outPosition(), ev->inBeginVal() );
(3)   Event *ef1 = new Event( pole1->outPosition(), ev->inFinishVal() );
(4)   Event *ef2 = new Event( pole2->outPosition(), ev->inFinishVal() );
(5)   Event *ef3 = new Event( pole3->outPosition(), ev->inFinishVal() );

      /* interactions to compute new position */
(6)   DurationEvent *durEvt = new DurationEvent( ev->inDurationVal() );
(7)   TimeEvent *tEvt = new TimeEvent( ev->inCalcValue() );
(8)   Event *bSet = new Event( ev->outCurVal(), ball->inPosition() );

      /* add interactions to a finite state automaton script */
(9)   FiniteStateAct *fsa = new FiniteStateAct( 1 );
(10)  fsa->setState( 1, eSet, ef2 );
(11)  fsa->setState( 2, eSet, ef3 );
(12)  fsa->setState( 3, eSet, ef1 );
(13)  fsa->setTransition( 1, 12, 2 );
(14)  fsa->setTransition( 2, 12, 3 );
(15)  fsa->setTransition( 3, 12, 1 );

      /* add interactions to an activity */
(16)  Activity *act = new Activity;
(17)  act->addInitEvent( durEvt );
(18)  act->addActEvent( fsa, tEvt, bSet );

Figure 6.4: A script that moves a ball between three poles

The sample code in Figure 6.4 exemplifies the application of declarative methods to the organization of software interactions. The code consists of seven events and two scripts, one of which is fsa, a script specifying a finite state automaton (FSA), a common abstraction of declarative modeling. Placed within the script act, the FSA (lines 9-15) controls interactions for moving a ball towards one of three poles. When the script is active, act continuously sequences through fsa, tEvt, and bSet (line 18). Every twelve units of time, fsa adjusts the motion of the ball to advance toward a different pole. The illustration in Figure 6.5 presents the state description of fsa. As fsa advances from one state to the next, it activates a different set of software interactions (lines 10-12).

Figure 6.5: The finite state automaton of Figure 6.4

The illustration in Figure 6.6 visualizes the software interactions of Figure 6.4 and identifies their association with the states of the FSA. All three states of the FSA share the interactions tEvt, eSet, and bSet. Each of the three interactions communicates information to or from ev, a controller for interpolating positional data. tEvt communicates temporal advancement, eSet communicates an initial position from which the controller begins interpolation, and bSet communicates a positional value from ev to the ball. Each of the three states of the FSA is distinguished by the interaction that sets the terminal value of ev. The terminal value is set to one of three pole positions by the interactions ef1, ef2, and ef3.

Figure 6.6: A visualization of the script in Figure 6.4. The numbers adjacent to the software interactions correspond to the states of the FiniteStateAct of Figure 6.5

6.2.3 Functional

Functional methods, which normally regulate dataflow, also prove useful in controlling the evaluation of software interactions. Rather than regulating communications, the methods decide when and why software interactions undergo evaluation. They determine how the interactions of a script, as ordered by declarative methods, are evaluated in relation to each other and during the progression of information among components.
For most functional methods, the basic abstraction is the process, a sequence of instructions that work collectively to perform a task. The instructions either define a succession of state changes or modify the manner in which their governing process interprets subsequent instructions. Those instructions, which may be encapsulated in interactions that manage ports, may be adapted to update the manner in which a script processes its interactions.

At any moment in time, a process is active, idle, holding, or waiting. As described in Table 6.3, an active process actively executes its instructions, an idle process awaits (re)activation, a holding process awaits a state change, and a waiting process awaits the passage of time. The performance of a process changes as interesting events occur, such as the presence or absence of information, or the notification of state changes. A process switches among the four states according to the dictates of its most recent instruction. For example, a waiting instruction directs the process to temporarily halt its operations for a short period of time. That same instruction may be adapted to halt the execution of a script as it processes a set of software interactions. Instructions of that kind, identified as functional events (see section 6.1.1), provide careful controls in coordinating a wide variety of communications.

Process State   Description
active          actively executing instructions
holding         awaits an interesting event or state change
waiting         awaits the passage of time
idle            inactive; awaits re-activation

Table 6.3: Description of process states

The sample code in Figure 6.7 exemplifies the application of functional methods to the organization of software interactions. The code controls the evaluation of software interactions by referencing two operational constraints, the presence of information (lines 2-5) and the passage of time (lines 6-8). The code operates by evaluating five events for four components: SINK, SOURCE, QUEUE, and WAITTIME. Information passes from SOURCE to SINK via QUEUE when WAITTIME, a component that produces exponential values, indicates that it should do so. The event srcEvt indicates that the script should wait for SOURCE to produce information before the script handles additional events. When SOURCE has output available, a software interaction occurs and the evaluation of the script resumes. The event snkEvt acts similarly. It halts the evaluation of the script until SINK is available to accept information. The events cEvt and tEvt decide how much time passes between the first two events. The temporal value is obtained from the exponential component.

      /* create a queue to hold data */
(1)   FifoQueue *queue = new FifoQueue;

      /* moving data from source to queue */
(2)   WaitEvent *srcEvt = new WaitEvent( source->outHasOutputAvail() );
(3)   srcEvt->setTrueEvt( source->outExtract(), queue->inEnqueue() );

      /* move data from queue to sink */
(4)   WaitEvent *snkEvt = new WaitEvent( sink->outIsAvail() );
(5)   snkEvt->setTrueEvt( queue->outDequeue(), sink->inInput() );

      /* compute waiting time */
(6)   ExponentialDbl *waitTime = new ExponentialDbl(10);
(7)   CallEvent *cEvt = new CallEvent( waitTime->inMakeVal() );
(8)   HoldEvent *tEvt = new HoldEvent( waitTime->outVal() );

      /* organize events into a process */
(9)   Activity *prc = new Activity( "queuing" );
(10)  prc->addActEvent( srcEvt, cEvt, tEvt, snkEvt, tEvt );
(11)  return prc;

Figure 6.7: A script that moves information between SINK and SOURCE via QUEUE
The organizational pattern of the events, decided by the activity (line 10), partitions the evaluation of the script into four phases, as illustrated in Figure 6.8. In phase one, the script awaits information to be passed from SOURCE to QUEUE. In phase two, the script awaits the passage of time. In phase three, the script awaits information to be passed from QUEUE to SINK. In phase four, the script waits for the same amount of time that it waited in phase two. The script continually repeats all four phases until it is terminated by other abstractions (see [86, 87] for details).

Phase 1: srcEvt         SOURCE to QUEUE
Phase 2: cEvt & tEvt    COMPUTE and HOLD TIME
Phase 3: snkEvt         QUEUE to SINK
Phase 4: tEvt           HOLD TIME

Figure 6.8: The four phases of Figure 6.7

6.2.4 Extended Example Revisited

The code in Figure 6.9 completes the coding of the extended example with two scripts, fsaAct (lines 21-24) and act (lines 32-36). fsaAct describes the communications of the original robot with a "2-state" finite state automaton (FSA) while act handles the actions of the robot and light with a "3-thread" multi-threaded process.

      /* determines the meeting of the two robots */
(20)  OPort *meet = ( *rob->outPos() == *rob2->outPos() );      /* are positions the same? */

      /* finite state automata activity */
(21)  FsaActivity fsaAct;
(22)  fsaAct.setState( 0, evFI_but, evC && evD );               /* s0 (initial, acting) */
(23)  fsaAct.setState( 1, evFI_rob, evC && evD );               /* s1 (initial, acting) */
(24)  fsaAct.setTransition( 1, 0, meet || 60 );                 /* transition: s1 to s0 */
(25)  CallEvent trnsEvt( fsaAct.inTransition(0,1) );            /* do transition: s0 to s1 */

      /* virtual events to constrain events */
(26)  VirEvent *vEv = trnsEvt >> evA >> remDur;                 /* sequence of events */
(27)  VirEvent *vEv2 = lightTog && (chnEvts || vEv);            /* toggle light -> change color or vEv */

      /* invoke inport info if outport is true */
(28)  OEvent *ready = zero && !(mvLight || rotLight);           /* touches button and no light animation */
(29)  CondEvent cevM( ready );
(30)  cevM.setTrue( vEv2 );

      /* time delay events */
(31)  WaitEvent wEvt10( 10 ), wEvt20( 20 );                     /* functional events */

      /* organize software interactions */
(32)  MultProcess *act = new MultProcess;
(33)  act->setInitEvt( THREAD_0, evFI_but, devX, evA && evB );
(34)  act->setActEvt( THREAD_0, tevY, fsaAct, evD, evF, cevM );
(35)  act->setActEvt( THREAD_1, wEvt10, mvLight, wEvt10 );
(36)  act->setActEvt( THREAD_2, wEvt20, rotLight );

Figure 6.9: An implementation of the extended example that applies two scripts

Each of the states of fsaAct describes a different set of interactions for controlling the robot's movements. In state zero (line 22), the interaction evFI_but updates POSITIONCONTROL to advance the robot towards the button. In state one (line 23), the interaction evFI_rob updates the same controller to move the same robot towards the second robot. Beginning in state zero, the script switches between the two states according to two transitions (lines 24-25). State zero re-occurs when either the virtual port meet achieves a positive value³ or sixty units of time pass. State one occurs when the CALLEVENT trnsEvt (line 25) executes, as performed by the newly updated definition of the virtual event vEv (line 26).

³ The working version of the code segments permits some latitude in deciding when the two robots meet. For brevity's sake, the statements are approximated with the definition of line 20.
Each of the threads of act regulates a separate task. The first thread, THREAD_0, nearly replicates the organizational design of the script of Figure 5.4. Instead of directly accessing evC, which passes information from POSITIONCONTROL to the robot, the script invokes fsaAct. When fsaAct executes, it determines when and how to execute evC. The second thread, THREAD_1, intermixes a WAITEVENT with an interaction to move the light. The repetitive execution of the thread leads to periodic movements of the light's position. When the thread begins, it waits for 10 units of time. Then, it executes mvLight and re-waits for another 10 units of time. The third thread, THREAD_2, intermixes a WAITEVENT with an interaction to update the orientation of the light. Similarly to THREAD_1, THREAD_2 waits 20 units of time between successive calls. The ordering of the two threads differs by their offset: THREAD_1 initially executes 10 units of time earlier.

The organization and structures of the two scripts provide several benefits to program development. First, the application of multiple states and threads establishes relationships among sequences of software interactions. The sequences relate transitionally, when formed with states, or concurrently, when formed with threads. Second, the distinct representation of states and threads facilitates the development and reuse of sets of software interactions. Each state or thread may be individually inserted into or removed from a script with little difficulty. Third, the application of functional methods and their functional events, such as WAITEVENTS, is practical for establishing temporal delays among software interactions. The temporal delays that separate the execution of software interactions greatly affect the behavior and output of a system.

The complete code for the extended example appears in Figure 6.18. The code forms a script for advancing a robot towards a button and having the robot toggle the light's state. The entire script consists of seventeen software interactions (events), three virtual ports, three virtual events, and two activities.

6.3 Software Composition

With computational algorithms providing a foundation upon which to build scripts, IBP supports software composition, the building of software with the systematic integration of building blocks, abstractions that encapsulate functions and commands. The organization and binding of building blocks determines how a software system behaves and what information it contains. With software interactions as building blocks, software composition proves useful in supporting the development of multi-model scenes, visual simulations operating under the influence of several modeling methods such as those discussed in the previous section.

Two popular forms of software composition, as noted by Fishwick [46], are aggregation and assembly. Aggregation builds a large block from collections of smaller blocks while assembly produces a new block from the combination of existing blocks. Aggregation produces a block that is identified by its parts while assembly produces a block with its own identity. Each of the techniques applies well to the development of high-level abstractions and the reuse of IBS. Aggregation forms a script from multiple models while assembly transforms a script into a block resembling a software component.
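The distinction between the two forms can be made concrete with a small, self-contained sketch in plain C++; the types below (Script, Assembly) are hypothetical stand-ins, not the RASP abstractions. An aggregation remains identified by the parts it collects, while an assembly hides a script behind named proxy ports so that it presents the identity of a single component.

#include <functional>
#include <map>
#include <string>
#include <vector>

// Aggregation: a script is identified by the parts it collects; embedding
// one script's run() among another's interactions yields a hierarchy.
struct Script {
    std::vector<std::function<void()>> interactions;
    void run() { for (auto& act : interactions) act(); }
};

// Assembly: a script wrapped behind named proxy ports so that, from the
// outside, it looks like an ordinary software component with an identity.
struct Assembly {
    std::map<std::string, double> in, out;
    Script body;
    void send(const std::string& port, double v) { in[port] = v; body.run(); }
    double recv(const std::string& port) { return out.at(port); }
};

int main() {
    Assembly helix;
    // The wrapped script computes its outgoing value from the incoming one,
    // loosely echoing the helical-motion assembly developed in section 6.3.2.
    helix.body.interactions.push_back(
        [&helix] { helix.out["out"] = helix.in["in"] + 1.0; });

    Script outer;                                    // aggregation: outer owns its parts
    outer.interactions.push_back([&helix] { helix.send("in", 3.0); });
    outer.run();
    return helix.recv("out") == 4.0 ? 0 : 1;
}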
6.3.1 Aggregation

In a script, the aggregation of software interactions develops as a linear ordering or a hierarchical grouping. A linear ordering organizes a sequential arrangement of software interactions while a hierarchical grouping organizes a ranked order. Within a linear ordering, software interactions develop into a single abstraction by intertwining or integrating sequentially. When the interactions intertwine, they execute several modeling methods simultaneously. When the interactions integrate sequentially, they execute individual modeling methods in tandem. Within a hierarchical grouping, software interactions embed hierarchically. Interactions of one set operate under the influence of another set. Each level of a hierarchical grouping applies a different level of abstraction in aggregating software interactions. Each level refines the interactions of the previous level.

When a hierarchical aggregation integrates multiple modeling methods, the resulting aggregation forms a heterogeneous multi-model script. Each part of the aggregation employs the modeling method that best matches its requirements. When the parts apply similar methods, the parts assemble a homogeneous multi-model script. The entire aggregation employs one method to organize several sets of interactions. A homogeneous multi-model script is similar but not equivalent to a linear aggregation with multiple parts. A homogeneous multi-model script models from a single abstraction level while a linear aggregation models from multiple abstraction levels, all of which happen to be the same.

The illustration in Figure 6.10 exemplifies the use of the two forms of aggregation in developing one multi-model system. The joining of P with Q illustrates a linear ordering: Q follows the operations of P. The placement of Z within P represents a hierarchical grouping. Z refines the definition of P homogeneously: both Z and P employ the same organizational method.

[Figure 6.10 depicts models P, Q, and Z: Q follows P in a linear ordering, and Z is embedded within P; the models carry declarative and functional labels with their events.]
Figure 6.10: The aggregation of models that forms a multi-model script

6.3.2 Assembly

The grouping of software interactions into a script supports the building of an assembly, an abstraction that behaves in a manner similar to that of a software component. An assembly has identity and supports ports as member functions. The ports establish an interface for the assembly to interact with its environment. Incoming information parameterizes the behavior of the script while outgoing information presents the results of parameterization. With ports, the assembly operates freely with software interactions in communicating information. It readily consumes and produces information as it sends and receives. Behaving similarly to a software component, an assembly actively participates in the formation of other scripts. An assembly may be progressively assembled to produce scripts of greater design and function.

Despite the similarities, an assembly's internal design differs from that of a traditional software component. An assembly is an abstract entity for coordinating communications, not for performing computational algorithms. It is a scripted configuration that acts out the role of a software component.⁴

⁴ The question of whether the software guidelines of section 8.3 apply equally to both software components and assemblies is under investigation.

There are several benefits of producing an assembly from a script. First, the assembly facilitates software development by composition.
The coordination of communications among software components does not stop with the definition of a script. As a software component, the script freely interacts with other software interactions to build other scripts of greater size and complexity. Second, the assembly encourages modular development of IBS programs. As a software component, a script may be repeatedly invoked to form other scripts of varying designs. Changes to the original script are localized and hidden from its users. Third, the assembly promotes an alternative means of reusing a script. Rather than extracting its parts or embedding the code directly into another script, a script may be set aside as a software component ready for reuse. Users search for and integrate the scripts or software components that meet their needs.

ActivityComp* helicMotion() {
          /* produces sequences of values */
(1)       EvHelixPt *ev = new EvHelixPt();

          /* identifies two proxy ports */
(2)       InProxy *iProxy = new InProxy( "in" );
(3)       OutProxy *oProxy = new OutProxy( "out" );

          /* virtual port sums incoming proxy value and ev's current value */
(4)       VPort *newPos = *ev->outCurVal() + *iProxy->outVal( "in" );

          /* interactions to compute new position */
(5)       DurationEvent *durEvt = new DurationEvent( ev->inDurationVal() );
(6)       TimeEvent *evt0 = new TimeEvent( ev->inCalcValue() );
(7)       Event *eSet = new Event( newPos, oProxy->inVal( "out" ) );

          /* add interactions to a coordination activity */
(8)       ActivityComp *act = new ActivityComp( "spin" );
(9)       act->addInitEvent( durEvt );
(10)      act->addActEvent( evt0, eSet );
(11)      return act;
}

Figure 6.11: An assembly for producing a helical motion

The code in Figure 6.11 exemplifies the production of an assembly that computes helical motion. The assembly augments a script with two proxy ports, iProxy and oProxy. The proxy ports establish an external interface for the assembly. As illustrated in Figure 6.12, iProxy accepts an incoming value while oProxy emits an outgoing value. The software interactions of the assembly execute each time iProxy receives a new value. The last software interaction, eSet, provides oProxy with an outgoing value. The virtual port (see section 5.2), which provides eSet with information, computes the result of parameterization by adding the incoming parameter value to the outgoing value of ev, a software component for computing helical values.

Figure 6.12: The assembly of Figure 6.11

Applying an Assembly

The code in Figure 6.13 exemplifies the application of the assembly by another script that moves a ball toward a pole. The assembly updates the linear movements of the ball to produce a helical motion. The script interfaces with the assembly by accessing the proxy ports of the assembly by name (lines 7-8). The "in" proxy port obtains a parameter value from the motion interpolator ev via the event evt0 while the "out" proxy port returns a helical value that the event eSet uses to set the ball's position.
Activity* moveHelically( Model *ball, Model *pole ) {
          /* obtains assembly */
(1)       ActivityComp *hlMv = helicMotion();

          /* produces a sequence of values */
(2)       EvolutePt *ev = new EvolutePt();

          /* interactions that move a ball towards a pole */
(3)       Event *efB = new Event( ball->outPosition(), ev->inBeginVal() );
(4)       Event *efF = new Event( pole->outPosition(), ev->inFinishVal() );
(5)       DurationEvent *durEvt = new DurationEvent( ev->inDurationVal() );
(6)       TimeEvent *evtT = new TimeEvent( ev->inCalcValue() );

          /* interactions that interface with the assembly */
(7)       Event *evt0 = new Event( ev->outCurVal(), hlMv->inProxy("in") );
(8)       Event *eSet = new Event( hlMv->outProxy("out"), ball->inPosition() );

          /* add interactions to an activity */
(9)       Activity *act = new Activity( "moveFSA3" );
(10)      act->addInitEvent( durEvt, efB, efF );
(11)      act->addActEvent( evtT, evt0, eSet );
(12)      return act;
}

Figure 6.13: A script using the assembly of Figure 6.11 to move a ball towards a pole helically

Hierarchical Assemblies

An assembly may reference other assemblies to produce a hierarchical assembly. As illustrated in Figure 6.14, the hierarchical assembly consists of multiple assemblies, all of which are bound by additional software interactions.⁵ As information flows into the hierarchical assembly via the proxy ports, the software interactions incite communications and establish the values of the outgoing information, which the proxy ports provide to other assemblies upon demand.

⁵ The diagram of Figure 6.14 is unrelated to the diagram of Figure 6.12.

Figure 6.14: A hierarchical assembly

6.3.3 Extended Example Revisited

As illustrated in Figure 6.16, the code in Figure 6.15 partially reuses the code of the extended example to produce an assembly that accepts two references, one for a robot and another for a light. The assembly determines if the robot successfully moves to a specific location while the light is stationary. Specified by a proxy port (line 1), the value of the location is determined when the assembly is in use. In reusing the original code, the assembly duplicates the virtual ports determining contact (lines 3-5), and converts into additional virtual ports (lines 6-7) the virtual events determining light movements. The conversion (reasonably) replaces the intent of the virtual events without referencing other events, as specified by the original coding.

The conversion of the virtual events is performed with the port-producing function prevState. The function accepts a port as an argument and returns a new port that yields the original port's most recently requested value. The virtual ports moving and rotating (lines 6-7) use prevState to compare the current and previous values of the light's ports that indicate position and orientation. A change in either value indicates that the light is moving or rotating. To reuse the original code exactly, without changes, would require that the assembly also accept references to software interactions, such as evA and remDur, that were accessed by the original virtual events. Rather than reacting to the execution of a software interaction, the new code reacts to changes of state.
Since software components are time-invariant, it is safe to infer an equivalence between a state change and the execution of a software interaction. State changes occur only after a software interaction invokes an inport to consume information. The concluding code for the assembly forms an interaction and a script (lines 9-11). Together, they extract information from the virtual ports and prepare the information for the outgoing proxy port (line 2).

Activity* robotTouchWhileLightFixed( Robot *rob, Light *lght ) {
          /* create assembly by identifying two proxy ports */
(1)       InProxy *iProxy = new InProxy( "in" );
(2)       OutProxy *oProxy = new OutProxy( "out" );

          /* virtual ports to check if robot touches position identified by inProxy */
(3)       OPort *hand = *rob->outHand() + *rob->outPos();
(4)       OPort *dist = Dist( hand, iProxy->outVal("in") );
(5)       OPort *zero = (*dist == 0);

          /* virtual ports to determine if light neither moving nor rotating */
(6)       OPort *moving = (prevState( lght->outPos() ) != lght->outPos());
(7)       OPort *rotating = (prevState( lght->outOrient() ) != lght->outOrient());
(8)       OPort *ready = zero && !(moving || rotating);

          /* pass boolean to outProxy */
(9)       Event *eSet = new Event( ready, oProxy->inVal( "out" ) );

          /* add event to activity */
(10)      Activity *act = new Activity;
(11)      act->addActEvt( eSet );
(12)      return act;
}

Figure 6.15: An assembly formed from the script of the extended example

Figure 6.16: The assembly of Figure 6.15

The code segment in Figure 6.17 applies the assembly of Figure 6.15. It produces two instances of the assembly by invoking robotTouchWhileLightFixed twice. Each call acts upon the same light but with a different robot, rob or robZ (lines 27-28). Each instance receives information from an incoming proxy port, as indicated by touchA for rob and by touchZ for robZ (lines 29-30).
-  '  '  *actCompA = robotTouchWhileLightFixed( rob, lght ); *actCompZ = robotTouchWhileLightFixed( robZ, lght );  / * does either robot touch button * / *touchA = new Event( but->outPos(), a c t C o m p A - » i n P r o x y ( " i n " ) ) ; *touchZ = new Event( but-*outPos(), actCompZ-MnProxy("in"));  / * does any robot touch as light is stationary * / *anytouch = (actCompA->-outProxy("out") || actCompZ-^outProxy ("out")); cevM( anytouch ); cevM.setTrue( vEv2 );  / * form script * / *act = new MultiActivity; a c t - » s e t A c t E v t ( T H R E A D J D , tevY, fsaEvt, evD, evF, touchA, touchZ, cevM ) ;  F i g u r e 6 . 1 7 : A usage o f the assembly for the extended example  CHAPTER  6. MULTI-MODELING  METHODS  85  Activity* robotTouchLightButton()  (1) (2) (3) (4) (5) (6) (7) (8) (9) (10) (11) (12) (13) (14) (15) (16) (17) (18) (19) (20) (21) (22) (23) (24) (25) (26)  (27) (28) (29) (30) (31) (32) (33) (34) (35) (36) (37)  { / * pass info from outport to inport(s) * / Event evA( rob->outPos(), pCtr-MnBegin() ); Event evB( rob—>outHand(), hCtr->inBegin() ); Event evC( pCtr-)-outVal(), rob-MnPos() ); Event evD( hCtr->outVal(), rob-HnHand() ); Event evF( rob->outHand(), grph-MnXY(), filtXY ); FanEvent evfl_but( but-»outPos(), hCtr->-inEnd(), pCtr-MnEnd() /*• virtual ports to determine if robot touches button * / hand global position OPort *hand = *rob->-outHand() + *rob->outPos(); dist: hand & button OPort *dist = Dist( hand, but-»outPos() ); hand touch button? OPort *zero = (*dist == 0); / * new events for extended example * / FanEvent evfI_rob( rob2->outPos(), Ctr->inEnd(), pCtr-MnEnd() ); Event • mvLight( lposCtr->outVal(), lght-nnPos() ); move light Event rotLight( loreCtr->outVal(), lght-MnOrientQ ); rotate light Event chgnCol( clrCtr-*outClr(), lght-MnColorQ ); change light's color CallEvent upClr( clrCtr->inChgClr() ); update color control CallEvent lightTog( lght^inToggle() ); toggle light state ChainEvent chnEvts( upClr, chgnCol ); chain: color,update / * pass duration to inports * / RemDurEvent remDur( pCtr—»inDuration() ); DurationEvent devX( hCtr—>inDuration(), pCtr—»inDuration() ) / * pass time step to inports * / TimeEvent tevY( hCtr->inTime(), pCtr-KnTime() ); / * determines the meeting of the two robots * / OPort *meet = ( *rob->-outPps() == *rob-»outPos() ); are positions same? / * finite state automata activity * / ' FsaActivity fsaAct; fsaAct.setState( 0, evFLbut, evC && evD ); sO (initial, acting) fsaAct.setState( 1, evFI_rob, evC && evD ); si (initial, acting) fsaAct.setTransition( 1, 0, meet || 60 ); transition: 1 to 0 CallEvent trnsEvt( fsmAct.inTransition(0,l) ); do transition 0 to 1 / * virtual events to constrain events * / VirEvent *vEv = trnsEvt + evA + remDur; sequence events VirEvent *vEv2 = lightTog && (chnEvts || vEv); light-) color or vEv / * invoke inport info if outport is true * / OEvent *ready = zero && !(mvLight || rotLight); button and no light CondEvent cevM( ready ); cevM.setTrue( vEv2 ); / * time delay events * / WaitEvent, wEvtl0( 10 ), wEvt20( 20 ); || / * organize software interactions * / Multiprocess *act = new Multiprocess; || act->setInitEvt( T H R E A D . 
Chapter 7

General Specification

Scene modeling is the building of virtual scenes with a specification that describes the shape and material characteristics of geometric forms. Scene modeling is widely applied to develop the many aspects of interactive applications that are either unchanging or reactive to user inputs. Software interactions and the organizational methods of this thesis expand existing specifications for scene modeling, in particular VRML [18] and JAVA3D [33], with a systematic approach to animating state. This approach, embodied as a general specification (for scene animation), develops visual simulations from an organization of parts, and encourages the development of a general file format for visual simulation that is logical, extensible, and device-independent. In developing visual simulations by parts, developers apply a consistent approach to assemble and disassemble the production of animated scenes. Developers test and update their scenes by tweaking their parts, modifying their assembly, or introducing enhancements. By using a general file format for visual simulation, developers produce data files that are consistent and easy to share. Visual simulations quickly exchange animations and apply them in new and alternative contexts.

Building upon IBP and its enhancements, the general specification consists of three hierarchical graphs: model, action, and time. The organization of the three graphs and their respective parts determines what appears and what occurs in visual simulations over time. The model graph extends the "scene graph," the common abstraction of scene modeling, to integrate the building of geometric forms with the unique features of IBP, such as ports and interaction constraints. The action graph constructs IBP scripts for animating the constructs of the model graph. The scripts apply software interactions to bind the constructs of the model graph with computational components, such as keyframe interpolators. The scripts ultimately decide how the visual simulations update state and vary over time. The time graph produces meta-scripts for controlling the temporal properties of the action graph. The action graph indicates what happens in a scene while the time graph indicates when the actions of the scene occur.

The diagrams of Figure 7.1 exemplify the basic designs of the three graphs. Each graph appears as a collection of nodes, which are either structural or descriptive. Structural nodes organize the topology of the graph while descriptive nodes modify and establish state. The nodes of each graph vary in design and application. The nodes of the model graph form geometric shapes with location, orientation, and color. The ordering of nodes establishes a distinctive association between geometries and material attributes. The nodes of the action graph organize software interactions into distinct groups. Each group applies a separate logic to the ordering and execution of software interactions. The nodes of the time graph specify a temporal ordering for scripts. Each node of the time graph inherits an interval of time that begins at the root and changes continuously as it travels down to the leaves.
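This inheritance of intervals can be sketched in a few lines of C++. The sketch is an illustrative reading of the time graph's behavior rather than code from the specification; it assumes, purely for illustration, that a child's interval begins at an offset within its parent's interval and is clipped so that it cannot outlive its parent.

#include <algorithm>

// Minimal sketch of interval inheritance in a time graph; the names are
// hypothetical, not those of the general specification.
struct Interval { double begin, end; };

// A child node derives its interval from its parent's: it starts at an
// offset within the parent and may not extend past the parent's end.
Interval derive(const Interval& parent, double offset, double duration) {
    double b = parent.begin + offset;
    double e = std::min(b + duration, parent.end);
    return { b, e };
}

int main() {
    Interval root{0.0, 60.0};                   // the whole simulation
    Interval scene = derive(root, 10.0, 30.0);  // [10, 40]
    Interval shot  = derive(scene, 5.0, 50.0);  // clipped to [15, 40]
    return (shot.begin == 15.0 && shot.end == 40.0) ? 0 : 1;
}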
[Figure 7.1 depicts exemplary instances of (a) a model graph with group, material, sphere, and cube nodes; (b) an action graph with an Activity node whose Initial and Acting groups hold event nodes (EvntDT, EvntCP, EvntM, GroupEvent); and (c) a time graph with Interval, Start, and Finish nodes.]
Figure 7.1: The three graphs of the general specification

The remainder of this chapter details the definition of the general specification with its three graphs and the use of the general specification in developing a general file format for animation. The descriptions of the three graphs include a discussion of their parts and an evaluation of their application. The utility of the three graphs is demonstrated through the creation of the animation of the general example. The chapter concludes with a discussion of the relationships between the general specification and two popular specifications for scene modeling, VRML and JAVA3D.

7.1 Model Graph

A model graph is an acyclic, hierarchical structure for assembling geometric forms. The hierarchy of nodes within a model graph defines a modeling state, a set of transformations and visual properties that affect the size, material attributes, and orientation of geometrical primitives. The positioning of nodes in a model graph decides the value of the modeling state and the application of the modeling state to the geometrical primitives.

7.1.1 Parts

As shown in Table 7.1, the nodes of the model graph are one of four kinds: shape, transform, appearance, and group. The first three appear as leaves of a model graph while the last establishes the topology of the graph and forms relations between nodes. Shape nodes represent geometrical primitives, such as spheres, cubes, and nurb surfaces. Transform nodes apply geometrical transformations, such as translation, scaling, and rotation, to the shape nodes so as to decide their position, orientation, and size. Appearance nodes assign material attributes, such as color, opacity, and reflectivity, to the shape nodes. Group nodes unite shape nodes, transform nodes, and appearance nodes into a single assembly. The composition of an assembly determines the associations between nodes and the scope of each node's influence within the hierarchy. GEOVIEWSWITCH, for instance, limits the scope of its nodes according to the viewing conditions of a virtual camera.

Type        Kinds
SHAPE       GeoSphere, GeoCube, GeoCylinder, GeoSpline, etc.
TRANSFORM   MatrixRS, QuartMatrix, etc.
APPEARANCE  Material, TextureMap, Complexity, LightingModel, etc.
GROUP       GeoHorzComp, GeoSwitch, GeoViewSwitch, etc.

Table 7.1: Kinds of model graph nodes

Fields and Ports

Every node in a model graph consists of a collection of fields and ports. The fields are identifiers for storing the state of a node and for establishing the node's state space. A cylinder, for instance, usually contains two fields, one for its radius and another for its height. Ports are first-class, unidirectional operations that manipulate the values of fields. As discussed in section 4.2.2, ports freely support software interactions and the methods of IBP. For instance, the ports inSetRadius and outRadius are two ports affecting the value of a cylinder's radius. In conjunction with software interactions, the first port imports a new value for the radius while the second port exports the radius' current value.

7.1.2 Evaluation

The evaluation of a model graph begins with a default state and progresses depth-first from the root of the graph to individual leaves.
The default state, which consists of the identity matrix and a list of visual properties, defines the initial values of the modeling state. As the evaluation of a model graph progresses, the modeling state updates according to the placement and definition of individual nodes. The nodes of type TRANSFORM update the modeling state with a transformation matrix while the nodes of type APPEARANCE update the modeling state with visual properties.

The group nodes of a model graph decide how changes to the modeling state accumulate and how branch points in the model graph distribute their inherited modeling state to their descendants. Changes to the modeling state are normally local to a group node and affect only those nodes that are within the same group or further down the hierarchy. For instance, the material node of Figure 7.1a creates changes to the modeling state that affect only the sphere node, which resides in the same group node as the material node. The cube node is unaffected because the cube node is neither a member nor a descendant of the group containing the material node.

The shape nodes of the graph, such as the cube, inherit the modeling state set by their positions in the hierarchy. Shape nodes normally inherit only those changes to the modeling state that are set by their predecessors. Thus, a shape node near the root of a graph inherits a modeling state quite different from that applied to a shape node near the leaves of the graph.

7.1.3 File Format

The composition and representation of the model graph readily support a device-independent file format. As shown in Figure 7.2a, the definition of a node consists of a name, a nodeType, and a list of fields and ports. The name of a node is optional and required only when other nodes, such as those of the action graph, reference the node by its name.¹ The nodeType of a node is an identifier of the node's type, which is any of a variety of kinds, as presented in Table 7.1. All fields and ports, as listed within a node, are declared with three identifiers. A field consists of a dataType, a fieldId, and a fieldValue. The dataType indicates a type, such as a float or integer; the fieldId provides a name; and the fieldValue presents an initial value. A port consists of a portType, a portId, and a dataType.

¹ As in VRML, the name of a node is preceded by the term DEF and a reference to the same node by its name is preceded by the term USE.
The field radius stores a floating-point number, while the ports outRadius and inRadius each communicate a floating-point number. The first port emits a number for a software interaction to read while the second port accepts a number for a software interaction to write. The parts that are missing from body, such as its height, are given default values as set by the node's definition.
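To make the format concrete, the following sketch models a parsed node as a small in-memory structure. The type and member names (Field, Port, Node) are illustrative assumptions, not part of the RASP library.

    #include <string>
    #include <vector>

    // Hypothetical in-memory form of a parsed model-graph node.
    struct Field {
        std::string dataType;   // e.g., "RS_FLOAT"
        std::string fieldId;    // e.g., "radius"
        std::string fieldValue; // e.g., "10.0"
    };

    enum class PortDir { In, Out };  // IPORT or OPORT

    struct Port {
        PortDir     direction;  // the portType always encodes direction
        std::string portId;     // e.g., "outRadius"
        std::string dataType;   // e.g., "RS_FLOAT"
    };

    struct Node {
        std::string name;       // optional; set only when the node is DEF'd
        std::string nodeType;   // e.g., "GEO_CYLINDER"
        std::vector<Field> fields;
        std::vector<Port>  ports;
    };

A reader of the format of Figure 7.2 would fill one Node per DEF block and supply unlisted fields and ports from the defaults of the node's type.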
7.2 Action Graph

Similar to a model graph, an action graph is a hierarchical structure. The hierarchy of the model graph assembles geometric forms while the hierarchy of the action graph assembles IBP scripts. Unlike the model graph, which produces a single structure, the action graph consists of four separate parts, all working together to coordinate the sequencing and execution of software interactions. The first part defines nodes that are local to the definition of a script. The remaining three parts apply a hierarchy of nodes to develop a data structure that either manages or simplifies the building of IBP scripts with software interactions.

7.2.1 Parts

The four parts of an action graph are LOCALS, EVENTS, VIRTUALS, and ACTIVITIES. The first three parts, which are all optional, simplify the development of the last part, which builds a script from a hierarchy of nodes. The LOCALS define individual nodes that are local to the definition of the action graph.² The LOCALS encourage the encapsulation of nodes within the body of scripts that apply them. The nodes appearing in the LOCALS are usually time-varying nodes, such as the hand and positional controllers of the general example, or helper components, such as the color controller of the extended example. These nodes have limited scope and are best introduced within the context of the action graph.

The EVENTS define the software interactions of the script that are commonly applied by its ACTIVITIES. These interactions communicate information between either the nodes of model graphs or the nodes appearing in the LOCALS. Hierarchical structures develop within the EVENTS as individual interactions use or organize additional interactions. For example, a conditional interaction often applies other interactions to link the communications of a system to run-time conditions.

The VIRTUALS define interaction constraints for the script to apply in simplifying the definition of complex communications. The constraints either bind the ports of model graphs, which may appear in the LOCALS, or link the execution of software interactions, which appear locally or in the EVENTS. The VIRTUALS develop hierarchically as individual constraints repetitively apply other constraints in their definitions. The code in Figure 5.2, for instance, introduces a chain of interaction constraints. The chain progressively constructs a hierarchy of nodes as it accesses additional numbers of ports and interaction constraints.

The ACTIVITIES apply the first three parts in producing an IBP script from an ordering of nodes, of which there are three kinds: interactional, operator, and grouping. Interactional nodes define software interactions. The nodes define the interactions explicitly or by referencing the interactions defined in the EVENTS. Normally, software interactions that appear once in the hierarchy manifest in the ACTIVITIES while those that appear multiple times or those involved in interaction constraints manifest in the EVENTS. Operator nodes define an ordering, such as parallelism or sequence, to interactional nodes. Grouping nodes assemble interactional nodes into groups, with each group applying an alternative method to order its nodes. The groups order their nodes with either functional or declarative methods, as discussed in section 6.2.1, or a combination of both. The hierarchy of the action graph grows as the ACTIVITIES repetitively embed nodes into groups and as interactional nodes reference other nodes in their definitions.

² In general, the model graph and its nodes are globally visible to all other nodes, from either the action graph or time graph. To support greater encapsulation of information, model graphs may be defined locally within an action graph.
As 3  New types will undoubtedly become available as IBS improves in functionality and design.  CHAPTER  7.  GENERAL  93  SPECIFICATION  [ DEF <name> ] <actionType> { [ L O C A L S {-..}] [ EVENTS {...}] [ VIRTUALS {...}] ACTIVITIES { ... } } Figure 7.3: The syntax of an action graph  found in Figure 7.4a, each interaction appears with a name, an interType,  a list of fields. The  name, set by the same naming convention as that of the model graph, is a required identifier. The ACTIVITIES of the action graph reference a node by its name when arranging the nodes into a sequence. The interType identifies a node's type and the interaction that the node represents. As discussed in section 4.2.2, software interactions are of many types, and each references an assortment of ports and filters. A l lfieldsare declared with two identifiers: fieldld  and fieldValue. The fieldld provides a name and the fieldValue presents an initial  value, which may be any numerical constants or the names of nodes or ports, as set by the naming convention established by the model graph.  [ DEF <name> ] <interType> { [ <fieldld> <portRef> ] [ <fieldld> <fieldValue> ] } <portRef> = <nodeId> <portName> -A-  DEF eventA COND.EVENT { source [ ID.COMP OP-BOOLEAN ] true { DEF eventB EVENT { source [ IXLC0MP1 OP.POS3 ] target [ ID.C0MP2 IP_POS2 ] filter ID.FILT } } -B-  Figure 7.4: The syntax of the E V E N T S of an action graph  The text of Figure 7.4b, for instance, presents the definition of EventA, a conditional interaction containing a source and body. The source references the port 0P_B00LEAN from the node ID_C0MP while the body defines EventB, a second interaction containing a source, target, and filter. EventB communicates information between the ports 0P-P0S3 and 0P_P0S2 of the nodes ID_C0MP1 and ID_C0MP2 via the filter ID_FILT. The second interaction is local to the definition of EventA and executes each time that 0P_B00LEAN provides a positive value.  CHAPTER 7. GENERAL SPECIFICATION  94  The nodes associated with the VIRTUALS define interaction constraints. As shown in Figure 7.5, the interaction constraints apply the same naming convention as that applied by the E V E N T in referencing ports and interactions. Every interaction constraint appears as an expression containing either an operator or a function. Both operators and functions accept references to ports, virtual ports, or numerical constants.  The name of the interaction  constraint is set by the return value of its expression. The example in Figure 7.5 presents two interaction constraints, VP_HAND and VP-CONTACT. The first constraint sums the value of two ports while the second computes the distance between the first constraint and the origin.  <vpName> = <vpFunction> <portRef | vpName | CONSTANT> | <portRef | vpName | CONSTANTS <operator> <portRef | vpName | CONSTANT> VP-HAND = [ ID.ROB, OP-HAND ] + [ ID .ROB, OP J O S ] V P . C O N T A C T = distance( VP_HAND, ORIGIN ) Figure 7.5: The syntax of the V I R T U A L S of an action graph  The nodes associated with the ACTIVITIES assemble an ordering to the software interactions of the script. As appearing in Figure 7.6a, every node appears with an association, an orderingType, and a collection of threads. The association links the node to an identifier, such as "InitialEvents" or "ActingEvents," that determines the node's role in a script. The association "InitialEvents," for instance, is understood by a script to execute immediately after the script begins. 
The identifiers understood by a script vary according to the script's type. The orderingType of a node specifies a particular ordering to a set of interactions. Different types apply alternative methods to decide the frequency and order of software interactions. The threads assemble software interactions into lists. Each list begins with an optional label containing a numerical value, which by default is zero. Threads apply operators to the list of interactions, which are defined either in the EVENTS or locally within the list, to identify a distinctive ordering, such as that involving parallelism or repetition. Figure 7.6b, for example, creates a single thread with four interactions, of which the last is defined locally. The four interactions occur in sequence with the second and third interactions occurring in parallel, as indicated by the + operator.  CHAPTER  -A- j  -B- 1  7. GENERAL  SPECIFICATION  95  SET <association> <orderingType> { thread<#> <USE | DEF> <interaction> [ , | <USE|DEF> <interaction> }  SET initialEvents SequenceEvents { threadO USE ID.evI, USE ID.evF + USE ID.evG, D E F ID.cevM Event { . . . } }  Figure 7.6: The syntax of the A C T I V I T I E S of an action graph  7.2.4  Extended Example Revisited  The diagram in Figure 7.7 visualizes the action graph of the script appearing in Figure 5.4.  4  The graph contains eighteen nodes, of which two are grouping, thirteen are interactional, and two are operator. The grouping nodes partition the interactions into two separate sequences: INITIAL E V E N T S  and A C T I N G E V E N T S . The interactions of the former execute immediately  after the script begins while the interactions of the latter execute repeatedly whenever the script is active. The two operator nodes, both of kind +, execute interactional nodes in parallel. The first operator executes evA with evB while the second executes evE with evF. Activity  Figure 7.7: The action graph of Figure 4.4 The text in Figure 7.8 presents a listing of the action graph of Figure 7.7. Entitled "lightbyRobot," the listing appears as it would in a file or database for sharing and storing the script. The listing begins with the optional parts of the script (lines 2-31) and concludes with the ordering of two lists of interactions (lines 32-40). The operational parts define a filter, three interaction constraints, and sixteen software interactions. The filter is applied by The code of Figure 7.7 accesses several global variables, which are not shown for the sake of brevity. 4  CHAPTER  7. GENERAL  SPECIFICATION  96  ID_evF to connect two components that communicate different kinds of data. The interaction constraints create virtual ports for determining whether a robot selects a button.  The  listing concludes with two lists of interactions, established by the nodes I n i t i a l E v e n t s and ActingEvents (lines 32-37). Each node specifies a different sequence for a separate selection of interactions.  7.3  Time Graph  A time graph is a hierarchical data structure that describes the temporal characteristics of an animated scene. As illustrated in Figure 7.9, it consists of a hierarchy of nodes for building a temporal state and assigning the state to scripts (or even other time graphs). The temporal state establishes a set of values and conditions that decide when and how scripts operate. A time graph of greater size and depth produces numerous changes to the temporal state and generally describes scenes of greater complexity. 
A time graph is similar to a model graph in that the positioning of its nodes is instrumental in determining how its nodes relate to each other and how its nodes inherit the graph's state. In a time graph, script nodes inherit the temporal state established by their positions. The two graphs are dissimilar in that the time graph permits cycles while the model graph does not. A cycle in a time graph is useful for producing repetitive behaviors and re-occuring events. 7.3.1  Parts  As shown in Table 7.2, there are five types of nodes of a time graph: timers, relations, modifiers, groups, and scripts. Timers, modifiers, and scripts form the leaves of a time graph  while relations and groups organize the leaves into a hierarchy. Timer nodes specify the temporal characteristics of a scene by manipulating an interval of time, controlling temporal properties, and establishing temporal delays. An interval of time defines a range of time upon which a script bases its computation. From the interval, the computations obtain a duration and time step. Some nodes, such as TMlNTERVAL, initialize an interval's range while others, such as T M S H I F T I N T E R V A L , apply transformations to an interval to shift or scale its range. Temporal properties control the attributes of an interval, such as granularity. An interval with high granularity causes a script to update its computations frequently. Temporal delays relate the passage of time. They extend an interval of time to include a period of inactivity upon which scripts cease to work. Relation nodes establish temporal relations among intervals of time. Appearing in  CHAPTER 7. GENERAL SPECIFICATION  97  DEF lightByRobot Activity { LOCALS{ (2) (3). DEF ID.FILT01 FilterXY { ...} (4) } VIRTUALS { (5) DEF VP-HAND = [ ID J O B , OPJHAND ] + [ ID_ROB, OP J O S ] (6) DEF VP.DIST = DIST( [ VP_HAND, [ ID.BUT, OP J O S ] ) (7) DEF VP_ZERO = VP.DIST == 0 (8) (9) } (10) EVENTS{ DEF ID.evA Event { (11) (12) source [ ID.ROB, OP J O S ] (13) target [ ID J C T R , IP .BEGIN JOINT ] (14) } (15) (16) DEF ID.evF Event { source [ ID.ROB, OPJIAND.POS ] (17) (18) target [ ID.GRPH , IPJCY ] (19) filter USE ID.FILT01 (20) } (21) (22) DEF ID.cevM Event { (23) source [ ID J) IST, OP.ZERO ] (25) true CallEvent { [ ID J G H T , IP.TOGGLE ] } (25) . } (26) (27) DEF ID.evQ Event { (28) source [ ID.ROB, OP J O S ] target [ ID .SUB, IP JOINT1 ] (29) (30) } (31) } (32) ACTIVITIES { (33) SET InitialEvents SequenceEvents { (34) USE ID.evI, USE ID.evX, USE ID.evA + USE ID.evB (35) } (36) SET ActingEvents SequenceEvents { (37) USE ID.tevY, USE ID.evC, ..., USE ID.evQ (38) } (39) } (40) } (1)  Figure 7.8: The file format for the script of Figure 7.7  Table 7.3, the temporal relations establish time-based conditions for deciding when an interval begins and ends. Some relation nodes, such as T M D E L I M I T S , initiate and terminate intervals simultaneously while others, such as T M M E E T S and T M S E Q U E N C E , order intervals sequentially or according to the advancement of time. With relation nodes, the execution of  CHAPTER  7. 
    α Relation β   Description                                       Inactive β       Active β
    TMSTOPS        when α starts, β stops                            N/A              stops
    TMSTARTS       when α starts, β starts                           starts           restarts
    RESTARTS       when α starts, active β stops and restarts        N/A              restarts
    INSTARTS       when α starts, inactive β starts                  starts           N/A
    TMDELIMITS     when α starts (stops), β starts (stops)           starts & stops   starts & stops
    REDELIMITS     when α starts (stops), β restarts (stops)         N/A              starts & stops
    INDELIMITS     when α starts (stops), inactive β starts (stops)  starts & stops   N/A
    TMMEETS        when α stops, β starts                            starts           restarts
    REMEETS        when α stops, active β stops and restarts         N/A              restarts
    INMEETS        when α stops, inactive β starts                   starts           N/A

Table 7.3: The effects of temporal relations on active and inactive scripts

Modifier nodes assign conditions to the temporal state and its application to scripts. The conditions relate the temporal state to special occurrences, such as the states of run-time variables or the past execution history of a script. For example, the modifier node TMINACTIVE indicates that the temporal state affects only scripts that are inactive. When applied with TMDELIMITS, the TMINACTIVE node produces INDELIMITS, a temporal relation behaving similarly to TMDELIMITS but working only with scripts that are inactive.

Group nodes arrange the nodes of the time graph into groups, with each group inheriting a single interval of time. The group nodes decide which nodes of a group are active and how the nodes of a group relate. For instance, the nodes TMGROUP and TMSWITCH apply different means to manage a set of nodes. The former group node activates the entire set while the latter activates only one at a time.

7.3.2 Evaluation

The evaluation of a time graph involves a depth-first traversal of its hierarchical structure. The traversal process begins with default values and works incrementally to build a temporal state and apply it to every script in the graph. Changes to the temporal state evolve as the traversal process encounters nodes of varying types, as shown in Table 7.2. Each new node produces a new change or modifies a previous update. Nodes along the same path from root to leaf produce incremental changes while those along differing paths produce varying results. As the traversal process returns to a previous level, it undoes the changes performed by nodes of lower levels. For example, upon returning from the TMMEETS node of Figure 7.9, the changes of the TMREACTIVATE node are undone. The effects of the TMREACTIVATE node apply only to script A, not to scripts Z or E.

    Script        *A, *Z, *E, *R;
    TmReActivate  *reactive = new TmReActivate;
    TmMeets       *met      = new TmMeets( R, reactive, A );
    TmGranularity *grn      = new TmGranularity;
    TmSequence    *seq      = new TmSequence( met, grn, Z, E );
    TmStarts      *sta      = new TmStarts( A, seq );
    TmInterval    *rt       = new TmInterval( sta );

Figure 7.9: An exemplary time graph that uses temporal relations
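The traversal just described can be sketched as follows: copying the temporal state on entry to each node plays the role of the push, and discarding the copy on return undoes the changes of lower levels. TemporalState and TimeNode are assumed, simplified types.

    #include <vector>

    struct TemporalState {
        double begin = 0.0, end = 0.0;
        double granularity = 1.0;
    };

    struct TimeNode {
        virtual ~TimeNode() = default;
        // Each node type alters the inherited state in its own way.
        virtual void apply(TemporalState& state) const {}
        std::vector<TimeNode*> children;
    };

    // Depth-first evaluation: copying the state on entry is what
    // "undoes" a node's changes when the traversal returns.
    void evaluate(const TimeNode& node, TemporalState state) {
        node.apply(state);                 // modify the local copy
        for (const TimeNode* child : node.children)
            evaluate(*child, state);       // children inherit the copy
    }   // the copy is discarded here; the parent's state is untouched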
The top node of a time graph is normally a TMINTERVAL node. The node establishes an initial reference interval for the remainder of the tree to inherit. An interval is defined as a span of time with a beginning and an end.⁵ Every non-TMINTERVAL node of the hierarchy forms its own interval by inheriting the interval of its parent. They either copy the interval or derive a new one by applying modifications or invoking a temporal relation. Every script within the hierarchy also inherits the interval of its parent. The inherited interval determines the times that the script begins and ends.

⁵ The beginning and end of an interval may coincide to represent an instantaneous moment in time.

The illustration and code segment of Figure 7.9 present a sample time graph with three of the node types of Table 7.2. The TMINTERVAL node establishes an interval of time for the TMSTARTS node to inherit. The TMSTARTS node instructs script A and the TMSEQUENCE node to begin simultaneously. The TMSEQUENCE node sequences through scripts R, Z, and E. The TMGRANULARITY node modifies the update frequencies of scripts Z and E. The TMMEETS node, with the modifier TMREACTIVATE, indicates that script A "restarts" when script R finishes.

7.3.3 File Format

Like both the model graph and action graph, the time graph decomposes readily into a device-independent file format. As shown in Figure 7.10a, the definition of every node within a time graph consists of an optional name, a timeType, and a body. The timeType identifies the node as a type of timer, relation, modifier, or group. The body consists of either fields with values or references to nodes and scripts. The fields set the state of a node while the nodes and scripts provide the node with data. The listing of Figure 7.10b, for example, defines movements, a group node containing a TMINTERVAL node and two scripts. The TMINTERVAL node defines a span of time for the two scripts to reference in their computations.

    -A-
    [ DEF <name> ] <timeType> {
        [ <dataType> <fieldId> <fieldValue> ]
        [ <node> | USE <script> ]
    }

    -B-
    DEF movements TmGroup {
        TmInterval {
            RS_FLOAT begin 20.0
            RS_FLOAT end 30.0
        },
        USE ID_moveBall,
        USE ID_moveHand
    }

Figure 7.10: The syntax of a time graph

7.3.4 Contextual Example Revisited

The illustration and code segment of Figure 7.11 present the time graph for the large scene presented in Figure 3.3. The time graph consists of fifteen nodes, of which five are timers, three are temporal relations, and two are groups. The five remaining nodes reference four scripts: robLight, robPush, mvBox, and mvBarge. The two references to robLight apply two temporal relations to the script, one for delimiting and another for stopping.
    /* robot lights stationary light */
    Script *robLight = RobotLight(...);
    /* robotB pushes boxes */
    Script *robPush = RobotPush(...);
    /* crane moves boxes */
    Script *mvBox = moveCrane(...);
    /* barge moves */
    Script *mvBarge = moveBarge(...);

    TmStops         *stp     = new TmStops( mvBarge, robLight );
    TmGranularity   *grain   = new TmGranularity( ... );
    TmGroup         *tmGrp   = new TmGroup( grain, robPush );
    TmHaltTime      *htTime  = new TmHaltTime( ... );
    TmGroup         *tmGrp2  = new TmGroup( htTime, mvBox );
    TmShiftInterval *shift   = new TmShiftInterval( ... );
    TmHaltTime      *htTime2 = new TmHaltTime( ... );
    TmStarts        *sta     = new TmStarts( tmGrp, tmGrp2, shift, stp );
    TmDelimits      *del     = new TmDelimits( robLight, htTime2, sta );
    TmInterval      *rt      = new TmInterval( 10, infinity, del );

Figure 7.11: A time graph of the extended example

The root of the time graph initializes the program with a reference interval. The interval begins at ten and terminates at infinity. The nodes of level one and level two indicate that the execution of robLight delimits the execution of levels three and four. The two lower levels begin and end with the activity of robLight. The nodes of level three indicate that robPush starts the actions of mvBox and mvBarge. mvBox starts simultaneously with robPush while mvBarge commences after a short delay. The nodes of level four alter the temporal state of robPush and mvBox, and relate mvBarge to robLight. The temporal states of the two scripts differ either by their granularity or by their halting time. The actions of mvBarge terminate the actions of robLight and eventually, the entire program. When mvBarge stops, robLight terminates the scripts of levels three and four.

The text of Figure 7.12 presents a listing of the time graph of Figure 7.11. The listing embeds multiple node definitions into a single hierarchy containing eleven nodes. All of the nodes appear locally within the hierarchy while the definitions of the scripts appear elsewhere.

    DEF doLight TIME_GRAPH {
        TmGroup {
            TmInterval {
                RS_FLOAT begin 10
                RS_FLOAT end infinity
            }
            TmDelimits {
                USE ID_robLight,
                TmHaltTime { ... },
                TmStarts {
                    TmGroup {
                        TmGranularity { ... }
                        USE ID_robPush,
                    }
                    TmGroup {
                        TmHaltTime { ... },
                        USE ID_mvBox,
                    }
                    TmShiftInterval { ... },
                    TmStops {
                        USE ID_mvBarge,
                        USE ID_robLight
                    }
                }
            }
        }
    }

Figure 7.12: The file format for the time graph of Figure 7.11
Altering the composition or structure of a graph is a simple way to produce new and interesting results. Applications that apply either of the specifications benefit by exporting and disseminating their results. The applications store their results in a general format that is easy to interpret and readily understood. Applications reproduce the results of others by organizing their parts into similar graphs. The applications need neither link continuously with new code nor accommodate the designs of others. 7.4.2  Differences  The differences between the general specification and existing specifications for scene modeling are far more significant than simply a difference in goals or the number of graphs. Each of the three graphs of the general specification differs from the scene graph either in the design of their parts or in the methods they apply in evaluating their nodes. M o d e l Graph The model graph and scene graph share similarities in node types but differ in their means of supporting communications. The nodes of the model graph extend the nodes of the scene graph to use ports rather than fields when animating state. Fields, as specified by  VRML,  are first-class variables for storing state information. When involved with communications, the fields simply return or update their own state. Ports operate dissimilarly in that they  CHAPTER  7. GENERAL  SPECIFICATION  104  react to communications. They set the states of fields by either executing computational commands or interfacing with the software interactions of IBP. The ports either apply algorithms in updating the values of fields or work with interactions in supporting greater forms of communications, such as partial writes (see section 8.1.1). With ports complementing fields, the nodes of the model graph offer several advantages over the nodes of the scene graph. These advantages include an increase in functionality and a gain in support for software interactions. The functionality of a node increases as it is able to react to its communications. By executing commands or algorithms, the ports of nodes behave as methods, rather than data members. Unlike fields, the ports may reject, accept, or modify data as the data enters or leaves a node. For instance, one of the. ports serving the robot of the general example not only receives a position for the robot's hand but also configures the angles of the robot's joints. Had the robot simply accepted a position for the hand and not set the values of its joints, the joints would not accurately reflect the robot's state.  Action  Graph  Unlike the scene graph, the action graph provides direct support for organizing the communications of a system. The scene graph supports limited means for animating the topology of its interconnections and the states of its nodes. To animate the state of the scene graph, VRML  applies routes, simple interconnecting mechanisms for communicating state changes  between fields. When state changes occur, the routes move information from one field to 6  the next. The use of routes and is performed haphazardly or with the use of scripting nodes, which encapsulate the instructions of a scripting language. Neither approach is as useful or as easy to apply as the general specification. Haphazard use resolves only short-term gains while scripting nodes merely encapsulate code and support few aids in animating state. Normally, the scene graph and its nodes are globally visible to all other nodes. 
Normally, the scene graph and its nodes are globally visible to all other nodes. This is unlike the general specification, which extends the rule of scoping to its definitions. The nodes of any graph may appear globally or as locals within another. For instance, the LOCALS of the action graph encourage the encapsulation of time-varying nodes within the scripts that apply them. In contrast, VRML distributes the definition of its time-varying nodes, referred to as time-dependent nodes and interpolator nodes, globally throughout its body.

Time Graph

The evaluation of a time graph is similar to that of a scene graph (VRML, JAVA3D, etc.). Both involve the incremental building of state information with a hierarchical structure and each evaluates its respective graph with a complete depth-first traversal. However, the two graphs apply a different approach in evaluating their nodes. Whereas the evaluation of a scene graph is usually immediate, the evaluation of a time graph is not. Most changes described by a scene graph are not linked to run-time conditions. Except for nodes that link to scripts or interactive widgets, the final state and appearance of a scene graph are readily determinable. For a time graph, the results of an evaluation are not known until a program runs and simulation time advances. The run-time conditions of a program are very important for deciding the final outcome of the time graph's description. To advance simulation time simultaneously with the evaluation of the time graph would be impractical and unwise. Such a task would require that the evaluation of several paths through the tree occur in parallel. Many nodes, such as START and DELIMIT, require simultaneous execution of multiple scripts. For scripts along different paths, such as A and R appearing in Figure 7.9, it is important that each be processed to start together. If time advances between the evaluation and execution of each, the program produces erroneous results.

Chapter 8

Reuse Guidelines

The structuring of a program into a system of software components interconnected with software interactions does not guarantee the reusability of an interaction-based program or the reliability of its parts. It is easy to develop software components that work poorly in an interaction-based environment. The components may selfishly hard-code or initiate personal communications, or ignore the requests of software interactions in sending and receiving information. Hence, the reuse of an interaction-based program requires guidelines to govern the development of software components. The rules standardize the development and application of code so as to accommodate the workings of software interactions and to support the use of software components in an interaction-based environment.

As noted by professionals and researchers, successful reuse requires that software components be developed with the intent of being reused [19, 60, 71]. Current proposals for component properties, such as correctness, efficiency, simplicity [66], composability [17], certifiability [162], and self-restraint [101], are helpful in developing components for IBP, but insufficient for ensuring that the components support IBP.
In an interaction-based environment, software components experience dynamic "plug-and-play." As software interactions execute, software components "plug" into a software network and "play" to communicate information. During a program's execution, the software components participate in a large number of networks of varying configurations and designs. Within each network, the software components undergo new roles and provide new services. Unfortunately, current proposals do not recognize the ramifications of producing components for a plug-and-play environment. For IBP to operate properly, software components must not only behave uniformly, but must also support the interactions that control them.

This chapter introduces a new set of properties, of which there are three groups, to complement the current proposals for component design [89]. These properties aid in developing software components that work well in interaction-based environments and reuse easily in multiple contexts. The three groups of properties are interactional, behavioral, and temporal. Interactional properties determine how components work with interactions and communicate information. Behavioral properties determine how components respond to communications and complete their computational tasks. Temporal properties determine when components act and how components integrate their computations with time.

This chapter describes the three groups of properties and explains the relative attractiveness of each. Not every property applies equally well to every component. To promote extensive reuse, components should accommodate as many properties as possible. Components that apply non-conforming properties benefit from short-term gains, but suffer in not reaping the benefits of IBP. The description of each group is accompanied with a detailed examination of the components of the contextual example. In supporting the three groups of properties, the components ready themselves for greater reuse.

8.1 Interactional

Interactional properties determine how components work with interactions and how components communicate information. The properties ensure that software components permit software interactions to control freely all forms of communications, and that software components provide a general interface consisting of unidirectional, first-class operations. With free-flowing communications, software interactions decide solely on how and when components communicate. With first-class operations, software interactions initiate queries to manipulate and to verify communicative tasks.

8.1.1 Properties

PROPERTY I-1: Components should permit interactions to control communications freely.

Interactions freely control communications if they, not components, determine when and how information flows. Components should wait for interactions either to provide or to solicit information. If necessary, components may inform interactions when they are ready to communicate, but they should never prevent a communication from occurring or order a communication to occur. When components wait (during push requests and callbacks)¹ for interactions to further their state, they ignore newly incoming information and remit old information as an outgoing substitute.

¹ A push request involves asking a recipient to receive and to handle an outflow of data.
A component asserting communicational restrictions, such as that of requiring an immediate transfer of information, is useful only in contexts in which an interaction continuously monitors the component's state. In any other context, guarantees are. unavailable to ensure that the transfer occurs, let alone that the transfer occur immediately. For visual simulation and other dynamic systems, communicational restrictions entail grave consequences: they limit the types of topological changes that may occur and the type of systems that may be described. PROPERTY  1—2: Components should providefirst-classoperations.  First-class component operations are essential in interaction-based programming as they permit an interaction to query an operation about its state prior to its use. As discussed by Lee [87], not all interactions operate instantaneously or in fixed sequence. Many, such as those employed in dynamic systems, remain inactive until important times or states arise. An interaction queries an operation for its state to determine the datatypes supported by the operation and to ascertain the identity of the most recent interaction from which an operation communicated. The former query ensures proper communication between components while the latter query prevents simultaneous write access, the act of updating of a single state variable with multiple values. For time-varying systems, such as visual simulation, simultaneous write access is a problem warranting careful attention. As time advances, the value of a state variable may update only once per time-step, since all state changes are instantaneous. State changes occurring simultaneously introduce errors since they are unpredictable and frequently misinterpreted. First-class component operations also provide benefits to software components. Specifically, components gain the ability to monitor and register carefully the activity of strongly related operations, those which update similar state variables. Components may either register the operations manually or have the operations register themselves automatically. For example, a component may register two operations to be strongly related so as to signal a write access fault when both execute simultaneously. Such a situation would occur if two operations, acting simultaneously, apply different means in setting the base-joint angle of an articulated figure. The first operation sets the angle directly while the second operation sets it indirectly by setting the position of the robot's hand. PROPERTY  1—3: Components should provide a general interface, one where each operation  is either a unidirectional read, a unidirectional write, or a bidirectional write.  CHAPTER  8. REUSE  GUIDELINES  109  An interface is general if it provides a consistent means to read and write information, and if it permits interactions to act freely. An interface .supports these traits if its operations are unidirectional read, unidirectional write, or bidirectional write (see Figure 8.1). A unidirectional read operation simply supplies an interaction with data and thus operates without input parameters. A unidirectional write operation acts similarly, except that data flows inversely. In this case, an interaction passes data to the operation. A bidirectional write operation permits an interaction to obtain information from a component before the interaction writes data to the component. 
This operation behaves in a manner opposite to that of a bidirectional read, which is commonly called a function and is not a part of a general interface.  Operation  Data  Info Interaction  (A) Unidirectional Write  Operation  Data  Operation  Interaction  Data (C) Bidirectional Write Info  Interaction  (B) Unidirectional Read  Operation  Interaction  Data (D) Bidirectional Read  Figure 8.1: Interface methods for communicating information Unidirectional reads and unidirectional writes are beneficial because they permit interactions to control freely the movement of data. The interactions read and write data to the operations when appropriate. Bidirectional write operations are beneficial in that they simplify the coding of two communicative operations, read and write, into one. In addition, they support the implementation of partial writes: the passing of incomplete information to write operations that accept aggregate datatypes. Operations that support partial writes accept incomplete information and produce fewer changes than those that receive a whole datatype, complete with information. For example, operations that modify an aggregate "x,y" pair support partial writes if they, upon receiving only one value, modify either "x" or "y," but not both. The missing value neither introduces changes nor receives a default value.  2  Partial writes increase the reusability of components by expanding their capabilities and by simplifying their interfaces. Operations that accept partial writes displace all operations that apply equivalent changes to only a subset of the fields comprising the aggregate Only a special class of operations accept partial writes. These operations read and write aggregate datatypes, and handle each of the fields of the datatypes separately. 2  CHAPTER  8.  REUSE  110  GUIDELINES  datatype. In other words, a single operation that is variadic may replace many individual operations, which may or may not address every possible combination of field values. C O R O L L A R Y 1—3:  Components  should avoid the use of bidirectional  read  operations.  Acting in a manner opposite to that of bidirectional write operations, bidirectional read operations require an interaction to provide them with information before the interaction is provided with data. For interaction-based programming, bidirectional read operations lead to two major problems. First, such operations compel interactions to communicate the results of initial write operations immediately, undermining the authority of interactions to choose when and how communications occur. Second, such operations hinder multiple interactions from observing the results of the initial write operations. To reproduce the original results, additional interactions must either supply the operations with duplicate arguments or inform the operations to return their most recent results. Either way, each approach introduces additional complications. The former requires interactions to share argument values, while the latter requires interactions to identify themselves as "primary" or "secondary" readers. Primary readers request new information while secondary readers query for information given to previous requests.  8.1.2  Contextual Example Revisited  It is important that the components of the contextual example support the interactional properties so that they can be effectively reused in multiple contexts, be it the context of the immediate program or of the context of another application. 
8.1.2 Contextual Example Revisited

It is important that the components of the contextual example support the interactional properties so that they can be effectively reused in multiple contexts, be it the context of the immediate program or the context of another application. In observing PROPERTY I-1, which counsels that components should permit software interactions to control communications, the components are free from moderating their communications and may issue fewer restrictions on their use. As discussed in the contextual example, the use of software components, such as those controlling the motions of the robot, may vary widely between scenes. For example, in one scene, the components direct a robot to toggle a light, while in another, the same components manage a robot in moving a box.

When components rely on their own means to communicate, they complicate their reuse and their placement in software networks of differing topologies. For instance, if the robot of the contextual example communicates directly with a particular controller, other controllers encounter difficulties in applying new motions to the robot, such as that of placing boxes. To accommodate changing communications, the robot would either have to undergo continual modification or have to permit software interactions to introduce new communications. The former is unfavorable in that it does not accommodate run-time changes, while the latter is undesirable in that it complicates the implementation of a robot.

In conformity with PROPERTY I-2, which states that components should support first-class operations, the components of the contextual example enable software interactions to verify that data flows properly between components. For instance, both the positional controller and the robot respond to queries that ask for the types of data they communicate. Software interactions cross-check these types to ensure that both the controller and the robot communicate properly. In addition, the components are able to identify which of their operations are strongly related, those manipulating similar state variables. Accordingly, the robot may signal an error if it receives two notices to update its configuration or its position from two separate controllers in the same time-step.

With the use of strongly related operations, components aid in resolving programming errors that arise with conflicts in communications. In a time-varying system, such as that of the contextual example, those conflicts are difficult to unravel because the communications of the system occur at run-time and vary greatly with the use of stochastic techniques. Further enhancements to the contextual example may introduce additional actions to the robot, which in turn create more opportunities for conflicts to develop. The strongly related operations are helpful in resolving these conflicts and in exploring the dynamics of a system, especially in learning how particular components respond to varying inputs.

In following PROPERTY I-3 and its corollary, which require that components support a general interface involving unidirectional operations, the components permit the writing of partial data, which would, for example, animate the red color channel of the light. The light is neither obligated to provide operations for animating individual channels, nor required to interact with a superfluous component that binds the individual channels of separate colors to produce a complete datatype. With unidirectional operations, the components promote fewer restrictions on their use. By not supporting bidirectional reads, the controllers of the contextual example, for instance, work with many kinds of interactions, not only those that are able to provide the controllers with a parameter value. With one providing a temporal input, several interactions may manipulate a single controller in updating the positions of several robots, such as those moving in single file. One interaction would write a value to the controller while the remainder would read and scale the output from the controller so as to stagger the positions of the robots.
8.2 Behavioral

Behavioral properties determine how components respond to communications and complete their computational tasks. These properties suggest that software components base their communication on their current state and that multi-parameter operations be partitioned into an elemental set of unary operations. The former ensures that software interactions transport valid information among components. It is not the responsibility of software interactions to verify that all components communicate their states accurately. The latter permits software interactions to communicate freely individual values to multi-parameter operations. The interactions, not the operations, decide the number of parameters that the operations send and receive.

8.2.1 Properties

PROPERTY B-1: Components should always maintain and report a consistent state.

A component should not expect or rely upon external stimuli to intentionally reconcile its state. Otherwise, the component may erroneously base its responses on state values that are either old or invalid. For instance, the diagram in Figure 8.2 presents an articulated figure and its corresponding component. The component computes the position of the figure's elbow and hand from the joint angles, θ and φ. The component acts in accordance with PROPERTY B-1 if it re-computes the positions of the hand and elbow either immediately after it receives input, or immediately before it responds to future queries. An external stimulus is not required to re-compute the positions of the hand and elbow from past inputs.

Figure 8.2: A component that obeys PROPERTY B-1

It is important that a component continuously exhibit its current state for two reasons. First, it supports the separation of computation from coordination. It ensures that a component is not reliant on coordinational activities to reconcile its state. Second, it avoids potential race conditions that arise when multiple scripts interact concurrently with a single component. Difficulties arise if the execution orders of the scripts intertwine improperly. For example, the illustration in Figure 8.3 presents a single component managed by two scripts, P and Q. The component accepts three inputs, with the names A, B, and K, and produces one output, C. The component computes C from A and B when notified by K. The script Q produces an inconsistent answer when Q-1 executes after P-2, but before P-3. Script Q computes a value of C that does not properly reflect the values of A and B, as set by the first two lines of script P. Had the component been designed initially to conform to PROPERTY B-1, it would not need K and would always compute the correct value of C from its inputs, A and B. A component that follows this property ensures that the execution order of its communications is responsible only for determining when information is sent and received, not for deciding when the component advances its state.

Figure 8.3: An inconsistent component with racing activities (script P: P-1 set A; P-2 set B; P-3 set K; P-4 get C. Script Q: Q-1 get C)
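A minimal sketch of the arm component of Figure 8.2, conforming to PROPERTY B-1 by recomputing its derived state whenever an input arrives; the link lengths and names are assumed.

    #include <cmath>

    struct Point { double x, y; };

    // Two-link planar arm: recomputes derived state on every input,
    // so no external "commit" signal is ever needed (PROPERTY B-1).
    class Arm {
    public:
        void setShoulderAngle(double theta) { theta_ = theta; update(); }
        void setElbowAngle(double phi)      { phi_ = phi;     update(); }

        Point elbow() const { return elbow_; }   // always current
        Point hand()  const { return hand_;  }   // always current

    private:
        void update() {
            elbow_ = { L1 * std::cos(theta_), L1 * std::sin(theta_) };
            hand_  = { elbow_.x + L2 * std::cos(theta_ + phi_),
                       elbow_.y + L2 * std::sin(theta_ + phi_) };
        }
        static constexpr double L1 = 1.0, L2 = 1.0;  // assumed link lengths
        double theta_ = 0.0, phi_ = 0.0;
        Point elbow_{L1, 0.0}, hand_{L1 + L2, 0.0};
    };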
PROPERTY B-2: Components should partition multi-parameter operations into an elementary set of unary operations.

The operations of a component are instrumental in establishing dependencies between the component and its environment. The environment provides the operations with parameter values that influence the computational state of the component. Operations that accept several parameters establish multiple dependencies, one for each parameter. An elementary set of unary operations partitions these dependencies into many individual ones. Each unary operation modifies one parameter value of the original operation. To complete the requirements of the original operation, the unary operations reuse the most recent values for the remaining parameters.³ For example, the operation op(A,B,C), which computes the function f(A,B,C), can be partitioned into three unary operations opA(A), opB(B), and opC(C). A call to any of these three operations updates one parameter and invokes f(A,B,C) with recent values for the two remaining parameters. Given the sequence "opA(X), opB(Y), opC(Z)," the last call reproduces the effect of a single call to op(X,Y,Z).

³ Default values are stored for each parameter so as to handle cases in which the value of a missing parameter has never been set and thereby has no recent value.
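A sketch of the partitioning just described, under assumed names: the component caches the most recent value of each parameter, and every unary operation re-invokes f with the latest values.

    // f(A, B, C) stands in for the component's real computation.
    class Op {
    public:
        void opA(double a) { a_ = a; invoke(); }
        void opB(double b) { b_ = b; invoke(); }
        void opC(double c) { c_ = c; invoke(); }

        double result() const { return result_; }

    private:
        void invoke() { result_ = f(a_, b_, c_); }
        static double f(double a, double b, double c) { return a + b * c; }

        // Defaults cover parameters that have never been set (footnote 3).
        double a_ = 0.0, b_ = 0.0, c_ = 0.0;
        double result_ = 0.0;
    };

    // Usage: opA(X), opB(Y), opC(Z) reproduces op(X, Y, Z); a shorter
    // sequence such as opB(u), opC(v) updates only those two inputs.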
In predicting the reception of a sequence, components should cache the inputs of unary operations, and when appropriate, apply the inputs collectively with side-effects. Only after a proper sequence is found should the side-effects occur. In determining that sequence, the components should not expect interactions to signal a beginning or an end; for interactions to do so would interfere with their role in coordinating communications. The handling of side-effects is a computational issue such that components should perform extra work in extracting the proper subsets from an indeterminate sequence of inputs. To extract a proper sequence, components should recognize a complete sequence, search for repetitive inputs, handle domain specific predictors, and interact with strongly related operations. A complete sequence of inputs, such as "A, B, C," is a clear indicator that components should update and produce side-effects. Further inputs unmistakably denote the beginning of a new sequence. Repetitive inputs end one sequence and begin another.  CHAPTER  8.  REUSE  GUIDELINES  115  For example, the sequence "A, B, A'" separates into two sequences, one involving "A, B," and another involving simply "A'." Domain-specific predictors apply rules in separating inputs into individual sequences. In time-varying systems, for instance, a sequence stops whenever a component updates its temporal state. All changes produced by a sequence of inputs are instantaneous and can not span an interval of time. In interacting with strongly related operations, components must resolve the difficulties that arise when side-effects affect the computations of other operations, such as those responding to external queries. For these operations, the components should terminate all sequences and induce side-effects. It is inappropriate for the components to ignore the open sequences and delay the introduction of side-effects. To do so would violate PROPERTY B —1, which requires components always to respond according to their current state. The unavoidable drawback to this response is that race conditions may inadvertently partition a single sequence into two. To avoid this drawback, it is best for components to issue warnings whenever terminating an open sequence. This problem will then be resolved by the proper coordination of scripts, not by the changing of components. P R O P E R T Y B—3:  Components  and their operations  should support totality  measures.  Totality requires components and their operations to produce well-defined results for all possible input values. By acting defensively, components ensure that they return meaningful values and that their operations perform correctly. Components must be responsible for reacting to all inputs because it is impractical to expect only valid inputs from the environment. Coordination activities are neither expected nor required to realize what range of inputs cause components to behave poorly. The coordination activities do not do so because information assessment is a computational, not an organizational, issue. Similarly, it is beyond the capabilities of components to determine whether the information they produce is acceptable for others since components neither acknowledge nor realize the identity of their communicating partners. Without such knowledge, it is impossible for components to evaluate the validity of the information they produce.  
8.2.2 Contextual Example Revisited

It is also important that the components of the contextual example support the behavioral properties so that the components may be reused in multiple contexts. In observing PROPERTY B—1, which requires that components base their responses on past inputs, the robots and their controllers must compute their states carefully. This is especially critical for the contextual example because the robots and their controllers ultimately affect the state of the light. As illustrated in the diagrams of Figures 4.3 and 6.1, any information that inaccurately reflects the positions of the robots may toggle the light mistakenly. Having the robots respond appropriately is also helpful in debugging race conditions. If software interactions were required to maintain the consistency of the robots and their controllers, the interactions would need to monitor their execution carefully. If the interactions were too early or too late in updating the state of the components, the light would ultimately behave incorrectly.

In conformity with PROPERTY B—2 and its corollary, which require that components provide a set of unary operations, the controllers partition their multi-parameter operations into many individual ones accepting single inputs. The controllers, for instance, provide separate operations for fixing their range, duration, method of interpolation, and current time. Any call to these operations updates a single parameter of the controllers and advances the state of the controllers to reflect their input continually.

In observing PROPERTY B—3, which states that components should support totality measures, the robots and their controllers act defensively when accepting input. The robots may receive hand positions that are beyond their reach, and the controllers may receive temporal values beyond their maximal duration. Although components may receive improper information, software interactions still require outgoing information to act properly. The information permits the remainder of the software network to operate properly, although not necessarily accurately. Future extensions to IBP may support exception handling, which would help in resolving problems that arise when invalid information is communicated between components.

8.3 Temporal

Temporal properties determine when components act and how components integrate their computations with time. They require that software components be time-invariant and that software components explicitly link their computations to ΔT, a change in time. The first requirement permits interactions to synchronize easily the states of multiple components. Components that update their own perceptions of time burden software interactions with tracking their temporal state. The second requirement ensures that software components behave properly with respect to the advancement of time. In many software systems, the advancement of time is neither constant nor regular.

[Footnote: In the context of this work, a time-invariant component may vary its state according to the passing of time. It may not, however, determine how time passes or apply any operators, such as those that scale, to its perceptions of time.]

8.3.1 Properties

PROPERTY T—1: Components should be time-invariant.

A time-invariant component maintains state without advancing its own notions of time. Such a component operates within an environment in which time advances around it.
The state of the component changes as the component receives notices from its environment. A time-invariant component is preferable to a time-varying one because an invariant one is relatively easy to coordinate. Coordination activities determine when and at what rate the component receives notices of temporal advancement. A component advancing its own notion of time is usually difficult to coordinate. It requires continual supervision to ensure that it operates neither too quickly nor too slowly in relation to other components. A time-invariant component is also preferable in developing asynchronous behaviors, such as those commonly found in parallel simulations [50]. Coordination activities may purposefully assign distinct timing values to cooperating components. Each component acts naturally, oblivious to the differing perceptions of its peers. The coordination activities work with the components to resolve differences that arise from temporal distortions.

PROPERTY T—2: Components should employ "explicit" means to link their computations to time.

In time-varying systems, a component may apply either explicit or implicit means to link its computations to the passage of time. With explicit means, a component applies time as an integral value of its computations. As time changes, the computations update state. With implicit means, a component infers time from the invocations of its operations. The sampling rate of the invocations, not advances in time, determines when and at what rate the computations update state. The differences between the two means are exemplified in the code of Figure 8.4, which presents two methods for the component Foo. The operation expBar makes use of an explicit notion of time while the operation impBar operates with respect to an implicit notion of time.

    Explicit:
        void Foo::expBar( ΔT ) {
            x = x + ΔT * dx;
            y = y + ΔT * dy;
        }

        (e1) Foo z; int k=0;
        (e2) z.expBar( 10 );
        (e3) z.expBar( k );

    Implicit:
        void Foo::impBar() {
            x = x + dx;
            y = y + dy;
        }

        (i1) Foo z; int k=0;
        (i2) for( j=0; j<10; j++ ) z.impBar();
        (i3) if( k>0 ) for( j=0; j<k; j++ ) z.impBar();

Figure 8.4: Explicit and implicit means for linking computations to time

There are two reasons to favor an explicit approach to time. First, an explicit approach supports the separation of computation and coordination. Coordination activities manage temporal progression while computational activities respond to changes in time. The two sequences of code in the latter half of Figure 8.4 illustrate how this separation simplifies the coding of time-varying systems. Each sequence updates the state of z, an instance of Foo. The sequence using expBar is far simpler than the sequence using impBar. The operation impBar requires its callers to control its evolution over time (line i2) and to protect the operation from invalid time steps (line i3). Second, an explicit approach produces components that readily support discrete-event simulation. Components react properly to advances in time that are neither periodic nor whole. For example, expBar in Figure 8.4 readily accepts fractional values for k (line i3) while impBar does not.

COROLLARY T—2: Components that employ implicit means should be bounded explicitly.

Occasionally, the application of an implicit approach to time by components is unavoidable. Some components, such as those that update incrementally, change state only when specific events arise. For these components, it is best to bind their updates to an explicit change in time. When time advances in great leaps, as could happen in some simulations, the components update their state accordingly. For example, if ΔS is the maximal time step bounding a state change, a component should update X/ΔS times when X units of time pass. This integration of the two means for updating state provides two benefits. First, it permits components to place a lower bound on the number of times they ought to update over a period of time. The lower bound loosens the ties that relate state changes to implicit invocations and their sampling rate. The advancement of time contributes to the advancement of state. Second, it permits the components to provide an estimate of their future behavior. A sudden leap of time does not prevent components from computing an approximate state.
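A small C++ sketch of this corollary follows. The class and its member names are hypothetical; the update rule simply subdivides a large leap of time into steps no larger than ΔS (the constructor parameter maxStep), as the corollary prescribes.

    // A minimal sketch, assuming a hypothetical incrementally updating
    // component whose single step must never represent more than
    // maxStep (ΔS) units of time.
    class IncrementalComponent {
    public:
        explicit IncrementalComponent(double maxStep) : maxStep_(maxStep) {}

        // Explicitly bounded implicit updates: a leap of deltaT units
        // is subdivided into incremental steps of at most maxStep_.
        void advance(double deltaT) {
            while (deltaT > 0.0) {
                double step = (deltaT < maxStep_) ? deltaT : maxStep_;
                stepOnce(step);
                deltaT -= step;
            }
        }

    private:
        void stepOnce(double dt) { state_ += rate_ * dt; }  // one bounded increment

        double maxStep_;
        double rate_ = 1.0;
        double state_ = 0.0;
    };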
PROPERTY T—3: The operations of components that advance state with time or that facilitate synchronization should orchestrate their computations around ΔT, a change in time.

There are two kinds of operations that accept temporal information: those that advance state and those that facilitate synchronization. Operations that advance state respond to notices of temporal advancement. These operations ensure that components remain consistent over time. Operations that facilitate synchronization answer inquiries regarding temporal advancement. The inquiries ascertain whether the operations accept the next expected value of time. Time advances if all active operations provide a positive response to such inquiries.

The temporal information may arrive at the operations as either ΔT, a relative change in time, or T, an absolute value of time. ΔT represents either an increment of temporal advancement or the elapsed time since an established starting time. The former ΔT indicates the amount of time that has passed since an operation was last accessed. This value of time works well for operations that perform numerical computations, such as differential equations. The latter ΔT indicates the amount of time that has passed since a specific activity began. This value of time works well for operations that base their calculations on a duration of time, such as functions that interpolate evenly over time.

Operations that accept temporal information may orchestrate their computations around either value of ΔT. Each value of ΔT provides the same benefits. Both ΔT's permit operations to act non-linearly over time. They act at the rate at which they receive temporal information. For example, the two operations in Figure 8.5 act dissimilarly over time. Operation A acts more slowly because it receives fewer updates than Operation B. Neither operation acts as fast as global time does because neither receives continual updates. Both ΔT's also permit operations to disregard references to absolute time. Operations neither track nor compute temporal advancement. They permit coordination activities solely to determine when and how time advances.

[Figure 8.5: A timeline involving non-linear temporal advancement. Rows compare global advancement (24t in total) against Operations A and B, which receive fewer update ticks and so accumulate smaller totals of elapsed time (12t and 21t).]

Should an operation, such as one applying a numerical integrator, expect to receive a specific timestep, it should adapt to accept any value of ΔT. This enables the interactions of IBP to advance the progression of time freely and to update simultaneously multiple operations with different expectations. The operations may adapt to an irregular timestep by applying an appropriate interpolatory function to estimate a new value. Upon reception of another timestep, the operation may update its state with regard to the time of its last update or to the last time that its state was set precisely, without the aid of an interpolatory function. For instance, Figure 8.6 presents the state of an operation over a given set of ΔT's. At time 15, the operation estimates its current value. At time 20, the time of its next update, the operation may apply one of two options to adjust its state. With the first option, the operation updates in accordance with its previous estimate (green line). The previous estimate is applied to its current computations. With the second option, the operation updates in accordance with the value of its state at time 10, the last time that the operation updated properly (red line). The second option ignores the previous estimate and applies only well-derived values of a previous time. The first option is far simpler, but less accurate. It is up to the discretion of the developer of the operation to decide which option is best.

[Figure 8.6: The use of interpolation to estimate the value of a fixed timestep integrator. The real values of the operation's state are plotted against time (5, 10, 15, 20), together with an interpolated value at time 15 and the value estimated from that interpolated value.]
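To make the adaptation concrete, the following C++ sketch applies the second of the two options: the operation keeps its last precisely computed state, interpolates an estimate for the fractional remainder of an irregular ΔT, and never lets an estimate contaminate future updates. The class and its stand-in integration step are hypothetical.

    // A hedged sketch: a fixed-timestep integrator (step ΔS, here h_)
    // adapting to irregular values of ΔT by linear interpolation.
    class FixedStepIntegrator {
    public:
        explicit FixedStepIntegrator(double fixedStep) : h_(fixedStep) {}

        // Advance by an irregular deltaT: take whole fixed steps, then
        // estimate the fractional remainder by interpolating toward the
        // next whole step. The precise state remains the basis for all
        // future updates (option two from Figure 8.6).
        double advance(double deltaT) {
            remainder_ += deltaT;
            while (remainder_ >= h_) {
                state_ = step(state_);
                remainder_ -= h_;
            }
            double frac = remainder_ / h_;  // fraction of a step, in [0, 1)
            estimate_ = state_ + frac * (step(state_) - state_);
            return estimate_;
        }

    private:
        double step(double s) const { return s + h_ * (1.0 - s); }  // stand-in ODE step

        double h_;
        double state_ = 0.0;      // last precisely computed state
        double estimate_ = 0.0;   // interpolated estimate at the current time
        double remainder_ = 0.0;  // time accumulated toward the next step
    };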
The operations may adapt to an irregular timestep by applying an appropriate interpolatory function to estimate a new value. Upon reception of another timestep, the operation may update its state with regards to the time of its last  CHAPTER 8. REUSE GUIDELINES  120  update or to the last time that its state was set precisely without the aid an interpolatory function. For instance, Figure 8.6 presents the state of an operation over a given set of AT's. At time 15, the operation estimates its current value. At the time 20, the time of its next update, the operation may apply one of two options to adjust its state. With the first option, the operation updates in accordance to its previous estimate (green line). The previous estimate is applied to its current computations. With the second option, the operation updates in accordance to the value of its state at time 10, the last time that the operation updated properly (red line). The second option ignores the previous estimate and applies only well-derived values of a previous time. The first option is far simpler, but less accurate. It is up.to the discretion of the developer of the operation to decide which option is best. Real Values  state of operation  V  -©  - Estimated from Interpolated Value  Interpolated Value  5  10  15  +  20  time  Figure 8.6: The use of interpolation to estimate the value of a fixed timestep integrator  8.3.2  Contextual Example Revisited  It is important that the components of the contextual example observe the temporal properties so as to improve the components' reuse in contexts involving time. In observing PROPERTY  T—1, which states that components should be time-invariant, the components,  such as the controllers of the robots, maintain fixed states until accepting updates from temporal interactions. Normally, the interactions update the local times of the controllers so as to synchronize them to a global clock or to advance them forwards in time. In keeping with the global clock, the components support concurrent execution. In advancing ahead, the components provide future values, such as the future positions of a robot. Such positions are useful in enabling the robot to avoid collisions and in determining the robot's momentum. The temporal interactions may also adjust the progression of time within the controllers. By varying the rate of temporal progression, the interactions may alter the speed of an entire simulation, or the just the speed of a single robot. In following P R O P E R T Y T—2 and its corollary, which provides that components should favor explicit means in updating state, the components adapt well to changes in  CHAPTER 8. REUSE GUIDELINES  121  time. Time is integral to setting the states of the components or to establishing bounds on how often the components change state. For example, the controllers of the robots apply a parametric curve, such as that of Figure 8.7, in computing their states for a given value of time. The controllers extract a value from the curve by computing a parametric value, frac, with an explicit formulation or with a well-founded implicit form. The explicit formulation sets the value of frac with a time-based formula while the implicit form bounds an update to frac by AS.  
    Parametric Curve

    operation getValue( ΔT ) {
        frac = <equation>;
        val = curvePt( frac );
        return val;
    }

    (A) Explicit formulation:        frac = ΔT / duration

    (B) Well-bounded implicit form:  for( i=0; i<ΔT; i += ΔS )
                                         frac += .1;

Figure 8.7: Two means for extracting a value from a parametric curve

The explicit formulation relates a change in time to a duration of time. The ratio between the two values computes the parametric location on the curve. As time progresses, the components interpolate the orientation and location of the robot smoothly from beginning to end. This explicit formulation is useful in moving robots whose movements are guided by the position of the light. Conversely, the implicit form relates a change in the parametric value to an implicit invocation. Each time the code executes, it updates the parametric value by a multiple of one-tenth of a unit. POSCONTROL, the controller moving ROBOTB, uses the implicit formulation to update the position of the robot that suffers from periodic breakdowns. For this robot, its working status, and not time alone, contributes to the functioning of its controller.

In complying with PROPERTY T—3, which states that components should synchronize their computations around ΔT, the components operate non-linearly over time. For example, the two forms of Figure 8.7, as applied by the controllers in the contextual example, orchestrate their computations around ΔT. The first integrates ΔT directly into its computations while the second uses ΔT to rationalize approximate changes in state. Each of the two forms readily accepts a variable rate of temporal advancement and provides an approximation of its future values.

Chapter 9

Implementing Animating Methods: A Comparison

This chapter presents the first of three ways to validate the applicability of IBS to visual simulation. It compares IBS directly with the tools of animation in terms of their effectiveness in supporting the use of several animating methods. The comparison provides empirical evidence of the utility of IBS in producing a wide range of scenes. The next two chapters present ways two and three. The second way demonstrates the utility of IBS in supporting the development of applications from different domains that rely upon animated visuals. The third way discusses the utility of IBS in improving the overall design of visual simulations.

9.1 Overview

The comparison of IBS with the tools of animation is achieved through three examples. Each of the examples consists of a general description and three separate implementations, produced from two representative tools and the RASP toolkit. The general description presents a simple scene exemplifying the use of a particular animating method. The scenes differ within the examples so as to present a range of diverse applications of IBS to visual simulation, and more importantly, to present a problem that suits a particular method. The representative tools accompanying each example were chosen by their design and availability. They are representative in that they present a diverse spectrum of implementation approaches for visual simulation that apply a particular animating method. Such diversity enables greater comparison between IBS and the tools of animation, and offers greater insights into the advantages and limitations of IBS. By comparing the size, adaptability, and flexibility of various implementations, the chapter shows how IBS provides greater support for applying
multiple animating methods, managing software complexity, and supporting software reuse.

9.1.1 Implementation and Analysis

The examples of this chapter use three animating methods common to visual simulation: key-frame animation, procedural animation, and particle system animation. Each method is useful to developers (and animators) in animating a wide variety of scenes. The three methods are representative of the current methods of animation in that each method uses a distinct set of data structures to apply a particular approach to animating state.

Of the three methods, key-frame animation is the most effective but the least efficient for producing an animation at a high level of precision. It applies controls for explicitly guiding the evolution of state changes over time. State changes occur as the system interpolates between the values of key states. Useful tools for evaluating the application of keyframing are MEL (Alias|wavefront's scripting language) [6] and CORY (CLOCKWORKS' scripting tool) [56]. ALIAS is a widely popular, commercial software package, and CLOCKWORKS is an older yet representative system that proves useful in demonstrating the utility of keyframing and presenting its drawbacks.

Procedural animation is the most accurate but the least flexible of the three methods for fabricating an animation with precise results. Procedural animation involves the use of rules or computational algorithms in updating state values over time. It is most effective in developing animations that involve physically-based motions [77, 152] or recursive data structures [13, 29]. Useful tools for evaluating the application of procedural animation are MAM/VRS [36] and FRAN [41]. MAM/VRS is an object-oriented toolkit that separates the specification of geometry and behavior, and FRAN is a publicly available tool for interactive animation that, like this thesis, is based upon an unconventional method of programming and composes an animation from an assembly of parts.

Particle system animation is the most efficient but the least general of the three methods. It reproduces complex behaviors by abstractly describing the behaviors of many individual particles. The method updates the motions of a large number of particles to simulate the behavior of amorphous entities, such as water and fire, or of multi-entity crowds, such as flocks of birds or herds of animals. Useful tools for evaluating the application of particle systems are the PARTICLE SYSTEM API [96] and ASAS [127]. The PARTICLE SYSTEM API is a programming interface for the development of particle systems that separates the specification of particles and behavior, and ASAS is an experimental tool for multi-entity communications that separates the specification of behavior and time.
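Although the chapter's particle-system example appears in a later section, the core of the method just described can be summarized in a short C++ sketch. The structure and update rule below are generic illustrations, not code from the PARTICLE SYSTEM API or ASAS.

    #include <vector>

    // A minimal sketch: each particle integrates its own simple motion,
    // and the aggregate of many such updates yields the amorphous behavior.
    struct Particle {
        float pos[3];
        float vel[3];
        float age = 0.0f;
    };

    // Advance every particle by deltaT under a constant acceleration
    // (e.g., gravity); a real system would also emit and retire particles.
    void updateParticles(std::vector<Particle>& particles,
                         float deltaT, const float accel[3]) {
        for (Particle& p : particles) {
            for (int i = 0; i < 3; ++i) {
                p.vel[i] += accel[i] * deltaT;
                p.pos[i] += p.vel[i] * deltaT;
            }
            p.age += deltaT;
        }
    }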
9.2 Key-Frame Animation

In the scene exemplifying the use of keyframe animation, three keyframes are applied to the movements of two objects, a ball and a camera. From simulation time 250 to 950, the camera and ball move independently along a "moving" path, initially established by an interpolation of the three keyframes. As the scene progresses, the entire path translates ten units along the x-axis in thirty units of time. The ball and camera travel along the path at different rates. Traveling twice as fast, the ball moves along the path in half the time. As the camera moves, it alters its orientation to observe the position halfway between the ball and 10 units above it. As shown in Figure 9.1, the camera's view continually changes as the positions of the ball and the path update over time.

[Figure 9.1: The camera adjusts its view as it follows the ball along a translating path.]

This example demonstrates the use of keyframing beyond that of strictly defining the values of key states. The example integrates the values of several keyframes and applies mathematical operators to them so as to produce a complex behavior. The integration and manipulation of keyframe values are techniques of increasing interest [168] since basic keyframing is extremely tedious to specify and arduous to apply when manipulating many values. Signal processing methods, such as shearing and filtering, are frequently applied with keyframing to produce new or alternative motions.

9.2.1 CORY's Implementation

CORY [98], the scripting system of CLOCKWORKS, applies a two-tiered approach to keyframe animation. It partitions an animation into a hierarchical set of cues and scenes. Cues script the goals of individual actions while scenes script the performance of cues. An animation consists of several scenes, each acting independently and none overlapping in time. In encouraging top-down design, only the scenes of an animation access the global clock, which manages simulation time. The execution times of all cues are relative to the start of the scene that governs them.

The primary mechanism for producing key-frame animation is an evaluator, a time-varying value embedded into the definition of a cue. As time advances, the evaluator obtains a numeric value from a data-set, any function or object (such as that of a spline) that parameterizes its output to time. An evaluator modulates its value by modifying the output of its corresponding data-set. Modifications to the output are produced by altering a data-set's perceptions of time, such as shortening its duration, or by scaling the entire output of a data-set by a constant.

Building the Scene

The code of Figures 9.2, 9.4, and A.1 presents an implementation of the scene with CORY. The code of Figure 9.2 begins by creating a camera and a ball (lines 1-2) and establishing the properties of kScene, a scene consisting of three cues (lines 3-18). The scene starts at time 250 (line 7) and continues forwards in time for a duration of 700 units (line 8). As the scene progresses, it animates continuously the positions and orientations of the camera and ball with the three cues: cam_cue, trs_cue, and ball_cue.

cam_cue updates the position and orientation of the camera with two actions, start_action and tick_action. start_action operates once to initialize four evaluators with timing information, which is local to the timing of the cue. The timing information decides when the four evaluators begin and for what duration they compute. tick_action executes repeatedly to provide the evaluators with timing values. The evaluators apply the timing values to compute their final states, which are derived from the outputs of their data-sets. The evaluators cam_x, cam_y, and cam_z apply the outputs of their data-sets twice. Initially, the three evaluators apply their data-sets in setting the position of the camera (lines 30-32).
Immediately afterwards, the three evaluators compute a midpoint between the position of the ball and the position 10 units above the camera (lines 33-41). The final lines of tick_action apply the midpoint with the formulas of Figure 9.3 in deriving the final orientation of the camera.

trs_cue animates the location of the path, which guides the positions of the camera and ball, with two actions of its own. start_action initializes trns_x, an evaluator for computing a translational value (lines 48-50), while tick_action applies the same evaluator for updating the offset value of cam_x, the evaluator that determines the "x" coordinate value of the camera, and of ball_x, the evaluator that determines the "x" coordinate value of the ball (lines 51-53).

[Footnote 1: For each evaluator, the attribute v_offset identifies an offset value while the attribute v_scale identifies a scaling value.]

    create camera and ball
    (1)  Cam := camera { };
    (2)  Ball := superq { };

    establish scene
    (3)  kScene := scene {
    (4)    resolution = 24
    (5)    duration = 700

    initialize cues
    (6)    start_action = (
    (7)      "cam_cue start = 250",
    (8)      "  duration = 700;",
    (9)      "trs_cue start = 250",
    (10)     "  duration = 30;",
    (11)     "ball_cue start = 270",
    (12)     "  duration = 680;"
    (13)   )

    identify cues
    (14)   cues = (
    (15)     trs_cue, cam_cue,
    (16)     ball_cue
    (17)   )
    (18) };

    animate camera
    (19) cam_cue := cue {

    perform when cue starts
    (20)   start_action = (
    (21)     "cam_x t_offset = (cam_cue start?)",
    (22)     "  t_scale = (cam_cue duration?);",
    (23)     "cam_y t_offset = (cam_cue start?)",
    (24)     "  t_scale = (cam_cue duration?);",
    (25)     "cam_z t_offset = (cam_cue start?)",
    (26)     "  t_scale = (cam_cue duration?);",
    (27)     "look_y t_offset = (cam_cue start?)",
    (28)     "  t_scale = (cam_cue duration?);" )

    perform as time advances
    (29)   tick_action = (
    (30)     "Cam position = (cam_x value @ (cam_cue time?),",
    (31)     "  cam_y value @ (cam_cue time?),",
    (32)     "  cam_z value @ (cam_cue time?));",
    (33)     "cam_x v_offset = (ball_x value @ (cam_cue time?);",
    (34)     "  v_scale = 0.5;",
    (35)     "look_y v_offset = (ball_y value @ (cam_cue time?);",
    (36)     "  v_scale = 0.5;",
    (37)     "cam_z v_offset = (ball_z value @ (cam_cue time?);",
    (38)     "  v_scale = 0.5;",
    (39)     "Cam lookat = (cam_x value @ (cam_cue time?),",
    (40)     "  look_y value @ (cam_cue time?),",
    (41)     "  cam_z value @ (cam_cue time?)); )"

    ... code to reset cam_x, look_y, and cam_z ...
    (45) };

    cue to translate motion path
    (47) trs_cue := cue {

    perform when cue starts
    (48)   start_action = (
    (49)     "trns_x t_offset = (cam_cue start?)",
    (50)     "  t_scale = (trs_cue duration?);" )

    perform as time advances
    (51)   tick_action = (
    (52)     "cam_x h_offset = (trns_x value @ (trs_cue time?); )",
    (53)     "ball_x h_offset = (trns_x value @ (trs_cue time?); )"
    (54) };

Figure 9.2: An implementation in CORY

The code for cue ball_cue, which uses ball_x in animating the position of the ball, appears in Figure A.1 of the appendix. ball_cue is similar to cam_cue in that both cues
As set by kScene of Figure 9.2, the two cues differ in their starting times and durations (lines 7-10).  b a l l . c u e begins earlier and operates for  a shorter period of time. Accordingly, the ball moves along the path twice as fast as the camera does. A partial list of the code for establishing the evaluators appears in Figure 9.4. evaluator references a spline.data key values.  2  Each  unit, a structure for keyframing that interpolates between  The spline.data units dataX, dataY, and dataZ are applied to animate the  positions of the ball and camera while dataY2, replicating the data of dataY with a difference of ten, is applied to animate the orientation of the camera.  create spline-keyframes (68) (69) (70) (71) (72) (73) (74) (75) (80) (81) (82) (83)  create evaluators cam_x := eval { data_name = "dataX" v_scale = 1.0; •};  cam.y :— eval { data-name = "dataY" v_scale =1.0; };  look.y := eval { datamame = "dataY2" v .scale = 1.0; };  (103) (104) (105) (106) (107) (108) (109) (110) (111) (112) (113) (114) (115) (116)  dataY := spline-data { init-der = 0.0 final.der = 0.0 time = 0.0 value = 75.0 time = 0.5 value = 75.0 time =1.0 value = 22.0 };  dataY2 := spline.data init.der = 0.0 final.der = 0.0 time = 0.0 value time = 0.5 value time = 1.0 value  {  = 85.0 = 85.0 = 32.0  };  Figure 9.4: A partial implementation of the evaluators of Figure 9.2  2  A complete listing of the evaluators appears in Figure A.2 of the appendix  CHAPTER 9. IMPLEMENTING ANIMATING METHODS: A COMPARISON 9.2.2  MEL'S  128  Implementation  M E L (Maya Embedded Language) is a scripting language that builds upon Alias\wavefronVs C++ A P I . The language permits developers to produce plug-ins and macros for MAYA, 3  Alias| wave/rout's product for producing computer animation. M E L consists of imperative controls, such as loops and variable assignment, and of objects and function sets. Objects hold basic structures, such as geometry and keyframes, while function sets are C++ classes that operate on objects. The description of a scene normally contains several objects and various functions for animating that object over time. M E L produces an animation from a scene's description by collating the scene's objects and function sets into a dependency graph, which M E L keeps hidden from the eyes of developers. State changes occur as the nodes of the graph enable communication between objects and function sets. To improve communications, M AYA evaluates the nodes of a graph lazily and progressively caches critical communications between nodes. Building the Scene  The code in Figure 9.5 presents an implementation of the scene with M E L .  The code  begins by creating a camera and ball (lines 1-3) and keyframing their movements with an animation path consisting of three control vertices (line 4). The camera, ball, and path are represented as objects while the keyframed animation is set with setKeyFrame, one function of a function set for animating objects. Keyframing is also applied to the animation path so as to progressively translate the path along the x-axis (lines 5-6). The translation of the control vertices, as performed by the function pathAnimation, determines the shape and location of the path, and ultimately, the position and orientation of the camera and ball (lines 7-9). To animate the viewpoint of the camera, a loop extracts the control vertices of the animation path and applies mathematical operators to them so as to compute the orientation of the camera (lines 9-20). 
The keyframing of the viewpoint, as set by setKeyframe, implicitly defines a path that traces the midpoints between the ball's positions and the positions 10 units above the camera (lines 16-18). The commands to keyframe the location and orientation of the camera differ because the command pathAnimation applies only to an object's position, not to its attributes. The viewpoint of a camera is an attribute and requires keyframing by explicit means, not by an association with an animation path.

[Footnote 4: In reality, MEL animates a viewpoint by controlling the orientation of a camera, not by explicitly keyframing the viewpoint. Hence, lines 12 to 15 of Figure 9.5 are fictional, producing error. Nonetheless, the code accurately conveys a working implementation of the scene, were MEL to provide such capabilities.]

    create camera and ball
    (1)  camera -p 10 10 10 -name kamera;
    (2)  viewPlace -la 0 0 0 kamera;
    (3)  sphere -la 0 0 0 -name ball;

    create and animate an animation path
    (4)  string $path = `curve -d 2 -p 32 75 18 -p 32 75 -18 -p 16 22 50 -name cameraPath`;
    (5)  setKeyframe -v 0 -t 0 -at tx $path;
    (6)  setKeyframe -v 10 -t 30 -at tx $path;

    animate the position of the camera and ball
    (7)  pathAnimation -stu 250 -etu 950 -c cameraPath kamera;
    (8)  pathAnimation -stu 250 -etu 600 -c cameraPath ball;

    initialize variables
    (9)  int $max = `getAttr cameraPath.degree` + `getAttr cameraPath.spans`;
    (10) int $time = 250;

    produce animation curve for animating camera's viewpoint
    (11) for( $i = 0; $i < $max; $i++ ) {
    (12)   float $cv[] = `pointPosition cameraPath.cv[$i]`;
    (13)   $cv[0] = ($cv[0] + ... ) / 2.0;
    (14)   $cv[1] = (($cv[1] + 100) + ... ) / 2.0;
    (15)   $cv[2] = ($cv[2] + ... ) / 2.0;

    animate eye point
    (16)   setKeyframe -v $cv[0] -t $time -at lkx kamera;
    (17)   setKeyframe -v $cv[1] -t $time -at lky kamera;
    (18)   setKeyframe -v $cv[2] -t $time -at lkz kamera;
    (19)   $time = $time + (700/$max);
    (20) }

Figure 9.5: An implementation in MEL

9.2.3 RASP's Implementation

The code in Figure 9.6 presents an implementation of the scene with RASP. The code consists of two objects, two interpolators, three virtual ports, and six software interactions. The two objects represent the ball and camera; the two interpolators keyframe the movements of the two objects; the three virtual ports affect the orientation of the camera; and the six interactions communicate and disseminate data and temporal information to the objects, interpolators, and virtual ports. The definitions of the two objects and two interpolators appear at the top.
    create camera and ball
    (1)  GeoSphere *ball = new Sphere;
    (2)  Camera *kamera = new PerspCamera;

    set keyframes with an interpolant
    (3)  EvolvePt *evK = new EvolvePt;
    (4)  evK->setMark( 0, Point3(32,75,18) );
    (5)  evK->setMark( 1, Point3(32,75,-18) );
    (6)  evK->setMark( 2, Point3(16,22,50) );

    create interpolant to translate keyframe path
    (7)  EvolvePt *evT = new EvolvePt( Point3(0,0,0), Point3(10,0,0) );

    create virtual ports for camera's viewing
    (8)  OPort *ev = *evK->outCurVal() + *evT->outCurVal();
    (9)  OPort *uplook = *ev + Point3(0,100,0);
    (10) OPort *midlook = (*uplook + *ball->outPosition())/2.0;

    establish temporal interactions
    (11) DurationEvent *dEvt = new DurationEvent( evK->inDuration(), evT->inDuration() );
    (12) TimeEvent *tEvt = new TimeEvent( evK->inCalcValue(), evT->inCalcValue() );
    (13) TimeEvent *tEvt2 = *tEvt * 2.0;

    establish software interactions
    (14) Event *evt1 = new Event( ev, kamera->inPosition() );
    (15) Event *evt2 = new Event( midlook, kamera->inLookAt() );
    (16) Event *evt3 = new Event( ev, ball->inPosition() );

    organize action unit
    (17) Activity *act = new Activity( "moveCamera" );
    (18) act->addInitEvent( dEvt );
    (19) act->addActEvent( tEvt, evt1, evt2, tEvt2, evt3 );

    organize time graph
    (20) TmGroup *group = new TmGroup;
    (21) TmInterval *inter = new TmInterval( 250, 950 );
    (22) TimingAct *camMv = new TimingAct( act );
    (23) group->addNode( inter, camMv );

Figure 9.6: An implementation in RASP

The two objects, ball and kamera, are instances of Sphere and PerspCamera (lines 1-2). The two interpolators are evK (lines 3-6) and evT (line 7). The first applies three keyframes to create a keyframe path (lines 4-6) while the second applies two positional values to translate the keyframe path along the x-axis.

As illustrated in Figure 9.7, the virtual ports uplook and midlook reference the ports of the two interpolators and the ball to determine where the camera peers as it moves along the translating path (lines 9-10). uplook computes a position 10 units above the position of the camera, and midlook computes a viewing position for the camera halfway between the position of the ball and uplook. The virtual port ev computes the position of the camera for uplook by summing the outgoing values of the two interpolators (line 8). The software interactions evt1 and evt3 also apply ev for communicating the positions of the ball and camera over time.
Ordered sequentially in the second group, the interactions  t E v t and t E v t 2 update both interpolators with a single time step, such that t E v t 2 doubles the time step of t E v t (lines 12-13).  Each of the two interactions alters the local times of  both interpolators so as to produce two different outgoing values for ev, which e v t l and evt3 use to the animate the positions of the ball and camera. With separate time steps and the same interpolators and virtual ports, the ball and camera move along the same path at different rates. The coding of the scene concludes with a small time graph that specifies the timing properties of act (lines 20-23). The script inherits the times 250 to 950 from i n t e r , a node of type INTERVAL. 9.2.4  Comparison  An inspection of the three tools shows that each offers benefits and disadvantages to keyframing. All three share similarities in being a specification for animation, and all three differ in their approaches to software design, software development, and software reuse. The differences affect how the tools manage complexity and ease the development of animated scenes. The implementations of C O R Y ,  MEL,  and  R A S P  are all similar in being specifications  CHAPTER  9. IMPLEMENTING  ANIMATING  METHODS:  A  COMPARISON  132  for animation, not procedural definitions that produce immediate results upon execution. The execution of the three implementations produces intermediary forms that provide instruction to a run-time evaluator. The execution of R A S P sequences software interactions; the execution of M E L organizes a dataflow graph; and the execution of C O R Y assembles cues into scenes. The run-time evaluator of each implementation examines the intermediary forms and executes them over time. The run-time evaluator, not the developers of animation, decides when and how the computer controls concurrent actions. Hence, the developers of animation are freed from explicitly embedding direct controls for parallelism, such as semaphores and co-routines, into the definitions of their keyframes. Moreover, the run-time evaluator is capable of integrating the values of several keyframes so as produce accumulative effects, such as that of moving a camera along a translating path. Despite their similarities, all three tools apply a different approach to keyframing. With R A S P , keyframing is applied by producing components and controlling their communications with interpolants. Small changes to either, which may be defined with interaction constraints or mathematical operators, produce new and interesting results. With M E L , keyframing is defined strictly with the application of function sets to objects, in which the communications between the two are defined implicitly. Thus, the communications between the two are difficult to vary so as to develop the communications most appropriate for a scene. For instance, the communications between pathAnimation and a camera are fixed to communicate only complete coordinate values. Hence, M E L is unable to keyframe the vertical coordinate value of a camera with only pathAnimation. Unlike what R A S P permits with software interactions, M E L is unable to accept readily a single value and integrate it with the camera's current position. Care must be taken either to extract the current position of the camera and integrate the position with new vertical values, or to control directly the dependency graph that interconnects the camera with pathAnimation. 
The former option provides pathAnimation with a complete set of coordinates while the latter requires the use of a additional function set that adds to the complexity of a script. With an additional function set, the script intermixes unsystematically two kinds of functions sets, one which abstractly manipulates the nodes of a dependency graph and another which manipulates directly the configuration of the same dependency graph.  5  With C O R Y , keyframing is applied by partitioning a scene into many individual elements, each of which manages a separate concern. The approach is beneficial to C O R Y in The current version of M E L (3.0) does not document properly the use of function sets in manipulating a script's dependency graph. A listing of all function sets indicates that such manipulations are possible, but documentation is lacking to show its use. 5  CHAPTER  9. IMPLEMENTING  ANIMATING METHODS: A  COMPARISON  133  that it maintains a separation between the temporal specification of a scene and the application of computational elements. Similar to that offered by R A S P , a hierarchy of abstractions decides how and when the computational elements perform over time. This approach differs from that of M E L , which binds tightly the temporal specification of a keyframed scene to the application of function sets with objects. The temporal specification is unavailable for independent update, analysis, and reuse. For instance, the temporal specification of "pathAnimation -stu 250 -etu 950 -c cameraPath kamera," which animates the camera from time 250 to 950 (line 7 of Figure 9.5), is inseparable from the command. To extract and apply the temporal specification elsewhere or to construct a temporal ordering similar to R A S P ' S time graph is impractical and nearly impossible. The three tools also differ in their respective syntax. M E L applies a syntax that resembles an imperative scripting language, such as T C L [111]. Hence, it is probably the easiest of the three to learn. With M E L , developers freely decide the use and organization of objects and function sets. The drawback to this approach is that developers must also discover their own means for effectively managing complexity and supporting reuse. M E L provides few rules for introducing new objects and functions sets, and few means for encouraging the reuse of its parts. For example, it is unclear how to manage the timing characteristics of the scene of Figure 9.2 at a high level of abstraction. It is also unclear whether the same scene can be encapsulated within an object so as to reuse that object and its keyframed values with the function sets of other scenes. The keyframed motions of the ball, for instance, could be useful for guiding the positioning of a secondary camera. RASP  and C O R Y each apply an unconventional syntax, and each involves a greater  learning curve. To apply either tool successfully, one must understand IBS and the use of software interactions respectively, or comprehend the structure of cues and scenes. Building with C O R Y involves the use of declarative expressions. Developers assemble animations by parameterizing data structures with character strings and numerical values. The ordering of data structures is similar to the hierarchical ordering of R A S P . However, the data structures of C O R Y are neither first-class nor organized via software composition. 
Hence, the structures neither support higher-order mechanisms for topology change, such as the meta-scripts of R A S P , nor provide mechanisms for component design, such as the encapsulation of scripts into components. Hence, it is easier with R A S P to coordinate the global effects of several keyframed scenes and to reuse the keyframed scenes in alternative contexts. Despite its unconventional syntax, only R A S P supports the development and use of keyframing with software composition. R A S P encourages the development and reuse of a hierarchical organization of components. For instance, the organization of an action  CHAPTER  9. IMPLEMENTING  ANIMATING METHODS: A  COMPARISON  134  graph builds hierarchies upon software interactions while a time graph builds hierarchies upon scripts. As noted by the WEB3D community and the many developers of graphics 6  tools [167, 145], hierarchies of components are easy to build and apply, and to share as a datafile. Moreover, the hierarchies can be applied separately to define, apply, and share, keyframe values. RASP  is the only implementation of the three that supports rules for building com-  ponents and reusing them effectively in a wide variety of contexts. Both RASP and CORY support a strong separation of concerns, but only RASP explicitly provides rules for maintaining that separation. To encourage reuse and minimize complexity, M E L simply suggests that all implementations not exceed fifty lines of code.  7  With RASP, it is obvious how  the components of the exemplary scene behave if time advances erratically and if multiple keyframing functions (or operations) attempt simultaneous updates to the components' states. The reuse guidelines of RASP associate the temporal states of all components to changes in time, not to absolute or implicit values of time, which may cause errors when time jumps erratically. Furthermore, the guidelines encourage the use of first-class operations, which components may apply to ensure that simultaneous updates caused by keyframing do not produce unwanted results.  9.3  Procedural Animation  In the scene exemplifying the use of procedural animation, a dexterous robot moves conditionally between two moving pucks, puckl and puck2, for 200 units of time. The robot 8  moves continuously between the two pucks until the animation ends. In between its movements, the robot waits at a puck for fifteen units of time. While the robot waits, it travels with the puck that lies underneath its feet. As shown in Figure 9 . 8 , the robot's movements are linear as it moves from puckl to puck2. When moving towards puck2, the robot leaps to a halfway point between the pucks. Before moving to puck2, the robot waits for the distance between the two pucks to be twelve units or less. If the distance is too great, the robot simply waits longer and travels further with puckl. This example is exemplary in presenting procedural animation because it applies formulas and rules to the evolution of state. The formulas determine where the robot moves while the rules determine when the moves occur. The formulas are fitting because they are To understand the views of the W E B 3 D community, see the proceedings of the Web3D Symposium on Computer Graphics, and visit the URL "www.web3d.org." In practice, developers using M E L create scripts that far exceed fifty lines of code [48]. This scene is a subset of the scene described in Lee [88]. 6  7  8  CHAPTER  9.  
adequate but simple. To introduce greater realism, the formulas could easily be adapted to support physically-based motions. The conditions are fitting because they are simple, variable, and change with the state of the animation. The condition relating the distance of the pucks, for instance, changes with the movements of the pucks, and is valid only while the robot waits to move towards puck2.

[Figure 9.8: A robot moves procedurally between two pucks.]

9.3.1 MAM/VRS's Implementation

MAM/VRS is an object-oriented toolkit, written in C++, that supports the specification of animation and 3D interaction with time-dependent functions and behavior graphs, logical hierarchies of nodes controlling shape and time. Implemented as objects, time-dependent functions animate by accessing the methods of graphical objects. As time advances, the functions supply the methods with state-updating values. The behavior graph and its nodes direct an animation over time. Similar to those of the time graph of this thesis, the nodes of the behavior graph either calculate or control the advancement of simulation time, or build event-dependent constraints, which respond to specific events, such as the collision of two objects. The nodes of the graph for controlling time are one of three types: temporal data types, time setters, or time modifiers. Temporal data types represent moments or durations; time setters specify or modify the timing requirements of computational elements; and time modifiers apply transformations to moments of time. The arrangement of the nodes outlines a plan for governing the performance of time-dependent functions.

Building the Scene

The code in Figures 9.9, 9.10, and A.3 implements the scene with MAM/VRS. The code defines the scene as the building of XtRobot, a class with one constructor (Figure 9.9) and one member function (Figure 9.10). The constructor defines and places several time-dependent functions into a behavior graph, which the member function modifies in implementing the logic for animating the movements of the robot. The member function is periodically called upon by the behavior graph as a callback function.

[Footnote 9: The code of Figure A.3 presents the class definition for XtRobot.]
    XtRobot::XtRobot() {

    initialize scene
    (1)  SO<MSceneView> view = MSceneView( ... );
    (2)  canvas = new TClCanvas( ... );
    (3)  puck1 = new RCylinder( ... );
    (4)  puck2 = new RCylinder( ... );

    for moving robot (from puck1 to puck2) in a straight path
    (5)  mapLinear = new MTimeLinearMap<RVector>();
    (6)  moveToP2 = new MTimeCt<RComposite, const RVector&, RVector>( robot,
             &RComposite::setCenter, &RComposite::getCenter, mapLinear );
    (7)  moveToP2->setTimeRequirement( MTimeRequirement( 0,0,30 ) );

    for moving robot (from puck2 to puck1) along a curved path
    (8)  points = new RArray<RVector>(3);
    (9)  RArcCurve *curve = new RArcCurve( points->newIterator() );
    (10) MTimeCurveMap *mapCurve = new MTimeCurveMap( curve );
    (11) moveToP1 = new MTimeCt<RComposite, const RVector&, RVector>( robot,
             &RComposite::setCenter, &RComposite::getCenter, mapCurve );
    (12) moveToP1->setTimeRequirement( MTimeRequirement( 0,0,30 ) );

    for producing FSA behavior
    (13) fsaBehave = new MBehaviorCallBack;
    (14) fsaBehave->setTimeRequirement( MTimeRequirement( true ) );
    (15) fsaBehave->setTimeCallback( new RMethodCallback( this, &XtRobot::fsaUpdate ));

    adding behaviors to time graph
    (16) studio()->addGeometry( canvas, view );
    (17) studio()->addBehavior( moveToP2, moveToP1 );
    (18) studio()->addBehavior( fsaBehave );
    (19) studio()->switchOff( moveToP1 );
    }

Figure 9.9: An implementation in MAM/VRS

The constructor for XtRobot initializes the scene (class) with several basic variables (lines 1-4) and two time-dependent functions, moveToP2 (lines 5-7) and moveToP1 (lines 8-12). The first function maps the movements of the robot to a straight line while the second maps the movements of the robot to a curved arc. Both the line and arc interpolate between several positions, which are unavailable to the constructor. The positions are only available when the scene executes. The constructor finalizes the scene by producing a third time-dependent function (lines 13-15) and integrating all three functions into a behavior graph (lines 16-18). The third function, fsaBehave, uses the member function XtRobot::fsaUpdate to toggle the activity of the functions effecting the motions of the robot. Applying similar functions to that which initially disables moveToP1 (line 19), the member function switches the functions on and off to stagger their execution.

The member function fsaUpdate for XtRobot performs two tasks. It updates the positions of the motion paths (lines 26-27, 36-39) and toggles the movements of the robot
The member function fsaUpdate for XtRobot performs two tasks. It updates the positions of the motion paths (lines 26-27, 36-39) and toggles the movements of the robot  CHAPTER  9.  IMPLEMENTING  ANIMATING  METHODS:  A COMPARISON  138  (lines 28-33, 40-45). The positions of the motion paths must change to accommodate the movements of the pucks. The positions of both pucks will undoubtedly change whenever the robot is in motion. The toggling of the robot's movements occurs if simulation time is advancing (line 20-21) and the current state has been active for fifteen units of time (lines 22-23). If the robot is moving towards puckl, the toggling will not begin until the distance between the two pucks is less than twelve spatial units (line 28). 9.3.2 FRAN  FRAN'S  Implementation  is an experimental library for interactive animation that embodies a continuous model  of time. Implemented as a Haskell library [68], F R A N embodies functional reactive animation, a declarative approach to building interactive content [41]. F R A N defines an animation as a collection of declarations, not as the by-product of a sequence of imperative commands. Consisting of data types and functions, the declarations compose time-varying values with events that trigger actions to animate state. For instance, the touching of a button by a user during an interactive animation could trigger the button to fade away. The declarations of F R A N prevent developers from meddling with the details of piecing an animation together as a collection of parts and commands. For instance, a translating object is neither an object with an attribute that translates nor an object undergoing translation by external forces. The translating object is defined explicitly as an object that translates. The benefit of applying F R A N to animation is that its declarative definitions compose easily and encourage reuse.  Building the Scene The code of Figures 9.11 and A.4 shows an approximate implementation of the scene with FRAN.  1 0  The code is inexact because the current version of F R A N (vl.13) is insufficient to  provide a complete implementation. The library lacks various functions, such as those for time management, and also lacks various operators, such as those for multiplying numbers and 3D points, which are critical for implementing the scene properly. Nevertheless, the code of Figure 9.11 begins with pseudo-code to define two moving pucks (lines 6-8). The remainder of the code proceeds to define a robot by its movements. The robot is defined declaratively as a geometric shape that follows a plan to alternate between two movements. The code of Figure A.4 appears as the preamble to the code of Figure 9.11. The preamble defines two pucks of different sizes and colors, and one robot colored green. The two pucks effect the positional values of posl and pos2 (lines 6-7 of Figure 9.11). 10  CHAPTER  9. IMPLEMENTING  ANIMATING  METHODS:  A  COMPARISON  139  The code concludes the animation by defining a display and associating the movements of the robot with those of the pucks (line 32).  animate position of the pucks (6) - Point3B pos = origin3 (7) Point3B posl = . . . code to animate posl-here ... (8) Point3B pos2 = . . . code to animate pos2 here ... plan for alternating the robot's movements between pucks (9) actions :: UserB -» Point3B -» Point3B -> GeometryB (10) actions u posl pos2 = (moveTol posl) 'untilB' stopl -=> (moveTo2 pos2) 'untilB' stop2 -=> actions u posl pos2 (11) (12) where timeln = . . . 
(14)     dist1 = distance3 pos pos1
(15)     dist2 = distance3 pos pos2
     check conditions
(16)     cond  = predicate (dist1 ==* 0) u
(17)     stop1 = cond &&* timeIn
     check conditions
(18)     near  = predicate (dist2 <* 12) u
(19)     stop2 = near &&* timeIn
move robot towards puck #1
(20) moveTo1 :: UserB -> Point3B -> GeometryB
(21) moveTo1 u pos1 = (action u half 5) `untilB` cond -=> (action u pos1 5)
(22)   where
(23)     half = (pos + pos1) / 2
(24)     dist = distance3 pos half
(25)     cond = predicate (dist ==* 0) u
move robot towards puck #2
(26) moveTo2 :: UserB -> Point3B -> GeometryB
(27) moveTo2 u pos2 = (action u pos2 10)
interpolate positional values
(28) action :: UserB -> Point3B -> DoubleB -> GeometryB
(29) action u goal duration = moveTo3 pos robot
(30)   where
(31)     pos = (userTime u) / duration * goal
display robot and pucks
(32) main = displayU ((actions u pos1 pos2) `unionG` (puck1 `unionG` puck2))

Figure 9.11: An implementation in FRAN

The plan that guides the robot is produced with the function actions, which defines the conditions for advancing the robot towards a specific puck (lines 9-19). When the robot is situated at puck1 and has been advancing towards puck1 for at least fifteen units of time, the function moves the robot towards puck2 (lines 16-17). When the robot is within twelve units of puck2 and has moved towards puck2 for at least fifteen units of time, the function repeats the robot's original movement of advancing towards puck1 (lines 18-19).

The functions moveTo1 and moveTo2 define the advancing motions of the robot towards a particular puck (lines 20-27). moveTo1 simulates a leaping motion by advancing the robot to a halfway point, which raises the robot off the ground. Upon reaching the halfway point, the robot falls directly towards the puck (lines 20-25). Both functions produce their motions with the help of action, a function that interpolates the robot's position (lines 28-31). It moves the robot from its current position to a goal position over a duration of time.

9.3.3 RASP's Implementation

The code in Figure 9.13 presents an implementation of the robot's movements with RASP. The code consists of one interpolator, seven software interactions, three virtual ports, and two virtual events. The interpolator computes the movements of the robot, which are determined by six interactions and one virtual port. The remaining virtual ports compute the distance separating the robot from puck2 and the duration of the robot's movements. These values and the virtual events are applied by an FSA, which is embodied by an interaction, to toggle the movements of the robot. As illustrated in Figure 9.12, the FSA contains two states, one for each of the robot's movements. The code concludes by defining a script and embedding that script into a small time graph. The script updates fsa for a period of time, which is ultimately determined by the time graph.

[Figure 9.12: A finite state automaton. The diagram shows two states, one per movement, with transitions between them.]

The first five interactions communicate information to and from the interpolator (lines 2-6). Over time, the interpolator receives updates of time and puck positions from the interactions evt0 and jump. The virtual events stateEvt1 and stateEvt2 establish relations between evt0, the event providing the interpolator with time, and the events that parameterize the interpolator with puck positions (lines 9-10).
The event fsa, a finite state automaton with two states, executes these virtual events to move the robot between pucks (lines 11-13). fsa uses stateEvt1 in its first state to initialize the interpolator with a linear path (line 12). In its second state, fsa uses stateEvt2 to initialize the interpolator with a curved path (line 13).

interpolator
(1)  EvolutePt *ev = new EvolutePt(3);
parameterize the interpolator
(2)  Event *eb  = new Event( robot->outPosition(), ev->inBeginVal() );
(3)  Event *ef1 = new Event( puck1->outPosition(), ev->inFinishVal() );
(4)  Event *ef2 = new Event( puck2->outPosition(), ev->inFinishVal() );
update the interpolator w/ temporal values
(5)  TimeEvent *evt0 = new TimeEvent( ev->inCalcValue() );
(6)  Event *eSet = new Event( ev->outCurVal(), robot->inPosition() );
compute midpoint of robot's jump to a puck
(7)  OPort *mid = *(*(*robot->outPosition() + *puck1->outPosition())/2.0) + Point3(0,2,0);
communicate midpoint of jump
(8)  Event *jump = new Event( mid, ev->inMidVal() );
establish virtual events
(9)  VEvent *stateEvt1 = *ef2 && *evt0;
(10) VEvent *stateEvt2 = *ef1 && *evt0 && *jump;
establish declarative method w/ finite state automaton
(11) FiniteStateEvent *fsa = new FiniteStateEvent( 1 );
(12) fsa->setState( 1, eb, stateEvt1 );
(13) fsa->setState( 2, eb, stateEvt2 );
determine if robot is less than 12 units from puck2
(14) OPort *bClose2 = *distVP( robot->outPosition(), puck2->outPosition() ) < 12;
does robot's time in its current state exceed 15
(15) OPort *bTime = *fsa->outDurationCurState() >= 15;
state transitions
(16) fsa->setTransition( 1, *bClose2 && *bTime, 2 );
(17) fsa->setTransition( 2, 15, 1 );
establish an action unit
(18) Activity *act = new Activity();
(19) act->addActEvent( fsa, eSet );
establish a time graph
(20) TmGroup *group = new TmGroup;
(21) group->addNode( new TmInterval( 0, 200 ), new TimingAct(act) );

Figure 9.13: An implementation in RASP

As illustrated in the software network of Figure 9.14, jump uses the virtual port mid to provide the interpolator with an intermediate position for leaping (line 8). The intermediate position is produced by averaging the positions of the robot and puck1, and then applying a vertical offset (line 7). The virtual ports bClose2 and bTime establish the conditions for advancing fsa to its second state, the moving of the robot towards puck2 (line 16). Produced with the function distVP, bClose2 returns true if the distance between the robot and puck2 is less than twelve spatial units (line 14). bTime accesses a port from fsa and returns true if fsa spends more than fifteen units of time in its current state (line 15).

[Figure 9.14: The software network produced by the code of Figure 9.13. The diagram shows the robot, the two pucks, the positional controller, and the FSA, with the communicational and temporal events that connect their ports.]
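Lines 14-15 rely on overloaded operators to derive new virtual ports from existing ones. The following is a rough C++ analogue of that idiom; the classes are hypothetical stand-ins, not RASP's actual OPort machinery, but they show how a comparison against a constant can yield a lazily evaluated port that tracks run-time state.

#include <functional>
#include <memory>
#include <iostream>

struct VPort {
    std::function<double()> eval;              // deferred computation
};

// Comparing a port against a constant yields a new port whose value is
// recomputed from the operand port every time it is read.
std::shared_ptr<VPort> operator<(std::shared_ptr<VPort> p, double limit) {
    return std::make_shared<VPort>(VPort{
        [p, limit] { return p->eval() < limit ? 1.0 : 0.0; }});
}

int main() {
    double distance = 20.0;                    // mutable scene state
    auto dist = std::make_shared<VPort>(VPort{[&] { return distance; }});
    auto isNear = dist < 12.0;                 // builds a derived port

    std::cout << isNear->eval() << "\n";       // 0: still far away
    distance = 5.0;                            // scene changes at run time
    std::cout << isNear->eval() << "\n";       // 1: the port tracks the change
}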
9.3.4 Comparison

The implementations of MAM/VRS, FRAN, and RASP vary greatly in design. MAM/VRS and FRAN rely heavily upon user-defined functions and callbacks in creating a procedural animation. The functions either define the formulas that compute state or establish the conditions that control the evolution of state. RASP, in contrast, relies heavily upon the creation of scripts, which assemble a procedural animation as a collection of interactions and components. The interactions apply conditions to the transport of information, and the components apply formulas for computing the evolution of state. Reinforced with interaction constraints and multi-modeling methods, the definition of a procedural animation is confined to the body of a single script. Unlike the functions and callbacks of MAM/VRS and FRAN, the single scripts of RASP localize semantic information and develop from a methodic approach, two important traits in easing the development and application of procedural animation. The script for animating the movements of the robot procedurally, for instance, is easily changed to accept new formulas and conditions that determine when and how the robot moves.

Of the three tools, neither MAM/VRS nor FRAN provides effective mechanisms for controlling state changes with high-level abstractions or hierarchical structures. Apart from MAM/VRS providing a behavior graph, both tools encourage developers to create their own abstractions. Although the approach provides much freedom to developers, it is ineffective in promoting a consistent approach to procedural animation that is effective in managing complexity and encouraging software reuse. For example, the finite state automaton (FSA) that is embodied within the implementations of MAM/VRS and FRAN is difficult to apply in new contexts, as it is tedious to isolate the definition of the FSA from the code that toggles the movements of the robot. An FSA, which RASP supports as an explicit abstraction, is a useful device for implementing other procedural animations, such as those that adapt the motions of a robot to its environment. On solid surfaces the robot walks, and on slippery surfaces the robot flies or swims.

RASP and MAM/VRS encourage a separation of concerns while FRAN does not. The behavior graphs of MAM/VRS are similar to the time graphs of RASP in that both graphs manipulate the temporal aspects of a scene, independent of the nature of the scene's computations and the presentation of the scene's shapes.¹¹ FRAN intentionally binds the description of components with the code that animates their behavior. With this coupling, the two concerns are difficult to extract or modify independently for use in other procedural animations. For instance, FRAN embeds the timing of the example scene directly into the definition of the robot. The timing is neither easy to change nor easy to extract for use with other procedurally-defined movements, such as those having the robot swim between two pucks. Furthermore, the timing, as it stands, is difficult to adapt because it cannot accept the use of temporal relations, expressions that schedule the timings of multiple actions. Procedural animation benefits greatly from the use of temporal relations in that the animation is capable of associating its actions with the execution of other actions.

¹¹MAM/VRS provides a rich set of temporal operators that may prove useful for augmenting RASP's capabilities in managing time and computation separately.

9.4 Particle System Animation

In the scene exemplifying the use of particle system animation, one hundred particles slowly drift across a flat plane (see Figure 9.15).
Moving for 500 units of time, the particles advance smoothly until they near a pole, which moves randomly on the surface of the plane. As the particles near the pole, they adjust their movements to avoid collision. When the particles are free from collision, they resume their original movements and drift freely over the plane. When the particles move beyond the boundaries of the plane, they terminate and begin again. This example is representative of particle system animation because it moves many particles as a whole and applies commands to the particles so that they behave as a group. As a group, the particles move incrementally across the plane until they near the pole. Once they are near the pole, the group moves actively to avoid collisions.

Figure 9.15: A particle system avoids a moving pole

9.4.1 ASAS's Implementation

ASAS is an object-oriented system that animates with the use of actors and newtons. Actors are independent computing processes that operate by way of message passing. They are initialized, activated, and terminated by scripts, themselves, or other actors. The actors of ASAS can respond to external state changes. Their behaviors are determined not only by their actions, but also by their relationship with their environment. Newtons are state variables that update automatically with the advancement of simulation time. Upon request, they interpolate between states according to the values of piecewise, continuous cubic curves. Newtons are especially useful for producing equations that repeatedly return new values.

Building the Scene

The code of Figures 9.16 and A.5 presents an approximate implementation of the scene with ASAS.¹² The code is inexact because the documentation for ASAS is incomplete and the definitions of key functions are unavailable. To code the scene properly, the implementation of Figure 9.16 introduces set!, a new command for resetting the value of a variable. The semantics for set! is borrowed from SCHEME [149], a language for functional programming that shares many similarities with the original design and syntax of ASAS.

The code of Figure 9.16 begins with a script that animates 100 particles for 500 units of time (lines 10-16). The function create-particles recursively calls upon make-particle to model every particle as an actor (lines 17-21). The actor provides shape to a particle and animates its position over time. The parameters <rand?> initialize the local variables of make-particle, deltaV and position, with random starting values (lines 24-25). To control the position and movements of a particle, make-particle applies two conditionals. The first conditional compares the position of the particle with that of the pole. If the pole is nearby, the particle moves to a new position computed by the function offset, appearing in Figure A.5.

¹²The code of Figure A.5 appears as the preamble to the code of Figure 9.16. The preamble creates a moving pole and defines offset, a function for updating the position of a particle, such that the particle avoids the position of the pole.

set timing of particles
(10) (script moving-particles
(11)   (local: (runtime 500)
(12)           (number 100))
(13)   (animate (cue (at 0)
(14)                 (start (create-particles number)))
(15)            (cue (at runtime)
(16)                 (cut))))
create many particles via recursion
(17) (defop create-particles
(18)   (param: index)
(19)   (if (zerop index)
(20)       (then nothing)
(21)       (else (group-actors (make-particle) (create-particles (dif index 1))))))
produce one particle having shape, position, and velocity
(22) (defop make-particle
(23)   (actor (local: (mote (recolor color particle))
(24)                  (deltaV (vector <randX> <randY> <randZ>))
(25)                  (position (vector <randX> <randY> <randZ>)))
set particle
(26)     (grasp mote)
check if particle is near pole
(27)     (if (< (dist position pole) poleRadius4)
(28)         (set! position (offset (dist position pole) position deltaV))
(29)         (else (set! position (plus position deltaV))))
check if particle moves beyond plane range
(30)     (if (> position (vector -999 -999 215))
(31)         (set! position (vector <randX> <randY> <randZ>)))
move and view particle
(32)     (move position)
(33)     (see mote)))

Figure 9.16: An implementation in ASAS

The second conditional determines whether the position of the particle exceeds the boundaries of the plane. If that occurs, the position of the particle is reset to a new initial value. The code for make-particle concludes with functions to update the position of the particle and to visualize the particle's shape.

9.4.2 The PARTICLE API's Implementation

The PARTICLE API is a library of C++ functions for programming a particle system. Stylistically similar to the functions of OPENGL [105], which is a commonly accepted tool for graphical programming, the functions of the PARTICLE API base their behavior upon a global state. Every function call affects the global state and the performance of successive operations. The functions of the PARTICLE API are of four kinds: state, group, action, and action list. Each kind produces a separate effect on the global state. State functions establish or modify the global state by controlling the attributes of particles. Group functions organize collections of particles into particle groups, which are then acted on by action functions. Group functions either add (or remove) particles from a group, or render the particles to a display. Action functions establish the behavior of a particle group. They produce effects, such as gravity and bouncing. Action list functions assemble action functions into scripts, such that the action functions apply easily to particle groups. The action list functions permit complex effects to be managed and applied like primitives.

Building the Scene

The code in Figure 9.17 presents an implementation of the scene with the PARTICLE API. The code begins with the creation of a particle group (lines 1-2) and with the preparation of an action list (lines 3-11). The particle group is identified with the handle particle_handle. The action list contains five functions for establishing the attributes of the particles, such as their velocity and color (lines 5-7), and for determining where the particles roam in physical space (lines 8-9). Particles that roam beyond the physical space disappear and restart from the beginning. The code ends by setting the current time step and identifying a rendering loop, which determines how the particles avoid the moving pole and how the particles are drawn (lines 12-18). Each pass of the rendering loop instructs the particles of particle_handle to avoid the current position of the moving pole and to appear on a display as line segments.

make a particle group
(1)  int particle_handle = pGenParticleGroups(1, 100);
(2)  pCurrentGroup(particle_handle);
create action list
(3)  int action_handle = pGenActionLists(1);
(4)  pNewActionList(action_handle);
set the attributes of particles
(5)  pVelocityD( PDLine, 0,0,1.5, 0,0,2.5 );
(6)  pColorD(1.0, PDLine, 0.8,0.9,1.0, 1.0,1.0,1.0 );
(7)  pSize(1.0);
generate particles
(8)  pSource(10, PDBox, -5,2,-130, 5,3.5,-105 );
check if particles drift beyond plane
(9)  pSink(false, PDPlane, 0,0,215, 0,0,-1);
move particles to their new positions
(10) pMove();
complete action list
(11) pEndActionList();
set time step, avoid pole, draw the particles
(12) pTimeStep( 1.0 );
(13) while(1) {
(14)     updatePolePosition();
(15)     pAvoid( 1, .1, 0, PDCylinder, 0,0,0, 0,pole.position,0, pole_radius,0 );
(16)     pCallActionList( action_handle );
(17)     pDrawGroup(GL_LINES);
(18) }

Figure 9.17: An implementation with the PARTICLE API
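Because the API's functions act upon a global current group, an action list built once can drive several groups, and a per-group time step can be simulated by resetting the global step before each call. The sketch below illustrates both points using only calls that appear in Figure 9.17; the second group, the helper function, and the step values are hypothetical additions, not part of the thesis scene.

// Hypothetical helper: advance two groups with one shared action list.
void step_two_groups( int action_handle, int near_group, int far_group )
{
    pCurrentGroup( near_group );      // actions apply to the current group
    pTimeStep( 0.25 );                // finer step for particles near a collision
    pCallActionList( action_handle );

    pCurrentGroup( far_group );
    pTimeStep( 2.0 );                 // coarser step for free-drifting particles
    pCallActionList( action_handle ); // same scripted effects, other group
}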
9.4.3 RASP's Implementation

The code appearing in Figures 9.18 and 9.19 presents an implementation of the scene with RASP. The code of Figure 9.18 animates a particle system with port groups, high-level abstractions that consolidate many individual ports into a single port providing or consuming multiple values. Established in the coding of Figure 9.19, the port groups represent the positions and velocities of every particle and of every motion controller. As illustrated in the software network of Figure 9.20, the code applies the port groups to establish three virtual ports that determine whether the particles are in close proximity to the pole (lines 19-21). Particles are close if their distances are less than four times the radius of the pole (line 19). The code uses four additional virtual ports to ensure that the movements of the particles avoid the position of the moving pole (lines 22-25). The virtual ports direct the particles away from the pole in proportional increments by perturbing the current positions with random values.

The code applies the virtual ports with two conditional software interactions, cEvt and cEvt2 (lines 27-35). cEvt applies one of two sets of software interactions for each particle. If a particle nears the pole, cEvt communicates the virtual port newPos to physicP, the port group that references the particle's position (lines 31-32). If the particle is not close, cEvt updates the position of the particle by communicating information from the motion controllers, motionP and motionV. cEvt2, performed after a particle moves, tests if a particle moves beyond the extent of the plane (lines 27-29). If so, the particle moves to the beginning and starts again.

The code concludes with the definition of an action unit and a time graph. The action unit sequences the events tEvt and cEvt (lines 36-38). tEvt advances the motion controllers over time while cEvt controls the positioning of the particles. The time graph identifies an interval of time for the entire animation and passes it onwards to the action unit (lines 39-42).
check if particles are near pole
(19) OPort *minDist = *(pole->outRadius()) * 4.0;
(20) OPort *distance = dist( groupO->outAllPorts(), pole->outPosition() );
(21) OPort *isNear = *distance < *minDist;
compute new position of particles
(22) OPort *fact = *minDist / *distance;
(23) OPort *diff = *norm( *(groupO->outAllPorts()) - *(pole->outPosition()) ) * *fact;
(24) OPort *off = *(*diff + *(groupO->outAllPorts())) + *(rand->outVal());
(25) OPort *newPos = *off + *motionV->outAllPorts();
check if particles drift beyond plane
(26) OPort *offPlane = *(groupO->outAllPorts()) > Point3(-999,-999,215);
(27) CondEvent *cEvt2 = new CondEvent( offPlane );
(28) cEvt2->setTrueEvt( new CallEvent( rand2->inMakeVal() ) );
(29) cEvt2->setTrueEvt( rand2->outVal(), groupI->inIndexPort() );
check if particles near pole
(30) CondEvent *cEvt = new CondEvent( isNear );
(31) cEvt->setTrueEvt( new CallEvent( rand->inMakeVal() ) );
(32) cEvt->setTrueEvt( newPos, physicP->inOnePort() );
(33) cEvt->setFalseEvt( motionP->outIndexPort(), physicP->inOnePort() );
(34) cEvt->setFalseEvt( motionV->outIndexPort(), physicV->inOnePort() );
(35) cEvt->setFalseEvt( cEvt2 );
assemble activity
(36) TimeEvent *tEvt = new TimeEvent( motions->inTime() );
(37) Activity *act = new Activity;
(38) act->addActEvent( tEvt, cEvt );
assemble time graph
(39) TimingAct *tAct = new TimingAct( act );
(40) TmInterval *interval = new TmInterval( 1, 500 );
(41) TmGroup *group = new TmGroup;
(42) group->addNode( interval, tAct );

Figure 9.18: An implementation in RASP

The code of Figure 9.19 is a partial implementation of the preamble to the code of Figure 9.18.¹³ The preamble organizes one hundred particles and one hundred motion controllers into several port groups, which assemble many ports into a single abstraction (lines 3-4). For instance, the port group groupO provides the positions of many particles by collating the ports of the particles that provide positional information (lines 3 and 12). All the positions are obtained instantaneously by accessing groupO's port outAllPorts; the function dist does this in ascertaining the distances between the particles and the pole (line 20 of Figure 9.18). Upon request, dist computes all the distances and stores them in the virtual port distance.

¹³A complete implementation of the preamble appears in Figure A.6 of the appendix.

random number generators
(1)  RandomPt3 *rand = new RandomPt3( Point3(-1,0,0), Point3(1,0,0) );
(2)  RandomPt3 *rand2 = new RandomPt3( Point3(-5,2,-130), Point3(5,3.5,-105) );
assemble ports into groups
(3)  OPortGroup *groupO = new OPortGroup;
(4)  IPortGroup *groupI = new IPortGroup;
load ports of particles into groups
(10) for (int i=0; i<100; i++) {
(11)     MotionLaw *motion = new LawsOfMotion;
(12)     Physics *pInfo = (Physics*) particles[i]->getAttribute( Physics::getClassTypeId() );
(13)     groupO->addPort( particles[i]->outPosition() );
(14)     groupI->addPort( particles[i]->inPosition() );
(20) }

Figure 9.19: A partial implementation of the preamble to the code of Figure 9.18
[Figure 9.20: The software network produced by the code of Figure 9.18. The diagram shows the ports, virtual ports, and events connecting the particle groups, the motion controllers, the pole, and the random number generators.]

9.4.4 Comparison

The implementations of ASAS, the PARTICLE API, and RASP differ in their approach to producing a particle system. ASAS treats every particle of a particle system as a separate entity with its own behavior, the PARTICLE API treats a group of particles as one entity with one behavior, and RASP treats many particles with individual behaviors as a single entity. In producing the scene, ASAS defines and animates many individual particles. Unlike RASP and the PARTICLE API, ASAS does not support a separation between the code that defines a component and the code that animates its properties. Every particle contains the code that animates its behavior. Hence, to alter the behavior of the particle system, one must update the definition of a particle. Moreover, to reuse the behavior elsewhere, as in the guiding device for a flock of birds, one must extract its implementation from the particle.

In producing the scene, the PARTICLE API creates a particle group and selectively binds the group with attributes. This design manages complexity by accommodating hierarchical development and encourages reuse by separating the definitions of particles and attributes. Particle groups become hierarchical as they embed other particle groups directly into their definitions. The PARTICLE API is useful but lacking in support for producing and applying particle systems of a wide variety. The tool offers no means for applying other animating methods, such as keyframing and procedural animation. To keyframe the movements of a particle system, developers must introduce their own measures, which may often be inconsistent across several applications. Furthermore, the API lacks a means for animating the attributes of particle groups or producing conditionals that decide when and how attributes apply. Particle systems with changing attributes produce fantastical results as they change their colors and behaviors over time. As developers do in using other animating methods, they must produce their own methods to vary the application of attributes. Most often, these methods involve the use of imperative commands that are troublesome to use in coordinating the behaviors of concurrent particle systems. For instance, the code in Figure 9.17 applies a "while" loop to update the particles with the positions of the moving pole. The while loop is well suited to handle particle systems that operate exactly in parallel but not those that begin and end at different times. When the times vary, many conditionals must be applied to ensure that every particle system operates properly.

The implementation of RASP applies an intermediary approach to the modeling of a particle system. It creates individual particles and collates them into a group with port groups. Although this design is less efficient than one that manages an entire group as a whole, like the PARTICLE API, the design permits the particles of a particle system to exhibit individual behaviors. Port groups are capable of collating many kinds of ports from many kinds of components. Each of the ports of a port group need only provide or consume the same type of values.
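The shape of the abstraction is easiest to see in isolation. The sketch below is a simplified, hypothetical rendering of a port group in C++; RASP's OPortGroup and IPortGroup carry considerably more machinery, such as typed ports, events, and indexed access, but the collating idea is the same.

#include <functional>
#include <vector>
#include <iostream>

template <typename T>
class OutPortGroup {
    std::vector<std::function<T()>> ports;   // one provider per component
public:
    void addPort(std::function<T()> p) { ports.push_back(std::move(p)); }

    // Reading the group polls every collated port, in the manner of outAllPorts.
    std::vector<T> allValues() const {
        std::vector<T> vals;
        for (const auto& p : ports) vals.push_back(p());
        return vals;
    }
};

int main() {
    OutPortGroup<double> positions;
    double a = 1.0, b = 2.5;
    positions.addPort([&] { return a; });   // port of one component
    positions.addPort([&] { return b; });   // port of a different kind of component
    for (double v : positions.allValues()) std::cout << v << " ";
    std::cout << "\n";
}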
For instance, the port groups of Figure 9.18 could easily accept a port from a component of type Flocking, a behavior for modeling the movements of birds and fish. Every particle of this particle system would move similarly, except one that would act as though it were part of a flock. Furthermore, several port groups may be combined into one so as to collate the effects of several particle systems. Each of the particle systems acts independently while being controlled and reused as a single group.

RASP is the only implementation of the three that readily permits adaptive time steps and enables concurrent particle systems to update at separate rates. Not every component or interaction of a particle system needs to update at the same rate or frequency. A particle group nearing collision may apply a smaller time step than a particle group moving freely in space. A smaller time step is often critical to avoid temporal aliasing, artifacts arising from the use of inappropriate time steps. ASAS and the PARTICLE API are less flexible in manipulating time in that they require the use of explicit controls in varying temporal progression. For instance, to animate two particle groups with separate time steps, the PARTICLE API requires the use of multiple "while" loops or the use of conditionals that compute separate time steps for each group.
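A minimal sketch of the adaptive-stepping idea follows, assuming a hypothetical distance-based rule: groups near an obstacle advance their local clocks with finer steps than groups in free space. The step rule is illustrative only, not RASP's actual policy.

#include <cstdio>

double chooseTimeStep(double distanceToObstacle) {
    // Hypothetical rule: shrink the step as a collision approaches, to
    // avoid temporal aliasing; use a coarse step for free-drifting groups.
    if (distanceToObstacle < 4.0)  return 0.25;
    if (distanceToObstacle < 12.0) return 0.5;
    return 2.0;
}

int main() {
    double tNear = 0.0, tFar = 0.0;           // local clocks of two groups
    while (tNear < 10.0 || tFar < 10.0) {
        if (tNear < 10.0) tNear += chooseTimeStep(3.0);   // group near the pole
        if (tFar  < 10.0) tFar  += chooseTimeStep(50.0);  // group in open space
        std::printf("near: %5.2f   far: %5.2f\n", tNear, tFar);
    }
}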
By communicating behavioral information to the particles, software interactions enable the particles to adapt to run-time conditions and join into groups that exhibit variable behaviors. Because the reuse guidelines allow a particle system to accommodate adaptive time steps, two particle systems may operate at separate rates. The separation of computation from coordination, as supported by IBS and the general specification, is instrumental in facilitating the development of an animation and enabling its changes. Despite its absence in all but one of the six tools, this feature is quite useful in developing an animation from an assembly of parts and in controlling the timing of an animation independent of the behavior of its actions. As an assembly of parts, an animation is enabled to create hierarchies and to intermix sets of heterogeneous components. With separate controls, an animation is freely enabled to set the timing of its actions to vary with run-time conditions or the execution of other actions.  Chapter 1 0 Implementing Domain Applications: A Comparison This chapter presents the second of three ways proposed by this thesis to validate the design and application of IBS to visual simulation. This chapter demonstrates the utility of IBS to support the development of applications from different domains that rely upon animated visuals for presenting results or for conveying time-varying behaviors. Two domains under investigation are algorithmic animation and scientific visualization. Each domain introduces a variety of concerns affecting the development and application of visual simulation. In presenting its goals, this chapter applies a format similar to that of chapter 9. It presents an exemplary scene for a particular domain and compares the scene's development with R A S P and with two other tools representative of the domain. The scenes demonstrate that IBS helps developers build applications in these domains. The designs of the application need not be altered radically to accommodate the use of IBS. In addition, each example provides further evidence of the benefits of IBS in supporting multiple animating methods, managing software complexity, and encouraging software reuse.  10.1  Overview  The examples of this chapter focus upon two domains in which visual simulation is key: algorithmic animation and scientific visualization. Each domain develops animation for a different purpose. Algorithmic animation uses animation to present a visual representation of the execution of an algorithm, such as one that sorts data or one that explores the structure of a complex data set. Those who build algorithmic animations encourage a strong separation between the code that animates and the code that defines an algorithm. The 153  CHAPTER 10. IMPLEMENTING  DOMAIN APPLICATIONS: A COMPARISON  154  separation facilitates easy changes to each and simplifies the encoding of an algorithm, the execution of which is being animated [142]. Useful tools for comparing the application of algorithmic animation are P O L K A [143] and O B L I Q - 3 D [104]. P O L K A , the latest version of T A N G O , is a popular tool for developing algorithmic animations, and O B L I Q - 3 D is a high-  level, fast-turnaround 3D animation system that, like this thesis, bases its approach upon an unconventional method of programming. 
Scientific visualization uses animation as a tool for observing the behaviors of dynamic systems and for presenting the values of time-dependent data sets, such as those of a weather pattern. As time progresses, an animation varies the position, representation, and attributes, such as color, of graphical entities representative of data sets and their states. Developers of scientific visualization systems encourage the use of high-level abstractions for managing data sets and high-level functions for manipulating the values of the data sets. The data sets of scientific visualizations are usually large and are normally evaluated as a whole. Useful tools for building scientific visualizations are VTK [134] and VRML [159]. VTK is an open source, freely available visualization system that builds upon a dataflow architecture, the most commonly applied approach to scientific visualization. Each node of the dataflow architecture performs an elementary operation on a complete data set, for example, filtering the data set or extracting an iso-surface from its values. VRML is a general specification for interactive graphics that is gaining popularity in the visualization community [57]. It is viewable on the world wide web and is easily shared with others. In addition, VRML's approach to animation, which involves the movement of data between computational structures, is compatible with the dataflow architectures common in scientific visualizations.

10.2 Algorithmic Animation

In the scene involving algorithmic animation, twenty cylinders of varying heights animate "selection sort," a sorting algorithm that repeatedly swaps the smallest elements of an array to the front; a plain rendering of the algorithm appears after Figure 10.1. As shown in Figure 10.1, each cylinder corresponds to a value and a position in the array. As the elements of the array swap to the front, their corresponding cylinders also exchange positions in 20 units of time. When two cylinders exchange positions, they follow separate paths so as to avoid collision. When the algorithm finishes, the array is sorted and the twenty cylinders appear in order, from smallest to largest. The visualization of a sorting algorithm is the most widely applied example in presenting algorithmic animation. Sorting algorithms are commonly known, and their visualizations are clear and meaningful. The physical attributes of the cylinders, such as shape and location, provide form and structure to the definition of an array and to the values that the array contains.

[Figure 10.1: A visualization of selection sort. Cylinders labeled with their values stand over array positions 1-20; the sorted prefix lies to the left of the unsorted remainder, and two cylinders are shown mid-swap.]
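For reference, the algorithm being visualized is the exchange-based variant of selection sort that Figures 10.2 and 10.7 also encode; the sample heights below are arbitrary.

#include <algorithm>
#include <array>
#include <cstdio>

int main() {
    std::array<int, 20> heights = { 9,12,10,14,16,12,3,7,18,5,
                                    11,2,19,8,4,15,6,13,17,1 };
    // Repeatedly move the smallest remaining element to the front.
    for (std::size_t i = 0; i + 1 < heights.size(); ++i)
        for (std::size_t j = i + 1; j < heights.size(); ++j)
            if (heights[i] > heights[j])
                std::swap(heights[i], heights[j]);  // two cylinders trade places
    for (int h : heights) std::printf("%d ", h);
    std::printf("\n");
}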
10.2.1 POLKA's Implementation

POLKA (Parallel program-focused Object-oriented Low Key Animation) is a 2½-dimensional¹ animation system, written in C++, that visualizes the execution of algorithms and programs. A successor to TANGO [143], POLKA consists of a variety of procedures and abstractions for simplifying the animation of abstract concepts, such as variable swaps and logical comparisons. POLKA partitions an animation into three levels of abstraction: animator, view, and constituent. The animator level is the global manager of an animation. It executes visual effects in response to animation events, which are registered at the beginning of a program's body. The animation events identify the critical actions or phases of an algorithm, such as branching or swapping, that characterize the algorithm's behavior. The view level is the graphical perspective of a program. It controls the advancement of time and the presentation of visual elements. The constituent level is the low-level design of an animation. It animates the visual content of an animation with AnimObjects, such as lines and circles, and Actions, such as rotations and translations.

¹A 2½D image is a 2D image that employs visual cues, such as shadow drops and gradient shading, to simulate the appearance of 3D imagery.

Building the Scene

The code of Figures 10.2, 10.3, and 10.4 presents an implementation of the scene with POLKA.² The code consists of two class definitions and one main body. The two classes, MyAnimator and Rects, produce the animator and view levels of the animation. The first class uses the second in interpreting the animation events of the main body. The second class produces the animation using AnimObjects and Actions, constituent-level objects that produce rectangles and their movements.

²Since the current version of POLKA does not support cylinders, the scene renders rectangles.

create user's Animator
(1)  MyAnimator anim;
register events with Animator
(2)  anim.RegisterAlgoEvt( "Init", NULL );
(3)  anim.RegisterAlgoEvt( "Input", "dd" );
(4)  anim.RegisterAlgoEvt( "Show", "d" );
(5)  anim.RegisterAlgoEvt( "Swap", "dd" );
initialize Animator
(6)  anim.SendAlgoEvt( "Init" );
provide Animator with data
(7)  for (int count=0; count<10; ++count) {
(8)      a[count] = int(rand()*10);
(9)      anim.SendAlgoEvt( "Input", count, a[count] );
(10) }
draw view of Animator
(11) anim.SendAlgoEvt( "Show", 10 );
perform selection sort
(12) for (int i=0; i<9; i++)
(13)     for (int j=i+1; j<10; j++)
(14)         if (a[i] > a[j]) {
(15)             swap( a[i], a[j] );
(16)             anim.SendAlgoEvt( "Swap", i, j );
(17)         }

Figure 10.2: An implementation in POLKA

The main body, shown in Figure 10.2, registers four animation events with anim, an instance of MyAnimator (lines 1-5). The animation events notify anim of key values, such as those of the array, and of key events that arise during the execution of the body. The code begins the sorting algorithm by initializing anim (line 6) and then creating an array of random values. The array values are provided to anim as animation events, referred to as "Inputs" (lines 7-10). The code concludes with an implementation of "selection sort," which sends a "Swap" animation event to anim whenever two indices of the array exchange their values (lines 12-17).

The code of Figure 10.3 provides a partial implementation of the class interfaces for MyAnimator (lines 18-34) and Rects (lines 35-52).³ MyAnimator uses Rects to define and animate the movements of twenty rectangles. The class interfaces for Rects and MyAnimator identify the correspondence between the animation events of the algorithm and the functions that animate the rectangles (lines 21-30). For instance, MyAnimator interprets the animation event "Swap" by instructing Rects to swap the positions of two rectangles. The class Rects renders and animates the cylinders by storing an array of rectangles and maintaining a copy of the array that the main body sorts (lines 50-51).

³A complete implementation of both classes appears in Figure A.7 of the appendix.
(18) class MyAnimator : public Animator {
(19) public:
(20)     MyAnimator();
         handle events
(21)     int Controller() {
(22)         if (!strcmp(AlgoEvtName,"Init"))
(24)             r.Init();
         ...
(28)         if (!strcmp(AlgoEvtName,"Swap"))
(29)             r.Swap( AnimInts[0], AnimInts[1] );
(31)     }
(32) private:
(33)     Rects r;
(34) };

(35) class Rects : public View {
(36) public:
(37)     Rects();
         functions
(38)     int Init();
(42)     int Input( int, int );
(47)     int Ready( int );
(48)     int Swap( int, int );
(49) private:
(50)     int max, values[20];
(51)     Rectangle *blocks[20];
(52) };

Figure 10.3: An implementation in POLKA

The code of Figure 10.4 defines Ready and Swap, the two functions of Rects that are responsible for animating the movements of the rectangles. Ready creates the rectangles with RectangleGroup, an abstraction for producing a column of rectangles. The initial values for the RectangleGroup determine the column's attributes, such as size and spacing (lines 53-63). Ready controls the timing of the animation by indicating that each rectangle animates for a duration of 20 units of time (lines 64-66). Swap animates the exchange of two rectangles by accessing their current positions, computing a midpoint, and forming three actions (lines 68-72). The three actions, defined as m1, m2, and m3, apply the two positions and the midpoint in moving the rectangles in opposite directions. One rectangle moves directly from start to finish, while the other moves from start to finish via the midpoint (lines 73-79). Swap concludes by swapping the values of the array that reference the rectangles. Rects must swap the variables to maintain a correspondence between the array that it applies in producing rectangles and the array that the main body sorts (lines 80-82).
OBLIQ-3D animates a scene by creating views, abstractions that encapsulate the definition of graphical objects with time-variant properties. Graphical objects consist of geometric shapes, lights, and cameras. Time-variant properties are functions of time that describe the attributes of the graphical objects. As time advances, the attributes of the graphical objects vary. The time-variant properties that animate state are of three kinds: asynchronous, synchronous, and dependent. Asynchronous properties vary independently, synchronous properties vary when signaled, and dependent properties vary continuously in accordance to the changes of others.  CHAPTER  10.  IMPLEMENTING  DOMAIN  APPLICATIONS:  A COMPARISON  159  Building the Scene The code segments of Figures 10.5 and 10.6 present an implementation of the scene with OBLIQ-3D.  The first segment implements "selection sort" while the second defines the ani-  mation's view. In implementing the algorithm, the sorting procedure notifies the view when . the columns of the animation should exchange positions. (1) (2) (3) (4) (5) () 6  (7) (8) (9) (10) ( ) n  (12) (13) (14) (15) (16) (17) (18) (19) (20)  let aig == proc( view ) let C:= [ {t=>2.5,id=>0},{t=>1.0,id=>l},{t=>2.0,id=>2},;.. 1; view.start( #(C) ); for i==0 to #(C)-1 do view.addColumn( i, C[i].t ); end; perform selection sort var ii = 0, jj = 0; for i==0 to #(C)-2 do for j=i+l to #(C)-1 do ii := getlndex( C, i ); jj := getlndex( C, j ); swap columns and indices if C[ii].t > CfjjJ.t then view.swapColumns( jj, ii ); swaplndices( C, ii, jj ); end; end; end; end; add view to display RootGO_NewStd().add( view.scene ); alg( view ); Figure 10.5: An implementation in OBLIQ-3D  The code in Figure 10.5 implements "selection sort" by initializing an array of values and parameterizing the animation's view with columns of a particular index and height (lines 2-6). Each column retains an index that corresponds to its location within the array. The sorting algorithm calls upon getlndex and swaplndices to swap the indices of columns that have their positions switched in the array (lines 10-11,14). The method getlndex locates the column whose position corresponds to a specific index of the array. The algorithm instructs 4  the view to update the positions of two columns by calling upon the method swapColumns (line 13).  The concluding statements of Figure 10.5 add the view to a root object and  instruct the sorting algorithm to begin (line 19-20). 4  The root object is a special kind of  The coding of getlndex and swaplndices appear in Figure A.8 of the appendix.  CHAPTER  10. IMPLEMENTING  DOMAIN APPLICATIONS:  A COMPARISON  160  graphical object that creates a visual display. The code of Figure 10.6 implements the view of the animation as view, an object storing three variables and three functions. The three variables define a scene, an animation handle, and an array of columns (lines 22-24).  The scene assembles the columns into a  single group, the animation handle controls the movements of the columns, and the array stores references to individual columns. The three functions of view are start, addColumn and swapColumn.  start initializes the view (lines 25-28) while addColumn increasingly  augments the view with a new column created from two time-variant properties, bottom and top (lines 29-36). bottom synchronizes its value with the animation handler while top forms a dependency with bottom. 
The two properties control the orientation of the column so that the column always remains vertical, regardless of its location. When bottom changes its value in response to a call from the animation handler, top adjusts its value to place itself directly above bottom. swapColumn animates the movements of two columns, which are selected in the function's argument list (lines 37-45). The function swaps the locations of the columns by having each column move towards the position of the other. The first column travels a path close to the midpoint while the second moves in a straight line. 10.2.3  RASP'S  Implementation  The code of Figures 10.7 and 10.8 presents an implementation of the scene with R A S P . The code in Figure 10.7 implements the sorting algorithm with conventional programming constructs while the code in Figure 10.8 scripts the movements of two cylinders with keyframing. The execution of the algorithm progressively produces a time graph that contains multiple instances of swapAct, the script for animating the movements of the two cylinders. By its position in the time graph, each instance of swapAct inherits an interval of time that begins when the previous interval ends. Thus, the swapping of columns occurs in succession, which emulates the behavior of the sorting algorithm. The code of Figure 10.7 animates "selection sort" by associating geometric columns with the indices of an array and by progressively producing a time graph that presents the algorithm's execution. Each successful comparison applied by the algorithm adds two nodes to the time graph (lines 12-13).  The first node, of type TmShift, updates an inherited  interval by 20 units of time. The second node, of type TimingAct, maintains a reference to swapAct, the script of Figure 10.8 that animates the exchange of two cylinders (line 11). Collectively, the two nodes specify that swapAct begins with the end of the previous script and that swapAct animates for 20 units of time. The procedure getModelWithlndex returns a column with a particular index within the array (lines 4-5). The index associated with a  CHAPTER 10. IMPLEMENTING  (21) • (22) (23) (24)  DOMAIN APPLICATIONS:  A COMPARISON  161  establishes view let view = { scene => GroupGO_New(), ah => AnimHandle_New(), columns => ok,  initialize the view (25) (26) - (27) .(28)  start => meth( self, m ) self.columns := array_new( m, ok ); self.scene.flushQ; end,  adds columns to view (29) (30) (31) (32) (33) (34) (35) (36)  addColumn => meth( self, u, height ) let bottom = PointProp_NewSync( self.ah, [u-2,0,0] ); let top = PointProp_NewDep( meth(self,time), Point3_Plus( bottom.value(time), [0,height,0] ), end ); self.columnshi] := CylinderGO_New( bottom, top, 0.2 ); self.scene.add( self.columns[u] ); end,  (37) (38) (39)  swapColumns => meth( self, u, v ) let pv = self.columns[u].getProp( CylinderGO-Pointl ); let pu = self.columns[v].getProp( CylinderGO-Pointl );  swap the positions of two columns  move first column via midpoint (40) (40) (41) (41)  let p = pu.get(); let m = ( pu.get() + pv.get() ) / 2.0; pv.getBeh().linMoveTo( [m[0],m[l]+5,m[2]], 0, 1 ); v.getBeh().linMoveTo( [p[0],p[l],p[2]], 1, 1 ); P  move second column directly (42) (43) (44) (45)  let q = pv.get(); u.getBeh().linMoveTo( [q[0],q[l],q[2]], 0, 2 ); self.ah.animate(); end P  Figure 10.6: A n implementation in OBLIQ-3D  column corresponds to a position in the array being sorted, not to the position within the array that stores the columns. 
The code of Figure 10.8 animates the movements of two cylinders by using two interpolators, eight software interactions, and two virtual ports. The two interpolators swap the positions of the cylinders to visualize the exchange of information (line 21-22).  As  illustrated in the software network of Figure 10.9, the interactions i E v t l and iEvt2 operate inversely when parameterizing the two interpolators with a beginning and an end (lines 2324).  i E v t l communicates the positions of the two cylinders as values to begin and end the  CHAPTER 10. IMPLEMENTING  (1) (2) (3) (4) (5) (6) (7) (8) (9) (10) (11) (12) (13) (14) (15) (16) (17) (18)  DOMAIN APPLICATIONS:  A COMPARISON  162  Model *bldl, *bld2; TmGroup *group = new TmGroup; group-^addNode( new Tmlnterval(20) ); performs "selection" sort for(i=0; K M A X - 1 ; i++) { for(j=i+l; j<MAX; j++) { bldl = getModelWithlndex(i); bld2 = getModelWithlndex(j); float hi = getHeight( bldl ); float h2 = getHeight( bld2 ); compare values and swap if needed if (hi > h2) { sort = swapAct( bldl, bld2 ); group->addNode( new TmShift(20) ); group—>addNode( new TimingAct(sort) ); swaplndices( bldl, bld2 ); k++;  }  }  }  Figure 10.7: A n implementation in RASP first interpolator; iEvt2 communicates the same two positions as values to end and begin the second interpolator. The virtual ports c t r and mid compute an intermediary position for the first interpolator to use in computing a collision-free path for the first cylinder (lines 25-26).  The interaction mEvt communicates the intermediary position to the interpolator  when the animation begins (line 27). The concluding statements divide the eight interactions into two groups, initial and acting (lines 32-34). The initial events parameterize the interpolators with durations and initial values. durEvt, for instance, parameterizes both interpolators with a single interval of time (line 28).  The a