TRANSFORMATIONS ON DEPENDENCY GRAPHS:
FORMAL SPECIFICATION AND EFFICIENT MECHANICAL VERIFICATION

by

Sreeranga Prasannakumar Rajan

B.Tech., Indian Institute of Technology (Madras, India), 1986
M.S., University of Southern California (Los Angeles), 1987

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF
THE REQUIREMENTS FOR THE DEGREE OF
DOCTOR OF PHILOSOPHY
in
THE FACULTY OF GRADUATE STUDIES
(Department of Computer Science)

We accept this thesis as conforming to the required standard

THE UNIVERSITY OF BRITISH COLUMBIA

October 1995

© Sreeranga Prasannakumar Rajan, 1995

In presenting this thesis in partial fulfilment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the head of my department or by his or her representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

(Signature)

Department of Computer Science
The University of British Columbia
Vancouver, Canada

Abstract

Dependency graphs are used to model data and control flow in hardware and software design. In a transformational design approach, optimization and refinement transformations are used to transform dependency-graph-based specifications at higher abstraction levels to those at lower abstraction levels. In this dissertation, we investigate the formal specification and mechanical verification of transformations on dependency graphs. Among formal methods, the axiomatic method provides a mechanism to specify an object by asserting properties it should satisfy.
We show that an axiomatic specification coupled with an efficient mechanical verification is the most suitable formal approach to address the verification of transformations on dependency graphs.

We have provided a formal specification of dependency graphs, and verified the correctness of a variety of transformations used in an industrial synthesis framework. Errors have been discovered in the transformations, and modifications have been proposed and incorporated. Further, the formal specification has permitted us to examine the generalization and composition of transformations. In the process, we have discovered new transformations that could be used for further optimization and refinement than was possible before. We have devised an efficient verification scheme that integrates model-checking and theorem-proving, the two major techniques for formal verification, in a seamless manner.

First, we focus on the dependency graph formalism used in the high-level synthesis system that is part of the SPRITE project at Philips Research Labs. The transformations in the synthesis system are used for refinement and optimization of descriptions specified in a dependency graph language called SPRITE Input Language (SIL). SIL is an intermediate language used during the synthesis of hardware described using languages such as VHDL, SILAGE and ELLA. Besides being an intermediate language, it forms the backbone of the TRADES synthesis system of the University of Twente. SIL has been used in the design of hardware for audio and video applications.

Next, we present schemes for seamless integration of theorem-proving and model-checking for efficient verification. We use the Prototype Verification System (PVS) to specify and verify the correctness of the transformations. The PVS specification language, based on typed higher-order logic, allows us to investigate the correctness using a convenient level of abstraction.
The PVS verifier features automatic procedures and interactive verification rules to check properties of specifications. We have integrated efficient simplifiers and model-checkers with PVS to facilitate verification.

Finally, we show how our method can be applied in the study of formalisms for hybrid/real-time systems, optimizing compilers, data-flow languages, and software engineering. Based on the applications of our method on such off-the-shelf formalisms, we substantiate our claim: that an axiomatic specification coupled with an efficient mechanical verification is the most suitable approach to specify and verify transformations on dependency graphs independent of underlying behavior models.

Contents

Abstract
Table of Contents
List of Tables
List of Figures
Acknowledgement
Dedication

1 Introduction
  1.1 Related Work
    1.1.1 LAMBDA
    1.1.2 Formal Ruby
    1.1.3 Digital Design Derivation
    1.1.4 Transformations in SAW
    1.1.5 Verification of Transformations in SILAGE
    1.1.6 Transformations in Software Design

2 Overview of SIL
  2.1 Structural Aspects of SIL
  2.2 Behavioral Aspects of SIL
  2.3 Transformations in SIL
  2.4 Operational and Denotational Semantics of Dependency Graphs
    2.4.1 Operational Semantics of Dependency Graphs
    2.4.2 Denotational Semantics of Dependency Graphs

3 Specification and Verification in PVS
  3.1 PVS Specification Language
  3.2 PVS Verification Features
  3.3 Notes on Specification Notation
  3.4 Specification and Verification Examples in PVS

4 Efficient Mechanical Verification
  4.1 Introduction
  4.2 Motivation
  4.3 Terminology
  4.4 Integration Scheme
  4.5 PVS Specification Logic and Verification Architecture: Review
  4.6 Integration Algorithm
    4.6.1 Supplying Contextual Information
  4.7 Model-Checking within Theorem Proving
    4.7.1 Propositional mu-calculus and Temporal Logic: Overview
    4.7.2 mu-Calculus and fairCTL in PVS
    4.7.3 Translation from PVS to mu-calculus
  4.8 Example: BDD-based Propositional Simplification in PVS
  4.9 Examples: BDD-based Model-Checking in PVS
  4.10 Discussion and Conclusions

5 Specification of SIL Graph Structure in PVS
  5.1 Port and Port Array
  5.2 Edges
  5.3 Node, Conditional Node and Graph
  5.4 Well-formedness of a SIL Graph

6 Specification of SIL Graph Behavior and Refinement
  6.1 Behavior
  6.2 Refinement and Equivalence
    6.2.1 Compositionality
  6.3 Axiomatic Specification of Behavior and Refinement: A Synopsis

7 Specification and Verification of Transformations
  7.1 Overview
  7.2 Common Subexpression Elimination
  7.3 Cross-Jumping Tail-Merging
  7.4 Other Transformations and Proofs
  7.5 Devising New Transformations
    7.5.1 Generalization and Composition of Transformations
    7.5.2 Investigations into "What-if?" Scenarios

8 Applications to Other Domains
  8.1 Optimizing Compiler Transformations
  8.2 Data-flow Languages
  8.3 Structured Analysis and Design
    8.3.1 DFD and STD
    8.3.2 Transformation of DFD to SIL
    8.3.3 Example Illustration
    8.3.4 Transformation to SIL
    8.3.5 Specification in PVS
  8.4 Constraint Nets
    8.4.1 Axiomatization of Constraint Nets
  8.5 Separation of Control Flow from Data Flow

9 Discussion and Conclusions
  9.1 Intent versus Implementation
  9.2 From Informal to Formal Specification
  9.3 Axiomatic Approach vs Other Formal Approaches
  9.4 Conclusions and Directions for Further Research

A Peterson's Mutual Exclusion Algorithm: Automatic Verification

B Definitions, Axioms and Theorems
  B.1 Definitions
  B.2 Axioms
  B.3 Theorems

C Proof Transcripts
  C.1 Common Subexpression Elimination
  C.2 Cross Jumping Tail Merging

List of Tables

2.1 Example of a structured operational semantics rule
4.1 Parts of mu-calculus theories in PVS
4.2 PVS theory for CTL operator definitions in terms of greatest and least fixpoints
4.3 PVS theory for fairCTL operator definitions in terms of greatest and least fixpoints
4.4 State machines and their properties in PVS: Mutual-Exclusion Protocol Specification
5.1 PVS types for data-flow edge and sequence edge
5.2 PVS specification of conditional node as a record type
5.3 Node as a subtype of a conditional node
6.1 Using weights to enforce linear ordering of data-flow edges forming a join: PVS specification
6.2 Using weights to determine join behavior
6.3 Weight when the condition on a conditional node is false
6.4 Absence of join: exclusive data-flow edge
6.5 Array version of exclusive data-flow edge
6.6 A theorem on join of exactly two data-flow edges
6.7 Order preserved by refinement and optimization
6.8 Order preserved by refinement and exclusive data-flow edge
6.9 Graph refinement: property expressing relation between outputs and inputs of graphs independent of underlying behavior
6.10 Predicates for expressing the sameness of nodes
7.1 Correctness of common subexpression elimination
7.2 PVS specification of preconditions for cross-jumping tail-merging
7.3 Correctness of cross-jumping tail-merging
7.4 Experimental results for proofs of various transformations on a Sparc 20 with 32 Mb memory
8.1 A typical program involving a branch: using pseudo code
8.2 A typical program involving a branch: using assembly language
List of Figures

1.1 Cross-jumping tail-merging: incorrectly specified in informal document
1.2 Example of a dependency graph with control specification
1.3 SIL transformations and verification in PVS
2.1 Different kinds of SIL ports
2.2 An example of a SIL graph description
2.3 SIL node: informal description
2.4 SIL edges: informal description
2.5 SIL Join and Distribute: informal description
2.6 Combinational adder: SIL graph repeated over clock cycles
2.7 Cumulative adder: SIL graph with DELAY node
2.8 Cumulative adder: unfolded SIL graph
2.9 Partial specification of a multiplexor
2.10 Implementation specification of a multiplexor
2.11 Example SIL transformation: retiming
2.12 A simple node used as an example for operational semantics
4.1 BDD Integration Scheme
5.1 SIL data-flow and sequence edges
5.2 SIL conditional node
5.3 Node as a subtype of a conditional node
6.1 Example: refinement of ports due to non-deterministic choice being made deterministic
6.2 Example: array refinement does not imply every individual port refinement
6.3 Using weights for ordering data-flow edges
6.4 Using weights to determine join behavior
6.5 Weight when the condition on a conditional node is false
6.6 Absence of join: exclusive data-flow edge
6.7 Order preserved by refinement and optimization
6.8 Order preserved by refinement and exclusive data-flow edge
6.9 Graph refinement: property expressing relation between outputs and inputs of graphs independent of underlying behavior
6.10 Compositionality of refinement
7.1 Common subexpression elimination
7.2 Cross-jumping tail-merging: corrected
7.3 Cross-jumping tail-merging: incorrectly specified in informal document
7.4 Cross-jumping tail-merging: generalized and verified
7.5 Cross-jumping tail-merging: inapplicable when two nodes are merged into one
7.6 Further optimization impossible using existing transformations
7.7 Inapplicability of cross-jumping tail-merging after common subexpression elimination: due to precondition restrictions
7.8 Inapplicability of common subexpression elimination after cross-jumping tail-merging: due to precondition restrictions
7.9 A simple new transformation: obvious, post-facto
8.1 A typical dependence flow graph involving a branch
8.2 SIL dependency graph for program
8.3 Basic building blocks of a DFD
8.4 DFD to SIL Transformation
8.5 DFD for cruise control of an automobile
8.6 STD for cruise control of an automobile
8.7 SIL for cruise control of an automobile
8.8 Constraint net for modeling integral computation over continuous time

Acknowledgement

I am indebted to Jeff Joyce for initiating me into mechanical verification. Jeff has been an excellent advisor, consistently providing directions for research and giving me the freedom to boldly explore ideas on my own. I am thankful to Paul Gilmore, Alan Mackworth, and Mabo Ito for providing enlightening comments and discussions. I thank Carolyn Talcott of Stanford University for initiating me into formal methods and the semantics of programming languages. The members of the Integrated Systems Laboratory at UBC provided a stimulating environment for research. A major part of the work reported here was done while I was at Philips Research Laboratories, Eindhoven, The Netherlands, from September 1993 till April 1994. I wish to thank Ton Kostelijk for the invitation to work on this project, and for providing illuminating suggestions, support and a homely environment. I am grateful to Corrie Huijs, Wim Kloosterhuis, T. Krol, Jaap Hofstede, Peter Middelhoek, and Wim Smits for their cooperation, review and corrections. Thanks to group leader G. Beenker and the CAD group members for a pleasant stay in Eindhoven.

I am grateful to SRI International for hosting me and providing support during the last year.
The idea of combining theorem-proving and model-checking would not have matured without the excellent guidance of N. Shankar, M. Srivas, and J. Rushby of SRI International. Thanks to David Cyrluk, Pat Lincoln, and Sam Owre of SRI International, J.U. Skakkebaek of TU Denmark, J. Hooman and G. Janssen of TU Eindhoven, and D. Stringer-Calvert of York University (UK) for discussions.

I would not have embarked on pursuing this scientific enquiry without the constant support and encouragement of my illustrious parents: Sri K. Prasanna Kumar and Smt. T. Nagarathnamma, and my celebrated brothers: Sri P. Venkat Rangan and Sri P. Srihari Sampath Kumar, and the guiding light of my venerable Gurus: Swami Chinmayanandaji, Paramahamsa Yoganandaji, and Matha Amrithanandamayi. Finally, I am indebted to my wife, Suneetha, who has been an immense source of inspiration and support for a successful culmination of this dissertation on the auspicious occasion of Vijaya Dashami in the month of Dasara celebrations in India.

Dedicated to my parents
Sri K. Prasanna Kumar and Smt. T. Nagarathnamma

To Know That by Knowing Which everything else becomes Known.
— from the Upanishads, the fountain head of Sanathana Dharma of India.

Among creations, I am the beginning, the middle and also the end, O Arjuna; among sciences I am the Science of the Self and I am the logic in all arguments.
— from the Srimad Bhagawad Geeta, Chapter 10, Verse 32.

Chapter 1

Introduction

Dependency graphs are graph-based specifications of data and control flow in a system. They are used to model systems at a high level of abstraction in both hardware and software design. Typically, dependency graphs are represented pictorially as graph structures with an associated behavior. A transformation transforms one graph structure into another by removing or adding nodes and edges.
In high-level synthesis of hardware, a sequence of transformations is used for refinement of dependency-graph-based specifications at an abstract behavior level into dependency-graph-based implementations at the register-transfer level. Further, register-transfer-level implementations could be converted to concrete hardware designs by low-level logic synthesis. (In the literature, dependency graphs are also known as control-flow/data-flow graphs and signal-flow graphs.) An informal representation of dependency graphs would lead to subtle errors, making it difficult to verify the correctness of the transformations. The problem we have addressed in this work is how the correctness of transformations on dependency graphs can be formally specified and verified.

The behavior of a dependency graph is the set of all tuples, where each tuple has input data values and corresponding output data values of the dependency graph. In a transformational design approach, a sequence of transformations is used for refinement of specifications into concrete implementations. A transformation is correct if the sequence of behaviors allowed by the implementation is a subsequence of the behaviors permitted by the specification. Trivial implementations that allow an empty sequence of behaviors can be ruled out by showing either that at least one behavior is allowed by the implementation, or that the implementation is equivalent to its specification with respect to behavior. The solution to the problem of verifying the correctness of transformations we have sought in this work is independent of models of behavior underlying dependency graphs.

A typical transformation employed in optimizing compilers is cross-jumping tail-merging [EMH+93], shown in Figure 1.1. In this transformation, two identical nodes on dependency paths that are never active at the same time are merged into one node.
However, as we found out using our approach explained in this dissertation, the transformation does not preserve behavior (i.e., input/output behavior). Informally, the reason is as follows. The conditions True/False on the open/filled bullets in Figure 1.1, on the bottom/top of the nodes, control the execution of the nodes. Further, we stipulate that the value of p2 is determined by the edge that transmits a well-defined value to it; i.e., when c is true, q20 determines the value of p2, while q21 determines its value otherwise. (Section 2.2 explains this stipulation on how data conflicts at ports called joins, such as p2, are resolved.) In graph G1, when c is false, the value of q10 is arbitrary, and so is the value of p10. If we choose the value of r1 to be that of p10, the value of r1 is also arbitrary. In graph G2, when c is false, the value of p2 is that of q21. In this case, the value of r2 is (y1 * y2). Thus, the set of outputs for a given y1 and y2 is the set of all values allowed by the type of the output port in graph G1, whereas in graph G2 the set of outputs is a singleton set containing the element (y1 * y2). Since, with identical inputs, the corresponding sets of possible outputs are unequal, the behaviors of the graphs are not equivalent.
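The argument above can be checked with a small executable sketch. The following Python fragment is our own illustration (the function names and the tiny stand-in port type are hypothetical, not part of SIL or of the thesis): it computes the set of possible outputs of each graph for given inputs, with the arbitrary value in G1 ranging over a small stand-in port type.

```python
def outputs_g1(c, x1, x2, y1, y2, port_type):
    """Possible values of r1 in graph G1 (before cross-jumping tail-merging)."""
    if c:
        # q10 = x1 + x2 is well-defined; p10 = q10, and r1 may be p10 or p11.
        return {x1 + x2}
    # q10 (hence p10) is arbitrary, and r1 = p10 or r1 = p11 = y1 * y2,
    # so r1 may be any value of the port's type.
    return set(port_type)

def outputs_g2(c, x1, x2, y1, y2, port_type):
    """Possible values of r2 in graph G2 (after the transformation)."""
    # p2 is determined by the edge carrying a well-defined value:
    # q20 = x1 + x2 when c is true, q21 = y1 * y2 otherwise.
    return {x1 + x2} if c else {y1 * y2}

tiny_type = range(-8, 8)  # a tiny stand-in for the output port's type

# When c is true the output sets agree; when c is false they do not.
assert outputs_g1(True, 1, 2, 3, 2, tiny_type) == outputs_g2(True, 1, 2, 3, 2, tiny_type)
assert outputs_g1(False, 1, 2, 3, 2, tiny_type) != outputs_g2(False, 1, 2, 3, 2, tiny_type)
```

With c false, G1 admits every value of the port type while G2 admits only {y1 * y2}; the output sets differ, so the graphs are not behaviorally equivalent.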
A corrected and generalized cross-jumping tail-merging transformation is presented in Chapter 7.

Behavior(G1) = {
    IF c THEN q10 = (x1 + x2) ELSE q11 = (y1 * y2);
    (p10 = q10) AND (p11 = q11);
    (r1 = p10) OR (r1 = p11) }

Behavior(G2) = {
    IF c THEN q20 = (x1 + x2) ELSE q21 = (y1 * y2);
    IF c THEN (p2 = q20) ELSE (p2 = q21);
        [derived from: p2 is determined by the edge whose source is well-defined]
    r2 = p2 }

Behavior(G1) = Behavior(G2)

Figure 1.1: Cross-jumping tail-merging: incorrectly specified in informal document.

The main contributions of this work are the following:

• A formal specification of dependency graphs, independent of underlying behavior models, has been achieved.

• A set of optimization and refinement transformations on dependency graphs has been verified. Generalizations of transformations have also been proposed.

• Errors have been discovered in the transformations used in industrial-strength hardware design. Modifications for the erroneous transformations have been proposed and incorporated.

• New transformations have been devised that could be used for further optimization and refinement than was possible before.

• An efficient verification scheme that provides a seamless integration of theorem-proving and model-checking, the two major formal verification techniques, has been achieved.

• Application of our work in other domains such as modeling hybrid/real-time systems, optimizing compilers, data-flow languages, and software engineering has provided rigor for formalisms used in those domains.

Formal methods can be divided into two main categories: property-oriented methods and model-oriented methods [Win90]. In a property-oriented method, the system under consideration is specified by asserting properties of the system, minimizing the details of how the system is constructed.
In a model-oriented method, by contrast, the specification describes the construction of the system from its components. An axiomatic approach is a property-oriented method. Typically, a small set of properties, called axioms, are asserted to be true, while other properties, called theorems, are derived. In this work, we have chosen a property-oriented method. We propose an axiomatic specification coupled with an efficient verification method to study the correctness of transformations on dependency graphs [Raj94a]. As we discuss later in Chapter 9, an axiomatic approach does not require us to develop a concrete behavioral model for dependency graphs, enabling it to be simpler and more general than other formal approaches.

A dependency graph (in this dissertation, the term includes control-flow/data-flow graphs and signal-flow graphs) is a graph-based representation of the behavior of a system. It consists of nodes representing operations or processes, and directed edges representing data dependencies and data flow through the system. In addition, control flow could also be represented in a dependency graph in several ways. We show an example of such a graph in Figure 1.2.

r = d * (IF c THEN (a+b))

Figure 1.2: Example of a dependency graph with control specification.

In order to present our work in a concrete context, we consider a transformational design approach used in the high-level behavioral synthesis system that is part of the SPRITE project at Philips Research Labs (PRL). In this approach, transformations are used for optimization and refinement of descriptions specified using the SPRITE Input Language (SIL). Descriptions in SIL at a register-transfer level could eventually be converted to gate-level hardware designs by a logic synthesis application such as PHIDEO at PRL.

SIL is an intermediate language used during the synthesis of hardware described using hardware description languages such as VHDL [IEE88], SILAGE [Hil85], and ELLA [Compu90].
It also forms the backbone of the TRADES synthesis system at the University of Twente. Important features of SIL include hierarchy and design freedom. Design freedom is provided by permitting several implementation choices for a SIL description. Implementation choices are constrained by allowing an implementation suggestion in a SIL description. The implementation suggestion may be tailored by using refinement and optimization transformations. SIL has been used in the design of hardware for audio and video signal processing applications, such as a direction detector for the progressive scan conversion algorithm [vdWvMM94, Mid94a]. In one of the applications [Mid94b], a reduction of power consumption by 50% has been achieved.

Many of the optimization transformations used in SIL are inspired by those used in compiler optimization, such as dead-code elimination and common subexpression elimination. An optimized SIL graph has to satisfy the original graph with respect to behavior. This satisfaction can be guaranteed by showing the correctness of the optimization transformations. Correctness means that every behavior allowed by an optimized SIL graph implementation is required to be one of the behaviors allowed by its SIL graph specification. An informal specification of SIL has been presented and documented as part of the SPRITE project [KeH92, KMN+92]. A detailed denotational semantics of SIL for showing the correctness of transformations has been worked out earlier [HHK92, HK93]. The optimization and refinement transformations have been specified informally as part of the SPRITE project [EMH+93, Mid93, Mid94b].

We use the Prototype Verification System (PVS) [OSR93b], an environment for formal specification and verification.

Figure 1.3: SIL transformations and verification in PVS (VHDL and SILAGE descriptions are synthesized through SIL, and via PHIDEO, down to the gate level).
The PVS specification language, based on typed higher-order logic, permits an axiomatic method to develop specifications. This entails expressing properties of a system at a convenient level of abstraction. The choice of a high level of abstraction obviates the need to provide a detailed definition of the behavior of data flow graphs. Thus, without specifying a detailed behavioral model of a SIL description, we can still compare descriptions with respect to their behavior, thereby establishing the correctness of transformations [Raj94b]. However, we stress that this work addresses the transformations as intended in their informal specification, and not verification of the software implementations of the transformations. We show SIL and our work in the context of the synthesis system in Figure 1.3.

The rest of this dissertation is organized as follows. Chapter 2 gives an overview of SIL. In Chapter 3, we give a brief description of the PVS formal system. We discuss in Chapter 4 our work on the integration of model-checking into PVS for efficient verification; this work has enabled verification of many classes of hardware designs to be fully automatic [KK94]. In Chapter 5, we describe the specification of the structure of SIL graphs, while in Chapter 6, we describe the specification of behavior, refinement, and equivalence of SIL graphs. We present the specification and verification of transformations in Chapter 7. In Chapter 8, we show how our formalism could be applied in other domains, such as modeling of real-time systems, compiler optimization, data-flow languages, and structured methods in software design. Finally, following a general discussion, conclusions are summarized in Chapter 9. A compilation of the specification of SIL and its verified properties in PVS is given in Appendix B. Transcripts of the verification in PVS for the two transformations discussed in detail in this dissertation are listed in Appendix C.
The theories and proof transcripts given in the appendices were automatically generated from the corresponding PVS specifications and proof runs. In the remainder of this chapter, we discuss related work done in the past.

1.1 Related Work

There have been some efforts in the analysis and verification of refinement transformations in the past. One of the earliest efforts on formalizing the correctness of optimization transformations is by Aho, Sethi and Ullman [ASU71]. They formalized a restricted class of transformations known as Finite Church-Rosser (FCR) transformations, and provided simple tests to check properties of such transformations. However, none of the past work has dealt with transformations on dependency graphs in general. Most of the efforts have concentrated on specialized hardware description languages and programming languages. Many other efforts have provided a formal operational and denotational semantics for variations of dependency graph formalisms. But operational and denotational semantics, unlike axiomatic semantics, are tied to a specific behavior model. This makes them unsuitable for verifying transformations on dependency graphs of arbitrary size. Further, it would not be possible to extend such concrete operational or denotational models to dependency graphs used in a multitude of formalisms, such as in software engineering and real-time systems modeling.

A formal model was proposed for verifying the correctness of high-level transformations by McFarland and Parker [MP83]. Transformations used in YIF (Yorktown Internal Form) [BCD+88] have been proved to be behavior preserving [Cam89]. In this work, a strong notion of behavior equivalence based on an operational semantics tied to a particular model of representation is used. A formal system using transformations for hardware synthesis has been discussed by Fourman [Fou90]. We briefly discuss this work in Section 1.1.1.
A synthesis system for a language based on an algebraic formalism has been presented by Jones and Sheeran [JS90], and its formalization has been presented by Rossen [Ros90]. This effort is explained briefly in Section 1.1.2. Another algebraic approach to transformational design of hardware has been worked out by Johnson [Joh84]. A short discussion of this approach is presented in Section 1.1.3. In the work on tying formal verification to silicon compilation [JLR91], a preliminary study with an emphasis on the use of formal verification at higher levels of VLSI design was presented. Correctness of register-transfer-level transformations for scheduling and allocation has been dealt with in [Vem90]. An automatic method for functional verification of retiming, pipelining and buffering optimization has been presented by Kostelijk [KvdW93]; it has been implemented in a CAD tool called RetLab as part of PHIDEO. A formal analysis of transformations used in System Architect's Workbench (SAW) high-level synthesis was studied by McFarland [McF93]. This work is discussed briefly in Section 1.1.4. A post-facto verification method for comparing logic-level designs against a restricted class of data flow graphs in SILAGE was presented by Aelten and others [AAD93, AAD94]. A formalization of SILAGE transformations in HOL was studied by Angelo [Ang94]. A concise description of this work appears in Section 1.1.5. An approach based on the execution model for representation languages in the BEDROC high-level synthesis system [CBL92] has been used to verify the correctness of optimization transformations. A formal verification of an implementation of a logic synthesis system has been reported by Aagaard and Leeser [AL94], but it does not provide a mechanical verification for optimization and refinement transformations in high-level behavioral synthesis.
In Section 1.1.6, we briefly discuss the work on formal specification and verification of refinement transformations in software design.

1.1.1 LAMBDA

LAMBDA [Fou90] is a formal system based on higher-order logic for designing hardware from high-level specifications. In this formalism, a design state is represented as an inference rule derived within the framework of higher-order logic. A refinement is a rule derived within this logic that can be applied to an abstract design state to arrive at a concrete design state. The different kinds of refinements that are applied are temporal, data and behavioral. However, a definite set of refinement and optimization transformations has not been presented. ELLA, a hardware description language, has been formalized in LAMBDA.

1.1.2 Formal Ruby

In this work, an algorithmic specification of sequential and combinational circuits is written in a language called Ruby [JS90], based on an algebraic formalism. The algebraic formalism consists of relations and operations on relations, such as composition, inversion and conjugation. Types are defined as equivalence relations. Data structures such as lists and tuples are used to represent larger hardware structures. A parallel composition operator allows specification of hardware composed of independent modules. Other operators, such as row and column, are introduced for succinct specification of regular structures such as systolic arrays.

Ruby has been formalized [Ros90] in a proof checking system called ISABELLE. ISABELLE, based on type theory, allows syntactic embedding of other logics. A fragment of Ruby corresponding to combinational circuits, delay elements, serial composition and parallel composition, called Pure Ruby, is specified as a type. Properties and proof rules, such as induction on Ruby terms, are then derived on the type definition. The rest of the language is then specified using this type.

The axiomatization specifies signals as functions of time and properties of relations on signals.
General properties of Ruby relations have been formalized. However, in order to derive properties, the semantic embedding involves signals corresponding to a circuit implementation. A Ruby specification itself, and hence its formalization even at a high level, is geared to be directly translatable to a circuit realization having a regular structure. Thus, this formalism is at a lower level of abstraction than our formalization of SIL. A general concept of refinement is not formalized. The formalism does not present a well-defined set of transformations, other than retiming, to be used to refine and optimize Ruby programs.

1.1.3 Digital Design Derivation

This is an algebraic approach to transformational design of hardware [Joh84]. In this formalism, a functional specification is translated into a representation of a Deterministic Finite State Machine specification called behavior tables [RTJ93]. The behavior tables are transformed into a digital design. In a behavior table, rows represent state transitions and columns represent both control and data flow. Some examples of transformations are column merging, deletion and renaming. The transformations are not formally verified.

1.1.4 Transformations in SAW

In this work, a formal analysis of transformations [McF93] used in System Architect's Workbench (SAW) [TDW88] is carried out. In this system, hardware described at the register-transfer level or higher using ISPB [Bar81] is translated into behavior expressions. Behavior expressions use sequences and relations on sequences to represent the input/output behavior of the specified hardware. Optimization transformations are carried out on the behavior expression representations. A number of transformations, such as constant folding and loop unwinding, have been analyzed, revealing a few conceptual errors.

1.1.5 Verification of Transformations in SILAGE

SILAGE [Hil85] is an applicative hardware description language. It is used to describe hardware represented as data flow graphs.
Transformations such as commutativity and retiming are used to optimize and refine SILAGE descriptions. In this work [Ang94], the syntax and semantics of SILAGE programs have been formalized as predicates in HOL [GM93]. The denotational semantics of SILAGE has been formalized in HOL. The equivalence of SILAGE programs is specified with respect to this denotational semantics. The transformations are then specified as functions from one formal SILAGE program to another. The correctness of transformations is thus verified with respect to the denotational semantic notion of equivalence.

1.1.6 Transformations in Software Design

There have been several efforts in the specification and verification of refinements used in program development from high-level specifications. Most of the efforts choose a specification formalism, and develop a notion of correctness and an associated set of transformations based on the semantics of the formalism.

The refinement calculus [Bac88] for specifications based on Dijkstra's guarded command language and weakest precondition semantics has been formalized in HOL [vWS91]. Transformations such as data refinement and superposition have been verified to be correct. A formalization of incremental development of programs from specifications for distributed real-time systems has been worked out in PVS [Hoo94]. In this formalism, an assertional method based on a compositional framework of classical Hoare triples is developed for step-wise refinement of specifications into programs.

The KIDS [Smi90] system is a program derivation system. High-level specifications written in a language called Refine are transformed, by data type refinements and optimization transformations such as partial evaluation and finite differencing, into a Refine program.

Chapter 2

Overview of SIL

The descriptions in SIL are characterized as graphs. They are used to describe synchronous systems. A denotational semantics of SIL has been worked out by Huijs [HK93].
An operational semantics of data-flow graphs has been worked out by de Jong [dJ93]. The behavior of a SIL graph is derived from the behaviors of the structural building blocks of the graph. We briefly explain the structural aspects in Section 2.1, the behavioral aspects in Section 2.2, and the transformational approach in Section 2.3. Finally, we provide a brief overview of formal operational and denotational semantics in Section 2.4.

Figure 2.1: Different kinds of SIL ports (input, output, non-inverted condition, and inverted condition access points).

2.1 Structural Aspects of SIL

The basic building blocks of a SIL graph are the nodes for operations such as addition, multiplication, and multiplexing. The nodes have ports (also known as access points) for input, output, and an optional condition input. Every port is associated with a type, which specifies the set of data values that the port can hold. We show the different kinds of port in Figure 2.1.

While input and output ports can be of any type, a condition input port is always Boolean. A node with a condition input port is known as a conditional node, to stress the presence of the condition input.

The ports of the nodes are connected by edges. SIL has different kinds of edges, of which we address the sequence edge and the data-flow edge:

• A data-flow edge is used to specify the direction of communication of data values from a source port to a sink port. Each data-flow edge has exactly one port at its head and exactly one port at its tail. A source port can be the tail of more than one data-flow edge, in which case it is called a distribute, and a sink port can be the head of more than one edge, in which case it is called a join.

• A sequence edge specifies an ordering between two ports.
The ordering is used to indicate which of the two ports has the overriding influence on the value of the sink port to which they are connected by data-flow edges. Each sequence edge has exactly one port as its tail and one port as its head. Sequence edges are primarily used to resolve potential conflicts at joins. All source ports that are tails of data-flow edges with a join as a head must be linearly ordered by sequence edges.

• The nodes and edges form a SIL graph. A SIL graph itself can be viewed as one single node, and used to construct another SIL graph in a hierarchical manner. Figure 2.2 is an example of a SIL graph.

2.2 Behavioral Aspects of SIL

The behavior of a SIL graph is determined by the behavior of individual nodes and their connectivity, which determines the data flow. By behavior, we mean the set of tuples, where each tuple has input data values and the corresponding data values of internal and output ports. The values of internal and output ports are constrained by the data relations of the nodes and the connectivity of the ports in the graph. When the ports of interest are the outermost input/output (I/O) ports of the SIL graph, the behavior is called external or I/O behavior.

Each node is associated with a data relation and an order relation. The data relation of a node constrains the outputs of the node according to the inputs of the node. That this is a relation, and not a function, implies nondeterminism, allowing several implementation choices for the nodes. This contributes to design freedom. Any state information implicit in the node is incorporated into its data relation. In the case of a conditional node, the output is constrained by the data relation only when the condition input of the node is true. When the condition input is false, the output is not defined.
The order relation specifies constraints such as: the output port of a node assumes a value after the values of its input ports have been asserted. This is particularly important in a hierarchically built node. We illustrate these concepts in Figure 2.3.

Figure 2.2: An example of a SIL graph description.

Figure 2.3: SIL node: informal description (data relation = R; order relation: out appears_later_than i AND out appears_later_than j).

The communication of data values in a SIL graph is modeled by a single token flow concept, similar to the concept in Signal Flow Graphs (SFG) [Hil85]. A token is an atomic symbol denoting data. A token generated at an output port (source) is transmitted through a data-flow edge emanating from the source exactly once. The token is consumed at an input port (sink) to which the edge is connected. The action of communicating a token through a data-flow edge makes the sequence of values that the sink can assume equal to the sequence of values that the source can assume. However, there is one exception to this: when a token communicated to the condition port of a conditional node denotes a data value that is false, the output port, unconstrained by the data relation of the conditional node, is not defined. When such an output is the source of a data-flow edge, we force the sink of that data-flow edge to assume some well-defined arbitrary value. If we did not make this exception, the sink data values would also not be well-defined. Since a sink is an input port, it is undesirable to have undefined inputs in practice.

Figure 2.4: SIL edges: informal description.
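The exception for undefined sources can be paraphrased in a small executable sketch (Python, purely illustrative and not part of SIL; the names UNDEFINED and ARBITRARY_VALUE are ours): a sink assumes the source's value, except that an undefined source, such as the output of a conditional node whose condition input was false, forces the sink to some well-defined arbitrary value.

```python
# Illustrative sketch of token communication along one data-flow edge.
# UNDEFINED models the output of a conditional node whose condition
# input is false; ARBITRARY_VALUE is our hypothetical choice of the
# well-defined value forced onto the sink in that case.
UNDEFINED = object()
ARBITRARY_VALUE = 0  # any fixed well-defined value would do


def fire_edge(source_value):
    """Communicate one token from a source port to a sink port."""
    if source_value is UNDEFINED:
        return ARBITRARY_VALUE  # keep the sink well-defined
    return source_value


assert fire_edge(42) == 42
assert fire_edge(UNDEFINED) == ARBITRARY_VALUE
```

The particular arbitrary value chosen is irrelevant to the semantics; what matters is only that a sink never assumes an undefined value.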
In terms of the token flow concept, a sequence edge from port A to port B describes that the token fired from B determines the value of a sink port C, connected to A and B by data-flow edges, overriding the effect on the value of C due to the token fired from A. In such a case, we say that the sequence edge orders port A less than port B. A data-flow edge has an implicit sequence edge from its source to its sink. We depict these ideas in Figure 2.4. It should be noted that the token flow concept is an abstract model of the behavior of a SIL graph. The sequence edge is an artifact used to resolve conflicts at joins. A sequence edge does not indicate a temporal ordering of the data values that ports would assume when a SIL graph is executed.

The ordering of token communication plays an important part in resolving conflicts at ports. One such conflict occurs when multiple data-flow edges from different sources connect into a single sink. Such a sink port is called a join, as shown in Figure 2.5. To resolve the conflict at a join, first all the data-flow edges whose sources have assumed well-defined data values are selected. Then, among those selected data-flow edges, the edge that is responsible for communicating the last token determines the behavior of the join. By the definition of SIL, there will be exactly one such data-flow edge: the source ports are linearly ordered, so that the last of the well-defined data values arriving at the sink is always specified. If all the data-flow edges to the join originate from sources whose data values are undefined, then the data value that can appear at the join is arbitrary.

Figure 2.5: SIL Join and Distribute: informal description.

The counterpart of a join is a source from which multiple data-flow edges originate. Such a port, known as a distribute, is shown in Figure 2.5.
If a distribute is a source that assumes well-defined data values, then each sink to which it is connected by a data-flow edge will assume a sequence of data values identical to that of the distribute. Otherwise, if the data values that may appear at the distribute are not defined, the sequences of data values that may appear at the corresponding sink ports are arbitrary.

A SIL graph models the behavior of a system during a single clock cycle. There is no explicit notion of state in a SIL graph. The repetition of a SIL graph, called unfolding, over multiple clock cycles gives the behavior of the system across clock cycles. We depict an example of a combinational adder in Figure 2.6, unfolded over three clock cycles.

Figure 2.6: Combinational adder: SIL graph repeated over clock cycles (at each cycle t, c(t) = a(t) + b(t)).

The DELAY node, one of the primitive nodes in SIL, is used to model data flow between clock cycles, and thus encapsulates state information. We can unfold the SIL graph shown in Figure 2.7 over multiple clock cycles to obtain a SIL graph without the DELAY node. The cumulative adder example in Figure 2.8 illustrates the unfolding of a SIL graph with a DELAY node. It should be noted that comparing two graphs with respect to behavior would not involve the state information encapsulated in a DELAY node, since the behavior of a SIL graph is a snapshot of the execution of the SIL graph in a single clock cycle. In contrast, the execution histories would have to be taken into account for comparing two state machine models.

Figure 2.7: Cumulative adder: SIL graph with DELAY node (c(t) = a(t) + c(t-1)).

The ordering imposed by sequence edges reduces non-determinism. This leads to a restriction on the implementation choices allowed by the corresponding specification. We illustrate the implementation of a simple multiplexor in Figure 2.10, obtained by reducing non-determinism in the specification shown in Figure 2.9 using a sequence edge.
When c is true, the value of d is a if the order is such that the value of port p1 is communicated rather than that of port p2. If the order is such that p2 has the overriding influence, then the value of d is b. When c is false, the value of d is determined by the port p2, due to the behavior of the conditional port and join discussed earlier in Section 2.2. The sequence edge in the multiplexor implementation given in Figure 2.10 imposes that the value communicated to d is that of port p1 when c is true. Again, when c is false, port p2 determines the value of d.

Figure 2.8: Cumulative adder: unfolded SIL graph (c(-1) = 0; c(0) = a(0) + 0; c(t) = a(t) + c(t-1)).

Figure 2.9: Partial specification of a multiplexor (d = (NOT c) IMPLIES b).

2.3 Transformations in SIL

A transformation is viewed as modifying the structure of a graph into another graph. The modification is done by removing and/or adding nodes and edges. Such modifications should not violate the behavior of the original graph.

In SIL, there are a number of optimization and refinement transformations [EMH+93]. Many of the optimization transformations are inspired by optimizing compiler transformation techniques such as Common Subexpression Elimination,
We show a retiming transformation example in Figure 2.112.4 Operational and Denotational Semantics of Dependency GraphsIn general, the formal semantics of a specification language can be provided by thefollowing three methods:p1abp227Cd= IFcTHEN aELSE bSequence edge from p2to p1 means that, the token at p1overrides the token from p2 in determining the value at dFigure 2.10: Implementation specification of a multiplexor.• Axiomatic semantics: the properties corresponding to the basic syntax of thelanguage are asserted as axioms, and the method of deriving the semantics ofother language constructs is given by a set of inference rules. The axiomaticspecification of dependency graphs provided in this dissertation is an axiomaticsemantics.• Operational semantics: the syntactic elements of the language are associatedwith states, and the execution of the syntax is modeled as state transitions.abp2Figure 2.11: Example SIL transformation: retiming.28Thus, it provides a computation model for the specification language - i.e.,how computation is performed in the specification.• Denotational semantics: the syntactic elements of the language are mapped toan abstract domain of values by a semantic function. Thus, it describes whata specification computes.An axiomatic semantics is suited to reason about specification language in general, as well as particular instances of specifications written in the specificationlanguage. An operational semantics of a specification language provides a mechanism to an interpret a specification. The operational semantics described in aninferential style is called structured operational semantics {Gun92]. 
Denotational semantics provides a compositional semantic function for a specification language. The overview of operational and denotational semantics of dependency graphs given here is based on earlier work [dJ93, HK93].

We first define a dependency graph G as a tuple (N, P_in, P_out, P_cond, E, I, O, C), where

• N is the set of nodes.

• P_in is the set of input ports.

• P_out is the set of output ports.

• P_cond is the set of condition ports.

• E ⊆ P_out x (P_in ∪ P_cond) is the set of edges.

• I: N → Powerset(P_in) is a mapping from nodes to input ports.

• O: N → Powerset(P_out) is a mapping from nodes to output ports.

• C: N → Powerset(P_cond) is a mapping from nodes to condition ports.

Figure 2.12: A simple node used as an example for operational semantics.

2.4.1 Operational Semantics of Dependency Graphs

An operational semantics of dependency graphs describes how a computation in the dependency graph takes place. We briefly outline here the structured operational semantics of dependency graphs [dJ93]. In this style, the computation in a dependency graph is viewed as a sequence of state transitions, where each state transition takes place by executing one node in the dependency graph. State transitions are given in the form of inference rules. A state is represented by a mapping from pairs of (node, input port) to a domain of streams:

S: N x P_in → D_stream

where D_stream is the domain of streams.
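Before turning to the inference-rule presentation, the underlying state-update idea can be sketched concretely (a Python illustration of ours, not part of the formalism; for illustration we record output streams in the same mapping, although the formal state ranges over input ports only).

```python
# Illustrative sketch: a state maps (node, port) pairs to streams
# (modeled as Python lists); executing a node reads the latest input
# tokens and appends the node's result to its output stream.
def execute_node(state, node, in_ports, out_port, relation):
    """Return a new state with relation(inputs) appended to out_port."""
    inputs = [state[(node, p)][-1] for p in in_ports]  # latest tokens
    new_state = dict(state)                            # states are immutable
    out_stream = state.get((node, out_port), [])
    new_state[(node, out_port)] = out_stream + [relation(*inputs)]
    return new_state


# A two-input adder node R with input ports p11, p12 and output p2,
# as in Figure 2.12.
s0 = {("R", "p11"): [1], ("R", "p12"): [2]}
s1 = execute_node(s0, "R", ["p11", "p12"], "p2", lambda a, b: a + b)
assert s1[("R", "p2")] == [3]
```

Each such node execution corresponds to one state transition; the rule in Table 2.1 packages the same idea as premises on the input streams and a conclusion on the output stream.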
The expression below the horizontal line describes the state transitionof the output port of the node. Thus, the structured operational semantics rulemeans that, if the state transition above the horizontal line is executed, then thestate transition below the horizontal line gets executed.For verification, a comparison of two dependency graphs would be based on thecomparison of the output streams, obtained by the computation of the two graphswith identical input streams for each of the graphs. A containment of streams wouldindicate that the graph whose stream is contained in that of the other graph is arefinement of that graph. However, because the operational model is at a lower levelof abstraction than the axiomatic model, as seen in the following chapters of thisdissertation, one would not be able to generalize the results for dependency flow31graphs of arbitrary structure and application domain. For example, a node with anarbitrary or indefinite number of input/output ports would not be handled by theoperational semantics.2.4.2 Denotational Semantics of Dependency GraphsA denotational model specifies what a dependency graph computes. It is compositional. In a denotational semantic model, a meaning function maps syntactic objectssuch as ports, edges and nodes to an abstract domain of semantic objects. Thus,for example, if we define the meaning of a port p as:I[1l = DstreamC j[(p x p —* p)}jwhere Dstream, the domain of streams, is associated with a partial order anda least element ±. The denotational semantics of the graph shown in Figure 2.12 isobtained by composing the denotations of the ports and the node:= [R(I[p11I, I[l2)32Chapter 3Specification and Verificationin PVSThe Prototype Verification System (PVS) [OSR93b, SOR93a] is an environment forspecifying entities such as hardware/software models and algorithms, and verifyingproperties associated with the entities. 
An entity is usually specified by asserting a small number of general properties that are known to be true. These known properties are then used to derive other desired properties. The process of verification involves checking relationships that are supposed to hold among entities. The checking is done by comparing the specified properties of the entities. For example, one can check whether a register-transfer-level implementation of hardware satisfies the properties expressed by its high-level specification.

PVS has been used for reasoning in many domains, such as hardware verification [Cyr93, KK94], protocol verification, algorithm verification [LOR93, ORSvH95], and multimedia [RRV95]. We briefly describe the features of the PVS specification language in Section 3.1 and the PVS verification features in Section 3.2, and highlight the syntax of the PVS specification language in Section 3.3. Finally, in Section 3.4 we give some example specifications and verification sessions in PVS.

3.1 PVS Specification Language

The specification language [OSR93b] features common programming language constructs such as arrays, functions, and records. It has built-in types for reals, integers, naturals, and lists. A type is interpreted as a set of values. One can introduce new types by explicitly defining the set of values, or by indicating the set of values through properties that have to be satisfied by the values. The language also allows hierarchical structuring of specifications. Besides other features, it permits overloading of operators, as in some programming languages and hardware description languages such as VHDL.

3.2 PVS Verification Features

The PVS verifier [SOR93a] is used to determine if the desired properties hold in the specification of the model. The user interacts with the verifier through a small set of commands. The verifier contains procedures for boolean reasoning, arithmetic and (conditional) rewriting.
In particular, Binary Decision Diagram (BDD) [BRB90, Jan93a] based simplification may be invoked for Boolean reasoning. It also features a variety of general induction schemes to tackle large-scale verification. Moreover, different verification schemes can be combined into general-purpose strategies for similar classes of problems, such as the verification of microprocessors [Cyr93, KK94].

A PVS specification is first parsed and type-checked. At this stage, the type of every term in the specification is unambiguously known. The verification is done in the following style: we start with the property to be checked and repeatedly apply rules on the property. Every such rule application is meant to obtain another property that is simpler to check. The property holds if such a series of rule applications eventually leads to a property that is already known to hold. Examples illustrating specification and verification in PVS are described in Section 3.4.

3.3 Notes on Specification Notation

In PVS specifications (enclosed in framed boxes in this dissertation), an object followed by a colon and a type indicates that the object is a constant belonging to that type. If the colon is followed by the keyword VAR and a type, then the object is a variable belonging to that type. For example,

x: integer
y: VAR integer

describes x as a constant of type integer, and y as a variable of type integer (in C, they would be declared as const int x; int y).

Sets are denoted by {...}: they can be introduced by explicitly defining the elements of the set, or implicitly by a characteristic function. For example,

{0, 1, 2}
{x: integer | even(x) AND x /= 2}

The symbol | is read as such that, and the symbol /= stands for not equal to. Thus, the latter example above should be read as "the set of all integers x, such that x is an even number and x is not equal to 2".

New types are introduced by the keyword TYPE followed by a description of the type as a set of values.
If the keyword TYPE is not followed by any description, then it is taken as an uninterpreted type. Some illustrations are:

even_time: TYPE = {x: natural | even(x)}
unspecified_type: TYPE

One kind of type that is used widely in this work is the record type. A record type is like the struct type in the C programming language. It is used to package objects of different types in one type. We can then treat an object of such a type as one single object externally, but with an internal structure corresponding to the various fields in the record.

The following operators have their corresponding meanings:

FORALL x: p(x)

means for every x, the predicate p(x) is true (a predicate is a function returning a Boolean type: {true, false}).

EXISTS x: p(x)

means for at least a single x, the predicate p(x) is true.

We can impose constraints on the set of values for variables inside FORALL and EXISTS, as in the following example:

FORALL x, (y | y = 3*x): p(x,y)

which should be read as: for every x and y such that y is 3 times x, p(x,y) is true.

A property that is already known to hold without checking is labeled by a name followed by a colon and the keyword AXIOM. A property that is checked using the rules available in the verifier is labeled by a name followed by a colon and the keyword THEOREM. Text following a % in any line is a comment in PVS. We illustrate the syntax as follows:

axi: AXIOM % This is a simple axiom
  FORALL (x: nat): even(x) = x divisible_by 2

thi: THEOREM % This is a simple theorem
  FORALL (x: nat): prime(x) AND x /= 2 IMPLIES NOT even(x)

We also use the terms axiom and theorem in our own explanation with the same meanings. A proof is a sequence of deduction steps that leads us from a set of axioms or theorems to a theorem.

3.4 Specification and Verification Examples in PVS

We illustrate here three examples from arithmetic. The first two examples are taken from the tutorial [SOR93b]. The last example illustrates the use of a general purpose strategy to automatically prove a theorem of arithmetic.
The first example states that the sum of the natural numbers up to some arbitrary finite number n is equal to n * (n + 1)/2. The specification is encapsulated in the sum THEORY. Following the introduction of n as a natural number nat, sum(n) is defined as a recursive function with a termination MEASURE given by the identity function on n. Finally, the THEOREM labeled closed_form is stated, to be proved.

sum: THEORY
BEGIN
  n: VAR nat
  sum(n): RECURSIVE nat =
    (IF n = 0 THEN 0 ELSE n + sum(n - 1) ENDIF)
    MEASURE (LAMBDA n: n)
  closed_form: THEOREM sum(n) = (n * (n + 1))/2
END sum

The THEORY is first parsed and type checked, and then the prover is invoked on the closed_form THEOREM. The proof is automatic, by applying induction and rewriting. The proof session is as follows:

closed_form

{1} (FORALL (n: nat): (sum(n) = (n * (n + 1)) / 2))

Running step: (INDUCT "n")
Inducting on n, this yields 2 subgoals:

closed_form.1

{1} sum(0) = (0 * (0 + 1)) / 2

Running step: (EXPAND "sum")
Expanding the definition of sum, this simplifies to:

closed_form.1

{1} 0 = 0 / 2

Running step: (ASSERT)
Invoking decision procedures, this completes the proof of closed_form.1.

closed_form.2

{1} (FORALL (j: nat):
      (sum(j) = (j * (j + 1)) / 2
        IMPLIES sum(j + 1) = ((j + 1) * (j + 1 + 1)) / 2))

Running step: (SKOLEM 1 ("j!1"))
For the top quantifier in 1, we introduce Skolem constants: (j!1), this simplifies to:

closed_form.2

{1} sum(j!1) = (j!1 * (j!1 + 1)) / 2
      IMPLIES sum((j!1 + 1)) = ((j!1 + 1) * ((j!1 + 1) + 1)) / 2

Running step: (FLATTEN)
Applying disjunctive simplification to flatten sequent, this simplifies to:

closed_form.2

{-1} sum(j!1) = (j!1 * (j!1 + 1)) / 2
{1} sum((j!1 + 1)) = ((j!1 + 1) * ((j!1 + 1) + 1)) / 2

Running step: (EXPAND "sum" +)
Expanding the definition of sum, this simplifies to:

closed_form.2

[-1] sum(j!1) = (j!1 * (j!1 + 1)) / 2
{1} (j!1 + 1) + sum(j!1) = (j!1 * j!1 + 2 * j!1 + (j!1 + 2)) / 2

Running step: (ASSERT)
Invoking decision procedures, this completes the proof of closed_form.2.

Q.E.D.

Run time = 8.09 secs.
Real time = 9.89 secs.
NIL
>

The next example illustrates how the decision procedures solve the steps involving arithmetic and equality reasoning automatically, while in the creative step of supplying the proper instantiation for an existential quantification, the user has to interact with the prover. We first present the following PVS THEORY, specifying that 3 cent stamps and 5 cent stamps can be used in combination in place of any stamp whose value is at least 8 cents.

stamps: THEORY
BEGIN
  i, j, k: VAR nat
  stamps: LEMMA (FORALL i: (EXISTS j, k: i+8 = 3*j + 5*k))
END stamps

The proof proceeds by induction:

stamps

{1} (FORALL i: (EXISTS j, k: i + 8 = 3 * j + 5 * k))

Running step: (INDUCT "i")
Inducting on i, this yields 2 subgoals:

stamps.1

{1} (EXISTS (j: nat), (k: nat): (0 + 8 = 3 * j + 5 * k))

Here we have to supply an instantiation interactively.

Running step: (QUANT 1 ("1" "1"))
Instantiating the top quantifier in 1 with the terms: (1 1), this simplifies to:

stamps.1

{1} 0 + 8 = 3 * 1 + 5 * 1

Running step: (ASSERT)
Invoking decision procedures, this completes the proof of stamps.1.

stamps.2

{1} (FORALL (j: nat):
      ((EXISTS (j_0: nat), (k: nat): (j + 8 = 3 * j_0 + 5 * k))
        IMPLIES (EXISTS (j_1: nat), (k: nat):
                  (j + 1 + 8 = 3 * j_1 + 5 * k))))

Running step: (SKOLEM 1 ("j!1"))
For the top quantifier in 1, we introduce Skolem constants: (j!1), this simplifies to:

stamps.2

{1} (EXISTS (j_0: nat), (k: nat): (j!1 + 8 = 3 * j_0 + 5 * k))
      IMPLIES (EXISTS (j_1: nat), (k: nat): (j!1 + 1 + 8 = 3 * j_1 + 5 * k))

Running step: (FLATTEN)
Applying disjunctive simplification to flatten sequent, this simplifies to:

stamps.2

{-1} (EXISTS (j_0: nat), (k: nat): (j!1 + 8 = 3 * j_0 + 5 * k))
{1} (EXISTS (j_1: nat), (k: nat): (j!1 + 1 + 8 = 3 * j_1 + 5 * k))

Running step: (SKOLEM -1 ("j!2" "k!1"))
For the top quantifier in -1, we introduce Skolem constants: (j!2 k!1), this simplifies to:

stamps.2

{-1} j!1 + 8 = 3 * j!2 + 5 * k!1
[1] (EXISTS (j_1: nat), (k: nat): (j!1 + 1 + 8 = 3 * j_1 + 5 * k))

The following steps require user interaction:

Running step: (CASE "k!1=0")
Case splitting on k!1=0, this yields 2 subgoals:

stamps.2.1

{-1} k!1 = 0
[-2] j!1 + 8 = 3 * j!2 + 5 * k!1
[1] (EXISTS (j_1: nat), (k: nat): (j!1 + 1 + 8 = 3 * j_1 + 5 * k))

Running step: (QUANT 1 ("j!2-3" "2"))
Instantiating the top quantifier in 1 with the terms: (j!2-3 2), this yields 2 subgoals:

stamps.2.1.1

[-1] k!1 = 0
[-2] j!1 + 8 = 3 * j!2 + 5 * k!1
{1} j!1 + 1 + 8 = 3 * (j!2 - 3) + 5 * 2

Running step: (ASSERT)
Invoking decision procedures, this completes the proof of stamps.2.1.1.

stamps.2.1.2 (TCC):

[-1] k!1 = 0
[-2] j!1 + 8 = 3 * j!2 + 5 * k!1
{1} j!2 - 3 >= 0

Running step: (QUANT 2 ("j!2+2" "k!1-1"))
No suitable (+ve EXISTS/-ve FORALL) quantified formula found.
No change on: (QUANT 2 (j!2+2 k!1-1))

stamps.2.1.2 (TCC):

[-1] k!1 = 0
[-2] j!1 + 8 = 3 * j!2 + 5 * k!1
{1} j!2 - 3 >= 0

Running step: (ASSERT)
Invoking decision procedures, this completes the proof of stamps.2.1.2.
this completes the proof of stamps.2.1.stainps.2.2[—1] j!1 + 8 = 3 * j!2 + 5 * k!1{1} k!1 = 0[2] (EXISTS (j_1: nat), (k: flat): (j!1 + 1 + 8 = 3 * j_1 + 5 * k))Running step: (ASSERT)Invoking decision procedures, this simplifies to:stamps.2.2[—1] j!1 + 8 = 3 * j!2 + 5 * k!1[1] k’l=O{2} (EXISTS (j_1: nat), (k: nat) (j1 + 9 3 * j_i + 5 * k))49Running step: (QUANT 2 (“j!2+2” “k!l—l”))Instantiating the top quantifier in 2 with the terms: (j!2+2 k!l-l), this simplifies to:stamps.2.2[—1] j!1 + 8 = 3 * j!2 + * ii[1] k!1 = 0{2} j!1 + 9 = 3 * (j!2 + 2) + 5 * (k!1 — 1)Running step: (ASSERT)Invoking decision procedures, this completes the proof of stamps.2.2.This completes the proof of stamps.2.Q.E.D.Run time = 10.67 secs.Real time = 11.65 secs.50NIL>Finally, the following example illustrates the use of a general purpose strategyinduct -rewrite-bddsimp, that involves induction, rewriting and propositional simplification. The theorem is based on the property of a Fibonacci sequence: 1, 1,2, 3, 5 Here, an element, except the first two, is the sum of the its two immediate predecessors. If we denote the sum of n (n > 0) elements in the sequenceby fibsum(n), then we are required to prove the property that the sum is equal tofib(n+2) + 1. The PVS specification can be given as follows:51fib: ThEORYBEGINn: VAR natfib(n): RECURSIVE nat =IF ii = 0 THEN 1ELSIF n = 1 THEN 1ELSE fib(n - 2) + fjb(n - 1)ENDIFMEASURE LAMBDA n: nfibsum(n): RECURSIVE nat =IF n = 0 THEN 3ELSE fib(n) + fibsum(n- 1)ENDIFMEASURE LAMBDA n: nFibSumThm: THEOREMfibsum(n) = fib(n + 2) + 1END fib52The verification proceeds automatically by using a strategy based on induction,rewriting and propositional simplification as follows:FibSumThm{1} (FORALL (n: nat): fibsum(n) = fib(n + 2) + 1)Rule? 
(auto—rewrite—theory “fib”)Adding rewrites from theory fibAdding rewrite rule fibAdding rewrite rule fibsumAuto-rewritten theory fibRewriting relative to the theory: fib,this simplifies to:FibSumTbm[1] (FORALL (n: nat): fibsuin(n) = fib(n + 2) + 1)Rule? (induct—rewrite—bddsimp “n”)fibsum rewrites fibsum(O)53to 3fib rewrites fib(O)to 1fib rewrites fib(1)to 1fib rewrites fib(2)to 2fib rewrites fib(j ! 1 + 1)to IF j!1 + 1 = 1 THEN 1 ELSE fib(j!1 + 1— 2) + fib(j!1 + 1 — 1) ENDIFfib rewrites fib(j ! 1 + 2)to fib(j!1)+ IF j!1 + 1 = 1 THEN 1ELSE fib(j!1 + 1— 2) + fib(j!1 + 1 — 1)ENDIFfibsum rewrites fibsum(j ! 1 + 1)to IF j!1 + 1 = 1 THEN 1 ELSE fib(j!1 + 1— 2) + fib(j!1 + 1 — 1) ENDIF+ fibsum(j!1)fib rewrites fib(j ! 1)to IF j!1 = 1 THEN 1 ELSE fib(j!1— 2) + fib(j!1— 1) ENDIFfib rewrites fib(j!1 + 3)to IF j!1 + 1 = 1 THEN 1 ELSE fib(j!1 + 1— 2) + fib(j!1 + 1— 1) ENDIF54+ fib(j!1)+ IF j!1 = 0 THEN 1ELSE fib(j!1— 1)+ IF j!1 = 1 THEN 1ELSE fib(j!1— 2) + fib(j1— 1)ENDIFENDIFfib rewrites fib(j !1)to IF j!1 = 1 THEN 1 ELSE fib(j!1— 2) + fib(j!1— 1) ENDIFBy induction on n and rewriting,Q.E.D.Run time = 10.43 secs.Real time = 30.62 secs.55Chapter 4Efficient MechanicalVerification4.1 IntroductionIn this chapter, we describe the seamless integration of decision procedures for efficient model-checking and propositional simplification within a generic theoremproving environment. The decision procedures are based on Binary Decision Diagram (BDD)’ [Bry92]. A survey by Gupta [Gup92] provides an overview of various‘By BDD, we mean Reduced Ordered BDD (ROBDD).56formal verification techniques including theorem-proving and model-checking.In model-checking [McM93J, a typical implementation specification is a state-machine. The verification that the implementation satisfies a property is carriedout by reachability analysis. 
The relationship that a model I satisfies a property S is written as:

    I |= S

In generic theorem-proving, the specification could be of any form belonging to the logical language of the theorem-prover. (A typical logical language is based on typed higher-order logic.) The verification of a property proceeds by a series of applications of deduction rules such as induction. The relationship that an implementation I satisfies a property specification S is written as:

    |- I => S

Our seamless integration allows the model-checker to be invoked just like another decision procedure inside the theorem-prover. We have applied our integrated system to the verification of safety and liveness properties of a variety of hardware and software examples, in which the entities may be of arbitrary size.

The rest of the chapter is organized as follows. We first give the motivation for our work on integration in Section 4.2. Next, we give a brief account of terminology in Section 4.3. In Section 4.4, we describe the integration scheme used in the decision procedure. In Section 4.5, we recapitulate the PVS specification logic and verification architecture. The algorithm used to implement the integration of the decision procedure is described in Section 4.6, with a brief mention of supplying contextual information and a proof of correctness. An integration of model-checking as a decision procedure within theorem proving is described in Section 4.7. In Section 4.8, we give a small illustration of using the decision procedure. Finally, discussion and conclusions are summarized in Section 4.10.

4.2 Motivation

Whereas generic theorem-proving provides powerful abstraction mechanisms, model-checking based on BDDs provides efficient propositional simplification, and verification of properties of state-machine specifications.
While model-checking could be used to verify a typical cache-coherence protocol that has a small number (typically 5) of processors, it suffers from the state-explosion problem [McM93, BCM+90b] for a larger number of processors. Further, if we have an unspecified, arbitrary number of processors, model-checking would not be able to handle the verification problem at all. Generic theorem-proving, by contrast, is capable of handling a verification problem that involves arbitrary structure. However, theorem-proving does not provide efficient automatic procedures for propositional simplification and state-machine verification.

The BDD-based decision procedures are used as efficient simplifiers for logical formulas in the generic proof checker. This essentially involves abstracting logical formulas into well-formed expressions of boolean type, which are suitable for simplification by an external BDD-based simplifier, and getting back simplified boolean expressions. The simplified boolean expressions are then mapped back into corresponding well-formed logical formulas for further proof-checking. Additionally, the BDD-based simplifier is supplied with contextual information associated with terms of the logical formulas in the proof checker.

There has been earlier work on linking VOSS, a symbolic model-checker, with HOL, a proof checker [JS93]. Their work, in hardware verification, was the first attempt to use a combined approach of theorem proving and BDD-based model checking. However, the focus in that work was on using VOSS to eventually determine the truth of a conjecture in a hardware specification that had been sufficiently reduced to a concrete specification using HOL. The model-checker was used only in the final step of a proof. The form of logical formulas sent to the model-checker was restricted due to a syntax-based interface scheme. One such restriction was that the atomic terms of the formula had to be Boolean.
There was no mechanism for supplying contextual information from the proof checker to the model checker. Also, a general facility to map back results obtained from VOSS into HOL was not present.

Our work, on the other hand, does not impose any restrictions on the structure of logical formulas. The components of a non-atomic Boolean term could be of any arbitrary type. Further, it allows contextual information to be supplied to the BDD-based simplifier. We use the BDD-based procedure as if it were one of several decision procedures available in the proof checker, by admitting results from the simplifier to be mapped up into the logic domain. The simplifier uses Reduced Ordered BDDs (ROBDDs), a canonical representation of boolean expressions [BRB90], to compute greatest (ν) and least (μ) fixpoints [Eme90, BCM90b]. Properties expressed as temporal logic formulas are transformed into computations of greatest and least fixpoints. Due to the canonicity property of ROBDDs, we can interpret our scheme as using execution or computation to speed up proof checking. We have implemented the system in the Prototype Verification System (PVS) from SRI International [OSR93b] using the BDD-based simplifier from TUE [Jan93a, vEJ94].

4.3 Terminology

Here, we briefly recount the terms we use to describe the objects in the logic domain. We follow the terminology employed in Gentzen's sequent calculus, succinctly summarized by Rushby [Rus93].

• A term is a constant symbol, a variable symbol, or a function application.
• An atomic formula is a predicate symbol applied to a term.
• A propositional connective is one of ¬, ∧, ∨, ⊃, and IF-THEN-ELSE.
• A quantifier is either ∀ or ∃.
• A well-formed formula (wff) is an atomic formula, a quantified wff, or a series of wffs combined using propositional connectives.
• A sentence is a wff that does not contain free (unbound) variables.
The wff is then said to be closed.
• A sequent is of the form Γ ⊢ Δ, where Γ and Δ are each a set of sentences. The sentences in Γ are assumptions, while those in Δ are conclusions. The meaning of a sequent, intuitively, is that the conjunction of the assumptions implies the disjunction of the conclusions.

4.4 Integration Scheme

A sentence in a sequent is mapped to a tree structure, whose nodes are closed wffs that are related by any one of the propositional connectives. The terminal leaves of this tree are wffs which cannot be further mapped to such a tree structure (i.e. the terminal wffs do not have any more top-level propositional connectives). The wffs are then substituted by variables of boolean type, with syntactically equal wffs being replaced by a single variable. A new formula is reconstructed after substitution. We call this a Least Abstract Boolean Formula (LABF). Furthermore, for each wff in the original tree, the global context in the logical domain is checked for any other propositional relation (i.e. whether it appears with a wff combined by a propositional connective) with other wffs in that tree. Such contextual assertions are also similarly mapped up to an LABF, and contextual formulas formed. The new formula and the contextual formulas are then sent to the BDD-based simplifier for simplification. (By simplification, we mean obtaining the simplest sum-of-cubes.) A simplified formula is received back, and the variables in it are substituted by their corresponding wffs. This simplified formula is introduced in the original sequent. We illustrate this scheme in Figure 4.1.

Figure 4.1: BDD Integration Scheme

4.5 PVS Specification Logic and Verification Architecture: Review

We have described the PVS specification language and verification features in Chapter 3.
In this section, we briefly review specification and verification in PVS, described in detail in Chapter 3. The PVS specification logic is based on typed higher-order logic [OSR93b]. It supports reals, Abstract Data Type (ADT) definitions, subtyping and dependent typing. It also features common programming language constructs such as arrays, tuples and records. The specification language also permits overloading. Further, it allows parametric and hierarchical structuring of specifications. The verification architecture consists of a type checker and a proof checker associated with decision procedures for propositional reasoning, arithmetic, (conditional) rewriting, beta reduction, and data type simplification, among others. The user interacts with the system through a small set of proof commands. Besides other facilities, one can combine proof commands into strategies (tactics). The system provides internal hooks to attach foreign decision procedures.

A PVS specification is first parsed and type-checked. The type-checker produces type correctness conditions (tccs) that assure the well-formedness of the specification. The tcc obligations are discharged using, in part, the proof-checker. At this stage, the type of every term in the specification is unambiguously known. The proof checking is conducted using a Gentzen-style sequent calculus [OSR93b]. In this backwards style, we start with the conjecture to be proved and repeatedly apply deductive rules, which might use decision procedures. The conjecture is a theorem if such a series of applications of proof rules eventually leads to an axiom or another theorem.

4.6 Integration Algorithm

In our implementation of the BDD-based decision procedure integration, we use the available internal PVS hooks for attaching the BDD-based simplifier to the proof checker. A sentence in a sequent under consideration in a proof is recursively checked for wffs combined by propositional connectives.
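This recursive scan and the LABF construction of Section 4.4 can be sketched roughly as follows. The nested-tuple formula representation and the variable-naming scheme are our own illustrative assumptions, not the actual PVS internals; the essential point is that syntactically equal leaf wffs map to one shared boolean variable:

```python
# Sketch of the LABF construction: leaves that are not built from
# propositional connectives are replaced by boolean variables, and
# syntactically equal wffs share a single variable.
CONNECTIVES = {"and", "or", "not", "implies", "iff"}

def to_labf(wff, table):
    """wff is a nested tuple such as ("and", a, b); leaves are strings.
    table maps each distinct leaf wff to its boolean variable."""
    if isinstance(wff, tuple) and wff[0] in CONNECTIVES:
        return (wff[0],) + tuple(to_labf(arg, table) for arg in wff[1:])
    if wff not in table:
        table[wff] = "b%d" % len(table)   # fresh boolean variable
    return table[wff]

table = {}
labf = to_labf(("and", "t < 2", ("or", "t < 2", "instr = NOOP")), table)
print(labf)  # ('and', 'b0', ('or', 'b0', 'b1'))
```

The inverse of `table` is what maps the simplifier's answer back onto wffs in the sequent.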
Terms whose types are finite sets are efficiently encoded in terms of boolean values. Such terms are generally used to describe finite state machines, and properties of the state machines.

4.6.1 Supplying Contextual Information

The terms in a sentence could be associated with facts in the global context of the proof checking environment. This information has to be used, in general, by decision procedures. A typical situation in which contextual information is used occurs due to the introduction of abstract data types (ADTs). The proof checker generates axioms for enumerated type definitions, a kind of ADT. Two kinds of axioms are generated:

• Exclusivity axiom: this states that a variable declared as an enumerated type can be exactly one member of the type at a time.
• Inclusivity axiom: this states that a variable declared as an enumerated type is at least one of the members of the type.

Contextual information such as the above axioms is supplied to the BDD simplifier with the restriction operator. The restriction operation in the BDD simplifier evaluates a boolean expression under the assumption that another boolean formula holds.

4.7 Model-Checking within Theorem Proving

The integration of a model-checker or any finite state enumeration tool with a theorem prover can be done using either a "shallow embedding" or a "deep embedding" of the interface to the model-checker in the theorem prover. For example, in the HOL/VOSS implementation [JS93], which uses a shallow embedding, only the language (a very restricted temporal logic) used to specify the properties to be proved in VOSS was embedded in HOL. The model over which the validity of the properties is checked exists only in the model-checker and is not defined in the theorem prover. In a deep embedding, the approach used here, the semantics of the temporal logic and the model are represented in the theorem prover.
An advantage of deep embedding is that it is possible to reason about the embedding and the model in the theorem prover as well. For example, the process of abstracting a model before it is sent to the model-checker can be formalized in the theorem prover.

In our integration scheme [RSS95], Computation Tree Logic (CTL) [McM93] operators that are parameterized with respect to a binary next-state relation N are defined in PVS in terms of the mu-calculus, which is itself embedded in the logic of PVS. Temporal logic formulas involving a specific interpretation for N, also defined in PVS, are transformed into mu-calculus formulas involving greatest and least fixpoints [BCM90a, Eme90] by rewriting in PVS. For a class of temporal formulas, where N is known to be finite, the simplified mu-calculus formulas are translated and sent to an external BDD-based mu-calculus simplifier to compute greatest and least fixpoints. If the simplifier returns true, then the original temporal formula is considered to be proved. The rewriting step, the translation process and the invocation of the simplifier are encapsulated as an automatic proof strategy that can be invoked as a primitive PVS command called model-check.

4.7.1 Propositional mu-calculus and Temporal Logic: Overview

Propositional mu-calculus is an extension of propositional calculus that includes universal and existential quantification on propositional variables. In addition to atomic propositional terms and predicates on propositional terms, propositional terms may contain predicate variables bound by greatest (ν) and least (μ) fixpoint operators. It is strictly more expressive than CTL, and provides a framework to express fairness (fairCTL) and extended temporal modalities [EL85].

There have been several variations of the mu-calculus proposed in the past [Cle89, EL85, Koz83, Par89, BCM90a].
We closely follow the formal definition of the syntax of propositional mu-calculus from Clarke and others [BCM90a] that forms the basis of the model-checker [Jan93b] used in this work. Let Σ be a finite signature, in which every symbol is a propositional variable or a predicate variable with a positive arity. The two syntactic categories, formulas and relational terms, are defined in the following manner. A formula is either a conventional propositional expression or a relational term. A relational term is one of the following:

• Z, a predicate variable in Σ.
• λz1, z2, ..., zn. f, where f is a formula and z1, z2, ..., zn are propositional variables in Σ. (Quantification on propositional variables serves to succinctly express conjunction and disjunction, and does not enhance expressive power.)
• μZ.P[Z], the least fixpoint of P, where Z is a predicate variable in Σ and P is a relational term formally monotone in Z. Similarly, νZ.P is the greatest fixpoint of P, and is equivalent to the negation of the least fixpoint of ¬P[¬Z].

Temporal logics have proved to be feasible in the specification and verification of properties of concurrent programs and state-machine models [Eme90, McM93]. Temporal logics such as CTL with extensions of fairness (fairCTL) and other temporal modalities, and PLTL, can be succinctly expressed using the mu-calculus [EL85, BCM90a] defined above. Additionally, it has been shown that LTL model-checking can be reduced to fairCTL model-checking [CGH94].

4.7.2 mu-Calculus and fairCTL in PVS

We develop a fixpoint theory based on an ordering by logical implication as follows. The membership of an element in a set is denoted by its characteristic predicate. An ordering on the predicates corresponding to set inclusion is introduced by logical implication on the predicates, i.e., p1 <= p2 iff (FORALL s: p1(s) IMPLIES p2(s)). Monotonicity is defined as a relation that preserves such an ordering. A fixpoint of a functional pp is a predicate that equals the result of applying the functional to it.
The Greatest Lower Bound glb of a set set_of_pred is the intersection of all members of the set. The Least Upper Bound lub of a set set_of_pred is the union of all members of the set. The least fixpoint (mu) of a monotonic functional pp is the glb of the set of predicates satisfying the property that the application of the functional to a predicate p is less than or equal to the predicate p itself. The greatest fixpoint (nu) of a monotonic functional pp is the lub of the set of predicates satisfying the property that the application of the functional to a predicate p is greater than or equal to p itself:

mu (pp) = glb (LAMBDA p: pp(p) <= p)
nu (pp) = lub (LAMBDA p: p <= pp(p))

We have mechanically verified in PVS the property that predicates ordered by logical implication constitute a complete lattice.

The mu-calculus theory thus developed is used to develop the PVS theory (shown in Table 4.1) for fairCTL. The fixpoint and fairCTL theories are parametrized by an uninterpreted type t. In a state-machine model specification, the type t is instantiated to the type of the state. In the fairCTL theory, first the next-state relation N is defined as a binary predicate on a pair of states. The temporal operators EX, EG, EU, and EF with existential path quantifiers are defined in a standard manner, and the corresponding operators AX, AF, AG and AU with universal path quantifiers are defined by introducing appropriate negations in the corresponding existential versions. An example definition, for EG, is the following:

EG(N,f): pred[t] = nu (LAMBDA Q: (f AND EX(N,Q)))

Fairness

A fairness constraint expresses a condition that should hold sufficiently often along a path [CG87]. Fairness constraints cannot be directly expressed in CTL.
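Before turning to fairness, the greatest-fixpoint definition of EG can be sketched operationally. With predicates represented as sets of states of a small hypothetical machine (our own example, not from the thesis material), nu is computed by iterating downwards from the full state space:

```python
# Sketch of EG(N, f) = nu(LAMBDA Q: f AND EX(N, Q)) by greatest-fixpoint
# iteration over an explicit 4-state machine; predicates are state sets.
STATES = {0, 1, 2, 3}
N = {(0, 1), (1, 0), (1, 2), (2, 3), (3, 3)}   # hypothetical next-state relation

def EX(Q):
    """States having at least one N-successor inside Q."""
    return {u for u in STATES if any((u, v) in N for v in Q)}

def EG(f):
    """Iterate Q := f AND EX(N, Q), starting from the whole state space."""
    Q = set(STATES)
    while True:
        new_Q = f & EX(Q)
        if new_Q == Q:
            return Q
        Q = new_Q

# f holds in {0, 1, 3}; EG(f) keeps exactly the states from which some
# infinite path stays inside f (the 0-1 loop and 3's self-loop).
print(sorted(EG({0, 1, 3})))  # [0, 1, 3]
```

The least fixpoint used for EU is computed dually, iterating upwards from the empty set; a BDD-based simplifier performs the same iterations on boolean-function representations of these sets.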
We use the more expressive mu-calculus to specify the fair temporal logic operators as follows. We first define fairEG(N,f)(Ff):

fairEG(N,f)(Ff): pred[t] =
  nu(LAMBDA P: EU(N, f, f AND Ff AND EX(N, P)))

which states that "there exists a computation path in the state machine model defined by the next-state relation N, in which a fair formula Ff holds infinitely often and formula f holds globally along the path", where N is the next-state relation, Ff is the formula that has to hold infinitely often, and f is the formula that has to hold globally on all such fair paths. The definition of fairness we have used above is based on the work by Emerson and Lei [EL85], which also allows other notions of fairness. The other fairCTL operators are defined by using the fairEG operator in the same manner as Burch et al. [McM93, BCL+94]. The advantage of having an explicit formalization of fairness in a verification system is that it allows one to check whether there exists at least one fair path in a given model. Without such an explicit formalization, there is a danger of imposing fairness constraints that might lead to empty models in which every property holds trivially.

4.7.3 Translation from PVS to mu-calculus

The PVS specification language [OSR93a] features basic types such as reals, integers, naturals, and lists, and a rich set of common programming language constructs such as scalars, arrays, functions, and records for structuring specifications. However, since the low-level BDD-based mu-calculus model-checker accepts only the language of propositional mu-calculus as discussed in Section 4.7.1, an automatic translation is provided from a fragment of the PVS language to propositional mu-calculus. The fragment of the PVS language that is translated into mu-calculus consists of expressions whose types are hereditarily finite: i.e., types which are either themselves finite or constructed from expressions which are of finite types.
However, boolean expressions in which subexpression types are not hereditarily finite are translated into a single boolean variable. For example, the equality x = 1, where x is a variable of type natural, is translated into a simple boolean variable bvarx1. This scheme ensures that the output of translation is always a legal mu-calculus expression even when the input contains a subterm of non-finite type.

Let the category t-expr denote the set consisting of propositional expressions, scalars, records, tuples, arrays of a specified finite length, equalities on t-exprs, conditional expressions on t-exprs, t-exprs with quantifications on boolean variables, and t-exprs quantified by μ or ν. In the following, we give an abstract description of the translation in terms of the function pvs-to-mu. The input to pvs-to-mu is either a t-expr belonging to the PVS language, or a list whose elements are either t-exprs or nested lists of t-exprs. The output of pvs-to-mu is a mu-calculus formula as defined in Section 4.7.1.

1. pvs-to-mu(prop-expr) = prop-expr, where prop-expr is a propositional expression. Thus, true, false, boolean variables, and boolean expressions consisting of only simple boolean variables are translated without change.

2. pvs-to-mu(scalar) = binary-encoding(scalar), where a scalar is either a variable or a constant of enumerated type, and binary-encoding is the corresponding boolean representation of the scalar in a binary encoding scheme. For example, let instr: TYPE = {jmp, mov, add} be a scalar type, and x be a scalar variable of instr type. The scalar constants jmp, mov, and add are translated as the lists [true,true], [true,false], and [false,false], respectively. The scalar variable x is translated as a list [bvar1,bvar2].

3.
pvs-to-mu(nested_list_of_t-exprs) = flattened_tuple_of_t-exprs. For example, [[bvar1,bvar2],bvar3] is translated as bvar1,bvar2,bvar3. Such a translated tuple will eventually be incorporated as part of arguments to predicates, and as variables bound by λ and quantifiers. Thus P(x, x) is translated as P(bvar1,bvar2,bvar1,bvar2), where x is of type instr.

4. pvs-to-mu(record) = [pvs-to-mu(field1), ..., pvs-to-mu(fieldN)], where record = [# field1, ..., fieldN #] and field1, ..., fieldN are t-exprs. A tuple is translated in a similar manner. For example, let state: TYPE = [# x1,x2: instr #] be a record type, and s1: VAR state be a record variable. The translation output gives a list [bvar-x1,bvar-x1,bvar-x2,bvar-x2]. This will eventually be translated into a mu-calculus expression via rules 7, 9, and 10.

5. pvs-to-mu(array([0..N] -> fType)) = [pvs-to-mu(el1), ..., pvs-to-mu(elN)], where the range type fType is hereditarily finite, el1, ..., elN are the elements of the array, and N is specified and finite.

6. pvs-to-mu(t-expr1 = t-expr2) = (pvs-to-mu(t-expr1) = pvs-to-mu(t-expr2)).

7. pvs-to-mu(nested_list_of_t-exprs_1 = nested_list_of_t-exprs_2) = ∧_i pvs-to-mu(t-expr1[i] = t-expr2[i]), i.e., equality on lists is translated to the conjunction of the equalities on the elements of the flattened lists.

8. pvs-to-mu(t-expr = IF test THEN then-t-expr ELSE else-t-expr) = IF test THEN pvs-to-mu(t-expr = then-t-expr) ELSE pvs-to-mu(t-expr = else-t-expr), i.e., the inner IF expressions are lifted to the top.

9. pvs-to-mu(∀x. t-expr) = ∀ pvs-to-mu(x). pvs-to-mu(t-expr). This translation also holds for ∃ and λ binders in place of ∀.

10. pvs-to-mu(μZ.P(e1, e2, ..., en)) = μZ.P(pvs-to-mu(e1), pvs-to-mu(e2), ..., pvs-to-mu(en)), where e1, e2, ..., en are t-exprs.
This translation also holds for the ν binder in place of μ.

Other constructions, such as functions and subtypes of hereditarily finite types, could also be translated. The fragment of the PVS language given above is rich enough to express specifications and properties of state-machine models in a structured manner. In comparison to language front-ends for other model-checkers such as SMV [McM93], the PVS language is more expressive, and has the significant advantage of a proof system for the language.

4.8 Example: BDD-based Propositional Simplification in PVS

We give here a small example to illustrate a PVS specification for a part of a tiny microprocessor. The simplicity of the example allows us to explain our approach without digressing into many specification details. In this example, we define a theory named tinymicro and declare time as type real and instructions as an

mucalculus [t: TYPE]: THEORY
BEGIN
IMPORTING orders[pred[t]]

<=(p1,p2) = FORALL s: p1(s) IMPLIES p2(s)
ismono(pp) = (FORALL p1,p2: p1 <= p2 IMPLIES pp(p1) <= pp(p2))
isfix(pp,p) = (pp(p) = p)
glb(set_of_pred)(s) = (FORALL p: member(p,set_of_pred) IMPLIES p(s))

% least fixpoint
mu (pp) = glb (LAMBDA p: pp(p) <= p)
islfp(pp,p1) =
  isfix(pp,p1) AND
  FORALL p2: isfix(pp,p2) IMPLIES p1 <= p2

lub(set_of_pred)(s): bool = EXISTS p: member(p,set_of_pred) AND p(s)

% greatest fixpoint
nu (pp) = lub (LAMBDA p: p <= pp(p))

ff: VAR [pred[t] -> pred[t]]

lfp_is_lfp: THEOREM
  ismono(ff) IMPLIES islfp(ff, lfp(ff))

END mucalculus

Table 4.1: Parts of μ-calculus theories in PVS

ctlops [t: TYPE]: THEORY
BEGIN
IMPORTING mucalculus

u,v,w: VAR t
f,g,Q,P,p1,p2: VAR pred[t]
N: VAR [t, t -> bool]

% Higher order propositional connectives
AND(f,g)(u): bool = f(u) AND g(u);
OR(f,g)(u): bool = f(u) OR g(u)
NOT(f)(u): bool = NOT f(u);
IMPLIES(f,g)(u): bool = f(u) IMPLIES g(u);
IFF(f,g)(u): bool = f(u) IFF g(u)
% CTL operators
EX(N,f)(u): bool = (EXISTS v: (f(v) AND N(u, v)))
EG(N,f): pred[t] = nu (LAMBDA Q: (f AND EX(N,Q)))
EU(N,f,g): pred[t] = mu (LAMBDA Q: (g OR (f AND EX(N,Q))))
EF(N,f): pred[t] = EU(N, (LAMBDA u: TRUE), f)
AX(N,f): pred[t] = NOT EX(N, NOT f)
AF(N,f): pred[t] = NOT EG(N, NOT f)
AG(N,f): pred[t] = NOT EF(N, NOT f)
AU(N,f,g): pred[t] = NOT EU(N, NOT g, (NOT f AND NOT g)) AND AF(N,g)

END ctlops

Table 4.2: PVS theory for CTL operator definitions in terms of greatest and least fixpoints.

fairctlops [t: TYPE]: THEORY
BEGIN
IMPORTING ctlops

u,v,w: VAR t
f,g,Q,P,p1,p2: VAR pred[t]
N: VAR [t, t -> bool]

% fairCTL operators: CTL extended with fairness
fairEG(N,f)(Ff): pred[t] =
  nu(LAMBDA Q: f AND mu(LAMBDA P: EX(N, f AND ((Ff AND Q) OR P))))

fairEXlist_for_fairEG(N,f,P,Q)(Fflist)(u): RECURSIVE bool =
  CASES Fflist OF
    null: TRUE,
    cons(x,y): EX(N, f AND ((x AND Q) OR P))(u) AND
               fairEXlist_for_fairEG(N,f,P,Q)(y)(u)
  ENDCASES
  MEASURE (LAMBDA N,f,P,Q: (LAMBDA Fflist: length(Fflist)))

fairEG(N,f)(Fflist): pred[t] =
  nu(LAMBDA P: EU(N, f, f AND Ff AND EX(N, P)))

Fair?(N,Ff): pred[t] = fairEG(N, LAMBDA u: TRUE)(Ff)
Fair?(N,Fflist): pred[t] = fairEG(N, LAMBDA u: TRUE)(Fflist)

fairEX(N,f)(Ff): pred[t] = EX(N, f AND Fair?(N,Ff))
fairEX(N,f)(Fflist): pred[t] = EX(N, f AND Fair?(N,Fflist))
fairEU(N,f,g)(Ff): pred[t] = EU(N, f, g AND Fair?(N,Ff))
fairEU(N,f,g)(Fflist): pred[t] = EU(N, f, g AND Fair?(N,Fflist))
fairEF(N,f)(Ff): pred[t] = EF(N, f AND Fair?(N,Ff))
fairEF(N,f)(Fflist): pred[t] = EF(N, f AND Fair?(N,Fflist))
fairAX(N,f)(Ff): pred[t] = NOT fairEX(N, NOT f)(Ff)
fairAX(N,f)(Fflist): pred[t] = NOT fairEX(N, NOT f)(Fflist)
fairAF(N,f)(Ff): pred[t] = NOT fairEG(N, NOT f)(Ff)
fairAF(N,f)(Fflist): pred[t] = NOT fairEG(N, NOT f)(Fflist)
fairAG(N,f)(Ff): pred[t] = NOT fairEF(N, NOT f)(Ff)
fairAG(N,f)(Fflist): pred[t] = NOT fairEF(N, NOT f)(Fflist)
fairAU(N,f,g)(Ff): pred[t] = NOT fairEU(N, NOT g, NOT f AND NOT g)(Ff)
                             AND fairAF(N,g)(Ff)

END fairctlops

Table 4.3: PVS theory for fairCTL operator definitions in terms of greatest and least fixpoints.

enumerated type
of 3 elements. We then declare the variables t and instr. The function symbol tinydef is declared as a predicate on the tuple [time, instructions]. The definition of tinydef is given using the IF-THEN-ELSE construct. The theorem tinytheorem is to be proved.

tinymicro: THEORY
BEGIN
time: TYPE = real
instructions: TYPE = {JMP, MOVE, NOOP}
t: VAR time
instr: VAR instructions
tinydef: pred[[time, instructions]]
tinydef(t, instr) =
  IF t < 2 THEN instr = NOOP
  ELSE instr = JMP OR instr = MOVE OR instr = NOOP
  ENDIF
tinytheorem: THEOREM
  FORALL t, instr: t > 2 IMPLIES tinydef(t, instr)
END tinymicro

We now elucidate the verification using the ROBDD decision procedure in the context of the above example in PVS. A PVS goal appears in the following form: the formulas above the dashed line (turnstile) are assumptions, and those below the line are conclusions. The proof checking is done in the backwards style described in Section 4.5.

We first set up the PVS goal and remove the universal quantifiers:

tinytheorem
{1} FORALL t, instr: t > 2 IMPLIES tinydef(t, instr)

For the top quantifier in 1, we introduce Skolem constants (t!1 instr!1); this simplifies to:

tinytheorem
{1} t!1 > 2 IMPLIES tinydef(t!1, instr!1)

Applying disjunctive simplification to flatten the sequent, this simplifies to:

tinytheorem
{-1} t!1 > 2
{1} tinydef(t!1, instr!1)

Expanding the definition of tinydef, this simplifies to:

tinytheorem
[-1] t!1 > 2
{1} IF t!1 < 2 THEN instr!1 = NOOP
    ELSE instr!1 = JMP OR instr!1 = MOVE OR instr!1 = NOOP
    ENDIF

Here, we choose to use our ROBDD decision procedure on formula 1 in the above sequent. The integration algorithm first constructs an LABF by assigning boolean variables to the expression t!1 < 2 and to the equalities in the THEN-branch and ELSE-branch of the IF-THEN-ELSE expression. Then the contextual information is extracted corresponding to the instructions enumerated type defined in the tinymicro theory above.
Here the contextual assertions are:80((iistr!1 = JMP) OR (instr!1 = MOVE) OR (instr!1 = NOOP) AND(NOT ((instr!1 = JMP) AND (instr!1 = MOVE))) AND...The BDD simplifier, then evaluates the IF-THEN-ELSE expression under the restrictionthat the enumerated-type exclusive and inclusive axioms hold. In this example, only theinclusive axiom is useful. We then get back a simplified expression as below:tinytheorein[—1] t!1 > 2{i} instr!1 = NOD? OR NOT t!1 < 2We then invoke built-in arithmetic and propositional decision procedures to finally arriveat:tinytheorein81C—i] t!i > 2{i} TRUEwhich is trivially true.4.9 Examples: BDD-based Model-Checking in PVSWe illustrate here, a typical mutual-exclusion protocol due to Peterson [Pet8l}.In this protocol, processors might be in one of the four states: idle, entering,critical, exiting. However, no two processors might be in the critical section atthe same time. Using PVS, we can reduce an N-processor mutual-exclusion protocolinto 2-processor mutual-exclusion by successively running a tournament competitionfor N/2 processors at each competition stage, until we arrive at a 2-processor corn82petition [Sha94]. At this stage, the BDD-based model-checking decision procedureis called to verify the mutual-exclusion property of the reduced problem. We givethe PVS specification of the 2-processor mutual-exclusion protocol in Table 4.4. Theverification proceeds by rewriting temporal operators into ii and ji, and calling theBDD-based simplifier to compute fixpoints and map back the results into PVS. Theautomatic proof transcript is given in Appendix A .4.10 Discussion and ConclusionsReasoning can be done by a series of applications of syntactic inference rules. However, we can choose a representation for the formulas and the simplification doneat the representation level more efficiently. We can view this representation as asemantic model of the formulas. 
The PVS architecture of decision procedures can

(The proofs in this chapter were run on a SUN/Sparc-2 with 32 Meg memory.)

Peterson: THEORY
BEGIN
IMPORTING ctlops

% four states for each processor
stateenum: TYPE = {idle, entering, critical, exiting}

% global state for the 2 processors "pc1" and "pc2", a shared "try"
% variable and "sem1","sem2": two semaphores, one for each processor "pc{1,2}"
staterec: TYPE = [# pc1, pc2: stateenum, try, sem1, sem2: boolean #]

s,s1,s2: VAR staterec
state, state1, state2: VAR staterec

% next state relation for processor pc1
nextp1(state1, state2) =
  IF idle?(pc1(state1)) THEN
    idle?(pc1(state2)) OR (entering?(pc1(state2)) AND try(state2))
  ELSIF entering?(pc1(state1)) AND try(state1) AND NOT sem2(state1) THEN
    critical?(pc1(state2))
  ELSIF critical?(pc1(state1)) THEN
    (critical?(pc1(state2)) AND try(state2)) OR
    (exiting?(pc1(state2)) AND NOT try(state2))
  ELSE idle?(pc1(state2)) ENDIF

% similarly for nextp2 and nextsem; composite next state relation
N(state1,state2) = nextp1(state1,state2) AND nextp2(state1,state2) AND
                   nextsem(state1,state2)

% initial state for "pc1", "pc2" and semaphores
init(state): boolean = idle?(pc1(state)) AND idle?(pc2(state))
initsem(state): boolean = NOT sem1(state) AND NOT sem2(state)

% No two processors are in the critical section at the same time
safe: THEOREM
  FORALL state:
    (init(state) AND initsem(state)) IMPLIES
      AG(N, LAMBDA state: NOT (critical?(pc1(state)) AND
                               critical?(pc2(state))))(state)

% There is at least one path from all states that leads eventually
% to a state in which a process enters a critical section
live: THEOREM
  AG(N, (LAMBDA s: entering?(pc1(s))) =>
        EF(N, LAMBDA s: critical?(pc1(s))))(s)

% Some paths could lead to deadlock (starvation)
not_live: THEOREM
  AG(N, (LAMBDA s: entering?(pc1(s))) =>
        AF(N, LAMBDA s: critical?(pc1(s))))(s)

% There is at least one fair departing path from the initial state
% Starvation (deadlock) freedom
fair: THEOREM
  (init(s) AND initsem(s)) => Fair?(N, LAMBDA s: critical?(pc1(s)))(s)

END Peterson

Table 4.4: State machines and their properties in PVS: Mutual-Exclusion Protocol Specification

be viewed, in part, as allowing the simplification to be done at the representation level. Here, in our work, ROBDDs are the representation level. Since ROBDDs are canonical representations of boolean expressions, we can view the reduction of a sentence to its ROBDD representation as performing computation or execution. From this perspective, this work is an outgrowth of the work on studying the executability of higher order logic specifications [Raj92, RJS93]. We have used our integrated system on a variety of examples, with dramatic improvements in the speed of propositional reasoning in PVS. This has led to fully automatic proof procedures for many classes of hardware designs, including n-bit ALUs and pipelined microprocessors [KK94, SM95]. We have used the combination of theorem-proving and model-checking to automatically verify hardware and software specifications whose verification would not have been possible with either technique alone.

The simplified form we have chosen is the simplest sum-of-products form. However, further minimization can be carried out using boolean optimization methods. It should be noted, however, that ROBDDs are sensitive to the ordering of the variables appearing in the ROBDD.
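The sensitivity of ROBDD size to variable ordering can be demonstrated with the classic function f = (x1 AND y1) OR (x2 AND y2) OR (x3 AND y3). The sketch below builds reduced ordered BDDs by hash-consed Shannon expansion (a simplified, exponential-time construction for illustration only, not how a production BDD package works) and counts internal nodes under two orderings:

```python
# Sketch: ROBDD size depends on the variable ordering. We count internal
# nodes of the reduced ordered BDD for
#   f = (x1 & y1) | (x2 & y2) | (x3 & y3)
# under an interleaved and a separated ordering.
def robdd_size(f, order):
    """f maps a dict {var: bool} to bool; order is the variable ordering."""
    unique = {}                        # hash-consing table: (var, lo, hi) -> id

    def build(i, env):
        if i == len(order):
            return f(env)              # terminal: True or False
        var = order[i]
        low = build(i + 1, {**env, var: False})
        high = build(i + 1, {**env, var: True})
        if low == high:                # reduction rule: skip redundant test
            return low
        key = (var, low, high)
        if key not in unique:
            unique[key] = len(unique)
        return ('node', unique[key])

    build(0, {})
    return len(unique)

f = lambda e: (e['x1'] and e['y1']) or (e['x2'] and e['y2']) or (e['x3'] and e['y3'])
good = robdd_size(f, ['x1', 'y1', 'x2', 'y2', 'x3', 'y3'])
bad = robdd_size(f, ['x1', 'x2', 'x3', 'y1', 'y2', 'y3'])
print(good, bad)  # prints "6 14": the interleaved ordering is far smaller
```

In general, the interleaved ordering keeps this family of functions linear in the number of variable pairs, while the separated ordering makes it grow exponentially, which is why heuristic (re)ordering matters in practice.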
In our implementation, we allow the ROBDD simplifier to choose an arbitrary ordering, and appropriately trigger heuristic dynamic reordering if the BDD graph grows large.

Another possible approach to using BDD-based simplification in proof checking is to build deductive mechanisms starting from a BDD basis. However, such a data-structure-based approach would pose limitations on the proof checker. The HOL-VOSS approach [JS93] and our approach propose that the proof checker form the foundational basis. The deductive machinery then has the flexibility to use a variety of data structures and decision procedures apart from BDD-based extensions.

In a few cases, such as multipliers, the BDDs could grow unreasonably large. We envisage that, in such cases, BDD-based simplification can be augmented with other proof checking tools available in PVS. We also plan to explore the use of ROBDDs in a wider context of proof checking and verification. Finally, our work has opened avenues for integrating other specialized decision procedures (such as those for hardware verification) with the built-in decision procedures available in a generic verifier such as PVS. For this reason, our seamless integration, which allows a model checker to be used just as another decision procedure, plays a key role in verification in-the-large. The system has been used in the verification of a core implementation of the Fibre Channel Protocol [NRP95].

Chapter 5

Specification of SIL Graph Structure in PVS

A specification of the structure of SIL graphs is developed step by step in this chapter. We introduce each entity in a SIL graph, and give its specification in PVS. We repeat some of the definitional concepts reviewed in Chapter 2 to put them in the context of our specification. We explain the specification of ports in Section 5.1, followed by the specification of edges in Section 5.2, and nodes and SIL graphs in Section 5.3.
Finally, in Section 5.4 we establish the properties that need to hold for a SIL graph to be well-formed, and thus have a proper behavior.

5.1 Port and Port Array

A port is a placeholder for data values. The set of data values that it can hold can be restricted, and such a set is denoted by a type. For example, a port that is allowed to hold only true and false is of Boolean type. We would like to model a SIL graph and associated transformations for any desired set of data values. We therefore define a port as a placeholder for an arbitrary set of data values, by declaring it as an uninterpreted type:

port: TYPE

We can create various ports by introducing names such as p0, p1, p2, and declaring them as variables (VAR) of type port:

p0, p1, p2: VAR port

An array of ports is defined as a record type containing two fields. The first field, size, of type nat — the set of natural numbers {0, 1, 2, ...} — specifies the size of the array. The second field is the array of ports, whose size is equal to that specified by the first field. Such a typing, in which the type of one field depends on another field, is known as dependent typing. The ARRAY is specified as a function that takes a member from the set of natural numbers less than size and gives a member of type port:

parray: TYPE = [# size: nat,
                  port_array: ARRAY[{i: nat | i < size} -> port] #]

5.2 Edges

An edge is a directed line connecting two ports. Mathematically, it is a relation on two ports. For convenience, we will call the port from which the edge is directed the source, and the port to which the edge is directed the sink. There are two kinds of edges in SIL: data-flow edges and sequence edges. A data-flow edge between two ports indicates the flow of a token from the source to the sink. A sequence edge between two ports specifies an ordering between them: we will say that a port A is less than a port B if and only if the token fired at B, rather than the token fired at A, determines the value of a sink port C connected to both A and B.
A data-flow edge between two ports enforces an implicit ordering between the source and the sink: the source is strictly less than the sink. There is no token flow through a sequence edge.

We specify both kinds of edges as relations on ports. They modify the behavior of a SIL graph in different ways. We postpone the discussion of the properties of these relations to the next chapter, and here just specify the types of the relations as predicates — pred — on pairs of ports. A true value of the predicate indicates the presence of an edge between the ports, while a false value indicates the absence of an edge. The predicate dfe is the data-flow edge relation, and sqe is the sequence edge relation, as shown in Table 5.1.

dfe: pred[[port, port]]
sqe: pred[[port, port]]

Table 5.1: PVS types for data-flow edge and sequence edge; see Figure 5.1.

Figure 5.1: SIL data-flow and sequence edges; see Table 5.1.

We can explicitly define corresponding relations between arrays of ports. For example, we define the data-flow edges between arrays of ports as:

par, par0: parray

same_size(par, par0) = size(par) = size(par0)

dfear(par0, (par1: {par | same_size(par, par0)})) =
  FORALL (i | i < size(par0)):
    dfe(port_array(par0)(i), port_array(par1)(i))

The direction of the edges is from the first port to the second port. We illustrate this in Figure 5.1.

5.3 Node, Conditional Node and Graph

A node is a structure that takes inputs and gives outputs, satisfying a data relation associated with the node. Typical nodes are adders and multiplexers, associated with the corresponding addition and multiplexing data relations. We also associate with each node an order relation, which imposes an order on the inputs and outputs. Externally, a node receives inputs at input ports, and delivers outputs at output ports.
Since a port is a placeholder for a definite set of data values — of a definite type — the input and output values must belong to the types of the input and output ports.

A conditional node is a node having special Boolean inputs, which control whether the data relation between the inputs and outputs holds. Such inputs are known as conditions. The conditions may appear either inverted or noninverted. If all of the noninverted conditions on a node are true, and all the inverted conditions are false, then the outputs and inputs of the node satisfy its data relation. But if any one of the noninverted conditions is false, or any one of the inverted conditions is true, then the output has an arbitrary value. In such a case, the output value is restricted only by the type of the output port. Effectively, we can replace all the condition ports of a conditional node by just one condition port, which takes the conjunction of the condition inputs with appropriate inversions [EMH+93].

A graph is a structure constructed from ports, edges, nodes, and conditional nodes. However, we can hide the structure of a graph and externally view it as a node with input and output ports, and data and order relations. We can then specify graphs as nodes with internal structure and internal relations. This allows for hierarchical construction of smaller graphs into larger graphs.

In our specification, we first introduce a conditional node in PVS as a record type, as shown in Table 5.2, where

• inports are the input ports, declared as a parray type — that is, they are taken together as one array of an unspecified size.

• outport is an output port, declared just as a port.
In this work we consider a single output port for convenience in specification. However, in general, the output should also be declared as an array of ports, as is the case for hierarchically built graphs and for primitive nodes such as SPLIT.

cnode: TYPE =
  [# inports: parray,
     outport: port,
     intports: parray,
     condport: port,
     cond: pred[port],
     datarel: pred[[{p: parray | size(p) = size(inports)}, port]],
     orderrel: pred[[{p: parray | size(p) = size(inports)}, port]],
     intrel: pred[[parray, parray]] #]

Table 5.2: PVS specification of conditional node as a record type; see Figure 5.2.

• intports are the internal ports, declared as a parray type to specify the internal ports the conditional node might have. Such a conditional node would be a hierarchically built graph.

• condport is a single port providing access for the condition input.

• cond is a condition function giving the value of the condition on the condition port: this can be either true or false. It is declared as type pred[port] — that is, a predicate on port.

• datarel is the data relation governing the output value based on the inputs. It is declared as a predicate (relation) pred on a tuple. The first type in the tuple is the subset of port arrays whose size is the same as that of inports, and the second type is a port corresponding to the outport.

Figure 5.2: SIL conditional node; see Table 5.2.

• orderrel is declared with exactly the same type as datarel. The difference lies only in that it governs the order of output and input values; this is not visible in the structural type definition here.

• intrel is the internal relation corresponding to the internal structure and connectivity of the conditional node.
It is derived from the internal ports and the edges connecting the internal ports.

The conditional node is shown in Figure 5.2.

We introduce predicates to compare the structures of conditional nodes based on their number of input ports:

cn0, cn1: cnode

same_size(cn0, cn1) =
  size(inports(cn0)) = size(inports(cn1))

A node without a conditional port is modeled as a conditional node with the condition on its conditional port being always true. The advantage of such a modeling is that it captures both an unconditional node and a conditional node whose conditional port is always set to true. Since they have identical behavior, this minimizes our model by having just one structure for both. In PVS, a feature known as subtyping allows one to define a type that denotes a subset of the values of an already defined type. We specify the node type in Table 5.3 by using this PVS subtyping feature. Figure 5.3 illustrates this specification.

node: TYPE = {n: cnode | cond(n) = LAMBDA (p: port): TRUE}

Table 5.3: Node as a subtype of a conditional node; see Figure 5.3.

Figure 5.3: Node as a subtype of a conditional node; see Table 5.3.

We model a graph exactly the same as a conditional node, since we have constructed a conditional node to have internal structure and an internal relation. This allows a graph to be viewed as another node, and thus allows for a hierarchical construction of larger graphs. We specify a graph as a type equal to the conditional node type:

graph: TYPE = cnode

5.4 Well-formedness of a SIL Graph

A SIL graph has to satisfy certain structural rules governing the connectivity of ports. Only then can the behavior of a SIL graph be well-defined. For example, we cannot connect two input ports by a data-flow edge: a source has to be an output port, while a sink has to be either an input port or a conditional port.
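Before the rules are formalized, they can be read as a simple executable check. The following Python sketch is illustrative only (the port names and helper function are hypothetical, not part of the PVS specification): each port carries exactly one tag, mirroring port inclusivity and exclusivity, and a data-flow edge must run from an output port to an input or conditional port.

```python
def well_formed(kind, dfe):
    """kind: dict mapping port -> "in" | "out" | "cond";
    dfe: set of (source, sink) data-flow edges."""
    return all(kind[src] == "out" and kind[snk] in ("in", "cond")
               for src, snk in dfe)

kind = {"p0": "out", "p1": "in", "p2": "cond"}
assert well_formed(kind, {("p0", "p1"), ("p0", "p2")})
assert not well_formed(kind, {("p1", "p0")})   # source is an input port
assert not well_formed(kind, {("p0", "p0")})   # self-edges are ruled out too
```

Note that the last assertion falls out of the first two checks, just as the prohibition of self data-flow edges is derived, rather than asserted, in the PVS development.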
The structural rules are stated as axioms in PVS.

Every port has to be exactly one of an input port, an output port, or a conditional port: no port can be left dangling. Even the terminal I/O ports at the SIL graph boundary are associated with special I/O nodes. We express this as two axioms — inclusivity and exclusivity — as follows:

port_inclusive_ax: AXIOM
  FORALL (p: port): is_inport(p) OR is_outport(p) OR is_condport(p)

port_exclusive_ax: AXIOM
  FORALL (p: port):
    (is_inport(p) IMPLIES NOT (is_outport(p) OR is_condport(p))) AND
    (is_outport(p) IMPLIES NOT (is_inport(p) OR is_condport(p))) AND
    (is_condport(p) IMPLIES NOT (is_inport(p) OR is_outport(p)))

where is_inport, is_outport, and is_condport are appropriately defined, asserting the existence of a (conditional) node whose input/output/condition is the port being considered, as indicated in the following PVS specification:

is_inport(p) = (EXISTS cn, (i: {j: nat | j < size(inports(cn))}):
                  p = inport(cn, i))

is_outport(p) = (EXISTS cn: p = outport(cn))

is_condport(p) = (EXISTS cn: p = condport(cn))

That a port can be one of the internal ports of a conditional node is consistent with the properties defined here, because even internal ports must be one of the three kinds of ports.

A data-flow edge is legal only if it connects an output port to an input port or a conditional port:

dfe_port_ax: AXIOM
  FORALL p1, p2:
    dfe(p1, p2) IMPLIES (is_outport(p1) AND
                         (is_inport(p2) OR is_condport(p2)))

From the above property and the properties of ports, we can derive that self data-flow edges are forbidden.
If we make p1 = p2 in the above axiom, and use the port exclusivity axiom (given earlier) stating that any port is exactly one of input, output, and condition, we get the corresponding theorem forbidding self data-flow edges:

self_edge_not_th: THEOREM
  FORALL (p: port): NOT dfe(p, p)

It should be noted that data-flow edges between the output ports of a node and the input ports of the same node are not prohibited.

Self sequence edges are also prohibited, since sequence edges impose a strict ordering on ports. This has to be asserted as an axiom, as we have not imposed any restrictive property on the sequence edge relation:

self_seq_edge_not_ax: AXIOM
  FORALL (p: port): NOT sqe(p, p)

Since a sequence edge introduces an ordering on ports, we would expect sqe to be transitive. But, in order to have a clear separation of structure and behavior, we do not impose that property on sqe here. As we will see in Chapter 6, we instead formalize the ordering due to the sequence edges — and due to the behavior of a conditional node when the condition port has a false value — by introducing weights on pairs of ports. The transitivity property is then imposed on the ordering of weights.

Chapter 6

Specification of SIL Graph Behavior and Refinement

We informally discussed the behavior of a SIL graph in Chapter 2. We recall that the behavior is the set of ordered tuples of data values that the ports of the graph can assume, and that the external or I/O behavior is the set of ordered tuples of values at the I/O ports of the SIL graph. The behavior of a SIL graph is determined by the data relations and order relations of the nodes, the connectivity due to the data-flow edges, and the ordering imposed by sequence edges. Any implicit state information in a SIL graph is contained in the data relations of the nodes. Thus, a comparison of behaviors in any given clock cycle does not require comparing execution histories arising from possible implicit states in a SIL graph.
We discuss behavior in Section 6.1, followed by a presentation of refinement and equivalence in Section 6.2. Finally, in Section 6.3, we give a brief synopsis of the axioms and theorems developed and explained in the earlier sections of this chapter.

6.1 Behavior

A detailed definition of behavior would require establishing a concrete formal semantics of SIL, since the data values and ordering can be arbitrary. A denotational semantics of SIL has been discussed by Huijs [HK93]. However, at the level of abstraction we have chosen to specify, we bring out the high-level properties of dependency graphs, refinement, and equivalence that should hold independent of a detailed behavioral model. We thus obviate the need to specify a concrete behavioral model of dependency graphs. Such mechanisms for specification — defining the properties that have to hold — constitute our axiomatic approach. As we will see in the next chapter, we compare two SIL graphs by asserting the properties that need to be satisfied by the graphs with respect to their behavior. We can thus establish the correctness of transformations. A modification of the concrete behavioral model that remains faithful to the properties on which we have based our approach would not change our specification and verification results. Further discussion of the advantages of our approach is postponed to Chapter 9.

The behavior associated with an access point, or port, is described by the same uninterpreted type that we used in the introduction of the structural specification of a port:

port: TYPE

This is the stage where the specifications of structure and behavior coincide. Leaving unspecified the type denoting the set of values gives us the freedom to model the behavior (as with the structure) irrespective of the value type.

6.2 Refinement and Equivalence

We have developed specification techniques to describe concepts for comparing SIL graphs with respect to behavior.
A SIL graph SG2 is a refinement of another SIL graph SG1 if the behavior exhibited by SG2 is allowed by SG1. SG2 can then be an implementation of its specification SG1. In order to define graph refinement, we first describe port refinement, and derive graph refinement from the structural connectivity of a SIL graph.

We introduce an abstract refinement relation on ports:

silimp: pred[[port, port]]

The refinement relation on ports can be interpreted as follows: a port p1 is a refinement of a port p2 if the set of data values allowed by p1 is a subset of the values allowed by p2. An instance of such a relation comes about when a non-deterministic choice is made deterministic, as illustrated in Figure 6.1. Another kind of refinement could be a data type refinement: when one port is a subtype of another.

value(p1) = {value | value = value(q1)}
value(p2) = {value | value = value(q1) OR
                     value = value(q2) OR
                     value = value(q3)}
value(p1) ⊆ value(p2)
silimp(p1, p2): p1 is a refinement of p2

Figure 6.1: Example: refinement of ports due to a non-deterministic choice being made deterministic.

The refinement relation has to be reflexive and transitive. We do not impose antisymmetry, so as to allow the definition of equivalence as a special case of refinement:

silimp(p1, p1)

silimp_trans_ax: AXIOM
  silimp(p1, p2) AND silimp(p2, p3) IMPLIES
    silimp(p1, p3)

The refinement relation between arrays of ports is introduced by a property stating that a refinement relation between all corresponding ports of the port arrays implies a refinement relation between the port arrays:

par1, par2: parray

silimpar(par1, par2)

silimpar_def_ax: AXIOM
  FORALL par1, (par2 | same_size(par1, par2)):
    (FORALL (i | i < size(par1)):
       EXISTS j: silimp(port_array(par1)(i), port_array(par2)(j))) IFF
    silimpar(par1, par2)

It should be noted that refinement between port arrays does not necessarily imply the refinement relation between corresponding individual ports of the port arrays. We illustrate this notion with an example in Figure 6.2.
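Under the set-of-values reading offered above, the port and array refinement relations can be prototyped directly. This is an illustrative sketch only (ports are modelled as Python sets of values, a concrete interpretation the axiomatic specification deliberately avoids); the array check mirrors silimpar_def_ax in requiring each port of the first array to refine some port of the second, not necessarily at the same index:

```python
def silimp(p1, p2):
    """Port refinement: every value p1 can assume is allowed by p2."""
    return p1 <= p2                        # set inclusion

def silimpar(par1, par2):
    """Array refinement: each port of par1 refines SOME port of par2."""
    assert len(par1) == len(par2)          # the same_size side condition
    return all(any(silimp(p, q) for q in par2) for p in par1)

# The situation of Figure 6.2: q11, q12, q13 each refine q22 alone.
q1 = [{1}, {2}, {3}]
q2 = [{0}, {1, 2, 3}, {9}]
assert silimpar(q1, q2)                    # array refinement holds
assert not silimp(q1[0], q2[0])            # but not port-by-port
```

The two assertions show concretely why array refinement does not imply refinement between corresponding individual ports.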
The reason for underconstraining the definition of port array refinement is to allow refinements between graphs that may have different numbers of input and output ports. We can thus allow behavioral refinement without overconstraining the structures of the graphs.

silimp(q11, q22) AND silimp(q12, q22) AND silimp(q13, q22)
silimpar(q1, q2)

Figure 6.2: Example: array refinement does not imply every individual port refinement.

The properties of reflexivity and transitivity that have to be satisfied by the refinement relation on port arrays are similar to those satisfied by the refinement relation on ports:

silimpar_refl_th: THEOREM
  silimpar(par, par)

silimpar_trans_ax: AXIOM
  silimpar(par1, par2) AND silimpar(par2, par3) IMPLIES
    silimpar(par1, par3)

The equivalence of SIL graphs, sileq, is defined by adding the symmetry property to the refinement relations defined above:

sileq(p1, p2) = silimp(p1, p2) AND silimp(p2, p1)

sileqar(par1, par2) = silimpar(par1, par2) AND silimpar(par2, par1)

A data-flow edge connecting two ports modifies the behavior of the sink in accordance with the other data-flow edges connecting to the same sink. If a port is the sink of multiple data-flow edges, then the behavior of the sink port is determined by an ordering of the source ports. Such a port is called a join. The sequence edges in a SIL graph indicate such an ordering. However, since the ordering can also be affected by the behavior of a conditional node, we need a general mechanism to specify the ordering. We model this ordering by associating weights with the data-flow edges, rather than with the source ports. Introducing weights to represent sequence edges also permits a clear separation of structure from behavior: whereas a sequence edge is a structural entity, a weight is a behavioral entity that can be derived not only from sequence edges, but also from the behavior of a conditional node. We first introduce weight as an uninterpreted type. A function w on ports would return a weight, while
A function w on ports would return a weight, whilea function war on arrays of ports would return a weight:weight: TYPEw: [port,port -> weight]war: [parray,parray -> weight]The ordering is used to determine the behavior of a join. This means that we needto compare the weights on the data-flow edges that form a join. The weights on data-flow edges that do not form a join need not be compared. However, the definitionof SIL specifies that no two data-flow edges communicate tokens simultaneouslyinto a join, and no two weights on the edges forming a join can be equal. Thissuggests that we need a reflexive, transitive, and antisymmetric ordering relation onweights: such a relation is called partial order. We define a partial ordering relation<= on weights, and assert the fact that the weights on a set of data-flow edgesare linearly ordered if and only if the associated data-flow edges form a join. Since115<: pred C tweight ,weight)]partial_order (<=)dfe_w_ax: AXIOMFORALL (pO:port),(pl:port),(p2:port):NOT (p0 = p1)IMPLIESdfe(pO,p2) AND dfe(pl,p2)IMPLIES((w(pO,p2) < w(pl,p2) ORw(pl,p2) < w(pO,p2)) ANDw(pO,p2) / w(pl,p2))Table 6.1: Using weights to enforce linear ordering of data-flow edges forming a join:PVS; see Figure 6.3.p0w(pO,p2) <w (pl,p2)df—-- P2 ORw(p l,p2) < w(pO,p2)& w(pl,p2)p1Figure 6.3: Using weights for ordering data-flow edges; see Table 6.1.we do not compare the weights on edges that do not form a join, the weights arepartially ordered on the set of all data-flow edges in the graph. 
We give the PVS specification of this property in Table 6.1 and illustrate it in Figure 6.3.

We describe the property that the behavior of a join depends on the ordering of the data-flow edges by comparing weights on the edges flowing into the join port. The greater the weight on a data-flow edge, the later the token is communicated through it. We state, in the axiom shown in Table 6.2, the property that the join port is a refinement (an implementation) of the source whose associated data-flow edge has the maximum weight:

join_ax: AXIOM
  FORALL p1, p2:
    (FORALL p: w(p, p2) <= w(p1, p2)) IMPLIES
      silimp(p1, p2)

Table 6.2: Using weights to determine join behavior; see Figure 6.4.

w(p1, p2) is the maximum IMPLIES silimp(p1, p2)

Figure 6.4: Using weights to determine join behavior; see Table 6.2.

It should be noted that we do not impose equivalence, sileq, a relation stronger than refinement, silimp. This gives the freedom to connect a port p1 to p2 when the set of data values allowed by p1 is always a subset of the set of data values allowed by p2. The property is shown in Figure 6.4.

We still have to capture the notion of the behavior of ports connected to the output port of a conditional node. The behavior of the output port of a conditional node, when the condition port holds a false value, is not defined. In the case where a join port is connected to a conditional node, the behavior of the join is solely determined by the edges that propagate well-defined values. This situation is specified by making the weight associated with the data-flow edge emanating out of a conditional node the least of all the weights associated with the other data-flow edges.

cond_bottom_ax: AXIOM
  NOT cond(cn)(condport(cn)) IMPLIES
    FORALL p: dfe(outport(cn), p) IMPLIES
      FORALL (n: node): dfe(outport(n), p) IMPLIES
        w(outport(cn), p) <= w(outport(n), p)

Table 6.3: Weight when the condition on a conditional node is false; see Figure 6.5.
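A minimal executable reading of join_ax and the linear-ordering requirement can be sketched as follows. This is illustrative only: the PVS weight type is uninterpreted, whereas here weights are assumed to be numbers, and the helper name is hypothetical.

```python
def join_value(incoming):
    """incoming: list of (source_token, weight) pairs, one per data-flow
    edge into a join port.  The weights on edges into a join must be
    distinct (hence linearly ordered), and the join behaves as the
    maximum-weight source -- the token communicated last."""
    weights = [wt for _, wt in incoming]
    assert len(set(weights)) == len(weights), "join weights must be distinct"
    token, _ = max(incoming, key=lambda edge: edge[1])
    return token

# A join of two edges: the edge from p1 carries the greater weight,
# so the join takes p1's token.
assert join_value([("token_p0", 1), ("token_p1", 2)]) == "token_p1"
```

The two-edge case of this computation corresponds to the theorem on joins of exactly two data-flow edges proved later in the chapter.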
The other data-flow edges with which the comparison is performed should connect the join port to the output ports of nodes, or of conditional nodes whose conditions are never false. This does not, however, preclude a join port from taking an arbitrary value, because it does not prohibit a graph construction in which the join port is connected exclusively to one or more conditional nodes whose conditions are all false. The property is specified as an axiom in Table 6.3, and illustrated in Figure 6.5.

w(outport(cn), p) is the LEAST

Figure 6.5: Weight when the condition on a conditional node is false; see Table 6.3.

We can derive the behavior due to a data-flow edge whose sink is not the sink of any other data-flow edge. We will call such an edge an exclusive data-flow edge — xdfe, defined in Table 6.4 and shown in Figure 6.6.

xdfe(p1, p2) = dfe(p1, p2) AND
  FORALL p: (p /= p1) IMPLIES NOT dfe(p, p2)

Table 6.4: Absence of join: exclusive data-flow edge; see Figure 6.6.

No other dfe comes into p2, i.e., p2 is not a join.

Figure 6.6: Absence of join: exclusive data-flow edge; see Table 6.4.

We can explicitly define an exclusive data-flow edge relation for arrays of ports, as in Table 6.5:

par, par0: parray

xdfear(par0, (par1: {par | same_size(par, par0)})) =
  FORALL (i | i < size(par0)):
    xdfe(port_array(par0)(i), port_array(par1)(i))

Table 6.5: Array version of exclusive data-flow edge

We can prove the property that an exclusive data-flow edge provides a refinement relation between the source and the sink. However, for this property to hold, we have to impose a restriction on the source port: it has to be an output port of a nonconditional node. If the source is an output port of a conditional node, then a false value on the condition port will produce an undefined value on its output port. However, a data-flow edge transforms the undefined value into an arbitrary value of the appropriate type of the sink.
The undefined value is not communicated by a data-flow edge, because the sink could be an input to another node. In practice, as we pointed out in Chapter 2, an input is required to be a well-defined value, while a generated output may be an undefined value:

xdfe_silimp_th: THEOREM
  FORALL (n1: node), (p2: port):
    xdfe(outport(n1), p2) IMPLIES silimp(outport(n1), p2)

We again point out that the relation between the source and the sink is a refinement rather than an equivalence. This weaker relation leads to more optimization than equivalence would. This issue is discussed further in Chapter 7 as part of the generalizations of transformations.

A useful theorem involving a join of exactly two data-flow edges, shown in Table 6.6, states that the behavior of a join associated with exactly two data-flow edges is equal to the behavior of the port from which the edge with the greater weight emanates.

dfe2_join_th: THEOREM
  (dfe(p1, p3) AND dfe(p2, p3) AND
   (FORALL p0: dfe(p0, p3) IMPLIES ((p0 = p1) OR (p0 = p2))))
  IMPLIES
    IF w(p1, p3) <= w(p2, p3) THEN sileq(p2, p3)
    ELSE sileq(p1, p3)
    ENDIF

Table 6.6: A theorem on a join of exactly two data-flow edges

We postulate that the ordering on edges is preserved by behavioral refinement (and therefore also by equivalence). We express the property in PVS as an axiom in Table 6.7 and show it in Figure 6.7.

po_preserve_ax: AXIOM
  w(p0, p2) <= w(p1, p2) AND
  silimp(p0, p00) AND
  silimp(p1, p11) AND
  silimp(p2, p22) AND
  dfe(p00, p22) AND
  dfe(p11, p22)
  IMPLIES
    w(p00, p22) <= w(p11, p22)

Table 6.7: Order preserved by refinement and optimization; see Figure 6.7.

We can then derive useful extensions of this property of order preservation under behavioral refinement. One useful extension for comparing SIL graphs expresses that the order is preserved under the introduction of an exclusive data-flow edge between an output port of a node and another port. This is shown in Figure 6.8.
The statement of the property is the theorem in Table 6.8.

w(p0, p2) <= w(p1, p2) IMPLIES w(p00, p22) <= w(p11, p22)

Figure 6.7: Order preserved by refinement and optimization; see Table 6.7.

Similarly, we have corresponding postulates and theorems for arrays of ports instead of individual ports. However, we have to make a slight modification when comparing port arrays for inequality: we will interpret the inequality operator /= to mean that the port arrays do not have any port in common. PVS provides such a facility for overloading operators and functions. In comparing behaviors of SIL graphs, we find that the properties expressed using arrays of ports, instead of individual ports, make specifications more succinct and economical.

po_preserve_xdfe_th: THEOREM
  w(p0, p2) <= w(p1, p2) AND
  silimp(p2, outport(n3)) AND
  xdfe(outport(n3), p4) AND
  dfe(p00, p44) AND
  dfe(p11, p44) AND
  silimp(p0, p00) AND
  silimp(p1, p11) AND
  silimp(p4, p44)
  IMPLIES
    w(p00, p44) <= w(p11, p44)

Table 6.8: Order preserved by refinement and exclusive data-flow edge; see Figure 6.8.

Finally, we need a refinement relation for graphs. A graph refines or implements another graph when the data relation of the implementing node is contained in the data relation of the specification node. We call the implementing graph the refinement of the specification node. Instead of describing graph refinement by describing containment of their data relations, we specify the relationship using a higher-level property: when the inputs of the implementation graph are a refinement of the inputs of the specification graph, the outputs of the implementation graph have to be refinements of the outputs of the specification graph. It should be noted that any state information implicit in a SIL graph is encapsulated in the data relations, obviating the need to consider behavior histories rather than a single clock cycle of behavior.
Furthermore, to incorporate hierarchically structured graphs, we extend nodes to have multiple output ports. The PVS specification in Table 6.9 states the property illustrated in Figure 6.9.

w(p0, p2) <= w(p1, p2) IMPLIES w(p00, p44) <= w(p11, p44)

Figure 6.8: Order preserved by refinement and exclusive data-flow edge; see Table 6.8.

refinement_ax: AXIOM
  FORALL (n0: node), (n1: node | same_size(n0, n1)):
    refines(n0, n1) AND silimpar(inports(n0), inports(n1)) IMPLIES
      silimpar(outports(n0), outports(n1))

Table 6.9: Graph refinement: property expressing the relation between outputs and inputs of graphs independent of the underlying behavior; see Figure 6.9.

This allows us to compare output ports, given a relationship among the input ports and the relationship between the nodes. It should be noted that this is a typical example of how we express a property for comparing ports without a detailed representation of the input/output ports and data relations of the nodes.

Figure 6.9: Graph refinement: property expressing the relation between outputs and inputs of graphs independent of the underlying behavior; see Table 6.9.

We also introduce convenient predicates, in Table 6.10, to express that two nodes having the same number of input ports (i.e., of the same_size) have an equates relationship if they are refinements of each other. In model-theoretic terms, the data relations of two nodes in the equates relationship are identical. However, in our axiomatic approach, we obviate the need to refer to data relations.

equates(cn0, (cn1 | same_size(cn1, cn0))) =
  refines(cn0, cn1) AND refines(cn1, cn0)

sks(cn0, cn1) =
  same_size(cn0, cn1) AND equates(cn0, cn1)

Table 6.10: Predicates for expressing the sameness of nodes

6.2.1 Compositionality

The refinement axiom presented in Table 6.9 allows us to give a compositional proof of the correctness of transformations: the refinement of the component subgraphs of a graph G ensures the refinement of G.
This allows us to assert that if a transformation is applied locally to a subgraph g of a graph G, leaving the rest of the graph unchanged, then the graph G undergoes a global refinement transformation.

Thus, we can state the following theorem:

Theorem (Compositionality of Refinement):
Let g1, g2, ..., gn be the component subgraphs of G, and g'1, g'2, ..., g'n the corresponding subgraphs of G'. If every g'i is a refinement of gi, then G' is a refinement of G, i.e.,

(∀ i. refines(g'i, gi)) ⟹ refines(G', G)

Proof:
We carry out the proof by induction on the graph structure. The base case is straightforward: consider any one of the subgraph components gi, whose refinement is given as g'i. The statement that every subgraph component g'i is a refinement of gi holds trivially by the precondition given in the statement of the theorem:

∀ i. refines(g'i, gi)

The induction step, shown in Figure 6.10, consists of showing that a graph G', consisting of subgraphs G'h, G'i and G'j — where the inputs of G'i are connected to the outputs of G'h, and the outputs of G'i are connected to the inputs of G'j — is a refinement of a graph G consisting of subgraphs Gh, Gi and Gj, whose corresponding refinements are G'h, G'i and G'j.
Assuming that the inputs to the graphs are in the silimpar relationship, i.e.,

silimpar(inputs(G'), inputs(G)),

we shall show that the outputs of the graphs are in the silimpar relationship, i.e.,

silimpar(outputs(G'), outputs(G)).

Due to the refinement relationship G'h refines Gh:

refines(G'h, Gh),

and since the inputs to G' and G'h are the same as the inputs to G and Gh, by the refinement axiom of Table 6.9, every output of G'h is also an output of Gh:

silimpar(outputs(G'h), outputs(Gh)).

Since the outputs of G'h and Gh are connected to the inputs of G'i and Gi, every input of G'i is also an input of Gi:

silimpar(inputs(G'i), inputs(Gi)),

which together with the fact that G'i refines Gi:

refines(G'i, Gi),

ensures that every output of G'i is also an output of Gi:

silimpar(outputs(G'i), outputs(Gi)).

By a similar argument, we can deduce that every output of G'j is also an output of Gj:

silimpar(outputs(G'j), outputs(Gj)),

which in turn is equivalent to the statement that every output of G' is also an output of G:

silimpar(outputs(G'), outputs(G)).

refines(G'h, Gh) ∧ refines(G'i, Gi) ∧ refines(G'j, Gj) ⊃ refines(G', G)

Figure 6.10: Compositionality of refinement.

Thus, by the refinement axiom, G' refines G:

refines(G', G).

Q.E.D.

6.3 Axiomatic Specification of Behavior and Refinement: A Synopsis

In this section, we provide a synopsis of the key axioms and theorems presented in this chapter. We first provide a set of notations and definitions, following which we give the axiomatic specification. Let π be the set of ports, and Π be the power set of π. Also, let B = {TRUE, FALSE} denote the Boolean domain.
We define a dependency graph Γ as a tuple <I, O, Int, C, Cond, Drel, Orel, Intrel> where

I ∈ Π is the set of input ports.
O ∈ Π is the set of output ports.
Int ∈ Π is the set of internal ports; these are the ports of the internal nodes that hierarchically constitute the top-level conditional node.
C ∈ Π is the set of conditional ports.
Cond: π → B is the Boolean condition on a port.
Drel: [I, O] → B is the data relation relating inputs and outputs.
Orel: [I, O] → B is the order relation relating inputs and outputs.
Intrel: [Int, Int] → B is the internal data/order relation relating internal ports.

A simple dependency graph Γs, without any Boolean conditional ports, is equivalent to a dependency graph with all the conditions on the conditional ports always being TRUE: i.e., ∀ p ∈ C: Cond(p) = TRUE.

Let ⊑ denote refinement on ports: p1 ⊑ p2 indicates that the set of values port p1 can assume is a subset of the values port p2 can assume. The relation ⊑ is a partial order, i.e., a reflexive, antisymmetric, and transitive relation. We shall denote equivalence between ports by the notation ≈. Let ⊑ also denote refinement on dependency graphs: γ1 ⊑ γ2 indicates that the behavior of γ1 is contained in that of γ2.

The data flow edge between two ports is formalized by dfe, a relation on ports; i.e., dfe(p1,p2) indicates a data flow edge from p1 to p2. A sequence edge is formalized by introducing weights on data flow edges, and providing a partial ordering on the weights. However, weights on the data flow edges forming a join are linearly ordered. Let w(p1,p2) indicate the weight associated with the data flow edge connecting p1 to p2. We are now ready to provide the axioms.

Axiom 6.3.1 (Outputs and inputs are constrained by the data relation)
∀ γ ∈ Γ.
  Cond(C(γ)) ⊃ Drel(γ)(I(γ), O(γ))

Axiom 6.3.2 (Weights on data flow edges are linearly ordered)
∀ p0, p1, p2 ∈ π.
  p0 ≠ p1
  ⊃ (dfe(p0,p2) ∧ dfe(p1,p2))
  ⊃ ((w(p0,p2) ⊑ w(p1,p2) ∨ w(p1,p2) ⊑ w(p0,p2)) ∧ w(p0,p2) ≠ w(p1,p2))

Axiom 6.3.3 (Ordering of weights is preserved by ⊑)
∀ p0, p00, p1, p11, p2, p22 ∈ π.
  (w(p0,p2) ⊑ w(p1,p2) ∧ p0 ⊑ p00 ∧ p1 ⊑ p11 ∧ p2 ⊑ p22
   ∧ dfe(p00,p22) ∧ dfe(p11,p22))
  ⊃ w(p00,p22) ⊑ w(p11,p22)

Axiom 6.3.4 (Effect of FALSE value on the condition port)
∀ γ ∈ Γ, p1 ∈ O(γ).
  ¬Cond(C(γ))
  ⊃ ∀ p ∈ π. dfe(p1,p)
  ⊃ ∀ γs ∈ Γs, p2 ∈ O(γs). dfe(p2,p) ⊃ w(p1,p) ⊑ w(p2,p)

Axiom 6.3.5 (Behavior of a join)
∀ p1, p2 ∈ π.
  (dfe(p1,p2) ∧ ∀ p ∈ π. dfe(p,p2) ⊃ w(p,p2) ⊑ w(p1,p2))
  ⊃ p1 ≈ p2

Axiom 6.3.6 (Refinement of dependency graphs)
∀ γ0, γ1 ∈ Γs, pi0 ∈ I(γ0), pi1 ∈ I(γ1), po0 ∈ O(γ0), po1 ∈ O(γ1).
  γ0 ⊑ γ1 ∧ pi0 ⊑ pi1
  ⊃ po0 ⊑ po1

Definition 6.3.1 (Exclusive data flow edge)
xdfe(p1,p2) =def dfe(p1,p2) ∧ ∀ p. (p ≠ p1) ⊃ ¬dfe(p,p2)

Theorem 6.3.1 (An xdfe enforces equivalence)
∀ γ ∈ Γs, p2 ∈ π. xdfe(O(γ),p2) ⊃ (O(γ) ≈ p2)
Proof: Mechanically verified in PVS.

Theorem 6.3.2 (Behavior of a join of two data flow edges)
∀ p1, p2, p3 ∈ π.
  (dfe(p1,p3) ∧ dfe(p2,p3)
   ∧ ∀ p0 ∈ π. ((p0 ≠ p1) ∧ (p0 ≠ p2)) ⊃ ¬dfe(p0,p3))
  ⊃ IF w(p1,p3) ⊑ w(p2,p3) THEN p2 ≈ p3 ELSE p1 ≈ p3
Proof: Mechanically verified in PVS.

Theorem 6.3.3 (Compositionality of refinement)
Let γ1, γ2, ..., γN be the component subgraphs of Γ, and γ1', ..., γN' be the subgraphs of Γ'. If every γi' is a refinement of γi, then Γ' is a refinement of Γ, i.e.,
  (∀ i. (γi' ⊑ γi)) ⊃ (Γ' ⊑ Γ)
Proof: The proof follows by induction on the graph structure, as presented in Section 6.2.1.

Chapter 7

Specification and Verification of Transformations

The formal model of the SIL graph structure and behavior can be used to specify and verify the correctness of transformations.
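As a concrete illustration of the join behavior formalized in Axiom 6.3.5 and Theorem 6.3.2, the following Python sketch resolves a join by selecting the incoming data-flow edge with the greatest weight. This is illustrative only; the names Edge and resolve_join are our own, not part of SIL or the PVS model.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Edge:
    src: str     # source port name
    dst: str     # destination (join) port name
    weight: int  # weight w(src, dst); linearly ordered at a join

def resolve_join(edges, dst):
    """Return the source port whose value `dst` assumes.

    Mirrors Theorem 6.3.2: among all data-flow edges into `dst`,
    the edge with the greatest weight wins, and its source port
    becomes equivalent to `dst`.
    """
    incoming = [e for e in edges if e.dst == dst]
    if not incoming:
        raise ValueError(f"no data-flow edge into {dst}")
    # Axiom 6.3.2: weights at a join are linearly ordered and pairwise
    # distinct, so max() picks a unique winner.
    return max(incoming, key=lambda e: e.weight).src

edges = [Edge("p1", "p3", 1), Edge("p2", "p3", 2)]
print(resolve_join(edges, "p3"))  # p2: the higher-weight source wins
```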
Here, we present optimization transformations, such as common subexpression elimination and cross-jumping tail-merging. We have verified the correctness of other optimization transformations, and a similar technique can be adopted for verifying the correctness of refinement transformations. We present an overview of specification and verification of transformations in Section 7.1. We explain in detail common subexpression elimination in Section 7.2 and cross-jumping tail-merging in Section 7.3. We briefly mention the specification and verification of other transformations and proofs in Section 7.4, and the generalization and composition of transformations in Section 7.5.1. In Section 7.5.2, we illustrate with an example the usefulness of the axiomatic specification in investigating "what-if" scenarios. Finally, in Section 7.5, we illustrate a new transformation devised in the process of generalization and "what-if" analysis. This transformation can be used for further optimization and refinement, which could not have been achieved by the existing transformations defined in the current synthesis framework.

7.1 Overview

The general method we employ to specify and verify transformations consists of the following steps:

1. Specify the structure of the SIL graph on which the transformation is to be applied. The structure specification could be of graph templates or classes of SIL graphs rather than a particular concrete graph.

2. Assert that the structure of the SIL graph satisfies the preconditions imposed on its structure for applying the transformation. The preconditions would consist of constraints imposed on structural connectivity and ordering through sequence edges.

3. Specify the structure of the SIL graph expected after the transformation is applied.

4.
In the case of verifying refinement, we impose the constraint that the corresponding inputs of the SIL graphs before and after transformation are silimpar; that is, the set of input values to the SIL graph after transformation is a subset of the set of input values to the SIL graph before the transformation. For behavioral equivalence, the constraint is imposed as sileqar: the sets of input values to both graphs are identical.

5. Verify the property that the outputs of the SIL graphs before and after transformation are silimpar; that is, the outputs of the SIL graph after transformation are refinements of the corresponding outputs of the SIL graph before transformation. In the case of a behavior-preserving transformation, the corresponding outputs are verified to be sileqar.

7.2 Common Subexpression Elimination

In this transformation, two nodes of the same kind, which take identical inputs, are merged into one node as shown in Figure 7.1.

Figure 7.1: Common subexpression elimination; see Table 7.1.

We first specify the preconditions imposed on the nodes and the input ports connected to the nodes:

• The nodes must be of the same kind.
• The ports connected to the input ports of one node must be identical to those connected to the input ports of the other node.
• The input ports should not be left dangling: they are required to have an incoming data-flow edge.

For convenience, we will assume that the joins at the input ports of the nodes have been resolved. Such a resolution of the joins would leave exactly one data-flow edge connecting each input port of the nodes. Relaxing that assumption would not change our verification of correctness of the transformation, except for an additional step of resolving the joins before the transformation is applied:

preconds(dot0, (dot1: {dot1 | equates(dot0, dot1)})): boolean =
  % input ports of dot0 and dot1 are connected to identical ports,
  % and there exists at least one such set of ports
  (FORALL (par | is_outportar(par) AND same_size(par, inports(dot0))):
     xdfear(par, inports(dot0)) IFF xdfear(par, inports(dot1))) AND
  (EXISTS (par | is_outportar(par) AND same_size(par, inports(dot0))):
     xdfear(par, inports(dot0)))

We then specify the structure of the graphs before and after applying the transformation. The statement of correctness is asserted as a theorem that, if the inputs to the graph are sileq, then the outputs of the graph are sileq. The theorem is stated in Table 7.1.

CSubE: THEOREM
  FORALL dot0, (dot1 | equates(dot1, dot0)),
         (dot01 | equates(dot01, dot0)):
    ((preconds(dot0, dot1) AND
      % structure before transformation
      (FORALL (par | is_outportar(par) AND same_size(par, inports(dot0))):
         xdfear(par, inports(dot0)) IFF
           (EXISTS (parr | is_outportar(parr) AND
                           same_size(parr, inports(dot0))):
              % ports connecting to dot0 and dot01 are equivalent
              (sileqar(par, parr) AND xdfear(parr, inports(dot01))))))
     IMPLIES
      % corresponding output ports of graphs before and after
      % transformation are equivalent
      (FORALL p1, p2:
         ((xdfe(outport(dot0), p1) OR
           xdfe(outport(dot1), p1)) AND
          xdfe(outport(dot01), p2)) IMPLIES
           sileq(p1, p2)))

Table 7.1: Correctness of common subexpression elimination; see Figure 7.1.

Figure 7.2: Cross-jumping tail-merging: corrected.

7.3 Cross-Jumping Tail-Merging

In the cross-jumping tail-merging transformation, two conditional nodes whose output ports connect to the same sink are checked for being mutually exclusive; that is, the conditions on both of the conditional ports must not be true (or false) at the same time (exactly one of them is true at any time). In such a case, the two nodes can be merged into one unconditional node of the same kind, and the conditions moved to the nodes of the subgraph connecting it. We show this transformation in
We show this transformation inFigure 7.2.In the course of our specification in PVS, we found a mistake in the informal148qOOCFigure 7.3: Cross-jumping tail-merging: incorrectly specified in informal document.specification of the transformation. We show the erroneous transformation that wasgiven in the original informal specification in Figure 7.3.However, the same mistake was discovered later by inspection of the informalspecification [K1o94] independently, without the aid of our formalization. The errorthat occurred in the original informal specification was the incorrect placing of theconditions on the nodes. With such a placing, the correctness of the transformationdepends on the ordering of the output ports of dotO and doti. When condition cis true, the values at qi and so ri are arbitrary, while the values at qO and rO arewell-defined. Thus if an ordering is imposed such that the port ppO gets the valueat ri, then that value would be arbitrary. However, in the transformed figure, thecondition c being true results in an ordering such that rOl gets the value of qOO,and vice-versa when c is false. Thus, the transformation would not be correctnessqi 1149preserving.The placing of the conditions as given in Figure 7.3 is leads to violation of preconditions - because it prohibits comparing two ports joined exclusively to conditionalnodes — that is, xdfe(pJ,p) AND is_outport_of_conditionalnode(pl) does not ensuresileq(pl,p2). We found this violation at the very early stage of stating the theoremcorresponding to the transformation. Further, we could relax the mutual exclusiveness constraint. We introduce a weak assumption that the ordering of the data-flowedges coming out of the nodes dot0 and dotl in the original graph is the same asthe ordering of the data-flow edges coming into the node dot0l in the optimizedgraph. We have suitably modified, generalized, and verified the transformation.The generalized transformation is shown in Figure 7.4. 
The PVS specification of the preconditions is shown in Table 7.2, and the theorem statement is shown in Table 7.3.

w0 < w1 IFF war0 < war1

Figure 7.4: Cross-jumping tail-merging: generalized and verified; see Table 7.3.

7.4 Other Transformations and Proofs

We have specified and verified other transformations, such as copy propagation, constant propagation, common subexpression insertion, commutativity, associativity, distributivity and strength reduction, described by Engelen and others [EMH+93]. In general, the proofs of transformations proceed by rewriting, using axioms and proved theorems, and finally simplifying to a set of Boolean expressions containing only relations between ports and port arrays. At this final stage, the BDD simplifier in PVS is used to determine that the conjunction of Boolean expressions is indeed true. We show the experimental results for verifying the various transformations in Table 7.4. The proofs are semi-automatic. A few intermediate steps, such as the instantiation of existentially quantified variables, need manual interaction, while other steps are automatic. Examples of inference rules [SOR93a] used in the proofs are skolem! for removing universal quantifiers, assert to apply arithmetic decision procedures and rewriting, bddsimp for Boolean reasoning using BDDs, and inst? for heuristic instantiation of existential quantifiers. The PVS decision procedures for rewriting, and for arithmetic and Boolean reasoning, use a number of lower-level inference rules that are hidden from the user. Examples of proof transcripts
Examples of proof transcriptsfor common subexpression elimination and cross-jumping tail-merging are given inAppendix C.1527.5 Devising New TransformationsHaving a mechanized formal approach such as ours, as opposed to approaches thatare informal or formal approaches not mechanized has an advantage in the aspectof modifying specifications- the experiments of modifying specifications could beperformed in a framework, that allows one to rapidly verify that the modificationsdo not violate the correctness properties. In this section, we show how we coulddevelop new transformations using our mechanized formal approach.7.5.1 Generalization and Composition of TransformationsWe have seen earlier, in Chapter 7.3, that the specification has assisted in generalizing the transformation. In addition, we can make other observations on using153our work to generalize many transformations. For example, by replacing the equivalence relation sileq by silimp, we find that the optimization transformations canbe generalized as refinement transformations, and the preconditions imposed by thetransformations could be relaxed.The general technique to investigate composition of transformations is to determine that the preconditions imposed by one transformation are satisfied by another transformation. This also applies in the case where a transformation couldbe applied on one subgraph, while another could be applied on a disjoint subgraph,without having to take into account the effect of one transformation on the preconditions imposed by another. For example, common subexpression elimination(CsubE) produces a subgraph with an output port that is a distribute. Whereas,copy propagation (Cppy Prop) [EMH93] can be applied only to a subgraph thatdoes not have a distribute output port. 
We can determine in our specification that if we perform CSubE, the conjunction of the subgraph relation thus obtained and the preconditions for performing Copy Prop on the same subgraph is false.

NOT (w0 < w1 IFF war0 < war1)

Figure 7.5: Cross-jumping tail-merging: inapplicable when two nodes are merged into one.

7.5.2 Investigations into "What-if?" Scenarios

One of the benefits of our formalism is that it allows us to provide answers to questions on the applicability of transformations, and to provide formal justifications that support the answers. A question that comes up quite often in a transformational design process is whether a transformation that has been applied on a graph could still be applied with small changes in the graph. We illustrate this point in the context of a situation that arose during the transformational design of a direction detector [Mid94a]. It involved a variation of the cross-jumping tail-merging transformation. In Figure 7.4, if we merge the nodes dot0 and dot1 in the graph before applying the transformation, the precondition for the transformation would no longer be true. This is shown in Figure 7.5.

Figure 7.6: Further optimization impossible using existing transformations.

Since the nodes are merged, w0 = w1, while, due to the ordering imposed by the join, either war0 < war1 or war1 < war0. Thus the equivalence relation w0 < w1 IFF war0 < war1 no longer holds, and so the precondition for the application of the transformation is violated. This precludes the application of the transformation on the modified graph.

We argued above that cross-jumping tail-merging could not be applied in cases as shown in Figure 7.5. However, we would like to have such a transformation for further optimization in cases as shown in Figure 7.6.
We can view this as a transformation derived from the process of generalizing cross-jumping tail-merging and common subexpression elimination.

Data relations of n0, n1, n01 = R1
Data relations of dot0, dot1, dot01 = R2
pp0 = IF c THEN R2(R1(a,b)) ELSE R2(R1(a,b))    pp00 = R2(R1(a,b))

Figure 7.7: Inapplicability of cross-jumping tail-merging after common subexpression elimination: due to precondition restrictions.

In this transformation, two identical nodes with mutually exclusive conditions (i.e., exactly one node will be active at any time) have inputs from identical nodes, which in turn have identical inputs. At first, it appears that we could apply a combination of common subexpression elimination and cross-jumping tail-merging. If we apply common subexpression elimination first, to obtain a single node whose output is connected to the mutually exclusive nodes, then we cannot apply cross-jumping tail-merging, as shown in Figure 7.7. On the other hand, if we apply cross-jumping tail-merging first, the outputs of the other pair of identical nodes form a join at the input of the single node obtained. In this case, we cannot apply common subexpression elimination, as shown in Figure 7.8.
We could have arrived at the transformation in an ad hoc mannersimply by examining the semantics of a conditional expression. However, we devisedthe transformation after examining by doing a “what-if” analysis formally in theproblem of composing two transformations. This suggests that our formal modelcan be used to devise new transformations in a methodical manner.pp0158a data relation of node ni = Ridata relation of node n2 = R2-IF c THEN p1 = R1(a,b)ELSE p1 R2(a,b)refines(n2,ni) ANDrefines(n2,nl) IMPLIES R2 c Ripl =Rl(a,b)Figure 7.9: A simple new transformation: obvious, post-facto.159sks(cnl:cnode,cn2:cnode) = equates(cnl,cn2) AND same_size(cni,cn2)preconds(dotO,(doti :{dot Isks (dot ,dotO)}),(dotOl : {dot I sks(dot , dotO)}),(parO : {parl is_outportar(par)&same_size(par, inports(dotO))}),(pan :{panl is_outportar(par)&same_size(par, inports(dotO))}),(parOO : {panl is_outportar (par)&same_size (par, inports(dotO))}),(parli : {panlis_outportar(par)&sarne_size(par,inports(dotO))}),ppo ,ppOO)h connectivity at the input ports of SIL graph before transformationxdfear (parO, inports(dotO)) AND xdfear(pani, inports(doti)) AND(w(outport(dotO),ppO) < w(outport(dotl),ppO) 1FFwar(parOO,inports(dotOi)) < war(parii,inports(dotOl))) AND‘h connectivity at the output ports of SIL graph before transformationdfe(outport(dotO),ppO) AND dfe(outport(doti),ppO) AND(FORALL pp: ((pp 1= outport(dotO)) OR (pp 1= outport(doti)))IMPLIES NOT dfe(pp,ppO)) ANDh connectivity at the input ports of SIL graph after transformationdfear(parOO,inports(dotOi)) AND dfear(parii,inports(dotOl)) AND(FORALL (par Isize(par)=size(parOO)):(par /= parOO AND par / parli)IMPLIES NOT dfear(par,inports(dotOl))) AND‘h connectivity at the output ports of SIL graph after transformationxdfe(outport(dotOi) ,ppOO) AND‘h corresponding input ports of graph before and after transformationare equivalentsileqar(parO,parOO) AND sileqar(parl,panii)Table 7.2: PVS specification of preconditions for cross-jumping 
tail-merging.

CjtM: THEOREM
  FORALL (dot0: cnode):
    LET
      sks = LAMBDA (cn0: cnode), (cn1: cnode):
              same_size(cn0, cn1) AND equates(cn0, cn1),
      sk  = LAMBDA (n: cnode): sks(n, dot0),
      ios = LAMBDA par: is_outportar(par) & same_size(par, inports(dot0))
    IN
      FORALL (dot1 | sk(dot1)), (dot01 | sk(dot01)),
             (par0 | ios(par0)), (par1 | ios(par1)),
             (par00 | ios(par00)), (par11 | ios(par11)),
             pp0, pp00:
        % structure and preconditions on graphs before and after transformation
        preconds(dot0, dot1, dot01, par0, par1, par00, par11, pp0, pp00)
          IMPLIES
        % corresponding output ports are equivalent
        sileq(pp0, pp00)

Table 7.3: Correctness of cross-jumping tail-merging; see Figure 7.4.

Transformation                    | Run time in seconds
----------------------------------|--------------------
Common subexpression elimination  | 30
Common subexpression insertion    | 25
Cross-jumping tail-merging        | 56
Copy propagation                  | 10
Constant propagation              |  2
Strength reduction                |  2
Commutativity                     |  3
Associativity                     |  3
Distributivity                    |  3
Retiming                          |  3
Self-inverse                      |  1

Table 7.4: Experimental results for proofs of various transformations on a Sparc-20 with 32 MB memory

Chapter 8

Applications to Other Domains

In this chapter, we discuss how we could apply our formal approach to investigate formalisms used in modeling real-time systems, optimizing compilers, data-flow languages, and Structured Analysis and Design (SA/SD). We give a high-level general methodology of application for each of the formalisms. Section 8.1 describes the graph-based formalism used in optimizing compilers. Data-flow languages that have data-flow graphs as their foundational basis are the subject of Section 8.2. In Section 8.3, we show how we can capture the data-flow diagrams used in the SA/SD approach in our formalism. In Section 8.4, we discuss briefly the constraint net formalism used in modeling real-time systems. Finally, in Section 8.5, we give a brief discussion of the separation of control flow and data flow.

X := 1
Y := 2
IF (X = 1) THEN Y := 3

Table 8.1: A typical program involving a branch.
See Figure 8.1.

    MVI #1, X
    MVI #2, Y
    CMP X, #1
    JZ Label
Label:
    MVI #3, Y

Table 8.2: A typical program involving a branch: assembly language version of the program in Table 8.1.

8.1 Optimizing Compiler Transformations

There are a variety of transformations described in the literature [AC72, EMH+93] for optimization and refinement in optimizing compilers. A notion similar to dependency graphs, known as dependence flow graphs [PBJ+91], is used as an intermediate representation of a program during the process of compilation. Like dependency graphs, dependence flow graphs also combine control flow and data flow into one graph. We consider a typical program [PBJ+91] given in Table 8.1, whose low-level assembly version is given in Table 8.2. The associated dependence flow graph representation is given in Figure 8.1.

Figure 8.1: A typical dependence flow graph involving a branch. See Table 8.1.
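Before examining the dependence flow graph in detail, the branch of Table 8.1 can be mimicked under SIL's join semantics, where a FALSE condition demotes the weight of the conditional edge (Axiom 6.3.4). The sketch below is our own modeling, not the cited intermediate representation:

```python
def branch_program(x):
    # Two writes to Y form a join: the unconditional Y := 2 and the
    # conditional Y := 3 guarded by X = 1. The highest-weight incoming
    # edge wins; a FALSE condition effectively demotes the conditional
    # edge, so only a TRUE guard contributes its (higher-weight) value.
    sources = [(1, 2)]          # (weight, value) for Y := 2
    if x == 1:
        sources.append((2, 3))  # Y := 3 keeps its higher weight
    return max(sources)[1]

print(branch_program(1))  # 3
print(branch_program(0))  # 2
```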
A number of transformations such as commonsubexpression-elimination and cross-jumping tail-merging, whose specification andverification have been discussed in Chapter 7 have direct origins in optimizing compilers.166Figure 8.2: SIL dependency graph for program. See Table 8.1YY1678.2 Data-flow LanguagesData-flow languages [WP94] are based on the data-flow graph paradigm. In thisparadigm, a program is modeled as a set of operator nodes, with input and outputports. The ports are connected by edges that carry data. Because of the absenceof control specification, every operator node executes whenever the data is availableon its input ports. Thus, it models concurrent execution implicitly. The data-flowparadigm is based on a single-assignment property [Cha79, WP94]. The single-assignment property means that the variables are assigned exactly once. Becausethe execution of statements need not be in the lexical order of the written program,a statement gets executed as soon as all the variables in the statement are welldefined. Thus, the single-assignment property ensures that the execution of theprogram is well-defined.The main characteristics of data-flow languages are the following:168• Absence of side effects.• Single assignment semantics.• Inherent concurrency.• Absence of sequencing.Because of implicit concurrency and transparent control flow, deadlocks happenless often. Lucid [AW77, WP94] is one of the early data-flow languages designedto be amenable to program transformations and formal verification. Although itdeveloped as a simple non-procedural language with temporal logic operators toprove properties of programs, it has grown to be an intensional multidimensionalprogramming language with applications in parallel computing [AFJWed]. Lustre [HFB93], a synchronous declarative language based on the data-flow paradigmhas been used to specify synchronous hardware and real-time systems. Programswritten in Lustre are compiled into a sequential code. 
Its control structure is a finite automaton obtained by exhaustive simulation of the Boolean variables appearing in the program. Id [WP94], another data-flow language developed for programming a data-flow computer, has been designed as a block-structured, expression-oriented, single-assignment language avoiding notions of sequential control. SISAL [WP94] is a stream-oriented, single-assignment, applicative language. An intermediate dependency graph called IF1 [WP94] is used to represent SISAL programs between parsing and machine-code generation. IF1 is an acyclic graph representation with four components: operator nodes, edges denoting data flow, types denoting data types on edges, and graph boundaries surrounding groups of nodes and edges. Optimization and refinement transformations, such as common subexpression elimination and the introduction of explicit memory-management operations, are performed to obtain another IF1 graph at a lower abstraction level.

8.3 Structured Analysis and Design

Structured methods in software design are used to capture and analyze the requirements of the system under consideration. A majority of the methods use informal graphical notations for requirements capture. One of the most widely used methods is the Structured Analysis/Design (SA/SD) approach [DeM79, Won94] for software analysis. In this approach, a combination of data flow diagrams (DFD) and control specifications such as state transition diagrams (STD) are used as graphical notations, among others. Thus, there is a clear separation of data flow and control flow. There is also an associated data dictionary that contains more information on the data described in the data flow diagram. There have been some efforts in providing a formal semantics for DFDs in VDM [LPT93]. However, they do not address control specifications and transformations on DFDs.

We first give a brief overview of DFDs and STDs in Section 8.3.1. In Section 8.3.2, we present a transformation of DFD to SIL.
The SIL specification could then be represented in PVS using the formalism described in Chapter 5. Section 8.3.3 describes this process with an example. We present a brief discussion on the separation of control from data in Section 8.5.

8.3.1 DFD and STD

DFDs are used to describe the flow of data through a network of processes that act on the data. The building blocks which form a DFD, shown in Figure 8.3, are the following:

Figure 8.3: Basic building blocks of a DFD

• Data transformers: system processes, represented as nodes, perform an action on inputs and produce outputs.
• Data flow edges: directed edges between data transformers representing the flow of data.
• Control specification bar: a vertical bar representing a control specification such as an STD. It is used as a link between the data specification and the control specification.
• Control flow edges: directed edges carrying discrete-valued data into data transformers. Control flow data are generated by control specifications such as STDs.
• Data stores: used as memory elements for the storage of data.
• External processes: processes that are part of the environment, used to indicate inputs to the system.

STDs represent the states and state transitions of the system. A transition is labelled by the event that enables it and the action triggered by it. A transition could trigger an action that can either be setting the value of a control signal in the DFD or activating a process. We show a simple STD in Figure 8.3. The control signals generated by an STD are communicated to the control flow edges of the DFD through a vertical bar that represents the STD.

8.3.2 Transformation of DFD to SIL

We present a transformation from DFDs into SIL as shown in Figure 8.4. In SIL, unlike in SA/SD, control flow is integrated with data flow.
So, we can represent both the control information present in the STD specification and the data flow description of DFDs in SIL. We represent both data transformers and data stores as (conditional) nodes in SIL. Data stores are represented as nodes whose operation is to produce an output equal to the input they were given some time in the past. Such a node is called a delay node in SIL. External processes are represented by input nodes. Every node except the input nodes has specific input, output and conditional ports. Input nodes have only output ports. Data flow edges of DFDs are transformed into data flow edges in SIL, except that they connect output ports to input ports of nodes rather than nodes, as in DFDs. Control flow edges are data flow edges in SIL that connect output ports to conditional ports. They communicate only binary data values. A discrete-valued datum could be encoded in terms of binary data values. Additional sequentiality and additional control information present in an STD are captured by the data flow edges and the notion of join in SIL. Once this transformation is done, we can use our formalization of SIL in PVS to specify systems that have been specified and analyzed using the SA/SD approach.

Figure 8.4: DFD to SIL transformation

8.3.3 Example Illustration

In order to describe how our formalism can address the specification problem in SA/SD, we consider an example often used to illustrate the SA/SD approach [Tea91]. The example we have chosen is based on the cruise control of an automobile. A specification in the SA/SD approach is shown using the DFD in Figure 8.5 and the STD in Figure 8.6.
[Figure 8.5: DFD for cruise control of an automobile. Processes capture_speed, control_speed and increase_speed; the Desired Speed data store; input Actual_speed; control signals Cruise, Resume, Accelerate_on, Accelerate_off and Deactivate; outputs Throttle_Setting_Increase and Throttle_Setting_Decrease.]

The capture_speed process captures the current value of the actual speed, and stores it as the desired speed in the data store. The control_speed process computes the difference between the actual speed and the desired speed, and increases, decreases or does not change the throttle_setting. The increase_speed process retrieves the desired speed, and updates it by increasing it by a predetermined amount. The control signals are generated as given in the STD.

[Figure 8.6: STD for cruise control of an automobile. States include Idle, Ready, Accelerating and Cruise_Controller_Is_In_Control; transitions include Cruise / trigger "Capture_Speed", enable "Control_Speed"; Deactivate / kill "Control_Speed"; Resume / enable "Control_Speed"; Accelerate_On / enable "Increase_Speed"; Accelerate_Off / kill "Increase_Speed"; Deactivate / kill "Control_Speed", kill "Increase_Speed".]

8.3.4 Transformation to SIL

We transform the SA/SD specification into the data flow graph formalism of SIL as shown in Figure 8.7. An important point to note is the join at the input port of the Desired Speed node. Depending on which of the control signals, cruise_control or deactivate & accelerate_on/off, is true, the input port gets the value from the output of the node whose condition is true. This incorporates the specification expressed by the STD.

[Figure 8.7: SIL for cruise control of an automobile. Inputs: Actual Speed, Cruise, Resume & Deactivate, Deactivate & Accelerate on/off; output: Throttle Setting.]

8.3.5 Specification in PVS

We use the axiomatic specification of SIL presented in Chapter 5 to represent the SIL description of cruise control in PVS.
We first specify the data relations of each of the nodes as uninterpreted constants of proper types as follows:

control_speed: [port, port -> port]
capture_speed, desired_speed, increase_speed: [port -> port]

We specify each of the nodes by instantiating a general conditional node with the required number of input/output/conditional ports, a data relation and an order relation.

capture_speed_node(cn:cnode): cnode =
  cn WITH [inports := inports(cn) WITH [size := 1]]
     WITH [outport := outport(cn)]
     WITH [condport := condport(cn)]
     WITH [datarel := LAMBDA (par:parray), (p:port):
             p = capture_speed(port_array(par)(0))]
     WITH [orderrel := LAMBDA (par:parray), (p:port):
             p < port_array(par)(0)]

control_speed_node(cn:cnode): cnode =
  cn WITH [inports := inports(cn) WITH [size := 2]]
     WITH [outport := outport(cn)]
     WITH [condport := condport(cn)]
     WITH [datarel := LAMBDA (par:parray), (p:port):
             p = control_speed(port_array(par)(0),
                               port_array(par)(1))]
     WITH [orderrel := LAMBDA (par:parray), (p:port):
             p < port_array(par)(0)]

increase_speed_node(cn:cnode): cnode =
  cn WITH [inports := inports(cn) WITH [size := 1]]
     WITH [outport := outport(cn)]
     WITH [condport := condport(cn)]
     WITH [datarel := LAMBDA (par:parray), (p:port):
             p = increase_speed(port_array(par)(0))]
     WITH [orderrel := LAMBDA (par:parray), (p:port):
             p < port_array(par)(0)]

desired_speed_node(cn:cnode): cnode =
  cn WITH [inports := inports(cn) WITH [size := 1]]
     WITH [outport := outport(cn)]
     WITH [condport := condport(cn)]
     WITH [datarel := LAMBDA (par:parray), (p:port):
             p = desired_speed(port_array(par)(0))]
     WITH [orderrel :=
LAMBDA (par:parray), (p:port):
             p < port_array(par)(0)]

The connectivity of the graph is specified using data flow edges expressed by the PVS relation sileq.

cruise_control(cruise, actual_speed, deactivate, accelerator,
               resume, throttle_setting, cn1, cn2, cn3, cn4) =
  sileq(inports(capture_speed(cn1))(0), actual_speed) AND
  sileq(condport(capture_speed(cn1)), cruise) AND
  sileq(condport(increase_speed(cn3)), deactivate&accelerate) AND
  sileq(inport(control_speed(cn4))(0), actual_speed) AND
  sileq(outport(control_speed(cn4)), throttle_setting) AND
  sileq(outport(capture_speed(cn1)), inport(desired_speed(cn2))) AND
  sileq(outport(desired_speed(cn2)), inport(increase_speed(cn3))) AND
  sileq(outport(increase_speed(cn3)), inport(desired_speed(cn2))) AND
  sileq(outport(desired_speed(cn2)), inport(control_speed(cn4))(1))

8.4 Constraint nets

A constraint net [ZM92, Zha94, ZM93, ZM94, ZM95] is a network of nodes representing functions and locations, and edges representing the flow of data between input/output ports of the nodes. Thus, a constraint net represents a set of constraints on input/output traces. Constraint nets are used to model both discrete and continuous behavior of hybrid/real-time systems. In order to relate constraint nets to the dependency graph formalism, we depart slightly from the description given by Zhang and Mackworth [ZM93, ZM95]. A constraint net is a five-tuple CN = <L, I, O, T, E>, where

• L is a finite set of locations or stores.

• I is a finite set of input ports.

• O is a finite set of output ports.

• T: Powerset(I) -> O is a function label with a special name called a transduction label.

• E ⊆ L × (I ∪ O) is a set of edges between locations and ports.

where Powerset(I) is the set of all subsets of I.

A transduction associated with a node is a causal function that maps traces on input ports to a trace on an output port. The traces could be continuous or discrete data streams. A constraint net can be viewed as a set of equations whose least solution gives its meaning.
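To make the least-solution reading concrete, the following small Python sketch iterates the equations of a one-store net that accumulates a velocity trace into a distance trace. It is illustrative only: the names, the finite horizon, and the discrete time step are our assumptions, not part of the constraint net formalism.

```python
# Sketch: a discrete-time constraint net viewed as equations on traces.
# A location holds a trace; a transduction maps input traces to an output
# trace. The net's meaning is the least solution of its equations,
# computed here by iterating them until the traces stop changing.
# (Hypothetical names and a fixed finite horizon, chosen for illustration.)

N = 5                      # finite horizon for the illustration
DT = 1.0                   # time step
v = [2.0] * N              # input trace: constant velocity
x0 = 0.0                   # initial distance

def step(d):
    """One application of the net's equations:
    d(0) = x0, d(t) = d(t-1) + v(t-1)*DT (a unit-delay store)."""
    return [x0] + [d[t - 1] + v[t - 1] * DT for t in range(1, N)]

# Least-fixpoint iteration from the bottom approximation.
d = [0.0] * N
while (nxt := step(d)) != d:
    d = nxt

print(d)  # distance trace: [0.0, 2.0, 4.0, 6.0, 8.0]
```

Each iteration propagates the initial value one more step through the delay, so the loop stabilizes after N applications of the equations.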
A constraint net example that models the computation of distance as an integration of velocity over time is shown in Figure 8.8. The nodes drawn as boxes indicate transductions, while circles indicate locations. The value x0 is the initial value of the integral (i.e., the distance at the initial time). The transduction v_t is the velocity at time t.

[Figure 8.8: Constraint net for modeling integral computation over continuous time]

8.4.1 Axiomatization of Constraint Nets

A constraint net, with transductions restricted to simple time-independent functions on traces, can be viewed as a typical dependency graph without nondeterminism. However, our formalization of dependency graphs allows general transductions to be specified by instantiating the uninterpreted type for an input port to be a trace of data, and having the type of an output port be data, while data itself remains an uninterpreted type:

data: TYPE            % uninterpreted
trace: TYPE = sequence[data]
input_port: TYPE = trace
output_port: TYPE = data
time: TYPE = real
transduction: TYPE = [[input_port, time] -> output_port]

The causality in dependency graphs is specified by sequence edges and control specification in dependency graphs. The locations have to be viewed as a special kind of node called a store node. Thus, a constraint net can be viewed as a dependency graph, where the transductions would be associated with data relations that could be functions involving time as an additional parameter. Since we have left the data type uninterpreted in our axiomatization of refinement (silimp), we have the freedom to extend it to cover refinement on higher-order relations, and thus refinements for constraint nets. Such extensions would be difficult with an operational or denotational semantic framework. There are, however, some restrictions to be imposed to make further use of our axiomatization of dependency graphs. For example, we have to restrict the number of output ports to one for every node, and joins should not be allowed.
Thus, the axiomatic specification we have provided for dependency graphs could be used, with minor modifications, to specify constraint nets.

8.5 Separation of Control Flow from Data Flow

As we have observed in earlier sections, the control flow description is separated from the data flow description in SA/SD, while control and data form an integrated specification in SIL. Both approaches have advantages and disadvantages. As an advantage, if the control flow is decoupled from the data flow, a data flow could then be associated with a choice of control flow specification. The disadvantage of this approach is the difficulty of applying transformations: one would have to have separate sets of transformations for each, and a precise definition of the interface between control flow and data flow. The advantage of having an integrated control and data flow specification, as in SIL, is the ability to precisely define an integrated set of optimization and refinement transformations that involve both control flow and data flow.

Chapter 9

Discussion and Conclusions

One of the goals of a transformational design approach is to achieve designs that are correct by construction. We recall from Chapter 1 that a transformation is correct if the set of behaviors allowed by the implementation derived from the transformation is a subset of the behaviors permitted by the original specification. In this work, we have attempted to help accomplish the goal of correctness by construction by verifying the correctness of transformations used in dependency graph formalisms [Raj95a]. However, we have to note the distinction between the transformations as documented and intended by the informal specification and the transformations actually implemented in software. We explain this distinction in Section 9.1. In Section 9.2, we briefly present our experience in developing a formal specification from an informal document. We highlight the advantages of an axiomatic approach in Section 9.3.
Finally, Section 9.4 summarizes the conclusions.

9.1 Intent versus Implementation

Our verification has addressed the transformations as documented and intended by the informal specification, and not the transformations actually implemented in software. One has to determine manually whether the implemented transformations do, in fact, carry out the intended transformations that have been verified. In general, there is no practical mechanized method to check whether software programs (such as those implemented in C) satisfy their specifications. But in order to check the correctness of the implemented transformations, one has to first ensure that the intended transformations as documented are correct.

The correctness problem of the implemented transformations could be partly tackled in another manner. We can compare the dependency graph that is taken as input by the software with the dependency graph that is output by the software after applying the transformation. However, this would entail developing concrete behavioral models of the dependency graphs, and a concrete behavioral model basis would make the applicability of the formalization more restricted.

9.2 From Informal to Formal Specification

The most difficult part of this investigation has been developing a proper formal specification from informal specifications. Even though the informal specifications were well documented, creating a formal specification required expressing informal ideas such as behavior and mutual exclusiveness in mathematically precise terms. One particular detail in this respect is the following: the informal document describes the value of a conditional node as undefined when the condition on its condition port is false. Introducing a notion of undefined value would require a special entity to be introduced for every data type. Further, we would also have to associate a meaning with such special entities.
To avoid specification difficulties in stating what undefined means, we chose to specify how an undefined value affects the overall behavior of a subgraph in which such a node is embedded. Such choices have to be made with care towards ease of specification and verification.

One of the first tasks that aids the specification process is the choice of abstraction level: how much of the detail present in the informal document should the specification represent? The choice could be based on how the formal specification has to be verified. For example, we chose not to represent behavior at all: we could express behavioral equivalence (refinement) by an equivalence (refinement) relation, and express the properties that needed to be satisfied by the SIL graphs.

Another important issue in developing a formal specification from an informal document is deciding on data structures to represent entities specified informally. It is desirable to have a formal specification that very closely resembles the informal document. This is essential for mapping a formal specification back to its informal document. It is essential also for understanding a formal specification, and for tracing errors that have been found in the specification back to its informal representation. We can highlight one such data structure that PVS allows us to use: the record type. As we have seen in Table 5.2, it permits us to package all the fields of a conditional node cn, and then access an individual field such as inports of cn by inports(cn). This syntax closely resembles the informal specification. Besides providing a simple syntax, the record type also allows the type of one field to depend on the type of another field. We have seen such dependent typing in our definition of arrays of ports, parray, in Chapter 5. Alternatively, we could have used Abstract Data Types (ADTs) in our formal specification.
This would have the advantage of encapsulating well-formedness of the structure of dependency graphs within the behavior specification. However, this would mean imposing an abstract syntax structure on the behavior. Since our investigation primarily involves transformations which transform structure, it would be difficult to work with a specification that has an integrated structure and behavior.

The properties we have tabled in our formalism could form the basis of studying how we could formulate a composite behavior from smaller behavioral relations. In an earlier work at the register-transfer level [KvdW93], an automatic procedure for functional verification of retiming, pipelining and buffering optimization has been implemented in RetLab as part of the PHIDEO tool at PRL. We have arrived at proofs of properties that could form the basis of a semiautomatic procedure for checking refinement and equivalence at higher levels.

9.3 Axiomatic Approach vs Other Formal Approaches

The advantage of an axiomatic framework is that we can assert properties of SIL graphs that have to hold, without having to specify in detail the behavioral relations or their composition and equivalence. We could therefore embed off-the-shelf data-flow diagrams used in the Structured Analysis/Design approach [DeM79, Won94] in our formalism. One particular example of the advantage of our approach is establishing refinement and equivalence without expressing the concrete relation between the outputs and inputs of nodes. This property, expressed in Table 6.9 and Figure 6.9, does not use any information on the concrete data and order relations of the nodes. Moreover, the automatic verification procedures, simple interactive commands, and many features such as editing and rerunning proofs in PVS made the task of checking properties and correctness much easier than anticipated.

In contrast to an axiomatic approach, a model-oriented approach would compare two dependency graph models with respect to behavior.
Such a model-comparison method would involve verifying that the behavior of the transformed model satisfies the behavior of the original model. However, this entails developing concrete behavioral models of the dependency graphs, and formulating the meaning of behavioral refinement and equivalence. Such concrete modeling of behavior, refinement and equivalence would impose restrictions on the domains where the formalization could be applied. For instance, we laid down the definition of refines between two graphs in Section 6.2 axiomatically, instead of proposing a subset relationship between the data relations of the graphs. Furthermore, such a modeling would make it inconvenient to study the correctness of transformations on graphs with arbitrary structure. For example, in our approach, we could handle nodes with an unspecified number of ports in studying the correctness problem. On the other hand, an operational or denotational model of a language relates closely to a computation model of the language. This implies that it would be difficult to reason about the properties of the language without reference to the underlying behavior model. For example, we have pointed out earlier in Section 8.4 that we would not be able to use the operational/denotational semantics developed for dependency graphs [dJ93, HK93] to reason about constraint nets without major modifications to the semantics. However, in our axiomatic specification, we could interpret a port as representing a trace, and data relations as higher-order functions on traces with an optional time parameter. Such a simple interpretation suggested that the axiomatic specification developed for dependency graphs in this dissertation could be used to specify and reason about constraint nets.
However, the denotational and operational models worked out by de Jong and Huijs [dJ93, HK93] could be used as a concrete model that satisfies the axiomatic specification.

As a typical example, we are given the behavioral relations of the nodes in a SIL graph and the structural connectivity of the graph. There is no general way to compose these relations into a single behavioral relation for comparison with that obtained from another SIL graph. Moreover, from the behavioral description in SIL, it is not possible in general to extract a state machine or a finite automaton model, and use state machine or automata comparison techniques. This is due to the generality of the dependency graph behavior. In addition, since many synthesis transformations are applied to descriptions of behavior within a single clock cycle, there is no explicit notion of state in such a description. This reinforces the judgment that state machine or automata comparison techniques are not suitable.

9.4 Conclusions and Directions for Further Research

In this work, we have provided an axiomatic specification for a general dependency graph specification language. We have given a small set of axioms that capture a general notion of refinement and equivalence of dependency graphs. We have specified and verified about a dozen of the optimization and refinement transformations. We found errors in this process, and suggested corrections. We have also generalized the transformations by weakening the preconditions for applying the transformations, and verified their correctness.
In this process, we have devised new transformations for further optimization and refinement than would have been possible before.

The preconditions for transformations that existed in the SPRITE and TRADES transformational design prior to our specification and verification task were stronger than necessary, in order to make them purely syntactic checks made at run time. However, the precondition weakening resulting from our formal approach involves comparing weights on edges, as illustrated in Figure 7.4. If the weights on the edges are the result of an execution of a subgraph in SIL, then run-time checks for preconditions would have to be enhanced to compute the weights, and then compare the resulting weights. The tool used for designing systems in SIL could communicate with PVS by sending a SIL graph with the computed weights on the edges, and receiving a possible set of transformed graphs. Our work has also aided investigating interactions between the transformations, and thus the importance of the order of applying the transformations. The transformations we have verified are being used in industry to design hardware from high-level specifications.

We have enhanced existing theorem proving techniques by integrating efficient simplifiers such as model-checkers based on BDDs. This enabled an efficient mechanical verification of the correctness of transformations on dependency graphs. Our enhancement to theorem proving techniques has facilitated automatic techniques to solve large verification problems in hardware and software that could not have been tackled by either model-checking or theorem proving alone.

An investigation into the correctness of other transformations, such as those involving scheduling and resource allocation in dependency graphs, could form a part of further research.
Another direction for extending the research is to study how verification techniques and results could be seamlessly integrated with Very Large Scale Integration (VLSI) Computer-Aided Design (CAD) tools, software/hardware co-synthesis/co-design tools, and Computer-Aided Software Engineering (CASE) tools. The approach we have used, based on expressing properties at a high level, does not depend on the underlying model of behavior. This enabled us to use our formalism for dependency graph specifications in other areas such as structured analysis in software design. Thus, the ability to capture an off-the-shelf formalism underpins our thesis that an axiomatic specification coupled with an efficient mechanical verification is the most suitable approach to investigate the correctness of transformations on generic dependency graphs, independent of the underlying behavior models.

Bibliography

[AAD93] F.V. Aelten, J. Allen, and S. Devadas. Verification of relations between synchronous machines. IEEE Transactions on Computer-Aided Design, 12(12), December 1993.

[AAD94] F.V. Aelten, J. Allen, and S. Devadas. Event-based verification of synchronous globally controlled logic designs against signal flow graphs. IEEE Transactions on Computer-Aided Design, 13(1), January 1994.

[AC72] Francis E. Allen and John Cocke. A catalogue of optimization transformations. In Randall Rustin, editor, Design and Optimization of Compilers, Courant Computer Symposium 5. Prentice Hall, Inc., Englewood Cliffs, NJ, March 1972.

[AFJWed] E.A. Ashcroft, A.A. Faustini, R. Jagannathan, and W.W. Wadge. Multidimensional Declarative Programming. Oxford University Press, Oxford, U.K., to be published.

[AL94] M. Aagaard and M. Leeser. PBS: Proven Boolean simplification. IEEE Transactions on Computer-Aided Design, 13(4), January 1994.

[Ang94] C. Angelo. Formal Hardware Verification in a Silicon Compilation Environment by Means of Theorem Proving. PhD thesis, K.U. Leuven/IMEC, Belgium, February 1994.

[ASU71] A.V. Aho, Ravi Sethi, and J.
D. Ullman. Code optimization and finite Church-Rosser systems. In Randall Rustin, editor, Design and Optimization of Compilers, Courant Computer Symposium 5. Prentice Hall, Inc., Englewood Cliffs, NJ, March 1971.

[AW77] E.A. Ashcroft and W.W. Wadge. Lucid: A nonprocedural language with iteration. CACM, 20(7):519–526, 1977.

[Bac88] R.J.R. Back. A calculus of refinements for program derivations. Acta Informatica, 25:593–624, 1988.

[Bar81] M. R. Barbacci. Instruction set processor specifications (ISPS): The notation and applications. IEEE Transactions on Computers, C-30(1):24–40, 1981.

[BCD88] R.K. Brayton, R. Camposano, G. DeMicheli, R.H.J.M. Otten, and J.T.J. van Eijndhoven. The Yorktown silicon compiler system. In D. Gajski, editor, Silicon Compilation. Addison-Wesley, 1988.

[BCL94] J. R. Burch, E. M. Clarke, D. E. Long, K. L. McMillan, and D. L. Dill. Symbolic model checking for sequential circuit verification. IEEE Transactions on Computer-Aided Design, 13(4):401–424, April 1994.

[BCM90a] J. R. Burch, E. M. Clarke, K. L. McMillan, D. L. Dill, and L. J. Hwang. Symbolic model checking: 10^20 states and beyond. In 5th Annual IEEE Symposium on Logic in Computer Science, pages 428–439, Philadelphia, PA, June 1990. IEEE Computer Society.

[BCM90b] J.R. Burch, E.M. Clarke, K.L. McMillan, D.L. Dill, and J. Hwang. Symbolic model checking: 10^20 states and beyond. In Proceedings of the Fifth Annual Symposium on Logic in Computer Science. Association for Computing Machinery, July 1990.

[BRB90] K.S. Brace, R.L. Rudell, and R.E. Bryant. Efficient implementation of a BDD package. In Proceedings of the 27th ACM/IEEE Design Automation Conference, pages 40–45. Association for Computing Machinery, 1990.

[Bry92] Randal E. Bryant. Symbolic Boolean manipulation with ordered binary-decision diagrams. ACM Computing Surveys, 24(3):293–318, September 1992.

[Cam89] R. Camposano. Behavior preserving transformations in high level synthesis.
In Hardware Specification, Verification and Synthesis: Mathematical Aspects, pages 106–128, Ithaca, NY, July 1989. Volume 408 of Lecture Notes in Computer Science, Springer-Verlag.

[CBL92] R. Chapman, G. Brown, and M. Leeser. Verified high-level synthesis in BEDROC. In Proceedings of the 1992 European Design Automation Conference. IEEE Press, March 1992.

[CG87] E.M. Clarke and O. Grumberg, editors. Annual Review of Computer Science, Volume 2. Annual Reviews, Inc., Palo Alto, CA, 1987.

[CGH94] E. Clarke, O. Grumberg, and K. Hamaguchi. Another look at LTL model checking. In David Dill, editor, Computer-Aided Verification, CAV '94, pages 415–427, Stanford, CA, June 1994. Volume 818 of Lecture Notes in Computer Science, Springer-Verlag.

[Cha79] Donald D. Chamberlin. The "single-assignment" approach to parallel processing. Selected Papers on Data Flow Architecture, Part 1, May 1979.

[Cle89] R. Cleaveland. Tableau-based model checking in the propositional mu-calculus. Technical Report 2/89, University of Sussex, March 1989.

[Compu90] Computer General Electronic Design U.K. The ELLA Language Reference Manual. Computer General Electronic Design, The New Church, Henry St., Bath BA1 1JR, U.K., 1990.

[Cyr93] D. Cyrluk. Microprocessor verification in PVS: A methodology and simple example. Technical report, SRI International, December 1993. Report CSL-93-12.

[DeM79] Tom DeMarco. Structured Analysis and System Specification. Yourdon Press, New Jersey, 1979.

[dJ93] G.G. de Jong. Generalized data flow graphs: theory and applications. PhD thesis, Eindhoven University of Technology, The Netherlands, October 1993.

[EL85] E.A. Emerson and C.L. Lei. Efficient model checking in fragments of the propositional mu-calculus. In Proceedings of the 10th Symposium on Principles of Programming Languages, pages 84–96, New Orleans, LA, January 1985. Association for Computing Machinery.

[Eme90] E. Allen Emerson. Temporal and modal logic.
In Jan van Leeuwen, editor, Handbook of Theoretical Computer Science, volume B: Formal Models and Semantics, chapter 16, pages 995–1072. Elsevier and MIT Press, Amsterdam, The Netherlands, and Cambridge, MA, 1990.

[EMH93] W.J.A. Engelen, P.F.A. Middelhoek, C. Huijs, J. Hofstede, and Th. Krol. Applying software transformations to SIL. Technical Report SPRITE deliverable Ls.a.5.2/UT/Y5/M6/1A, Philips Research Laboratories, Eindhoven, The Netherlands, April 1993.

[Fou90] M. P. Fourman. Formal system design. In J. Staunstrup, editor, Formal Methods for VLSI Design. IFIP, North-Holland, 1990.

[GM93] M. J. C. Gordon and T. F. Melham, editors. Introduction to HOL: A Theorem Proving Environment for Higher-Order Logic. Cambridge University Press, Cambridge, UK, 1993.

[Gun92] Carl A. Gunter. Semantics of Programming Languages: Structures and Techniques. MIT Press, Cambridge, MA, 1992.

[Gup92] Aarti Gupta. Formal hardware verification methods: A survey. Formal Methods in System Design, 1(2/3):151–238, October 1992.

[HFB93] N. Halbwachs, J-C. Fernandez, and A. Bouajjani. An executable temporal logic to express safety properties and its connection with the language Lustre. In Sixth International Symposium on Lucid and Intensional Programming. Université Laval, April 1993.

[HHK92] C. Huijs, J. Hofstede, and Th. Krol. Transformations and semantical checks for SIL-1. Technical Report SPRITE deliverable Ls.a.5.1/UT/Y4/M6/1, Philips Research Laboratories, Eindhoven, The Netherlands, November 1992.

[Hil85] P. N. Hilfinger. Silage: a high-level language and silicon compiler for digital signal processing. In Proceedings of the IEEE Custom Integrated Circuits Conference, pages 213–216, Portland, OR, May 1985. IEEE.

[HK93] C. Huijs and Th. Krol. A formal semantic model to fit SIL for transformational design. In Proceedings of Euromicro, Microprocessing and Microprogramming 39, Liverpool, September 1993.

[Hoo94] Jozef Hooman. Correctness of real time systems by construction. In H. Langmaack, W.-P.
de Roever, and J. Vytopil, editors, Formal Techniques in Real-Time and Fault-Tolerant Systems, pages 19–40, Lübeck, Germany, September 1994. Volume 863 of Lecture Notes in Computer Science, Springer-Verlag.

[Jan93a] G. L. J. M. Janssen. ROBDD Software. Department of Electrical Engineering, Eindhoven University of Technology, October 1993.

[Jan93b] G. L. J. M. Janssen. ROBDD Software. Department of Electrical Engineering, Eindhoven University of Technology, October 1993.

[JLR91] J. Joyce, E. Liu, J. Rushby, N. Shankar, R. Suaya, and F. von Henke. From formal verification to silicon compilation. In IEEE Compcon, pages 450–455, San Francisco, CA, February 1991.

[Joh84] S. D. Johnson. Synthesis of Digital Designs from Recursion Equations. MIT Press, Cambridge, MA, 1984.

[Joh95] Steven D. Johnson, editor. CHDL '95: 12th Conference on Computer Hardware Description Languages and their Applications, Chiba, Japan, August 1995. Proceedings published in a single volume jointly with ASP-DAC '95, CHDL '95, and VLSI '95, IEEE Catalog no. 95TH8102.

[JS90] G. Jones and M. Sheeran. Circuit design in Ruby. In J. Staunstrup, editor, Formal Methods for VLSI Design. IFIP, North-Holland, 1990.

[JS93] Jeffrey J. Joyce and Carl-Johan H. Seger. Linking BDD-based symbolic evaluation to interactive theorem proving. In Proceedings of the 30th Design Automation Conference. Association for Computing Machinery, 1993.

[KeH92] W.E.H. Kloosterhuis, M.R.R. Eyckmans, J. Hofstede, C. Huijs, Th. Krol, O.P. McArdle, W.J.M. Smits, and L.G.L. Svensson. The SPRITE input language SIL-1, language report. Technical Report SPRITE deliverable Ls.a.a/Philips/Y3/M12/2, Philips Research Laboratories, Eindhoven, The Netherlands, October 1992.

[KK94] Ramayya Kumar and Thomas Kropf, editors. Preliminary Proceedings of the Second Conference on Theorem Provers in Circuit Design, Bad Herrenalb (Blackforest), Germany, September 1994. Forschungszentrum Informatik an der Universität Karlsruhe, FZI Publication 4/94.

[Klo94] W.E.H.
Kloosterhuis. Personal communication, January 1994.

[KMN92] Th. Krol, J. v. Meerbergen, C. Niessen, W. Smits, and J. Huisken. The SPRITE input language, an intermediate format for high level synthesis. In Proceedings of the European Design Automation Conference, pages 186–192, Brussels, March 1992.

[Koz83] D. Kozen. Results on the propositional mu-calculus. Theoretical Computer Science, pages 333–354, December 1983.

[KvdW93] A. P. Kostelijk and A. van der Werf. Functional verification for retiming and rebuffering optimization. In Proceedings of The European Conference on Design Automation with the European Event in ASIC Design, Paris, February 1993. IEEE Computer Society.

[LOR93] Patrick Lincoln, Sam Owre, John Rushby, N. Shankar, and Friedrich von Henke. Eight papers on formal verification. Technical Report SRI-CSL-93-4, Computer Science Laboratory, SRI International, Menlo Park, CA, May 1993.

[LPT93] P.G. Larsen, N. Plat, and H. Toetenel. A formal semantics of data flow diagrams. Formal Aspects of Computing, 3(1), 1993.

[McF93] M.C. McFarland. Formal analysis of correctness of behavioral transformations. Formal Methods in System Design, 2(3):231–257, June 1993.

[McM93] Kenneth L. McMillan. Symbolic Model Checking. Kluwer Academic Publishers, Boston, MA, 1993.

[Mid93] P.F.A. Middelhoek. Transformational design of digital circuits. In Proceedings of the Seventh Workshop Computersystems, Eindhoven, The Netherlands, November 1993.

[Mid94a] P.F.A. Middelhoek. Transformational design of a direction detector for the progressive scan conversion algorithm. Technical report, Computer Science, University of Twente, Enschede, The Netherlands, May 1994. Preliminary.

[Mid94b] P.F.A. Middelhoek. Transformational design of digital signal processing applications. In Proceedings of the ProRISC/IEEE Workshop on CSSP, pages 175–180, Eindhoven, The Netherlands, March 1994.

[MP83] M.C. McFarland and A.C. Parker. An abstract model of behavior for hardware descriptions.
IEEE Transactions on Computers, C-32(7):621–636, July 1983.

[NRP95] Vijay Nagasamy, Sreeranga P. Rajan, and Preeti R. Panda. Fiber channel protocol: Formal specification, verification, and design trade-off/analysis. In Proceedings of the 1995 Silicon Valley Networking Conference and Exposition. Systech Research, April 1995.

[IEE88] The Institute of Electrical and Electronics Engineers. IEEE Standard VHDL Language Reference Manual 1076-88. IEEE Press, New York, 1988.

[ORSvH95] Sam Owre, John Rushby, Natarajan Shankar, and Friedrich von Henke. Formal verification for fault-tolerant architectures: Prolegomena to the design of PVS. IEEE Transactions on Software Engineering, 21(2):107–125, February 1995.

[OSR93a] S. Owre, N. Shankar, and J. M. Rushby. The PVS Specification Language (Beta Release). Computer Science Laboratory, SRI International, Menlo Park, CA, February 1993.

[OSR93b] S. Owre, N. Shankar, and J. M. Rushby. User Guide for the PVS Specification and Verification System (Beta Release). Computer Science Laboratory, SRI International, Menlo Park, CA, February 1993.

[Par89] D. Park. Finiteness is mu-effable. Technical Report 3, The University of Warwick, March 1989. Theory of Computation Report.

[PBJ91] Keshav Pingali, Micah Beck, Richard Johnson, Mayan Moudgill, and Paul Stodghill. Dependence flow graphs: An algebraic approach to program dependencies. In 18th ACM Symposium on Principles of Programming Languages, pages 67–78, January 1991.

[Pet81] G.L. Peterson. Myths about the mutual exclusion problem. Information Processing Letters, 12:115–116, 1981.

[Raj92] Sreeranga P. Rajan. Executing HOL specifications: Towards an evaluation semantics for classical higher order logic. In L. Claesen and M. Gordon, editors, Higher Order Logic Theorem Proving and its Applications (5th International Workshop, HUG '92), Leuven, Belgium, September 1992. North-Holland.

[Raj94a] Sreeranga P. Rajan. Transformations in high-level synthesis: Formal specification and efficient mechanical verification. 
Technical Report SRI-CSL-94-10, SRI International, Menlo Park, California, October 1994.

[Raj94b] Sreeranga P. Rajan. Transformations in high level synthesis: Specification and verification. Technical Report NL-TN 118/94, Philips Research Laboratories, Eindhoven, The Netherlands, April 1994.

[Raj95a] Sreeranga P. Rajan. Correctness of transformations in high level synthesis. In Johnson [Joh95], pages 597–603.

[Raj95b] Sreeranga P. Rajan. Formal verification of transformations on dependency graphs in optimizing compilers. In Proceedings of the California Software Engineering Symposium. IRUS, University of California, Irvine, March 1995.

[RJS93] Sreeranga P. Rajan, J.J. Joyce, and C-J. Seger. From abstract data types to shift registers. In Jeffrey J. Joyce and Carl-Johan H. Seger, editors, Higher Order Logic Theorem Proving and its Applications (6th International Workshop, HUG '93), Vancouver, Canada, August 1993. Number 780 in Lecture Notes in Computer Science, Springer-Verlag.

[Ros90] Lars Rossen. Formal Ruby. In J. Staunstrup, editor, Formal Methods for VLSI Design. IFIP, North-Holland, 1990.

[RRV95] Sreeranga P. Rajan, P. Venkat Rangan, and Harrick M. Vin. A formal basis for structured multimedia collaborations. In Submitted to 2nd IEEE International Conference on Multimedia Computing and Systems, Washington, D.C., 1995. IEEE Press.

[RSS95] Sreeranga P. Rajan, N. Shankar, and M. Srivas. An integration of model-checking with automated proof checking. In 7th Conference on Computer-Aided Verification, July 1995.

[RTJ93] Kamlesh Rath, M. Esen Tuna, and Steven D. Johnson. An introduction to behavior tables. Technical report, Computer Science Department, Indiana University, Bloomington, IN, December 1993. No. 392.

[Rus93] John Rushby. Formal methods and certification of critical systems. Technical Report SRI-CSL-93-7, Computer Science Laboratory, SRI International, Menlo Park, CA, December 1993. Also available as NASA Contractor Report 4551, December 1993.

[Sha94] N. Shankar. 
Personal communication, October 1994.

[SM95] Mandayam K. Srivas and Steven P. Miller. Applying formal verification to a commercial microprocessor. In Johnson [Joh95], pages 493–502.

[Smi90] Douglas R. Smith. KIDS: A semi-automatic program development system. IEEE Transactions on Software Engineering, SE-16(9), September 1990.

[SOR93a] N. Shankar, S. Owre, and J. M. Rushby. The PVS Proof Checker: A Reference Manual (Beta Release). Computer Science Laboratory, SRI International, Menlo Park, CA, February 1993.

[SOR93b] N. Shankar, S. Owre, and J. M. Rushby. PVS Tutorial. Computer Science Laboratory, SRI International, Menlo Park, CA, February 1993. Also appears in Tutorial Notes, Formal Methods Europe '93: Industrial-Strength Formal Methods, pages 357–406, Odense, Denmark, April 1993.

[TDW88] D. Thomas, E.M. Dirkes, R.A. Walker, J.V. Rajan, J.A. Nestor, and R.L. Blackburn. The system architect's workbench. In Proceedings of the 25th ACM/IEEE Design Automation Conference, pages 337–343. Association for Computing Machinery, 1988.

[Tea91] Teamwork. Teamwork CASE Tool Manuals, 1991.

[vdWvMM94] A. van der Werf, J.L. van Meerbergen, O. McArdle, P.E.R. Lippens, W.F.J. Verhaegh, and D. Grant. Processing unit design. In Proceedings of the SPRITE workshop on "VLSI Synthesis for DSP", Eindhoven, March 1994. Philips Research Labs.

[vEJ94] C. A. J. van Eijk and G. L. J. M. Janssen. Exploiting structural similarities in a BDD-based verification method. In Kumar and Kropf [KK94], pages 151–166.

[Vem90] R. Vemuri. How to prove the completeness of a set of register level design transformations. In Proceedings of the 27th ACM/IEEE Design Automation Conference, pages 207–212. Association for Computing Machinery, 1990.

[vWS91] J. von Wright and K. Sere. Program transformations and refinements in HOL. In M. Archer, J.J. Joyce, K.N. Levitt, and P.J. Windley, editors, Higher Order Logic Theorem Proving and its Applications (4th International Workshop, HUG '91), Davis, CA, August 1991. 
IEEE Computer Society.

[Win90] Jeannette M. Wing. A specifier's introduction to formal methods. IEEE Computer, 23(9):8–22, September 1990.

[Won94] M. Wong. Informal, semi-formal, and formal approaches to the specification of software requirements. Technical report, Department of Computer Science, UBC, Vancouver, Canada, September 1994.

[WP94] P.G. Whiting and R.S.V. Pascoe. A history of data-flow languages. Technical Report TR-SA-94-02, Division of Information Technology, Commonwealth Scientific and Industrial Research Organization, Carlton, Australia, March 1994.

[Zha94] Y. Zhang. A foundation for the design and analysis of robotic systems and behaviors. Technical report, Department of Computer Science, UBC, Vancouver, Canada, September 1994.

[ZM92] Y. Zhang and A.K. Mackworth. Constraint nets: A semantic model of real-time embedded systems. Technical Report 92-10, Department of Computer Science, UBC, Vancouver, Canada, 1992.

[ZM93] Y. Zhang and A. K. Mackworth. Constraint programming in constraint nets. In First Workshop on Principles and Practice of Constraint Programming, pages 303–312, 1993.

[ZM94] Y. Zhang and A. K. Mackworth. Will the robot do the right thing? In Proceedings of Artificial Intelligence, pages 255–262, Banff, Alberta, May 1994.

[ZM95] Y. Zhang and A. K. Mackworth. Constraint nets: a semantic model for hybrid dynamic systems. Theoretical Computer Science, 138(1), February 1995.

Appendix A

Peterson's Mutual Exclusion Algorithm: Automatic Verification

safe:

{1}  (FORALL (state: staterec):
       (init(state) AND initsem(state))
         IMPLIES
         AG(N,
            LAMBDA state:
              NOT (critical?(pc1(state)) AND critical?(pc2(state))))(state))

Running step: (Model-Check)

init rewrites init(state!1)
  to idle?(pc1(state!1)) AND idle?(pc2(state!1))
initsem rewrites initsem(state!1)
  to NOT sem1(state!1) AND NOT sem2(state!1)

EU rewrites
  EU(N, (LAMBDA u: TRUE),
     NOT LAMBDA state: NOT (critical?(pc1(state)) AND critical?
(pc2(state))))
  to mu(LAMBDA (Q: pred[staterec]):
          (NOT LAMBDA state: NOT (critical?(pc1(state)) AND critical?(pc2(state)))
             OR ((LAMBDA u: TRUE) AND EX(N, Q))))

EF rewrites
  EF(N, NOT LAMBDA state: NOT (critical?(pc1(state)) AND critical?(pc2(state))))
  to mu(LAMBDA (Q: pred[staterec]):
          (NOT LAMBDA state: NOT (critical?(pc1(state)) AND critical?(pc2(state)))
             OR ((LAMBDA u: TRUE) AND EX(N, Q))))

AG rewrites
  AG(N, LAMBDA state: NOT (critical?(pc1(state)) AND critical?(pc2(state))))
  to NOT mu(LAMBDA (Q: pred[staterec]):
              (NOT LAMBDA state: NOT (critical?(pc1(state)) AND critical?(pc2(state)))
                 OR ((LAMBDA u: TRUE) AND EX(N, Q))))

NOT rewrites
  (NOT mu(LAMBDA (Q: pred[staterec]):
            (NOT LAMBDA state: NOT (critical?(pc1(state)) AND critical?(pc2(state)))
               OR ((LAMBDA u: TRUE) AND EX(N, Q)))))(state!1)
  to NOT mu(LAMBDA (Q: pred[staterec]):
              (NOT LAMBDA state: NOT (critical?(pc1(state)) AND critical?(pc2(state)))
                 OR ((LAMBDA u: TRUE) AND EX(N, Q))))(state!1)

nextp1 rewrites nextp1(u, v)
  to IF idle?(pc1(u)) THEN idle?(pc1(v)) OR (entering?(pc1(v)) AND try(v))
     ELSIF entering?(pc1(u)) AND try(u) AND NOT sem2(u) THEN critical?(pc1(v))
     ELSIF critical?(pc1(u)) THEN (critical?(pc1(v)) AND try(v))
                                 OR (exiting?(pc1(v)) AND NOT try(v))
     ELSE idle?(pc1(v))
     ENDIF

nextp2 rewrites nextp2(u, v)
  to IF idle?(pc2(u)) THEN idle?(pc2(v)) OR (entering?(pc2(v)) AND NOT try(v))
     ELSIF entering?(pc2(u)) AND NOT sem1(u) THEN critical?(pc2(v))
     ELSIF critical?(pc2(u)) THEN (critical?(pc2(v)) AND NOT try(v))
                                 OR (exiting?(pc2(v)) AND try(v))
     ELSE idle?(pc2(v))
     ENDIF

nextsem rewrites nextsem(u, v)
  to IF (entering?(pc1(u)) OR entering?(pc2(u)) OR exiting?(pc1(u)) OR exiting?(pc2(u)))
     THEN (entering?(pc1(u)) IMPLIES sem1(v))
          AND (entering?(pc2(u)) IMPLIES sem2(v))
          AND (exiting?(pc1(u)) IMPLIES NOT sem1(v))
          AND (exiting?(pc2(u)) IMPLIES NOT sem2(v))
     ELSE (sem1(v) = sem1(u)) AND (sem2(v) = sem2(u))
     ENDIF

N rewrites N(u, v)
  to IF idle?(pc1(u)) THEN idle?(pc1(v)) OR (entering?(pc1(v)) AND try(v))
     ELSIF entering?(pc1(u)) AND try(u) AND NOT sem2(u) THEN critical?
(pc1(v))
     ELSIF critical?(pc1(u)) THEN (critical?(pc1(v)) AND try(v))
                                 OR (exiting?(pc1(v)) AND NOT try(v))
     ELSE idle?(pc1(v))
     ENDIF
     AND
     IF idle?(pc2(u)) THEN idle?(pc2(v)) OR (entering?(pc2(v)) AND NOT try(v))
     ELSIF entering?(pc2(u)) AND NOT sem1(u) THEN critical?(pc2(v))
     ELSIF critical?(pc2(u)) THEN (critical?(pc2(v)) AND NOT try(v))
                                 OR (exiting?(pc2(v)) AND try(v))
     ELSE idle?(pc2(v))
     ENDIF
     AND
     IF (entering?(pc1(u)) OR entering?(pc2(u)) OR exiting?(pc1(u)) OR exiting?(pc2(u)))
     THEN (entering?(pc1(u)) IMPLIES sem1(v))
          AND (entering?(pc2(u)) IMPLIES sem2(v))
          AND (exiting?(pc1(u)) IMPLIES NOT sem1(v))
          AND (exiting?(pc2(u)) IMPLIES NOT sem2(v))
     ELSE (sem1(v) = sem1(u)) AND (sem2(v) = sem2(u))
     ENDIF

By rewriting and mu-simplifying,

Run time  = 31.66 secs.
Real time = 40.55 secs.
NIL
>

The proof transcript of the liveness property proceeds in the same manner as above:

live:

{1}  (FORALL (s: staterec):
       AG(N,
          (LAMBDA s: entering?(pc1(s)))
            IMPLIES EF(N, LAMBDA s: critical?(pc1(s))))(s))

Running step: (Model-Check)

EU rewrites EU(N, (LAMBDA u: TRUE), LAMBDA s: critical?(pc1(s)))
  to mu(LAMBDA (Q: pred[staterec]):
          (LAMBDA s: critical?(pc1(s)) OR ((LAMBDA u: TRUE) AND EX(N, Q))))

EF rewrites EF(N, LAMBDA s: critical?(pc1(s)))
  to mu(LAMBDA (Q: pred[staterec]):
          (LAMBDA s: critical?(pc1(s)) OR ((LAMBDA u: TRUE) AND EX(N, Q))))

EU rewrites
  EU(N, (LAMBDA u: TRUE),
     NOT ((LAMBDA s: entering?(pc1(s)))
            IMPLIES mu(LAMBDA (Q: pred[staterec]):
                         (LAMBDA s: critical?(pc1(s))
                            OR ((LAMBDA u: TRUE) AND EX(N, Q))))))
  to mu(LAMBDA (Q: pred[staterec]):
          (NOT ((LAMBDA s: entering?(pc1(s)))
                  IMPLIES mu(LAMBDA (Q: pred[staterec]):
                               (LAMBDA s: critical?(pc1(s))
                                  OR ((LAMBDA u: TRUE) AND EX(N, Q)))))
             OR ((LAMBDA u: TRUE) AND EX(N, Q))))

EF rewrites
  EF(N, NOT ((LAMBDA s: entering?(pc1(s)))
               IMPLIES mu(LAMBDA (Q: pred[staterec]):
                            (LAMBDA s: critical?(pc1(s))
                               OR ((LAMBDA u: TRUE) AND EX(N, Q))))))
  to mu(LAMBDA (Q: pred[staterec]):
          (NOT ((LAMBDA s: entering?(pc1(s)))
                  IMPLIES mu(LAMBDA (Q: pred[staterec]):
                               (LAMBDA s: critical?(pc1(s))
                                  OR ((LAMBDA u: TRUE) AND EX(N, Q)))))
             OR
((LAMBDA u: TRUE) AND EX(N, Q))))

AG rewrites
  AG(N, (LAMBDA s: entering?(pc1(s)))
          IMPLIES mu(LAMBDA (Q: pred[staterec]):
                       (LAMBDA s: critical?(pc1(s))
                          OR ((LAMBDA u: TRUE) AND EX(N, Q)))))
  to NOT mu(LAMBDA (Q: pred[staterec]):
              (NOT ((LAMBDA s: entering?(pc1(s)))
                      IMPLIES mu(LAMBDA (Q: pred[staterec]):
                                   (LAMBDA s: critical?(pc1(s))
                                      OR ((LAMBDA u: TRUE) AND EX(N, Q)))))
                 OR ((LAMBDA u: TRUE) AND EX(N, Q))))

NOT rewrites
  (NOT mu(LAMBDA (Q: pred[staterec]):
            (NOT ((LAMBDA s: entering?(pc1(s)))
                    IMPLIES mu(LAMBDA (Q: pred[staterec]):
                                 (LAMBDA s: critical?(pc1(s))
                                    OR ((LAMBDA u: TRUE) AND EX(N, Q)))))
               OR ((LAMBDA u: TRUE) AND EX(N, Q)))))(s!1)
  to NOT mu(LAMBDA (Q: pred[staterec]):
              (NOT ((LAMBDA s: entering?(pc1(s)))
                      IMPLIES mu(LAMBDA (Q: pred[staterec]):
                                   (LAMBDA s: critical?(pc1(s))
                                      OR ((LAMBDA u: TRUE) AND EX(N, Q)))))
                 OR ((LAMBDA u: TRUE) AND EX(N, Q))))(s!1)

nextp1 rewrites nextp1(u, v)
  to IF idle?(pc1(u)) THEN idle?(pc1(v)) OR (entering?(pc1(v)) AND try(v))
     ELSIF entering?(pc1(u)) AND try(u) AND NOT sem2(u) THEN critical?(pc1(v))
     ELSIF critical?(pc1(u)) THEN (critical?(pc1(v)) AND try(v))
                                 OR (exiting?(pc1(v)) AND NOT try(v))
     ELSE idle?(pc1(v))
     ENDIF

nextp2 rewrites nextp2(u, v)
  to IF idle?(pc2(u)) THEN idle?(pc2(v)) OR (entering?(pc2(v)) AND NOT try(v))
     ELSIF entering?(pc2(u)) AND NOT sem1(u) THEN critical?(pc2(v))
     ELSIF critical?(pc2(u)) THEN (critical?(pc2(v)) AND NOT try(v))
                                 OR (exiting?(pc2(v)) AND try(v))
     ELSE idle?(pc2(v))
     ENDIF

nextsem rewrites nextsem(u, v)
  to IF (entering?(pc1(u)) OR entering?(pc2(u)) OR exiting?(pc1(u)) OR exiting?(pc2(u)))
     THEN (entering?(pc1(u)) IMPLIES sem1(v))
          AND (entering?(pc2(u)) IMPLIES sem2(v))
          AND (exiting?(pc1(u)) IMPLIES NOT sem1(v))
          AND (exiting?(pc2(u)) IMPLIES NOT sem2(v))
     ELSE (sem1(v) = sem1(u)) AND (sem2(v) = sem2(u))
     ENDIF

N rewrites N(u, v)
  to IF idle?(pc1(u)) THEN idle?(pc1(v)) OR (entering?(pc1(v)) AND try(v))
     ELSIF entering?(pc1(u)) AND try(u) AND NOT sem2(u) THEN critical?
(pc1(v))
     ELSIF critical?(pc1(u)) THEN (critical?(pc1(v)) AND try(v))
                                 OR (exiting?(pc1(v)) AND NOT try(v))
     ELSE idle?(pc1(v))
     ENDIF
     AND
     IF idle?(pc2(u)) THEN idle?(pc2(v)) OR (entering?(pc2(v)) AND NOT try(v))
     ELSIF entering?(pc2(u)) AND NOT sem1(u) THEN critical?(pc2(v))
     ELSIF critical?(pc2(u)) THEN (critical?(pc2(v)) AND NOT try(v))
                                 OR (exiting?(pc2(v)) AND try(v))
     ELSE idle?(pc2(v))
     ENDIF
     AND
     IF (entering?(pc1(u)) OR entering?(pc2(u)) OR exiting?(pc1(u)) OR exiting?(pc2(u)))
     THEN (entering?(pc1(u)) IMPLIES sem1(v))
          AND (entering?(pc2(u)) IMPLIES sem2(v))
          AND (exiting?(pc1(u)) IMPLIES NOT sem1(v))
          AND (exiting?(pc2(u)) IMPLIES NOT sem2(v))
     ELSE (sem1(v) = sem1(u)) AND (sem2(v) = sem2(u))
     ENDIF

nextp1 rewrites nextp1(u, v)
  to IF idle?(pc1(u)) THEN idle?(pc1(v)) OR (entering?(pc1(v)) AND try(v))
     ELSIF entering?(pc1(u)) AND try(u) AND NOT sem2(u) THEN critical?(pc1(v))
     ELSIF critical?(pc1(u)) THEN (critical?(pc1(v)) AND try(v))
                                 OR (exiting?(pc1(v)) AND NOT try(v))
     ELSE idle?(pc1(v))
     ENDIF

nextp2 rewrites nextp2(u, v)
  to IF idle?(pc2(u)) THEN idle?(pc2(v)) OR (entering?(pc2(v)) AND NOT try(v))
     ELSIF entering?(pc2(u)) AND NOT sem1(u) THEN critical?(pc2(v))
     ELSIF critical?(pc2(u)) THEN (critical?(pc2(v)) AND NOT try(v))
                                 OR (exiting?(pc2(v)) AND try(v))
     ELSE idle?(pc2(v))
     ENDIF

nextsem rewrites nextsem(u, v)
  to IF (entering?(pc1(u)) OR entering?(pc2(u)) OR exiting?(pc1(u)) OR exiting?(pc2(u)))
     THEN (entering?(pc1(u)) IMPLIES sem1(v))
          AND (entering?(pc2(u)) IMPLIES sem2(v))
          AND (exiting?(pc1(u)) IMPLIES NOT sem1(v))
          AND (exiting?(pc2(u)) IMPLIES NOT sem2(v))
     ELSE (sem1(v) = sem1(u)) AND (sem2(v) = sem2(u))
     ENDIF

N rewrites N(u, v)
  to IF idle?(pc1(u)) THEN idle?(pc1(v)) OR (entering?(pc1(v)) AND try(v))
     ELSIF entering?(pc1(u)) AND try(u) AND NOT sem2(u) THEN critical?(pc1(v))
     ELSIF critical?(pc1(u)) THEN (critical?(pc1(v)) AND try(v))
                                 OR (exiting?(pc1(v)) AND NOT try(v))
     ELSE idle?
(pc1(v))
     ENDIF
     AND
     IF idle?(pc2(u)) THEN idle?(pc2(v)) OR (entering?(pc2(v)) AND NOT try(v))
     ELSIF entering?(pc2(u)) AND NOT sem1(u) THEN critical?(pc2(v))
     ELSIF critical?(pc2(u)) THEN (critical?(pc2(v)) AND NOT try(v))
                                 OR (exiting?(pc2(v)) AND try(v))
     ELSE idle?(pc2(v))
     ENDIF
     AND
     IF (entering?(pc1(u)) OR entering?(pc2(u)) OR exiting?(pc1(u)) OR exiting?(pc2(u)))
     THEN (entering?(pc1(u)) IMPLIES sem1(v))
          AND (entering?(pc2(u)) IMPLIES sem2(v))
          AND (exiting?(pc1(u)) IMPLIES NOT sem1(v))
          AND (exiting?(pc2(u)) IMPLIES NOT sem2(v))
     ELSE (sem1(v) = sem1(u)) AND (sem2(v) = sem2(u))
     ENDIF

By rewriting and mu-simplifying,

Q.E.D.

Run time  = 47.86 secs.
Real time = 66.98 secs.
NIL
>

The following proof that every path is free of deadlock fails as expected:

not_live:

{1}  (FORALL (s: staterec):
       AG(N,
          (LAMBDA s: entering?(pc1(s)))
            IMPLIES AF(N, LAMBDA s: critical?(pc1(s))))(s))

Rule? (Model-Check)

EG rewrites EG(N, NOT LAMBDA s: critical?(pc1(s)))
  to nu(LAMBDA (Q: pred[staterec]):
          (NOT LAMBDA s: critical?(pc1(s)) AND EX(N, Q)))

AF rewrites AF(N, LAMBDA s: critical?(pc1(s)))
  to NOT nu(LAMBDA (Q: pred[staterec]):
              (NOT LAMBDA s: critical?(pc1(s)) AND EX(N, Q)))

EU rewrites
  EU(N, (LAMBDA u: TRUE),
     NOT ((LAMBDA s: entering?(pc1(s)))
            IMPLIES NOT nu(LAMBDA (Q: pred[staterec]):
                             (NOT LAMBDA s: critical?(pc1(s)) AND EX(N, Q)))))
  to mu(LAMBDA (Q: pred[staterec]):
          (NOT ((LAMBDA s: entering?(pc1(s)))
                  IMPLIES NOT nu(LAMBDA (Q: pred[staterec]):
                                   (NOT LAMBDA s: critical?(pc1(s)) AND EX(N, Q))))
             OR ((LAMBDA u: TRUE) AND EX(N, Q))))

EF rewrites
  EF(N, NOT ((LAMBDA s: entering?(pc1(s)))
               IMPLIES NOT nu(LAMBDA (Q: pred[staterec]):
                                (NOT LAMBDA s: critical?(pc1(s)) AND EX(N, Q)))))
  to mu(LAMBDA (Q: pred[staterec]):
          (NOT ((LAMBDA s: entering?(pc1(s)))
                  IMPLIES NOT nu(LAMBDA (Q: pred[staterec]):
                                   (NOT LAMBDA s: critical?(pc1(s)) AND EX(N, Q))))
             OR ((LAMBDA u: TRUE) AND EX(N, Q))))

AG rewrites
  AG(N, (LAMBDA s: entering?(pc1(s)))
          IMPLIES NOT nu(LAMBDA (Q: pred[staterec]):
                           (NOT LAMBDA s: critical?(pc1(s)) AND EX(N, Q))))
  to NOT mu(LAMBDA (Q: pred[staterec]):
              (NOT ((LAMBDA 
s: entering?(pc1(s)))
                      IMPLIES NOT nu(LAMBDA (Q: pred[staterec]):
                                       (NOT LAMBDA s: critical?(pc1(s)) AND EX(N, Q))))
                 OR ((LAMBDA u: TRUE) AND EX(N, Q))))

NOT rewrites
  (NOT mu(LAMBDA (Q: pred[staterec]):
            (NOT ((LAMBDA s: entering?(pc1(s)))
                    IMPLIES NOT nu(LAMBDA (Q: pred[staterec]):
                                     (NOT LAMBDA s: critical?(pc1(s)) AND EX(N, Q))))
               OR ((LAMBDA u: TRUE) AND EX(N, Q)))))(s!1)
  to NOT mu(LAMBDA (Q: pred[staterec]):
              (NOT ((LAMBDA s: entering?(pc1(s)))
                      IMPLIES NOT nu(LAMBDA (Q: pred[staterec]):
                                       (NOT LAMBDA s: critical?(pc1(s)) AND EX(N, Q))))
                 OR ((LAMBDA u: TRUE) AND EX(N, Q))))(s!1)

nextp1 rewrites nextp1(u, v)
  to IF idle?(pc1(u)) THEN idle?(pc1(v)) OR (entering?(pc1(v)) AND try(v))
     ELSIF entering?(pc1(u)) AND try(u) AND NOT sem2(u) THEN critical?(pc1(v))
     ELSIF critical?(pc1(u)) THEN (critical?(pc1(v)) AND try(v))
                                 OR (exiting?(pc1(v)) AND NOT try(v))
     ELSE idle?(pc1(v))
     ENDIF

nextp2 rewrites nextp2(u, v)
  to IF idle?(pc2(u)) THEN idle?(pc2(v)) OR (entering?(pc2(v)) AND NOT try(v))
     ELSIF entering?(pc2(u)) AND NOT sem1(u) THEN critical?(pc2(v))
     ELSIF critical?(pc2(u)) THEN (critical?(pc2(v)) AND NOT try(v))
                                 OR (exiting?(pc2(v)) AND try(v))
     ELSE idle?(pc2(v))
     ENDIF

nextsem rewrites nextsem(u, v)
  to IF (entering?(pc1(u)) OR entering?(pc2(u)) OR exiting?(pc1(u)) OR exiting?(pc2(u)))
     THEN (entering?(pc1(u)) IMPLIES sem1(v))
          AND (entering?(pc2(u)) IMPLIES sem2(v))
          AND (exiting?(pc1(u)) IMPLIES NOT sem1(v))
          AND (exiting?(pc2(u)) IMPLIES NOT sem2(v))
     ELSE (sem1(v) = sem1(u)) AND (sem2(v) = sem2(u))
     ENDIF

N rewrites N(u, v)
  to IF idle?(pc1(u)) THEN idle?(pc1(v)) OR (entering?(pc1(v)) AND try(v))
     ELSIF entering?(pc1(u)) AND try(u) AND NOT sem2(u) THEN critical?(pc1(v))
     ELSIF critical?(pc1(u)) THEN (critical?(pc1(v)) AND try(v))
                                 OR (exiting?(pc1(v)) AND NOT try(v))
     ELSE idle?
(pc1(v))
     ENDIF
     AND
     IF idle?(pc2(u)) THEN idle?(pc2(v)) OR (entering?(pc2(v)) AND NOT try(v))
     ELSIF entering?(pc2(u)) AND NOT sem1(u) THEN critical?(pc2(v))
     ELSIF critical?(pc2(u)) THEN (critical?(pc2(v)) AND NOT try(v))
                                 OR (exiting?(pc2(v)) AND try(v))
     ELSE idle?(pc2(v))
     ENDIF
     AND
     IF (entering?(pc1(u)) OR entering?(pc2(u)) OR exiting?(pc1(u)) OR exiting?(pc2(u)))
     THEN (entering?(pc1(u)) IMPLIES sem1(v))
          AND (entering?(pc2(u)) IMPLIES sem2(v))
          AND (exiting?(pc1(u)) IMPLIES NOT sem1(v))
          AND (exiting?(pc2(u)) IMPLIES NOT sem2(v))
     ELSE (sem1(v) = sem1(u)) AND (sem2(v) = sem2(u))
     ENDIF

nextp1 rewrites nextp1(u, v)
  to IF idle?(pc1(u)) THEN idle?(pc1(v)) OR (entering?(pc1(v)) AND try(v))
     ELSIF entering?(pc1(u)) AND try(u) AND NOT sem2(u) THEN critical?(pc1(v))
     ELSIF critical?(pc1(u)) THEN (critical?(pc1(v)) AND try(v))
                                 OR (exiting?(pc1(v)) AND NOT try(v))
     ELSE idle?(pc1(v))
     ENDIF

nextp2 rewrites nextp2(u, v)
  to IF idle?(pc2(u)) THEN idle?(pc2(v)) OR (entering?(pc2(v)) AND NOT try(v))
     ELSIF entering?(pc2(u)) AND NOT sem1(u) THEN critical?(pc2(v))
     ELSIF critical?(pc2(u)) THEN (critical?(pc2(v)) AND NOT try(v))
                                 OR (exiting?(pc2(v)) AND try(v))
     ELSE idle?(pc2(v))
     ENDIF

nextsem rewrites nextsem(u, v)
  to IF (entering?(pc1(u)) OR entering?(pc2(u)) OR exiting?(pc1(u)) OR exiting?(pc2(u)))
     THEN (entering?(pc1(u)) IMPLIES sem1(v))
          AND (entering?(pc2(u)) IMPLIES sem2(v))
          AND (exiting?(pc1(u)) IMPLIES NOT sem1(v))
          AND (exiting?(pc2(u)) IMPLIES NOT sem2(v))
     ELSE (sem1(v) = sem1(u)) AND (sem2(v) = sem2(u))
     ENDIF

N rewrites N(u, v)
  to IF idle?(pc1(u)) THEN idle?(pc1(v)) OR (entering?(pc1(v)) AND try(v))
     ELSIF entering?(pc1(u)) AND try(u) AND NOT sem2(u) THEN critical?(pc1(v))
     ELSIF critical?(pc1(u)) THEN (critical?(pc1(v)) AND try(v))
                                 OR (exiting?(pc1(v)) AND NOT try(v))
     ELSE idle?
(pc1(v))
     ENDIF
     AND
     IF idle?(pc2(u)) THEN idle?(pc2(v)) OR (entering?(pc2(v)) AND NOT try(v))
     ELSIF entering?(pc2(u)) AND NOT sem1(u) THEN critical?(pc2(v))
     ELSIF critical?(pc2(u)) THEN (critical?(pc2(v)) AND NOT try(v))
                                 OR (exiting?(pc2(v)) AND try(v))
     ELSE idle?(pc2(v))
     ENDIF
     AND
     IF (entering?(pc1(u)) OR entering?(pc2(u)) OR exiting?(pc1(u)) OR exiting?(pc2(u)))
     THEN (entering?(pc1(u)) IMPLIES sem1(v))
          AND (entering?(pc2(u)) IMPLIES sem2(v))
          AND (exiting?(pc1(u)) IMPLIES NOT sem1(v))
          AND (exiting?(pc2(u)) IMPLIES NOT sem2(v))
     ELSE (sem1(v) = sem1(u)) AND (sem2(v) = sem2(u))
     ENDIF

By rewriting and mu-simplifying,
this simplifies to:
not_live

{-1}  TRUE

Rule? q
Do you really want to quit? (Y or N): y

Run time  = 31.82 secs.
Real time = 59.33 secs.
>

The following proof illustrates the use of fair-CTL operators to verify the property that there is at least one fair path departing from the initial state.

fair:

{1}  (FORALL (s: staterec):
       (init(s) AND initsem(s))
         IMPLIES Fair?(N, LAMBDA s: critical?(pc1(s)))(s))

Running step: (Model-Check)

init rewrites init(s!1)
  to idle?(pc1(s!1)) AND idle?(pc2(s!1))
initsem rewrites initsem(s!1)
  to NOT sem1(s!1) AND NOT sem2(s!1)

fairEG rewrites fairEG(N, (LAMBDA u: TRUE))(LAMBDA s: critical?(pc1(s)))
  to nu(LAMBDA (Q: pred[staterec]):
          (LAMBDA u: TRUE)
          AND mu(LAMBDA (P: pred[staterec]):
                   EX(N,
                      (LAMBDA u: TRUE)
                      AND ((LAMBDA s: critical?(pc1(s)) AND Q) OR P))))

Fair? rewrites Fair?(N, LAMBDA s: critical?(pc1(s)))
  to nu(LAMBDA (Q: pred[staterec]):
          (LAMBDA u: TRUE)
          AND mu(LAMBDA (P: pred[staterec]):
                   EX(N,
                      (LAMBDA u: TRUE)
                      AND ((LAMBDA s: critical?(pc1(s)) AND Q) OR P))))

nextp1 rewrites nextp1(u, v)
  to IF idle?(pc1(u)) THEN idle?(pc1(v)) OR (entering?(pc1(v)) AND try(v))
     ELSIF entering?(pc1(u)) AND try(u) AND NOT sem2(u) THEN critical?
(pc1(v))
     ELSIF critical?(pc1(u)) THEN (critical?(pc1(v)) AND try(v))
                                 OR (exiting?(pc1(v)) AND NOT try(v))
     ELSE idle?(pc1(v))
     ENDIF

nextp2 rewrites nextp2(u, v)
  to IF idle?(pc2(u)) THEN idle?(pc2(v)) OR (entering?(pc2(v)) AND NOT try(v))
     ELSIF entering?(pc2(u)) AND NOT sem1(u) THEN critical?(pc2(v))
     ELSIF critical?(pc2(u)) THEN (critical?(pc2(v)) AND NOT try(v))
                                 OR (exiting?(pc2(v)) AND try(v))
     ELSE idle?(pc2(v))
     ENDIF

nextsem rewrites nextsem(u, v)
  to IF (entering?(pc1(u)) OR entering?(pc2(u)) OR exiting?(pc1(u)) OR exiting?(pc2(u)))
     THEN (entering?(pc1(u)) IMPLIES sem1(v))
          AND (entering?(pc2(u)) IMPLIES sem2(v))
          AND (exiting?(pc1(u)) IMPLIES NOT sem1(v))
          AND (exiting?(pc2(u)) IMPLIES NOT sem2(v))
     ELSE (sem1(v) = sem1(u)) AND (sem2(v) = sem2(u))
     ENDIF

N rewrites N(u, v)
  to IF idle?(pc1(u)) THEN idle?(pc1(v)) OR (entering?(pc1(v)) AND try(v))
     ELSIF entering?(pc1(u)) AND try(u) AND NOT sem2(u) THEN critical?(pc1(v))
     ELSIF critical?(pc1(u)) THEN (critical?(pc1(v)) AND try(v))
                                 OR (exiting?(pc1(v)) AND NOT try(v))
     ELSE idle?(pc1(v))
     ENDIF
     AND
     IF idle?(pc2(u)) THEN idle?(pc2(v)) OR (entering?(pc2(v)) AND NOT try(v))
     ELSIF entering?(pc2(u)) AND NOT sem1(u) THEN critical?(pc2(v))
     ELSIF critical?(pc2(u)) THEN (critical?(pc2(v)) AND NOT try(v))
                                 OR (exiting?(pc2(v)) AND try(v))
     ELSE idle?(pc2(v))
     ENDIF
     AND
     IF (entering?(pc1(u)) OR entering?(pc2(u)) OR exiting?(pc1(u)) OR exiting?(pc2(u)))
     THEN (entering?(pc1(u)) IMPLIES sem1(v))
          AND (entering?(pc2(u)) IMPLIES sem2(v))
          AND (exiting?(pc1(u)) IMPLIES NOT sem1(v))
          AND (exiting?
(pc2(u)) IMPLIES NOT sem2(v))
     ELSE (sem1(v) = sem1(u)) AND (sem2(v) = sem2(u))
     ENDIF

By rewriting and mu-simplifying,

Q.E.D.

Run time  = 12.84 secs.
Real time = 25.13 secs.
NIL
>

Appendix B

Definitions, Axioms and Theorems

B.1 Definitions

definitions: THEORY
BEGIN

  port: TYPE

  parray: TYPE = [# size: nat, port_array: ARRAY[{i: nat | i < size} → port] #]

  cnode: TYPE =
    [# inports: parray,
       outport: port,
       intports: parray,
       condport: port,
       condit: pred[port],
       datarel: pred[[{p: parray | size(p) = size(inports)}, port]],
       orderrel: pred[[{p: parray | size(p) = size(inports)}, port]],
       intrel: pred[[parray, parray]] #]

  node: TYPE = {n: cnode | condit(n) = λ (p: port): TRUE}

  cn0, cn1: VAR cnode
  par, parr, par0, par1, par00, par11, par2, par3: VAR parray

  same_size(cn0, cn1): boolean = size(inports(cn0)) = size(inports(cn1))
  same_size(par0, par1): boolean = size(par0) = size(par1)

  refines(cn0, cn1): bool
  equates(cn0, cn1): bool = refines(cn0, cn1) ∧ refines(cn1, cn0)
  sks(cn0, cn1): boolean = same_size(cn0, cn1) ∧ equates(cn0, cn1)

  silimp: pred[[port, port]]

  p0, p1, p2, p3, p4: VAR port
  i: VAR nat

  silimpar(par1, (par2: {par | same_size(par, par1)})): boolean
  sileqar(par1, (par2: {par | same_size(par, par1)})): boolean =
    silimpar(par1, par2) ∧ silimpar(par2, par1)
  sileq(p1, p2): boolean = silimp(p1, p2) ∧ silimp(p2, p1)

  weight: TYPE

  IMPORTING orders[weight]

  ≤: pred[[weight, weight]]
  dfe: [port, port → boolean]
  w: [port, port → weight]

  cn, cn2, cn3: VAR cnode
  n, n0, n1, n2, n3: VAR node
  p: VAR port

  inport(cn, (i: {j: nat | j < size(inports(cn))})): port = (port_array(inports(cn)))(i)
  intport(cn, (i: {j: nat | j < size(intports(cn))})): port = (port_array(intports(cn)))(i)

  is_outport(p): boolean = (∃ cn: p = outport(cn))
  is_inport(p): boolean =
    (∃ cn, (i: {j: nat | j < size(inports(cn))}): p = inport(cn, i))
  is_condport(p): boolean = (∃ cn: p = condport(cn))

  xdfe(p1, p2): boolean =
    dfe(p1, p2) ∧ (∀ p: (p ≠ p1) ⊃ ¬ dfe(p, p2))

  war: [parray, parray → weight]

  dfear(par0, (par1: {par | same_size(par, par0)})): boolean =
    (∀ (i: nat | i < size(par0)):
       dfe(port_array(par0)(i), 
port_array(par1)(i)))

  xdfear(par0, (par1: {par | same_size(par, par0)})): boolean =
    (∀ (i: nat | i < size(par0)):
       xdfe(port_array(par0)(i), port_array(par1)(i)))

  is_node_outport(p): boolean = ∃ n: p = outport(n)
  is_outportar(par): boolean =
    ∀ (i: nat | i < size(par)): is_node_outport(port_array(par)(i))
  is_cnode_outport(p): boolean = ∃ (cn: cnode): p = outport(cn)
  is_cnoutportar(par): boolean =
    ∀ (i: nat | i < size(par)): is_cnode_outport(port_array(par)(i))

END definitions

B.2 Axioms

axioms: THEORY
BEGIN

  IMPORTING definitions

  cn, cn0, cn1, cn2, cn3: VAR cnode

  cnode_ax: AXIOM
    ∀ cn:
      condit(cn)(condport(cn))
        ⊃ datarel(cn)(inports(cn), outport(cn))

  par, par0, par1, par2, par3: VAR parray
  i: VAR nat
  p, p0, p1, p2, p3, p4: VAR port

  silimpar_ax: AXIOM
    ∀ par1, (par2: parray | same_size(par1, par2)):
      (∀ (i: nat | i < size(par1)):
         silimp(port_array(par1)(i), port_array(par2)(i)))
        ⊃ silimpar(par1, par2)

  sileq_refl_ax: AXIOM sileq(p1, p1)
  sileq_sym_ax: AXIOM sileq(p1, p2) = sileq(p2, p1)
  sileq_trans_ax: AXIOM
    ∀ p0, p1, p2: (sileq(p0, p1) ∧ sileq(p1, p2) ⊃ sileq(p0, p2))

  refinement_ax: AXIOM
    ∀ (n0: node), (n1: node | same_size(n0, n1)):
      refines(n0, n1) ∧ silimpar(inports(n0), inports(n1))
        ⊃ silimp(outport(n0), outport(n1))

  edge_ax: AXIOM partial_order?(≤: pred[[weight, weight]])

  n, n0, n1, n2, n3: VAR node

  dfe_port_ax1: AXIOM dfe(p1, p2) ⊃ is_outport(p1)
  dfe_port_ax2: AXIOM dfe(p1, p2) ⊃ (is_inport(p2) ∨ is_condport(p2))
  self_edge_not_ax: AXIOM ¬ dfe(p, p)

  dfe_w_ax: AXIOM
    ∀ p0, p1, p2:
      p0 ≠ p1
        ⊃ ((dfe(p0, p2) ∧ dfe(p1, p2))
             ⊃ (w(p0, p2) ≤ w(p1, p2) ∨ w(p1, p2) ≤ w(p0, p2)))

  cond_bottom_ax: AXIOM
    (condit(cn)(condport(cn)))
      ⊃ (∀ p:
           dfe(outport(cn), p)
             ⊃ (∀ p0:
                  dfe(p1, p)
                    ⊃ w(outport(cn), p0) ≤ w(p1, p0)))

  join_ax: AXIOM
    (dfe(p1, p2) ∧ (∀ p: dfe(p, p2) ⊃ w(p, p2) ≤ w(p1, p2)))
      ⊃ sileq(p1, p2)

  distrb_ax: AXIOM xdfe(p, p1) ∧ xdfe(p, p2) ⊃ sileq(p1, p2)

  p00, p11, p22, p33, p44: VAR port

  po_preserve_ax: AXIOM
    (w(p0, p2) ≤ w(p1, p2)
       ∧ sileq(p0, p00)
       ∧ sileq(p1, p11)
       ∧ sileq(p2, p22)
       ∧ dfe(p00, p22) ∧ dfe(p11, p22))
      ⊃ w(p00, p22) ≤ w(p11, p22)

  joinar_ax: AXIOM
    ∀ par1, (par2: parray | same_size(par2, par1)):
      (dfear(par1, par2)
         ∧ (∀ (par: parray | 
same_size(par, par1)):
              dfear(par, par2)
                ⊃ war(par, par2) ≤ war(par1, par2)))
        ⊃ sileqar(par1, par2)

  dfe_war_ax: AXIOM
    ∀ par0,
      (par1: parray | same_size(par1, par0)),
      (par2: parray | same_size(par2, par0)):
        (dfear(par0, par2) ∧ dfear(par1, par2))
          ⊃ (war(par0, par2) ≤ war(par1, par2)
               ∨ war(par1, par2) ≤ war(par0, par2))

END axioms

B.3 Theorems

theorems: THEORY
BEGIN

  IMPORTING axioms

  p0, p1, p2, p3, p4: VAR port
  par, parr, par0, par1, par00, par11, par2, par3: VAR parray
  n, n0, n1, n2, n3: VAR node

  node_data_th: THEOREM ∀ n: datarel(n)(inports(n), outport(n))

  sileq_trans_inv_th: THEOREM
    ∀ p0, p1, p2:
      (sileq(p0, p1) ∧ sileq(p0, p2) ⊃ sileq(p1, p2))

  i: VAR nat

  sileqar(par1, (par2: {par | same_size(par, par1)})): boolean =
    ∀ (i | i < size(par1)):
      sileq(port_array(par1)(i), port_array(par2)(i))

  sileqar_refl_th: THEOREM sileqar(par1, par1)

  sileqar_sym_th: THEOREM
    ∀ par1, (par2: {par | same_size(par, par1)}):
      sileqar(par1, par2) = sileqar(par2, par1)

  sileqar_trans_th: THEOREM
    ∀ par1,
      (par2: {par | same_size(par, par1)}),
      (par3: {par | same_size(par, par1)}):
        (sileqar(par1, par2) ∧ sileqar(par2, par3))
          ⊃ sileqar(par1, par3)

  sileqar_trans_inv_th: THEOREM
    ∀ par1,
      (par2: {par | same_size(par, par1)}),
samesize(ni, nd))((V (i I i < size(inports(nd))), n243xdfe(outport(n), inport(no, i))(3 (nn: node)sileq(outport(n), outport(nn))A xdfe(outport(nn), inport(n1i))))A(V (i < size(inports(nd)))3 n : xdfe(outport(n), inport(no,i)))D sileqar(inports(no), inports(ni )))inportseqar.th: THEOREMV (no : node), (nj node same..size(no, n1))((V (i I i < size(inports(no))),nxdfe(outport(n), inport(no, i)). xdfe(outport(n), inport(ni, i)))A(V (i I i < size(inports(no)))3 n: xdfe(outport(n), inport(no, i)))D sileqar(inports(no), inports(n)))xdfearsileqarth: THEOREMV (parO is_outportar(parO)), (pan I same.size(parl, paro))xdfear(parO, pan) D sileqar(paro, pan)inportsar.eqar_th: THEOREMV (no node), (n1 node same..size(no,ni))(((V (par I is_outportar(par) A same...size(par, inports(no)))xdfear(par, inports(no)) xdfear(par, inports(n1)))A(3 (parisoutportar(par)A same..size(par, inports(rio)))xdfear(par, inports(no))))D sileqar(inports(no), inports(ni)))inportsar..sileqarth: THEOREMV (no node), (n1 : node I same.size(no,ni))(((V (par is_outpontar(par) A samesize(par, inports(no)))xdfear(par, inports(mo))(3 (parrisoutportar(par)A sam&size(parr, inports(no)))(sileqar(par, parr)244A xdfear(parr, inports(ni)))))A3 (paris_outportar(par) A same.size(par, inports(no)))xdfear(par, inports(no)))D sileqar(inports(no), inports(ni)))dfear2join.th: THEOREMV pan,(par2 I same..size(par2, pan)), (par3 I samesize(par3, pan))(dfear(panl, par3) A dfear(par2, par3))A(V (par I samesize(par, pan))(par pan V par $ par2) D —‘ dfean(par, par3))D IF war(parl, par3)war(par2, par3) THEN sileqar(par2, par3)ELSE sileqar(parl, par3)ENDIFdot, dotO, doti, dotOO, dotOl: VAR nodepreconds(dotO, (doti : {doU I equates(dotO, dotl)})) : boolean(V (par I is_outportar(par) A same...size(par, inports(dotO)))xdfear(par, inports(dotO)) xdfear(par, inports(dotl)))A(3 (par is_outportar(par) A same.size(par, inports(dotO)))xdfear(par, inports(dotO)))CSubE: THEOREMV dotO,(doti I equates(dotl, dotO)), (dotOl I equates(dotOl, 
dotO))((preconds(dotO, doti)A(V (parisoutportar(par)A same.size(par, inports(dotO)))xdfear(par, inports(dotO))(3 (parris.outportan(parr)A same..size(parr, inports(dotO)))(sileqar(pan, parr)A xdfear(parr, inports(dotOl))))))245D(Vpi,p2:((xdfe(outport(dotO),pi) V xdfe(outport(dotl),pi))A xdfe(outport(dotOl) ,p))D sileq(pi,p2)))pp, ppO, ppOO: VAR portcn22, cn33: VAR cnodepreconds(dotO, (doti: {dot I sks(dot, dotO)}),(dotOO : {dot I sks(dot, dotO)}),(parO: {par I is_outportar(par) A same.size(par, inports(dotO))}),(pan : {par I is_outportar(par) A same..size(par, inports(dotO))}),(parOO: {par I is_outportan(par) A same.size(par, inports(dotO))}),(panli : {par I is_outportar(par) A same.size(par, inports(dotO))}),ppO, ppOO)boolean =xdfear(panO, inports(dotO))A xdfear(parl, inports(dotl))A(war(parO, inports(dotO)) war(panl, inports(dotl))= w(outport(dotO), ppO) w(outport(dotl), ppO))A(war(parO, inports(dotO)) war(parl, inports(dotl))= war(parOO, inports(dotOO))war(panll, inports(dotOO)))A dfe(outport(dotO), ppO)A dfe(outport(dotl), ppO)A(V pp:((pp outport(dotO))V (pp outport(dotl)))D dfe(pp, ppO))Adfear(parOO,inports (dotOO))Adfear(parll,inports(dotOO))A(V (parsize(par)= size(parOO))(par parOO246V par $ paril)dfear(par,inports(dotOO)))Axdfe(outport(dotOO),ppoo)Asileqar(parO,parOO)Asileqar(parl,paril)CjtM: THEOREMV dotO:LET sk = n: sks(n, dotO),ios = )par:is_outportar(par) A same.size(par, inports(dotO))INV (doti sk(dotl)),(dotOO I sk(dotOO)),(parO ios(parO)),(pan I ios(parl)),(parOO I ios(parOO)),(panll I ios(panll))preconds(dotO, dot 1, dotOO, parO,pan, parOO, panil, ppO, ppOO)D sileq(ppO, ppOO)END theorems247Appendix CProof TranscriptsC.1 Common Subexpression EliminationTerse proof for CSubE.248CSubE:{1} V dotO, (doti I sameicind(dotl, dotO)),(dotOl I sameicind(dotOl, dotO))((preconds(dotO, doti)A(V (par I is_outportar(par) A size(par) = size(inports(dotO)))xdfear(par, inports(dotO))(3 (parrI is_outportar(parr) A size(parr) = size(inports(dotO)))(sileqar(par, parr) A 
xdfear(parr, inports(dotOl))))))D(V p,P2:((xdfe(outport(dotO) ,pi) V xdfe(outport(dotl) ,pi))A xdfe(outport(dotOl) ,P2))D sileq(pi,p2)))Expanding the definition of preconds,For the top quantifier in 1, we introduce Skolem constants: (dotO!1 dotl!1 dotOl!1),Applying disjunctive simplification to flatten sequent,249Applying inportsareqar_th where n0 gets dotO!1, ni gets dotl!1,Replacing using formula -2,Replacing using formula -3,Invoking decision procedures,Applying inportsar..sileqar_th where no gets dotO! 1, n1 gets dotOl ! 1,Replacing using formula -5,Replacing using formula -4,Invoking decision procedures,250Deleting some formulas,For the top quantifier in 1, we introduce Skolem constants: (p ps),Applying sileqar_transinv_th where pan gets inports(dotO! 1), par2 gets inports(dotl 1),par3 gets inports(dotOl!1),Invoking decision procedures,Applying ineqarimpouteq where no gets dotO!1, n1 gets dotOl!1,Applying ineqarimpouteq where no gets dotl!1, n1 gets dotOl!1,Applying xdf&sileqth where251Instantiating quantified variables,Instantiating quantified variables,Instantiating quantified variables,Invoking decision procedures,Deleting some formulas,Applying sileq_transinvth whereInstantiating the top quantifier in-i with the terms: outport(dotO!i), p, outport(dotOl!1),Instantiating the top quantifier in-i with the terms: outport(doti ! 1), p, outport(dotOi! 1),252Applying sileq..transax whereInstantiating the top quantifier in -1 with the terms: p, outport(dotOl!l), p,Applying bddsimp, which is trivially true. 
This completes the proof of CSubE.
Q.E.D.

C.2 Cross Jumping Tail Merging

Terse proof for CjtM.

CjtM:
{1} ∀ (pp0, pp00: port):
      ∀ (dot0: node):
        LET sk: [node → bool] =
              λ (n: node): same_size(n, dot0) ∧ same_kind(n, dot0),
            ios: [parray → bool] =
              λ (par: parray): is_outportar(par)
                               ∧ size(par) = size(inports(dot0))
        IN ∀ (dot1: node | sk(dot1)), (dot01: node | sk(dot01)),
             (par0: parray | ios(par0)), (par1: parray | ios(par1)),
             (par00: parray | ios(par00)), (par11: parray | ios(par11)):
             preconds(dot0, dot1, dot01, par0, par1, par00, par11, pp0, pp00)
             ⊃ sileq(pp0, pp00)

Expanding the definition of preconds,
For the top quantifier in 1, we introduce Skolem constants: (pp0!1 pp00!1),
For the top quantifier in 1, we introduce Skolem constants: (dot0!1),
For the top quantifier in 1, we introduce Skolem constants: (dot1!1 dot01!1 par0!1 par1!1 par00!1 par11!1),
Applying disjunctive simplification to flatten sequent,
Applying xdfear_sileqar_th where
Instantiating quantified variables,
Instantiating quantified variables,
Applying dfe2_join_th where
Instantiating the top quantifier in -1 with the terms: outport(dot0!1), outport(dot1!1), pp0!1,
Replacing using formula -8,
Replacing using formula -9,
Replacing using formula -10,
Applying dfear2_join_th where
Instantiating the top quantifier in -1 with the terms: par00!1, par11!1, inports(dot01!1),
Replacing using formula -12,
Replacing using formula -13,
Replacing using formula -14,
Letting war01 name war(par0!1, inports(dot0!1)) < war(par1!1, inports(dot1!1)),
Letting war001 name war(par00!1, inports(dot01!1)) < war(par11!1, inports(dot01!1)),
Letting w01 name w(outport(dot0!1), pp0!1) < w(outport(dot1!1), pp0!1),
Replacing using formula -1,
Hiding formulas: -1,
Replacing using formula -1,
Hiding formulas: -1,
Replacing using formula -1,
Hiding formulas: -1,
Invoking decision procedures,
Deleting some formulas,
Deleting some formulas,
Replacing using formula -6,
Replacing using formula -5,
Hiding formulas: -5, -6,
Applying sileqar_transinv_th where
Instantiating the top quantifier in -1 with the terms: par0!1, inports(dot0!1), par00!1,
Instantiating the top quantifier in -1 with the terms: par1!1, inports(dot1!1), par11!1,
Applying in_eqar_imp_out_eq where
Instantiating the top quantifier in -1 with the terms: dot0!1, dot01!1,
Instantiating the top quantifier in -1 with the terms: dot1!1, dot01!1,
Applying xdfe_sileq_th where
Instantiating quantified variables,
Invoking decision procedures,
Deleting some formulas,
Applying sileqar_trans_th where
Instantiating the top quantifier in -1 with the terms: inports(dot1!1), par11!1, inports(dot01!1),
Instantiating the top quantifier in -1 with the terms: inports(dot0!1), par00!1, inports(dot01!1),
Invoking decision procedures,
Deleting some formulas,
Applying sileq_transinv_th where
Instantiating the top quantifier in -1 with the terms: outport(dot0!1), pp0!1, outport(dot01!1),
Instantiating the top quantifier in -1 with the terms: outport(dot1!1), pp0!1, outport(dot01!1),
Applying sileq_trans_ax where
Instantiating the top quantifier in -1 with the terms: pp0!1, outport(dot01!1), pp00!1,
Applying bddsimp, which is trivially true.
This completes the proof of CjtM.
Q.E.D.
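The CSubE theorem asserts that two nodes of the same kind whose corresponding inputs carry equivalent values can be merged, and every port that read either original output still observes the same value. As an informal executable analogue (not part of the SIL/PVS development; the `Node` class and the function names below are invented for illustration), a minimal Python sketch:

```python
# Toy analogue of common subexpression elimination on a dependency graph node.
# Node, outport, merge_common_subexpr are illustrative names, not SIL/PVS ones.
from dataclasses import dataclass

@dataclass
class Node:
    kind: str       # operation label; CSubE requires the merged nodes share it
    inports: list   # here each inport simply holds a constant value

    def outport(self):
        # the data relation: a node's output is a function of kind and inputs
        if self.kind == "add":
            return sum(self.inports)
        if self.kind == "mul":
            result = 1
            for v in self.inports:
                result *= v
            return result
        raise ValueError(self.kind)

def merge_common_subexpr(n0, n1):
    """If n0 and n1 have the same kind and pairwise-equal inputs
    (the analogue of sileqar(inports(n0), inports(n1))), return a single
    node whose output may replace both originals."""
    assert n0.kind == n1.kind and len(n0.inports) == len(n1.inports)
    assert all(a == b for a, b in zip(n0.inports, n1.inports))
    return Node(n0.kind, list(n0.inports))

n0 = Node("add", [2, 3])
n1 = Node("add", [2, 3])
n01 = merge_common_subexpr(n0, n1)
# conclusion of CSubE: any consumer of outport(n0) or outport(n1)
# sees the same value at outport(n01)
assert n01.outport() == n0.outport() == n1.outport() == 5
```

The assertions play the role of the sileq conclusion: the merged node is observationally indistinguishable from either original at its output port.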
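The weight-ordered join condition that CjtM relies on (via dfe2_join_th: when exactly two ports drive a join, the port whose weight is not lower wins) can also be mimicked concretely. The sketch below is a toy Python model under invented names and an integer encoding of weights, not the thesis's formalism; it checks that joining before or after the shared tail computation yields the same observed value, mirroring the sileq(pp0, pp00) conclusion:

```python
# Toy model of the join rule behind cross jumping tail merging.
# join_value and the integer weights are illustrative assumptions.

def join_value(p1_val, p2_val, w1, w2):
    # dfe2_join_th: IF w(p1,p3) < w(p2,p3) THEN sileq(p2,p3) ELSE sileq(p1,p3)
    return p2_val if w1 < w2 else p1_val

# Two same-kind tail nodes computed in different branches:
branch0_out = 10 * 2   # outport(dot0)
branch1_out = 11 * 2   # outport(dot1)

# Before the transformation: the join at pp0 selects by weight ordering.
pp0 = join_value(branch0_out, branch1_out, w1=1, w2=2)

# After cross jumping tail merging: one shared node whose inputs are joined
# first, under the same weight ordering (the equivalences in preconds).
merged_in = join_value(10, 11, w1=1, w2=2)
pp00 = merged_in * 2   # outport(dot01)

# CjtM conclusion, sileq(pp0, pp00): the observed values agree.
assert pp0 == pp00 == 22
```

The two preconds equivalences on war/w orderings are exactly what makes the two calls to `join_value` pick corresponding branches, so the commuting of join and tail computation goes through.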
Item Metadata
Title | Transformations on dependency graphs : formal specification and efficient mechanical verification
Creator | Rajan, Sreeranga Prasannakumar
Date Issued | 1995
Description | Dependency graphs are used to model data and control flow in hardware and software design. In a transformational design approach, optimization and refinement transformations are used to transform dependency-graph-based specifications at higher abstraction levels to those at lower abstraction levels. In this dissertation, we investigate the formal specification and mechanical verification of transformations on dependency graphs. Among formal methods, the axiomatic method provides a mechanism to specify an object by asserting properties it should satisfy. We show that an axiomatic specification coupled with an efficient mechanical verification is the most suitable formal approach to address the verification of transformations on dependency graphs. We have provided a formal specification of dependency graphs, and verified the correctness of a variety of transformations used in an industrial synthesis framework. Errors have been discovered in the transformations, and modifications have been proposed and incorporated. Further, the formal specification has permitted us to examine the generalization and composition of transformations. In the process, we have discovered new transformations that could be used for further optimization and refinement than were possible before. We have devised an efficient verification scheme that integrates model-checking and theorem-proving, the two major techniques for formal verification, in a seamless manner. First, we focus on the dependency graph formalism used in the high-level synthesis system part of the SPRITE project at Philips Research Labs. The transformations in the synthesis system are used for refinement and optimization of descriptions specified in a dependency graph language called SPRITE Input Language (SIL). SIL is an intermediate language used during the synthesis of hardware described using languages such as VHDL, SILAGE and ELLA. Besides being an intermediate language, it forms the backbone of the TRADES synthesis system of the University of Twente. SIL has been used in the design of hardware for audio and video applications. Next, we present schemes for seamless integration of theorem-proving and model-checking for efficient verification. We use the Prototype Verification System (PVS) to specify and verify the correctness of the transformations. The PVS specification language, based on typed higher order logic, allows us to investigate the correctness using a convenient level of abstraction. The PVS verifier features automatic procedures and interactive verification rules to check properties of specifications. We have integrated efficient simplifiers and model-checkers with PVS to facilitate verification. Finally, we show how our method can be applied in the study of formalisms for hybrid/real-time systems, optimizing compilers, data-flow languages, and software engineering. Based on the applications of our method on such off-the-shelf formalisms, we substantiate our claim that an axiomatic specification coupled with an efficient mechanical verification is the most suitable approach to specify and verify transformations on dependency graphs independent of underlying behavior models.
Extent | 3719099 bytes
Genre | Thesis/Dissertation
Type | Text
FileFormat | application/pdf
Language | eng
Date Available | 2009-04-27
Provider | Vancouver : University of British Columbia Library
Rights | For non-commercial purposes only, such as research, private study and education. Additional conditions apply, see Terms of Use https://open.library.ubc.ca/terms_of_use.
IsShownAt | 10.14288/1.0051377
URI | http://hdl.handle.net/2429/7597
Degree | Doctor of Philosophy - PhD
Program | Computer Science
Affiliation | Science, Faculty of; Computer Science, Department of
Degree Grantor | University of British Columbia
GraduationDate | 1995-11
Campus | UBCV
Scholarly Level | Graduate
AggregatedSourceRepository | DSpace